Search Coil vs. Fluxgate Magnetometer Measurements at Interplanetary Shocks
NASA Technical Reports Server (NTRS)
Wilson, L.B., III
2012-01-01
We present magnetic field observations at interplanetary shocks comparing two different sample rates showing significantly different results. Fluxgate magnetometer measurements show relatively laminar supercritical shock transitions at roughly 11 samples/s. Search coil magnetometer measurements at 1875 samples/s, however, show large amplitude (dB/B as large as 2) fluctuations that are not resolved by the fluxgate magnetometer. We show that these fluctuations, identified as whistler mode waves, would produce a significant perturbation to the shock transition region changing the interpretation from laminar to turbulent. Thus, previous observations of supercritical interplanetary shocks classified as laminar may have been under sampled.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Hai; Zhang, Youjin, E-mail: zyj@ustc.edu.cn; Zhou, Maozhong
Highlights: • Gd(OH){sub 3} large single crystals were prepared by solid KOH assisted hydrothermal method. • The possible growth mechanism of Gd(OH){sub 3} large single crystals was proposed. • The Gd(OH){sub 3} samples emitted a strong narrow-band ultraviolet B (NB-UVB) light. • The Gd(OH){sub 3} samples showed good paramagnetic properties. - Abstract: Large single crystals of gadolinium hydroxide [Gd(OH){sub 3}] in the length of several millimeters were successfully prepared by using solid KOH assisted hydrothermal method. Gd(OH){sub 3} samples were characterized by X-ray diffraction (XRD), 4-circle single-crystal diffraction, Fourier transform infrared (FTIR) spectroscopy and X-ray photoelectron spectroscopy (XPS). FESEM imagemore » shows hexagonal prism morphology for the Gd(OH){sub 3} large crystals. The possible growth mechanism of Gd(OH){sub 3} large single crystals was proposed. The photoluminescence and magnetic properties of Gd(OH){sub 3} species were investigated.« less
NASA Astrophysics Data System (ADS)
Murasawa, Go; Yeduru, Srinivasa R.; Kohl, Manfred
2016-12-01
This study investigated macroscopic inhomogeneous deformation occurring in single-crystal Ni-Mn-Ga foils under uniaxial tensile loading. Two types of single-crystal Ni-Mn-Ga foil samples were examined as-received and after thermo-mechanical training. Local strain and the strain field were measured under tensile loading using laser speckle and digital image correlation. The as-received sample showed a strongly inhomogeneous strain field with intermittence under progressive deformation, but the trained sample result showed strain field homogeneity throughout the specimen surface. The as-received sample is a mainly polycrystalline-like state composed of the domain structure. The sample contains many domain boundaries and large domain structures in the body. Its structure would cause large local strain band nucleation with intermittence. However, the trained one is an ideal single-crystalline state with a transformation preferential orientation of variants after almost all domain boundary and large domain structures vanish during thermo-mechanical training. As a result, macroscopic homogeneous deformation occurs on the trained sample surface during deformation.
[Effect sizes, statistical power and sample sizes in "the Japanese Journal of Psychology"].
Suzukawa, Yumi; Toyoda, Hideki
2012-04-01
This study analyzed the statistical power of research studies published in the "Japanese Journal of Psychology" in 2008 and 2009. Sample effect sizes and sample statistical powers were calculated for each statistical test and analyzed with respect to the analytical methods and the fields of the studies. The results show that in the fields like perception, cognition or learning, the effect sizes were relatively large, although the sample sizes were small. At the same time, because of the small sample sizes, some meaningful effects could not be detected. In the other fields, because of the large sample sizes, meaningless effects could be detected. This implies that researchers who could not get large enough effect sizes would use larger samples to obtain significant results.
Sampling large random knots in a confined space
NASA Astrophysics Data System (ADS)
Arsuaga, J.; Blackstone, T.; Diao, Y.; Hinson, K.; Karadayi, E.; Saito, M.
2007-09-01
DNA knots formed under extreme conditions of condensation, as in bacteriophage P4, are difficult to analyze experimentally and theoretically. In this paper, we propose to use the uniform random polygon model as a supplementary method to the existing methods for generating random knots in confinement. The uniform random polygon model allows us to sample knots with large crossing numbers and also to generate large diagrammatically prime knot diagrams. We show numerically that uniform random polygons sample knots with large minimum crossing numbers and certain complicated knot invariants (as those observed experimentally). We do this in terms of the knot determinants or colorings. Our numerical results suggest that the average determinant of a uniform random polygon of n vertices grows faster than O(e^{n^2}) . We also investigate the complexity of prime knot diagrams. We show rigorously that the probability that a randomly selected 2D uniform random polygon of n vertices is almost diagrammatically prime goes to 1 as n goes to infinity. Furthermore, the average number of crossings in such a diagram is at the order of O(n2). Therefore, the two-dimensional uniform random polygons offer an effective way in sampling large (prime) knots, which can be useful in various applications.
Whale sharks target dense prey patches of sergestid shrimp off Tanzania
Rohner, Christoph A.; Armstrong, Amelia J.; Pierce, Simon J.; Prebble, Clare E. M.; Cagua, E. Fernando; Cochran, Jesse E. M.; Berumen, Michael L.; Richardson, Anthony J.
2015-01-01
Large planktivores require high-density prey patches to make feeding energetically viable. This is a major challenge for species living in tropical and subtropical seas, such as whale sharks Rhincodon typus. Here, we characterize zooplankton biomass, size structure and taxonomic composition from whale shark feeding events and background samples at Mafia Island, Tanzania. The majority of whale sharks were feeding (73%, 380 of 524 observations), with the most common behaviour being active surface feeding (87%). We used 20 samples collected from immediately adjacent to feeding sharks and an additional 202 background samples for comparison to show that plankton biomass was ∼10 times higher in patches where whale sharks were feeding (25 vs. 2.6 mg m−3). Taxonomic analyses of samples showed that the large sergestid Lucifer hanseni (∼10 mm) dominated while sharks were feeding, accounting for ∼50% of identified items, while copepods (<2 mm) dominated background samples. The size structure was skewed towards larger animals representative of L.hanseni in feeding samples. Thus, whale sharks at Mafia Island target patches of dense, large, zooplankton dominated by sergestids. Large planktivores, such as whale sharks, which generally inhabit warm oligotrophic waters, aggregate in areas where they can feed on dense prey to obtain sufficient energy. PMID:25814777
Sampling errors in the estimation of empirical orthogonal functions. [for climatology studies
NASA Technical Reports Server (NTRS)
North, G. R.; Bell, T. L.; Cahalan, R. F.; Moeng, F. J.
1982-01-01
Empirical Orthogonal Functions (EOF's), eigenvectors of the spatial cross-covariance matrix of a meteorological field, are reviewed with special attention given to the necessary weighting factors for gridded data and the sampling errors incurred when too small a sample is available. The geographical shape of an EOF shows large intersample variability when its associated eigenvalue is 'close' to a neighboring one. A rule of thumb indicating when an EOF is likely to be subject to large sampling fluctuations is presented. An explicit example, based on the statistics of the 500 mb geopotential height field, displays large intersample variability in the EOF's for sample sizes of a few hundred independent realizations, a size seldom exceeded by meteorological data sets.
Otero, Jorge; Guerrero, Hector; Gonzalez, Laura; Puig-Vidal, Manel
2012-01-01
The time required to image large samples is an important limiting factor in SPM-based systems. In multiprobe setups, especially when working with biological samples, this drawback can make impossible to conduct certain experiments. In this work, we present a feedfordward controller based on bang-bang and adaptive controls. The controls are based in the difference between the maximum speeds that can be used for imaging depending on the flatness of the sample zone. Topographic images of Escherichia coli bacteria samples were acquired using the implemented controllers. Results show that to go faster in the flat zones, rather than using a constant scanning speed for the whole image, speeds up the imaging process of large samples by up to a 4× factor. PMID:22368491
Intratumoral histologic heterogeneity of gliomas. A quantitative study.
Paulus, W; Peiffer, J
1989-07-15
Quantitative data for intratumoral histologic heterogeneity were obtained by investigating ten small and ten large punched samples from 50 unembedded supratentorial gliomas. The 1000 samples were diagnosed according to the World Health Organization (WHO) classification and six histopathologic features associated with malignancy were evaluated (cellular density, nuclear pleomorphism, necroses, histologic architecture, vessels, and mitoses), each with defined gradations. The slides were read independently by two observers. The initially high interobserver variability (grade, 22.2%; type, 10.3%; and tumor presence/absence, 7.1%) was for the most part due to intermediate grades and types and was reduced to 1.7% after mutual review. Small samples showed lower mean grade than large samples and more often absence of tumor (7.6% versus 2.4%). Of all gliomas, 48% showed differently typed samples, 82% differently graded samples, and 62% benign and malignant grades. Intratumoral heterogeneity was higher for the necroses than for the other histopathologic features. Our results underscore the importance of extensive tissue sampling.
Multi-level structure in the large scale distribution of optically luminous galaxies
NASA Astrophysics Data System (ADS)
Deng, Xin-fa; Deng, Zu-gan; Liu, Yong-zhen
1992-04-01
Fractal dimensions in the large scale distribution of galaxies have been calculated with the method given by Wen et al. [1] Samples are taken from CfA redshift survey in northern and southern galactic [2] hemisphere in our analysis respectively. Results from these two regions are compared with each other. There are significant differences between the distributions in these two regions. However, our analyses do show some common features of the distributions in these two regions. All subsamples show multi-level fractal character distinctly. Combining it with the results from analyses of samples given by IRAS galaxies and results from samples given by redshift survey in pencil-beam fields, [3,4] we suggest that multi-level fractal structure is most likely to be a general and important character in the large scale distribution of galaxies. The possible implications of this character are discussed.
A novel computational approach towards the certification of large-scale boson sampling
NASA Astrophysics Data System (ADS)
Huh, Joonsuk
Recent proposals of boson sampling and the corresponding experiments exhibit the possible disproof of extended Church-Turning Thesis. Furthermore, the application of boson sampling to molecular computation has been suggested theoretically. Till now, however, only small-scale experiments with a few photons have been successfully performed. The boson sampling experiments of 20-30 photons are expected to reveal the computational superiority of the quantum device. A novel theoretical proposal for the large-scale boson sampling using microwave photons is highly promising due to the deterministic photon sources and the scalability. Therefore, the certification protocol of large-scale boson sampling experiments should be presented to complete the exciting story. We propose, in this presentation, a computational protocol towards the certification of large-scale boson sampling. The correlations of paired photon modes and the time-dependent characteristic functional with its Fourier component can show the fingerprint of large-scale boson sampling. This work was supported by Basic Science Research Program through the National Research Foundation of Korea(NRF) funded by the Ministry of Education, Science and Technology(NRF-2015R1A6A3A04059773), the ICT R&D program of MSIP/IITP [2015-019, Fundamental Research Toward Secure Quantum Communication] and Mueunjae Institute for Chemistry (MIC) postdoctoral fellowship.
Measuring discharge with ADCPs: Inferences from synthetic velocity profiles
Rehmann, C.R.; Mueller, D.S.; Oberg, K.A.
2009-01-01
Synthetic velocity profiles are used to determine guidelines for sampling discharge with acoustic Doppler current profilers (ADCPs). The analysis allows the effects of instrument characteristics, sampling parameters, and properties of the flow to be studied systematically. For mid-section measurements, the averaging time required for a single profile measurement always exceeded the 40 s usually recommended for velocity measurements, and it increased with increasing sample interval and increasing time scale of the large eddies. Similarly, simulations of transect measurements show that discharge error decreases as the number of large eddies sampled increases. The simulations allow sampling criteria that account for the physics of the flow to be developed. ?? 2009 ASCE.
Ultra-broadband ptychography with self-consistent coherence estimation from a high harmonic source
NASA Astrophysics Data System (ADS)
Odstrčil, M.; Baksh, P.; Kim, H.; Boden, S. A.; Brocklesby, W. S.; Frey, J. G.
2015-09-01
With the aim of improving imaging using table-top extreme ultraviolet sources, we demonstrate coherent diffraction imaging (CDI) with relative bandwidth of 20%. The coherence properties of the illumination probe are identified using the same imaging setup. The presented methods allows for the use of fewer monochromating optics, obtaining higher flux at the sample and thus reach higher resolution or shorter exposure time. This is important in the case of ptychography when a large number of diffraction patterns need to be collected. Our microscopy setup was tested on a reconstruction of an extended sample to show the quality of the reconstruction. We show that high harmonic generation based EUV tabletop microscope can provide reconstruction of samples with a large field of view and high resolution without additional prior knowledge about the sample or illumination.
Importance sampling large deviations in nonequilibrium steady states. I.
Ray, Ushnish; Chan, Garnet Kin-Lic; Limmer, David T
2018-03-28
Large deviation functions contain information on the stability and response of systems driven into nonequilibrium steady states and in such a way are similar to free energies for systems at equilibrium. As with equilibrium free energies, evaluating large deviation functions numerically for all but the simplest systems is difficult because by construction they depend on exponentially rare events. In this first paper of a series, we evaluate different trajectory-based sampling methods capable of computing large deviation functions of time integrated observables within nonequilibrium steady states. We illustrate some convergence criteria and best practices using a number of different models, including a biased Brownian walker, a driven lattice gas, and a model of self-assembly. We show how two popular methods for sampling trajectory ensembles, transition path sampling and diffusion Monte Carlo, suffer from exponentially diverging correlations in trajectory space as a function of the bias parameter when estimating large deviation functions. Improving the efficiencies of these algorithms requires introducing guiding functions for the trajectories.
Importance sampling large deviations in nonequilibrium steady states. I
NASA Astrophysics Data System (ADS)
Ray, Ushnish; Chan, Garnet Kin-Lic; Limmer, David T.
2018-03-01
Large deviation functions contain information on the stability and response of systems driven into nonequilibrium steady states and in such a way are similar to free energies for systems at equilibrium. As with equilibrium free energies, evaluating large deviation functions numerically for all but the simplest systems is difficult because by construction they depend on exponentially rare events. In this first paper of a series, we evaluate different trajectory-based sampling methods capable of computing large deviation functions of time integrated observables within nonequilibrium steady states. We illustrate some convergence criteria and best practices using a number of different models, including a biased Brownian walker, a driven lattice gas, and a model of self-assembly. We show how two popular methods for sampling trajectory ensembles, transition path sampling and diffusion Monte Carlo, suffer from exponentially diverging correlations in trajectory space as a function of the bias parameter when estimating large deviation functions. Improving the efficiencies of these algorithms requires introducing guiding functions for the trajectories.
NASA Astrophysics Data System (ADS)
Curry, A. C.; Caricchi, L.; Lipman, P. W.
2017-12-01
A primary goal of volcanology is to understand the frequency and magnitude of large, explosive volcanic eruptions to mitigate their impact on society. Recent studies show that the average magma flux and the time between magma injections into a given magmatic-volcanic system fundamentally control the frequency and magnitude of volcanic eruptions, yet these parameters are unknown for many volcanic regions on Earth. We focus on major and trace element chemistry of individual phases and whole-rock samples, initial zircon ID-TIMS analyses, and zircon SIMS oxygen isotope analyses of four caldera-forming ignimbrites from the San Juan caldera cluster in the Southern Rocky Mountain volcanic field, Colorado, to determine the physical and chemical processes leading to large eruptions. We collected outflow samples along stratigraphy of the three caldera-forming ignimbrites of the San Luis caldera complex: the Rat Creek Tuff ( 150 km3), Cebolla Creek Tuff ( 250 km3), and Nelson Mountain Tuff (>500 km3); and we collected samples of both outflow and intracaldera facies of the Snowshoe Mountain Tuff (>500 km3), which formed the Creede caldera. Single-crystal sanidine 40Ar/39Ar ages show that these large eruptions occurred in rapid succession between 26.91 ± 0.02 Ma (Rat Creek Tuff) and 26.87 ± 0.02 Ma (Snowshoe Mountain Tuff), providing an opportunity to investigate the temporal evolution of magmatic systems feeding large, explosive volcanic eruptions. Major and trace element analyses show that the first and last eruption of the San Luis caldera complex (Rat Creek Tuff and Nelson Mountain Tuff) are rhyolitic to dacitic ignimbrites, whereas the Cebolla Creek Tuff and Snowshoe Mountain Tuff are crystal-rich, dacitic ignimbrites. Trace elements show enrichment in light rare-earth elements (LREEs) over heavy rare-earth elements (HREEs), and whereas the trace element patterns are similar for each caldera cycle, trace element values for each ignimbrite show variability in HREE concentrations. This variability indicates that these large eruptions sampled a magmatic system with some degree of internal heterogeneity. These results have implications for the chemical and physical processes, such as magmatic flux and injection periodicity, leading to the formation of large magmatic systems prior to large, explosive eruptions.
NASA Technical Reports Server (NTRS)
Fahey, A. J.; Goswami, J. N.; Mckeegan, K. D.; Zinner, E. K.
1987-01-01
Ion probe measurements of the oxygen isotopic composition of seven hibonite samples from the CM chondrites Murchison and Murray are reported. All samples show large O-16 excesses relative to terrestrial oxygen. The data for all samples plot along the carbonaceous chondrite O-16-rich mixing line and show no evidence for isotopic mass fractionation effects characteristic of FUN inclusions. These hibonites have the largest Ca-48 and Ti-50 isotopic anomalies found to date; thus there is no intrinsic relationship between anomalies of a nucleosynthetic origin and isotopic mass fractionation effects. The large O-16 excess seen in the sample with the largest measured Ca-48 and Ti-50 depletions argues against a late injection of exotic material from a nearby supernova as a source for the isotopic anomalies.
Tracing the trajectory of skill learning with a very large sample of online game players.
Stafford, Tom; Dewar, Michael
2014-02-01
In the present study, we analyzed data from a very large sample (N = 854,064) of players of an online game involving rapid perception, decision making, and motor responding. Use of game data allowed us to connect, for the first time, rich details of training history with measures of performance from participants engaged for a sustained amount of time in effortful practice. We showed that lawful relations exist between practice amount and subsequent performance, and between practice spacing and subsequent performance. Our methodology allowed an in situ confirmation of results long established in the experimental literature on skill acquisition. Additionally, we showed that greater initial variation in performance is linked to higher subsequent performance, a result we link to the exploration/exploitation trade-off from the computational framework of reinforcement learning. We discuss the benefits and opportunities of behavioral data sets with very large sample sizes and suggest that this approach could be particularly fecund for studies of skill acquisition.
Compression Strength of Sulfur Concrete Subjected to Extreme Cold
NASA Technical Reports Server (NTRS)
Grugel, Richard N.
2008-01-01
Sulfur concrete cubes were cycled between liquid nitrogen and room temperature to simulate extreme exposure conditions. Subsequent compression testing showed the strength of cycled samples to be roughly five times less than those non-cycled. Fracture surface examination showed de-bonding of the sulfur from the aggregate material in the cycled samples but not in those non-cycled. The large discrepancy found, between the samples is attributed to the relative thermal properties of the materials constituting the concrete.
Comparative AMS radiocarbon dating of pretreated versus non-pretreated tropical wood samples
NASA Astrophysics Data System (ADS)
Patrut, Adrian; von Reden, Karl F.; Lowy, Daniel A.; Mayne, Diana H.; Elder, Kathryn E.; Roberts, Mark L.; McNichol, Ann P.
2010-04-01
Several wood samples collected from Dorslandboom, a large iconic African baobab ( Adansonia digitata L.) from Namibia, were investigated by AMS radiocarbon dating subsequent to pretreatment and, alternatively, without pretreatment. The comparative statistical evaluation of results showed that there were no significant differences between fraction modern values and radiocarbon dates of the samples analyzed after pretreatment and without pretreatment, respectively. The radiocarbon date of the oldest sample was 993 ± 20 BP. Dating results also revealed that Dorslandboom is a multi-generation tree, with several stems showing different ages.
A Computational Approach to Qualitative Analysis in Large Textual Datasets
Evans, Michael S.
2014-01-01
In this paper I introduce computational techniques to extend qualitative analysis into the study of large textual datasets. I demonstrate these techniques by using probabilistic topic modeling to analyze a broad sample of 14,952 documents published in major American newspapers from 1980 through 2012. I show how computational data mining techniques can identify and evaluate the significance of qualitatively distinct subjects of discussion across a wide range of public discourse. I also show how examining large textual datasets with computational methods can overcome methodological limitations of conventional qualitative methods, such as how to measure the impact of particular cases on broader discourse, how to validate substantive inferences from small samples of textual data, and how to determine if identified cases are part of a consistent temporal pattern. PMID:24498398
How well do we know the infaunal biomass of the continental shelf?
NASA Astrophysics Data System (ADS)
Powell, Eric N.; Mann, Roger
2016-03-01
Benthic infauna comprise a wide range of taxa of varying abundances and sizes, but large infaunal taxa are infrequently recorded in community surveys of the shelf benthos. These larger, but numerically rare, species may contribute disproportionately to biomass, however. We examine the degree to which standard benthic sampling gear and survey design provide an adequate estimate of the biomass of large infauna using the Atlantic surfclam, Spisula solidissima, on the continental shelf off the northeastern coast of the United States as a test organism. We develop a numerical model that simulates standard survey designs, gear types, and sampling densities to evaluate the effectiveness of vertically-dropped sampling gear (e.g., boxcores, grabs) for estimating density of large species. Simulations of randomly distributed clams at a density of 0.5-1 m-2 within an 0.25-km2 domain show that lower sampling densities (1-5 samples per sampling event) resulted in highly inaccurate estimates of clam density with the presence of clams detected in less than 25% of the sampling events. In all cases in which patchiness was present in the simulated clam population, surveys were prone to very large errors (survey availability events) unless a dense (e.g., 100-sample) sampling protocol was imposed. Thus, commercial quantities of surfclams could easily go completely undetected by any standard benthic community survey protocol using vertically-dropped gear. Without recourse to modern high-volume sampling gear capable of sampling many meters at a swath, such as hydraulic dredges, biomass of the continental shelf will be grievously underestimated if large infauna are present even at moderate densities.
Anomalies in Trace Metal and Rare-Earth Loads below a Waste-Water Treatment Plant
NASA Astrophysics Data System (ADS)
Antweiler, R.; Writer, J. H.; Murphy, S.
2013-12-01
The changes in chemical loads were examined for 54 inorganic elements and compounds in a 5.4-km reach of Boulder Creek, Colorado downstream of a waste water treatment plant (WWTP) outfall. Elements were partitioned into three categories: those showing a decrease in loading downstream, those showing an increase, and those which were conservative, at least over the length of the study reach. Dissolved loads which declined - generally indicative of in-stream loss via precipitation or sorption - were typically rapid (occurring largely before the first sampling site, 2.3 km downstream); elements showing this behavior were Bi, Cr, Cs, Ga, Ge, Hg, Se and Sn. These results were as expected before the experiment was performed. However, a large group (28 elements, including all the rare-earth elements, REE, except Gd) exhibited dissolved load increases indicating in-stream gains. These gains may be due to particulate matter dissolving or disaggregating, or that desorption is occurring below the WWTP. As with the in-stream loss group, the processes tended to be rapid, typically occurring before the first sampling site. Whole-water samples collected concurrently also had a large group of elements which showed an increase in load downstream of the WWTP. Among these were most of the group which had increases in the dissolved load, including all the REE (except Gd). Because whole-water samples include both dissolved and suspended particulates within them, increases in loads cannot be accounted for by invoking desorption or disaggregation mechanisms; thus, the only source for these increases is from the bed load of the stream. Further, the difference between the whole-water and dissolved loads is a measure of the particulate load, and calculations show that not only did the dissolved and whole-water loads increase, but so did the particulate loads. This implies that at the time of sampling the bed sediment was supplying a significant contribution to the suspended load. In general, it seems untenable as a hypothesis to suppose that the stream bed material can permanently supply the source of the in-stream load increases of a large group of inorganic elements. We propose that the anomalous increase in loads was more a function of the time of sampling (both diurnally and seasonally) and that sampling at different times of day or different seasons during the year would give contradictory results to those seen here. If this is so, inorganic loading studies must include multiple sampling both over the course of a day and during different seasons and flow regimes.
Chemical Characterization of an Envelope B/D Sample from Hanford Tank 241-AZ-102
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hay, M.S.
2000-08-23
A sample from Hanford waste tank 241-AZ-102 was received at the Savannah River Technology Center (SRTC) and chemically characterized. The sample containing supernate and a small amount of sludge solids was analyzed as-received. The filtered supernatant liquid, the total dried solids of the sample, and the washed insoluble solids obtained from filtration of the sample were analyzed. A mass balance calculation of the three fractions of the sample analyzed indicate the analytical results appear relatively self-consistent for major components of the sample. However, some inconsistency was observed between results were more than one method of determination was employed and formore » species present in low concentrations. The actinides isotopes, plutonium, americium, and curium, present analytical challenges due to the low concentration of these species and the potential for introduction of small amounts of contamination during sampling handling resulting in large uncertainties. A direct comparison to previous analyses of material from tank 241-AZ-102 showed good agreement with the filtered supernatant liquid. However, the comparison of solids data showed poor agreement. The poor agreement shown between the current results for the solids samples and previous analyses most likely results from the uncertainties associated with obtaining small solids samples from a large non-homogenized waste tank.« less
Chemical composition of snow in the northern Sierra Nevada and other areas
Feth, John Henry Frederick; Rogers, S.M.; Roberson, Charles Elmer
1964-01-01
Melting snow provides a large part of the water used throughout the western conterminous United States for agriculture, industry, and domestic supply. It is an active agent in chemical weathering, supplies moisture for forest growth, and sustains fish and wildlife. Despite its importance, virtually nothing has been known of the chemical character of snow in the western mountains until the present study.Analysis of more than 100 samples, most from the northern Sierra Nevada, but some from Utah, Denver, Colo., and scattered points, shows that melted snow is a dilute solution containing measurable amounts of some or all of the inorganic constituents commonly found in natural water. There are significant regional differences in chemical composition; the progressive increase in calcium content with increasing distance eastward from the west slope of the Sierra Nevada is the most pronounced. The chemical character of individual snowfalls is variable. Some show predominant influence of oceanic salt; others show strong effects of mineralization from continental sources, probably largely dust. Silica and boron were found in about half the samples analyzed for these constituents; precipitation is seldom analyzed for these substances.Results of the chemical analyses for major constituents in snow samples are summarized in the following table. The median and mean values for individual constituents are derived from 41-78 samples of Sierra Nevada snow, 6-18 samples of Utah snow, and 6-17 samples of Denver, Colo., snow.The sodium, chloride, and perhaps boron found in snow are probably incorporated in moisture-laden air masses as they move over the Pacific Ocean. Silica, although abundant in the silicate-mineral nuclei found in some snowflakes, may be derived in soluble form largely from dust. Calcium, magnesium, and some bicarbonate are probably added by dust of continental origin. The sources of the other constituents remain unknown.When snowmelt comes in contact with the lithosphere, the earlier diversity of chemical type largely disappears. The melt water rapidly increases its content of dissolved solids and becomes calcium magnesium bicarbonate in type. Silica, whose concentration increases more than tenfold, shows the largest gain; calcium and bicarbonate contents also increase markedly. Most of the additional mineral matter is from soft and weathered rock; bicarbonate, however, is largely from the soil atmosphere.Investigators, some reporting as much as a century ago, concentrated attention largely on nitrogen compounds and seldom reported other constituents except chloride and sulfate. The Northern European precipitation-sampling network provides the most comprehensive collection of data on precipitation chemistry, but it does not segregate snow from other forms of precipitation. The present study establishes with confidence the chemical character of snow in the Sierra Nevada, and suggests that the dissolved-solids content of precipitation increases with increasing distance inland from the Pacific Coast.
On spatial coalescents with multiple mergers in two dimensions.
Heuer, Benjamin; Sturm, Anja
2013-08-01
We consider the genealogy of a sample of individuals taken from a spatially structured population when the variance of the offspring distribution is relatively large. The space is structured into discrete sites of a graph G. If the population size at each site is large, spatial coalescents with multiple mergers, so called spatial Λ-coalescents, for which ancestral lines migrate in space and coalesce according to some Λ-coalescent mechanism, are shown to be appropriate approximations to the genealogy of a sample of individuals. We then consider as the graph G the two dimensional torus with side length 2L+1 and show that as L tends to infinity, and time is rescaled appropriately, the partition structure of spatial Λ-coalescents of individuals sampled far enough apart converges to the partition structure of a non-spatial Kingman coalescent. From a biological point of view this means that in certain circumstances both the spatial structure as well as larger variances of the underlying offspring distribution are harder to detect from the sample. However, supplemental simulations show that for moderately large L the different structure is still evident. Copyright © 2012 Elsevier Inc. All rights reserved.
Coalescence computations for large samples drawn from populations of time-varying sizes
Polanski, Andrzej; Szczesna, Agnieszka; Garbulowski, Mateusz; Kimmel, Marek
2017-01-01
We present new results concerning probability distributions of times in the coalescence tree and expected allele frequencies for coalescent with large sample size. The obtained results are based on computational methodologies, which involve combining coalescence time scale changes with techniques of integral transformations and using analytical formulae for infinite products. We show applications of the proposed methodologies for computing probability distributions of times in the coalescence tree and their limits, for evaluation of accuracy of approximate expressions for times in the coalescence tree and expected allele frequencies, and for analysis of large human mitochondrial DNA dataset. PMID:28170404
Su, Xiaoquan; Xu, Jian; Ning, Kang
2012-10-01
It has long been intriguing scientists to effectively compare different microbial communities (also referred as 'metagenomic samples' here) in a large scale: given a set of unknown samples, find similar metagenomic samples from a large repository and examine how similar these samples are. With the current metagenomic samples accumulated, it is possible to build a database of metagenomic samples of interests. Any metagenomic samples could then be searched against this database to find the most similar metagenomic sample(s). However, on one hand, current databases with a large number of metagenomic samples mostly serve as data repositories that offer few functionalities for analysis; and on the other hand, methods to measure the similarity of metagenomic data work well only for small set of samples by pairwise comparison. It is not yet clear, how to efficiently search for metagenomic samples against a large metagenomic database. In this study, we have proposed a novel method, Meta-Storms, that could systematically and efficiently organize and search metagenomic data. It includes the following components: (i) creating a database of metagenomic samples based on their taxonomical annotations, (ii) efficient indexing of samples in the database based on a hierarchical taxonomy indexing strategy, (iii) searching for a metagenomic sample against the database by a fast scoring function based on quantitative phylogeny and (iv) managing database by index export, index import, data insertion, data deletion and database merging. We have collected more than 1300 metagenomic data from the public domain and in-house facilities, and tested the Meta-Storms method on these datasets. Our experimental results show that Meta-Storms is capable of database creation and effective searching for a large number of metagenomic samples, and it could achieve similar accuracies compared with the current popular significance testing-based methods. Meta-Storms method would serve as a suitable database management and search system to quickly identify similar metagenomic samples from a large pool of samples. ningkang@qibebt.ac.cn Supplementary data are available at Bioinformatics online.
Flexible sampling large-scale social networks by self-adjustable random walk
NASA Astrophysics Data System (ADS)
Xu, Xiao-Ke; Zhu, Jonathan J. H.
2016-12-01
Online social networks (OSNs) have become an increasingly attractive gold mine for academic and commercial researchers. However, research on OSNs faces a number of difficult challenges. One bottleneck lies in the massive quantity and often unavailability of OSN population data. Sampling perhaps becomes the only feasible solution to the problems. How to draw samples that can represent the underlying OSNs has remained a formidable task because of a number of conceptual and methodological reasons. Especially, most of the empirically-driven studies on network sampling are confined to simulated data or sub-graph data, which are fundamentally different from real and complete-graph OSNs. In the current study, we propose a flexible sampling method, called Self-Adjustable Random Walk (SARW), and test it against with the population data of a real large-scale OSN. We evaluate the strengths of the sampling method in comparison with four prevailing methods, including uniform, breadth-first search (BFS), random walk (RW), and revised RW (i.e., MHRW) sampling. We try to mix both induced-edge and external-edge information of sampled nodes together in the same sampling process. Our results show that the SARW sampling method has been able to generate unbiased samples of OSNs with maximal precision and minimal cost. The study is helpful for the practice of OSN research by providing a highly needed sampling tools, for the methodological development of large-scale network sampling by comparative evaluations of existing sampling methods, and for the theoretical understanding of human networks by highlighting discrepancies and contradictions between existing knowledge/assumptions of large-scale real OSN data.
Kirk, Michelle R.; Jonker, Arjan; McCulloch, Alan
2015-01-01
Analysis of rumen microbial community structure based on small-subunit rRNA marker genes in metagenomic DNA samples provides important insights into the dominant taxa present in the rumen and allows assessment of community differences between individuals or in response to treatments applied to ruminants. However, natural animal-to-animal variation in rumen microbial community composition can limit the power of a study considerably, especially when only subtle differences are expected between treatment groups. Thus, trials with large numbers of animals may be necessary to overcome this variation. Because ruminants pass large amounts of rumen material to their oral cavities when they chew their cud, oral samples may contain good representations of the rumen microbiota and be useful in lieu of rumen samples to study rumen microbial communities. We compared bacterial, archaeal, and eukaryotic community structures in DNAs extracted from buccal swabs to those in DNAs from samples collected directly from the rumen by use of a stomach tube for sheep on four different diets. After bioinformatic depletion of potential oral taxa from libraries of samples collected via buccal swabs, bacterial communities showed significant clustering by diet (R = 0.37; analysis of similarity [ANOSIM]) rather than by sampling method (R = 0.07). Archaeal, ciliate protozoal, and anaerobic fungal communities also showed significant clustering by diet rather than by sampling method, even without adjustment for potentially orally associated microorganisms. These findings indicate that buccal swabs may in future allow quick and noninvasive sampling for analysis of rumen microbial communities in large numbers of ruminants. PMID:26276109
Shen, You-xin; Liu, Wei-li; Li, Yu-hui; Guan, Hui-lin
2014-01-01
A large number of small-sized samples invariably shows that woody species are absent from forest soil seed banks, leading to a large discrepancy with the seedling bank on the forest floor. We ask: 1) Does this conventional sampling strategy limit the detection of seeds of woody species? 2) Are large sample areas and sample sizes needed for higher recovery of seeds of woody species? We collected 100 samples that were 10 cm (length) × 10 cm (width) × 10 cm (depth), referred to as larger number of small-sized samples (LNSS) in a 1 ha forest plot, and placed them to germinate in a greenhouse, and collected 30 samples that were 1 m × 1 m × 10 cm, referred to as small number of large-sized samples (SNLS) and placed them (10 each) in a nearby secondary forest, shrub land and grass land. Only 15.7% of woody plant species of the forest stand were detected by the 100 LNSS, contrasting with 22.9%, 37.3% and 20.5% woody plant species being detected by SNLS in the secondary forest, shrub land and grassland, respectively. The increased number of species vs. sampled areas confirmed power-law relationships for forest stand, the LNSS and SNLS at all three recipient sites. Our results, although based on one forest, indicate that conventional LNSS did not yield a high percentage of detection for woody species, but SNLS strategy yielded a higher percentage of detection for woody species in the seed bank if samples were exposed to a better field germination environment. A 4 m2 minimum sample area derived from power equations is larger than the sampled area in most studies in the literature. Increased sample size also is needed to obtain an increased sample area if the number of samples is to remain relatively low.
Retrieving cosmological signal using cosmic flows
NASA Astrophysics Data System (ADS)
Bouillot, V.; Alimi, J.-M.
2011-12-01
To understand the origin of the anomalously high bulk flow at large scales, we use very large simulations in various cosmological models. To disentangle between cosmological and environmental effects, we select samples with bulk flow profiles similar to the observational data Watkins et al. (2009) which exhibit a maximum in the bulk flow at 53 h^{-1} Mpc. The estimation of the cosmological parameters Ω_M and σ_8, done on those samples, is correct from the rms mass fluctuation whereas this estimation gives completely false values when done on bulk flow measurements, hence showing a dependance of velocity fields on larger scales. By drawing a clear link between velocity fields at 53 h^{-1} Mpc and asymmetric patterns of the density field at 85 h^{-1} Mpc, we show that the bulk flow can depend largely on the environment. The retrieving of the cosmological signal is achieved by studying the convergence of the bulk flow towards the linear prediction at very large scale (˜ 150 h^{-1} Mpc).
A magneto-resistance and magnetisation study of TaAs2 semimetal
NASA Astrophysics Data System (ADS)
Harimohan, V.; Bharathi, A.; Rajaraman, R.; Sundar, C. S.
2018-04-01
Here we report on the magneto-transport and magnetization studies on single crystalline samples of TaAs2. The resistivity versus temperature of the single crystalline sample shows a metallic behavior with a large residual resistivity ratio. The TaAs2 crystal shows large magneto resistance at low temperature, reaching 91000% at 2.5K in a field of 15 T and the resistivity versus temperature shows an upturn at low temperature, when measured with increase in magnetic field. Resistivity and magnetization measurements as a function of magnetic field show characteristic Shubnikov de Haas and de Hass van Alphen oscillations, displaying anisotropy with respect to the crystalline direction. The effective mass and Dingle temperature were estimated from the analysis of the oscillation amplitude as a function of temperature and magnetic field. Negative magneto-resistance was not observed with current parallel to the magnetic field direction, suggesting that TaAs2 is not an archetypical Weyl metal.
Razaq, Aamir; Mihranyan, Albert; Welch, Ken; Nyholm, Leif; Strømme, Maria
2009-01-15
The electrochemically controlled anion absorption properties of a novel large surface area composite paper material composed of polypyrrole (PPy) and cellulose derived from Cladophora sp. algae, synthesized with two oxidizing agents, iron(III) chloride and phosphomolybdic acid (PMo), were analyzed in four different electrolytes containing anions (i.e., chloride, aspartate, glutamate, and p-toluenesulfonate) of varying size.The composites were characterized with scanning and transmission electron microscopy, N2 gas adsorption,and conductivity measurements. The potential-controlled ion exchange properties of the materials were studied by cyclic voltammetry and chronoamperometry at varying potentials. The surface area and conductivity of the iron(III) chloride synthesized sample were 58.8 m2/g and 0.65 S/cm, respectively, while the corresponding values for the PMo synthesized sample were 31.3 m2/g and 0.12 S/cm. The number of absorbed ions per sample mass was found to be larger for the iron(III) chloride synthesized sample than for the PMo synthesized one in all four electrolytes. Although the largest extraction yields were obtained in the presence of the smallest anion (i.e., chloride) for both samples, the relative degree of extraction for the largest ions (i.e., glutamate and p-toluenesulfonate) was higher for the PMo sample. This clearly shows that it is possible to increase the extraction yield of large anions by carrying out the PPy polymerization in the presence of large anions. The results likewise show that high ion exchange capacities, as well as extraction and desorption rates, can be obtained for large anions with high surface area composites coated with relatively thin layers of PPy.
NASA Astrophysics Data System (ADS)
Eom, Young-Ho; Jo, Hang-Hyun
2015-05-01
Many complex networks in natural and social phenomena have often been characterized by heavy-tailed degree distributions. However, due to rapidly growing size of network data and concerns on privacy issues about using these data, it becomes more difficult to analyze complete data sets. Thus, it is crucial to devise effective and efficient estimation methods for heavy tails of degree distributions in large-scale networks only using local information of a small fraction of sampled nodes. Here we propose a tail-scope method based on local observational bias of the friendship paradox. We show that the tail-scope method outperforms the uniform node sampling for estimating heavy tails of degree distributions, while the opposite tendency is observed in the range of small degrees. In order to take advantages of both sampling methods, we devise the hybrid method that successfully recovers the whole range of degree distributions. Our tail-scope method shows how structural heterogeneities of large-scale complex networks can be used to effectively reveal the network structure only with limited local information.
Domagalski, Joseph L.
1999-01-01
Mercury poses a water-quality problem for California's Sacramento River, a large river with a mean annual discharge of over 650 m3/s. This river discharges into the San Francisco Bay, and numerous fish species of the bay and river contain mercury levels high enough to affect human health if consumed. Two possible sources of mercury are the mercury mines in the Coast Ranges and the gold mines in the Sierra Nevada. Mercury was once mined in the Coast Ranges, west of the Sacramento River, and used to process gold in the Sierra Nevada, east of the river. The mineralogy of the Coast Ranges mercury deposits is mainly cinnabar (HgS), but elemental mercury was used to process gold in the Sierra Nevada. Residual mercury from mineral processing in the Sierra Nevada is mainly in elemental form or in association with oxide particles or organic matter and is biologically available. Recent bed-sediment sampling, at sites below large reservoirs, showed elevated levels of total mercury (median concentration 0.28 ??g/g) in every large river (the Feather, Yuba, Bear, and American rivers) draining the Sierra Nevada gold region. Monthly sampling for mercury in unfiltered water shows relatively low concentrations during the nonrainy season in samples collected throughout the Sacramento River Basin, but significantly higher concentrations following storm-water runoff. Measured concentrations, following storm-water runoff, frequently exceeded the state of California standards for the protection of aquatic life. Results from the first year of a 2-year program of sampling for methyl mercury in unfiltered water showed similar median concentrations (0.1 ng/l) at all sampling locations, but with apparent high seasonal concentrations measured during autumn and winter. Methyl mercury concentrations were not significantly higher in rice field runoff water, even though rice production involves the creation of seasonal wetlands: higher rates of methylation are known to occur in stagnant wetland environments that have high dissolved carbon.Mercury poses a water-quality problem for California's Sacramento River, a large river with a mean annual discharge of over 650 m3/s. This river discharges into the San Francisco Bay, and numerous fish species of the bay and river contain mercury levels high enough to affect human health if consumed. Two possible sources of mercury are the mercury mines in the Coast Ranges and the gold mines in the Sierra Nevada. Mercury was once mined in the Coast Ranges, west of the Sacramento River, and used to process gold in the Sierra Nevada east of the river. The mineralogy of the Coast Ranges mercury deposits is mainly cinnabar (HgS), but elemental mercury was used to process gold in the Sierra Nevada. Residual mercury from mineral processing in the Sierra Nevada is mainly in elemental form or in association with oxide particles or organic matter and is biologically available. Recent bed-sediment sampling, at sites below large reservoirs, showed elevated levels of total mercury (median concentration 0.28 ??g/g) in every large river (the Feather, Yuba, Bear, and American rivers) draining the Sierra Nevada gold region. Monthly sampling for mercury in unfiltered water shows relatively low concentrations during the nonrainy season in samples collected throughout the Sacramento River Basin, but significantly higher concentrations following storm-water runoff. Measured concentrations, following storm-water runoff, frequently exceeded the state of California standards for the protection of aquatic life. Results from the first year of a 2-year program of sampling for methyl mercury in unfiltered water showed similar median concentrations (0.1 ng/l) at all sampling locations, but with apparent high seasonal concentrations measured during autumn and winter. Methyl mercury concentrations were not significantly higher in rice field runoff water, even though rice production involves the creation of seasonal wetlands: higher rates of methylation a
Validity of Bioelectrical Impedance Analysis to Estimation Fat-Free Mass in the Army Cadets.
Langer, Raquel D; Borges, Juliano H; Pascoa, Mauro A; Cirolini, Vagner X; Guerra-Júnior, Gil; Gonçalves, Ezequiel M
2016-03-11
Bioelectrical Impedance Analysis (BIA) is a fast, practical, non-invasive, and frequently used method for fat-free mass (FFM) estimation. The aims of this study were to validate predictive equations of BIA to FFM estimation in Army cadets and to develop and validate a specific BIA equation for this population. A total of 396 males, Brazilian Army cadets, aged 17-24 years were included. The study used eight published predictive BIA equations, a specific equation in FFM estimation, and dual-energy X-ray absorptiometry (DXA) as a reference method. Student's t-test (for paired sample), linear regression analysis, and Bland-Altman method were used to test the validity of the BIA equations. Predictive BIA equations showed significant differences in FFM compared to DXA (p < 0.05) and large limits of agreement by Bland-Altman. Predictive BIA equations explained 68% to 88% of FFM variance. Specific BIA equations showed no significant differences in FFM, compared to DXA values. Published BIA predictive equations showed poor accuracy in this sample. The specific BIA equations, developed in this study, demonstrated validity for this sample, although should be used with caution in samples with a large range of FFM.
NASA Astrophysics Data System (ADS)
Li, Bao-Ping; Zhao, Jian-Xin; Greig, Alan; Collerson, Kenneth D.; Zhuo, Zhen-Xi; Feng, Yue-Xin
2005-11-01
We compare the trace element and Sr isotopic compositions of stoneware bodies made in Yaozhou and Jizhou to characterise these Chinese archaeological ceramics and examine the potential of Sr isotopes in provenance studies. Element concentrations determined by ICP-MS achieve distinct characterisation for Jizhou samples due to their restricted variation, yet had limited success with Yaozhou wares because of their large variability. In contrast, 87Sr/86Sr ratios in Yaozhou samples have a very small variation and are all significantly lower than those of Jizhou samples, which show a large variation and cannot be well characterised with Sr isotopes. Geochemical interpretation reveals that 87Sr/86Sr ratios will have greater potential to characterise ceramics made of low Rb/Sr materials such as kaolin clay, yet will show larger variations in ceramics made of high Rb/Sr materials such as porcelain stone.
Tornes, L.H.; Brigham, M.E.
1994-01-01
A relatively large fraction of stream samples had detectable quantities of 2,4-D, a- and y-HCH, and atrazine. These samples covered time spans of as much as 15 years and were from sites downstream from large drainage basins; however, concentrations were well below US EPA MCLs. One county-level study showed higher 2,4-D concentrations at upstream sites than at the outlet from a small basin. This indicates that downstream sites may fail to show impaired water-quality and the fate of pesticides used in the basin. Following the 1972 ban on DDT, concentrations of DDT in fish samples from the Red River of the North quickly decreased. Fish concentrations of DDE and DDD decreased more slowly. Low levels of DDE and DDD were detected in fish 14 years after the DDT ban.
Egan, R; Philippe, M; Wera, L; Fagnard, J F; Vanderheyden, B; Dennis, A; Shi, Y; Cardwell, D A; Vanderbemden, P
2015-02-01
We report the design and construction of a flux extraction device to measure the DC magnetic moment of large samples (i.e., several cm(3)) at cryogenic temperature. The signal is constructed by integrating the electromotive force generated by two coils wound in series-opposition that move around the sample. We show that an octupole expansion of the magnetic vector potential can be used conveniently to treat near-field effects for this geometrical configuration. The resulting expansion is tested for the case of a large, permanently magnetized, type-II superconducting sample. The dimensions of the sensing coils are determined in such a way that the measurement is influenced by the dipole magnetic moment of the sample and not by moments of higher order, within user-determined upper bounds. The device, which is able to measure magnetic moments in excess of 1 A m(2) (1000 emu), is validated by (i) a direct calibration experiment using a small coil driven by a known current and (ii) by comparison with the results of numerical calculations obtained previously using a flux measurement technique. The sensitivity of the device is demonstrated by the measurement of flux-creep relaxation of the magnetization in a large bulk superconductor sample at liquid nitrogen temperature (77 K).
Kittelmann, Sandra; Kirk, Michelle R; Jonker, Arjan; McCulloch, Alan; Janssen, Peter H
2015-11-01
Analysis of rumen microbial community structure based on small-subunit rRNA marker genes in metagenomic DNA samples provides important insights into the dominant taxa present in the rumen and allows assessment of community differences between individuals or in response to treatments applied to ruminants. However, natural animal-to-animal variation in rumen microbial community composition can limit the power of a study considerably, especially when only subtle differences are expected between treatment groups. Thus, trials with large numbers of animals may be necessary to overcome this variation. Because ruminants pass large amounts of rumen material to their oral cavities when they chew their cud, oral samples may contain good representations of the rumen microbiota and be useful in lieu of rumen samples to study rumen microbial communities. We compared bacterial, archaeal, and eukaryotic community structures in DNAs extracted from buccal swabs to those in DNAs from samples collected directly from the rumen by use of a stomach tube for sheep on four different diets. After bioinformatic depletion of potential oral taxa from libraries of samples collected via buccal swabs, bacterial communities showed significant clustering by diet (R = 0.37; analysis of similarity [ANOSIM]) rather than by sampling method (R = 0.07). Archaeal, ciliate protozoal, and anaerobic fungal communities also showed significant clustering by diet rather than by sampling method, even without adjustment for potentially orally associated microorganisms. These findings indicate that buccal swabs may in future allow quick and noninvasive sampling for analysis of rumen microbial communities in large numbers of ruminants. Copyright © 2015, American Society for Microbiology. All Rights Reserved.
Attention Bias toward Threat in Pediatric Anxiety Disorders
ERIC Educational Resources Information Center
Roy, Amy Krain; Vasa, Roma A.; Bruck, Maggie; Mogg, Karin; Bradley, Brendan P.; Sweeney, Michael; Bergman, R. Lindsey; McClure-Tone, Erin B.; Pine, Daniel S.
2008-01-01
Attention bias towards threat faces is examined for a large sample of anxiety-disordered youths using visual probe task. The results showed that anxious individuals showed a selective bias towards threat due to perturbation in neural mechanisms that control vigilance.
Visual search by chimpanzees (Pan): assessment of controlling relations.
Tomonaga, M
1995-01-01
Three experimentally sophisticated chimpanzees (Pan), Akira, Chloe, and Ai, were trained on visual search performance using a modified multiple-alternative matching-to-sample task in which a sample stimulus was followed by the search display containing one target identical to the sample and several uniform distractors (i.e., negative comparison stimuli were identical to each other). After they acquired this task, they were tested for transfer of visual search performance to trials in which the sample was not followed by the uniform search display (odd-item search). Akira showed positive transfer of visual search performance to odd-item search even when the display size (the number of stimulus items in the search display) was small, whereas Chloe and Ai showed a transfer only when the display size was large. Chloe and Ai used some nonrelational cues such as perceptual isolation of the target among uniform distractors (so-called pop-out). In addition to the odd-item search test, various types of probe trials were presented to clarify the controlling relations in multiple-alternative matching to sample. Akira showed a decrement of accuracy as a function of the display size when the search display was nonuniform (i.e., each "distractor" stimulus was not the same), whereas Chloe and Ai showed perfect performance. Furthermore, when the sample was identical to the uniform distractors in the search display, Chloe and Ai never selected an odd-item target, but Akira selected it when the display size was large. These results indicated that Akira's behavior was controlled mainly by relational cues of target-distractor oddity, whereas an identity relation between the sample and the target strongly controlled the performance of Chloe and Ai. PMID:7714449
Winkelman, D.L.; Van Den Avyle, M.J.
2002-01-01
The objective of this study was to determine dietary overlap between blueback herring and threadfin shad in J. Strom Thrumond Reservoir, South Carolina/Georgia. We also evaluated prey selectivity for each speices and diet differences between two size categories of blueback herring. Diet and zooplankton samples were collected every other month from April 1992 to February 1994. We examined stomachs containing prey from 170 large blueback herring (>140mm), 96 small blueback herring (<140mm), and 109 threadfin shad, and we also examined 45 zooplankton samples. Large blueback herring diets differed significantly from threadfin shad diets on 11 of 12 sampling dates, and small blueback herring diets differed from threadfin shad diets on all sampling dates. In general, blueback herring consumed proportionally more copepods and fewer Bosmina sp. and rotifers than threadfin shad. Large and small blueback herring diets were significantly different on five of eight sampling dates, primarily due to the tendency of small blueback herring to eat proportionally more Bosmina sp. than large blueback herring. Both blueback herring and threadfin shad fed selectively during some periods of the year. Diet differences between the species may contribute to their coexistence; however, both blueback herring and threadfin shad showed a strong preference for Bosmina sp., increasing the chance that they may negatively influence one another.
2013-01-01
Introduction Small-study effects refer to the fact that trials with limited sample sizes are more likely to report larger beneficial effects than large trials. However, this has never been investigated in critical care medicine. Thus, the present study aimed to examine the presence and extent of small-study effects in critical care medicine. Methods Critical care meta-analyses involving randomized controlled trials and reported mortality as an outcome measure were considered eligible for the study. Component trials were classified as large (≥100 patients per arm) and small (<100 patients per arm) according to their sample sizes. Ratio of odds ratio (ROR) was calculated for each meta-analysis and then RORs were combined using a meta-analytic approach. ROR<1 indicated larger beneficial effect in small trials. Small and large trials were compared in methodological qualities including sequence generating, blinding, allocation concealment, intention to treat and sample size calculation. Results A total of 27 critical care meta-analyses involving 317 trials were included. Of them, five meta-analyses showed statistically significant RORs <1, and other meta-analyses did not reach a statistical significance. Overall, the pooled ROR was 0.60 (95% CI: 0.53 to 0.68); the heterogeneity was moderate with an I2 of 50.3% (chi-squared = 52.30; P = 0.002). Large trials showed significantly better reporting quality than small trials in terms of sequence generating, allocation concealment, blinding, intention to treat, sample size calculation and incomplete follow-up data. Conclusions Small trials are more likely to report larger beneficial effects than large trials in critical care medicine, which could be partly explained by the lower methodological quality in small trials. Caution should be practiced in the interpretation of meta-analyses involving small trials. PMID:23302257
Ukai, Hirohiko; Ohashi, Fumiko; Samoto, Hajime; Fukui, Yoshinari; Okamoto, Satoru; Moriguchi, Jiro; Ezaki, Takafumi; Takada, Shiro; Ikeda, Masayuki
2006-04-01
The present study was initiated to examine the relationship between the workplace concentrations and the estimated highest concentrations in solvent workplaces (SWPs), with special references to enterprise size and types of solvent work. Results of survey conducted in 1010 SWPs in 156 enterprises were taken as a database. Workplace air was sampled at > or = 5 crosses in each SWP following a grid sampling strategy. An additional air was grab-sampled at the site where the worker's exposure was estimated to be highest (estimated highest concentration or EHC). The samples were analyzed for 47 solvents designated by regulation, and solvent concentrations in each sample were summed up by use of additiveness formula. From the workplace concentrations at > or = 5 points, geometric mean and geometric standard deviations were calculated as the representative workplace concentration (RWC) and the indicator of variation in workplace concentration (VWC). Comparison between RWC and EHC in the total of 1010 SWPs showed that EHC was 1.2 (in large enterprises with>300 employees) to 1.7 times [in small to medium (SM) enterprises with < or = 300 employees] greater than RWC. When SWPs were classified into SM enterprises and large enterprises, both RWC and EHC were significantly higher in SM enterprises than in large enterprises. Further comparison by types of solvent work showed that the difference was more marked in printing, surface coating and degreasing/cleaning/wiping SWPs, whereas it was less remarkable in painting SWPs and essentially nil in testing/research laboratories. In conclusion, the present observation as discussed in reference to previous publications suggests that RWC, EHC and the ratio of EHC/WRC varies substantially among different types of solvent work as well as enterprise size, and are typically higher in printing SWPs in SM enterprises.
Accelerating root system phenotyping of seedlings through a computer-assisted processing pipeline.
Dupuy, Lionel X; Wright, Gladys; Thompson, Jacqueline A; Taylor, Anna; Dekeyser, Sebastien; White, Christopher P; Thomas, William T B; Nightingale, Mark; Hammond, John P; Graham, Neil S; Thomas, Catherine L; Broadley, Martin R; White, Philip J
2017-01-01
There are numerous systems and techniques to measure the growth of plant roots. However, phenotyping large numbers of plant roots for breeding and genetic analyses remains challenging. One major difficulty is to achieve high throughput and resolution at a reasonable cost per plant sample. Here we describe a cost-effective root phenotyping pipeline, on which we perform time and accuracy benchmarking to identify bottlenecks in such pipelines and strategies for their acceleration. Our root phenotyping pipeline was assembled with custom software and low cost material and equipment. Results show that sample preparation and handling of samples during screening are the most time consuming task in root phenotyping. Algorithms can be used to speed up the extraction of root traits from image data, but when applied to large numbers of images, there is a trade-off between time of processing the data and errors contained in the database. Scaling-up root phenotyping to large numbers of genotypes will require not only automation of sample preparation and sample handling, but also efficient algorithms for error detection for more reliable replacement of manual interventions.
Dexter, Alex; Race, Alan M; Steven, Rory T; Barnes, Jennifer R; Hulme, Heather; Goodwin, Richard J A; Styles, Iain B; Bunch, Josephine
2017-11-07
Clustering is widely used in MSI to segment anatomical features and differentiate tissue types, but existing approaches are both CPU and memory-intensive, limiting their application to small, single data sets. We propose a new approach that uses a graph-based algorithm with a two-phase sampling method that overcomes this limitation. We demonstrate the algorithm on a range of sample types and show that it can segment anatomical features that are not identified using commonly employed algorithms in MSI, and we validate our results on synthetic MSI data. We show that the algorithm is robust to fluctuations in data quality by successfully clustering data with a designed-in variance using data acquired with varying laser fluence. Finally, we show that this method is capable of generating accurate segmentations of large MSI data sets acquired on the newest generation of MSI instruments and evaluate these results by comparison with histopathology.
OXTR polymorphism in depression and completed suicide-A study on a large population sample.
Wasilewska, Krystyna; Pawlak, Aleksandra; Kostrzewa, Grażyna; Sobczyk-Kopcioł, Agnieszka; Kaczorowska, Aleksandra; Badowski, Jarosław; Brzozowska, Małgorzata; Drygas, Wojciech; Piwoński, Jerzy; Bielecki, Wojciech; Płoski, Rafał
2017-03-01
In the light of contradictory results concerning OXTR polymorphism rs53576 and depression, we decided to verify the potential association between the two on 1) a large, ethnically homogenous sample of 1185 individuals who completed the Beck Depression Inventory (BDI), as well as on 2) a sample of 763 suicide victims. In the population sample, AA males showed significantly lower BDI scores (p=0.005, p cor =0.030). Exploratory analyses suggested that this effect was limited to a subgroup within 0-9 BDI score range (p=0.0007, U-Mann Whitney test), whereas no main effect on depressive symptoms (BDI>9) was found. In the suicide sample no association with rs53576 genotype was present. Exploratory analyses in suicides revealed higher blood alcohol concentration (BAC) among AA than GG/GA males (p=0.014, U-Mann Whitney test). Our results show that the OXTR rs53576 variant modulates the mood in male individuals and may positively correlate with alcohol intake among male suicides, but is not associated with suicide or depression. The study adds to the growing knowledge on rs53576 genotype characteristics. Copyright © 2016 Elsevier Ltd. All rights reserved.
Geochemical and radiological characterization of soils from former radium processing sites
Landa, E.R.
1984-01-01
Soil samples were collected from former radium processing sites in Denver, CO, and East Orange, NJ. Particle-size separations and radiochemical analyses of selected samples showed that while the greatest contents of both 226Ra and U were generally found in the finest (< 45 ??m) fraction, the pattern was not always of progressive increase in radionuclide content with decreasing particle size. Leaching tests on these samples showed a large portion of the 225Ra and U to be soluble in dilute hydrochloric acid. Radon-emanation coefficients measured for bulk samples of contaminated soil were about 20%. Recovery of residual uranium and vanadium, as an adjunct to any remedial action program, appears unlikely due to economic considerations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hay, M. S.
Savannah River National Laboratory analyzed samples from Tank 38H and Tank 43H to support Enrichment Control Program and Corrosion Control Program. The total uranium in the Tank 38H samples ranged from 20.5 to 34.0 mg/L while the Tank 43H samples ranged from 47.6 to 50.6 mg/L. The U-235 percentage ranged from 0.62% to 0.64% over the four samples. The total uranium and percent U-235 results appear consistent with previous Tank 38H and Tank 43H uranium measurements. The Tank 38H plutonium results show a large difference between the surface and sub-surface sample concentrations and a somewhat higher concentration than previous sub-surfacemore » samples. The two Tank 43H samples show similar plutonium concentrations and are within the range of values measured on previous samples. The plutonium results may be biased high due to the presence of plutonium contamination in the blank samples from the cell sample preparations. The four samples analyzed show silicon concentrations ranging from 47.9 to 105 mg/L.« less
Paleobiology and comparative morphology of a late Neandertal sample from El Sidron, Asturias, Spain.
Rosas, Antonio; Martínez-Maza, Cayetana; Bastir, Markus; García-Tabernero, Antonio; Lalueza-Fox, Carles; Huguet, Rosa; Ortiz, José Eugenio; Julià, Ramón; Soler, Vicente; de Torres, Trinidad; Martínez, Enrique; Cañaveras, Juan Carlos; Sánchez-Moral, Sergio; Cuezva, Soledad; Lario, Javier; Santamaría, David; de la Rasilla, Marco; Fortea, Javier
2006-12-19
Fossil evidence from the Iberian Peninsula is essential for understanding Neandertal evolution and history. Since 2000, a new sample approximately 43,000 years old has been systematically recovered at the El Sidrón cave site (Asturias, Spain). Human remains almost exclusively compose the bone assemblage. All of the skeletal parts are preserved, and there is a moderate occurrence of Middle Paleolithic stone tools. A minimum number of eight individuals are represented, and ancient mtDNA has been extracted from dental and osteological remains. Paleobiology of the El Sidrón archaic humans fits the pattern found in other Neandertal samples: a high incidence of dental hypoplasia and interproximal grooves, yet no traumatic lesions are present. Moreover, unambiguous evidence of human-induced modifications has been found on the human remains. Morphologically, the El Sidrón humans show a large number of Neandertal lineage-derived features even though certain traits place the sample at the limits of Neandertal variation. Integrating the El Sidrón human mandibles into the larger Neandertal sample reveals a north-south geographic patterning, with southern Neandertals showing broader faces with increased lower facial heights. The large El Sidrón sample therefore augments the European evolutionary lineage fossil record and supports ecogeographical variability across Neandertal populations.
Stellato, Giuseppina; La Storia, Antonietta; De Filippis, Francesca; Borriello, Giorgia; Villani, Francesco
2016-01-01
ABSTRACT Microbial contamination in food processing plants can play a fundamental role in food quality and safety. The aims of this study were to learn more about the possible influence of the meat processing environment on initial fresh meat contamination and to investigate the differences between small-scale retail distribution (SD) and large-scale retail distribution (LD) facilities. Samples were collected from butcheries (n = 20), including LD (n = 10) and SD (n = 10) facilities, over two sampling campaigns. Samples included fresh beef and pork cuts and swab samples from the knife, the chopping board, and the butcher's hand. The microbiota of both meat samples and environmental swabs were very complex, including more than 800 operational taxonomic units (OTUs) collapsed at the species level. The 16S rRNA sequencing analysis showed that core microbiota were shared by 80% of the samples and included Pseudomonas spp., Streptococcus spp., Brochothrix spp., Psychrobacter spp., and Acinetobacter spp. Hierarchical clustering of the samples based on the microbiota showed a certain separation between meat and environmental samples, with higher levels of Proteobacteria in meat. In particular, levels of Pseudomonas and several Enterobacteriaceae members were significantly higher in meat samples, while Brochothrix, Staphylococcus, lactic acid bacteria, and Psychrobacter prevailed in environmental swab samples. Consistent clustering was also observed when metabolic activities were considered by predictive metagenomic analysis of the samples. An increase in carbohydrate metabolism was predicted for the environmental swabs and was consistently linked to Firmicutes, while increases in pathways related to amino acid and lipid metabolism were predicted for the meat samples and were positively correlated with Proteobacteria. Our results highlighted the importance of the processing environment in contributing to the initial microbial levels of meat and clearly showed that the type of retail facility (LD or SD) did not apparently affect the contamination. IMPORTANCE The study provides an in-depth description of the microbiota of meat and meat processing environments. It highlights the importance of the environment as a contamination source of spoilage bacteria, and it shows that the size of the retail facility does not affect the level and type of contamination. PMID:27129965
NASA Astrophysics Data System (ADS)
Ustione, A.; Cricenti, A.; Piacentini, M.; Felici, A. C.
2006-09-01
A new implementation of a shear-force microscope is described that uses a shear-force detection system to perform topographical imaging of large areas (˜1×1mm2). This implementation finds very interesting application in the study of archeological or artistic samples. Three dc motors are used to move a sample during a scan, allowing the probe tip to follow the surface and to face height differences of several tens of micrometers. This large-area topographical imaging mode exploits new subroutines that were added to the existing homemade software; these subroutines were created in Microsoft VISUAL BASIC 6.0 programming language. With this new feature our shear-force microscope can be used to study topographical details over large areas of archaeological samples in a nondestructive way. We show results detecting worn reliefs over a coin.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maxwell, Sherrod L.; Culligan, Brian K.; Hutchison, Jay B.
A new rapid fusion method for the determination of plutonium in large rice samples has been developed at the Savannah River National Laboratory (Aiken, SC, USA) that can be used to determine very low levels of plutonium isotopes in rice. The recent accident at Fukushima Nuclear Power Plant in March, 2011 reinforces the need to have rapid, reliable radiochemical analyses for radionuclides in environmental and food samples. Public concern regarding foods, particularly foods such as rice in Japan, highlights the need for analytical techniques that will allow very large sample aliquots of rice to be used for analysis so thatmore » very low levels of plutonium isotopes may be detected. The new method to determine plutonium isotopes in large rice samples utilizes a furnace ashing step, a rapid sodium hydroxide fusion method, a lanthanum fluoride matrix removal step, and a column separation process with TEVA Resin cartridges. The method can be applied to rice sample aliquots as large as 5 kg. Plutonium isotopes can be determined using alpha spectrometry or inductively-coupled plasma mass spectrometry (ICP-MS). The method showed high chemical recoveries and effective removal of interferences. The rapid fusion technique is a rugged sample digestion method that ensures that any refractory plutonium particles are effectively digested. The MDA for a 5 kg rice sample using alpha spectrometry is 7E-5 mBq g{sup -1}. The method can easily be adapted for use by ICP-MS to allow detection of plutonium isotopic ratios.« less
Neeser, Rudolph; Ackermann, Rebecca Rogers; Gain, James
2009-09-01
Various methodological approaches have been used for reconstructing fossil hominin remains in order to increase sample sizes and to better understand morphological variation. Among these, morphometric quantitative techniques for reconstruction are increasingly common. Here we compare the accuracy of three approaches--mean substitution, thin plate splines, and multiple linear regression--for estimating missing landmarks of damaged fossil specimens. Comparisons are made varying the number of missing landmarks, sample sizes, and the reference species of the population used to perform the estimation. The testing is performed on landmark data from individuals of Homo sapiens, Pan troglodytes and Gorilla gorilla, and nine hominin fossil specimens. Results suggest that when a small, same-species fossil reference sample is available to guide reconstructions, thin plate spline approaches perform best. However, if no such sample is available (or if the species of the damaged individual is uncertain), estimates of missing morphology based on a single individual (or even a small sample) of close taxonomic affinity are less accurate than those based on a large sample of individuals drawn from more distantly related extant populations using a technique (such as a regression method) able to leverage the information (e.g., variation/covariation patterning) contained in this large sample. Thin plate splines also show an unexpectedly large amount of error in estimating landmarks, especially over large areas. Recommendations are made for estimating missing landmarks under various scenarios. Copyright 2009 Wiley-Liss, Inc.
Likelihood inference of non-constant diversification rates with incomplete taxon sampling.
Höhna, Sebastian
2014-01-01
Large-scale phylogenies provide a valuable source to study background diversification rates and investigate if the rates have changed over time. Unfortunately most large-scale, dated phylogenies are sparsely sampled (fewer than 5% of the described species) and taxon sampling is not uniform. Instead, taxa are frequently sampled to obtain at least one representative per subgroup (e.g. family) and thus to maximize diversity (diversified sampling). So far, such complications have been ignored, potentially biasing the conclusions that have been reached. In this study I derive the likelihood of a birth-death process with non-constant (time-dependent) diversification rates and diversified taxon sampling. Using simulations I test if the true parameters and the sampling method can be recovered when the trees are small or medium sized (fewer than 200 taxa). The results show that the diversification rates can be inferred and the estimates are unbiased for large trees but are biased for small trees (fewer than 50 taxa). Furthermore, model selection by means of Akaike's Information Criterion favors the true model if the true rates differ sufficiently from alternative models (e.g. the birth-death model is recovered if the extinction rate is large and compared to a pure-birth model). Finally, I applied six different diversification rate models--ranging from a constant-rate pure birth process to a decreasing speciation rate birth-death process but excluding any rate shift models--on three large-scale empirical phylogenies (ants, mammals and snakes with respectively 149, 164 and 41 sampled species). All three phylogenies were constructed by diversified taxon sampling, as stated by the authors. However only the snake phylogeny supported diversified taxon sampling. Moreover, a parametric bootstrap test revealed that none of the tested models provided a good fit to the observed data. The model assumptions, such as homogeneous rates across species or no rate shifts, appear to be violated.
Heritability of metabolic syndrome traits in a large population-based sample[S
van Dongen, Jenny; Willemsen, Gonneke; Chen, Wei-Min; de Geus, Eco J. C.; Boomsma, Dorret I.
2013-01-01
Heritability estimates of metabolic syndrome traits vary widely across studies. Some studies have suggested that the contribution of genes may vary with age or sex. We estimated the heritability of 11 metabolic syndrome-related traits and height as a function of age and sex in a large population-based sample of twin families (N = 2,792–27,021, for different traits). A moderate-to-high heritability was found for all traits [from H2 = 0.47 (insulin) to H2 = 0.78 (BMI)]. The broad-sense heritability (H2) showed little variation between age groups in women; it differed somewhat more in men (e.g., for glucose, H2 = 0.61 in young females, H2 = 0.56 in older females, H2 = 0.64 in young males, and H2= 0.27 in older males). While nonadditive genetic effects explained little variation in the younger subjects, nonadditive genetic effects became more important at a greater age. Our findings show that in an unselected sample (age range, ∼18–98 years), the genetic contribution to individual differences in metabolic syndrome traits is moderate to large in both sexes and across age. Although the prevalence of the metabolic syndrome has greatly increased in the past decades due to lifestyle changes, our study indicates that most of the variation in metabolic syndrome traits between individuals is due to genetic differences. PMID:23918046
Methods to increase reproducibility in differential gene expression via meta-analysis
Sweeney, Timothy E.; Haynes, Winston A.; Vallania, Francesco; Ioannidis, John P.; Khatri, Purvesh
2017-01-01
Findings from clinical and biological studies are often not reproducible when tested in independent cohorts. Due to the testing of a large number of hypotheses and relatively small sample sizes, results from whole-genome expression studies in particular are often not reproducible. Compared to single-study analysis, gene expression meta-analysis can improve reproducibility by integrating data from multiple studies. However, there are multiple choices in designing and carrying out a meta-analysis. Yet, clear guidelines on best practices are scarce. Here, we hypothesized that studying subsets of very large meta-analyses would allow for systematic identification of best practices to improve reproducibility. We therefore constructed three very large gene expression meta-analyses from clinical samples, and then examined meta-analyses of subsets of the datasets (all combinations of datasets with up to N/2 samples and K/2 datasets) compared to a ‘silver standard’ of differentially expressed genes found in the entire cohort. We tested three random-effects meta-analysis models using this procedure. We showed relatively greater reproducibility with more-stringent effect size thresholds with relaxed significance thresholds; relatively lower reproducibility when imposing extraneous constraints on residual heterogeneity; and an underestimation of actual false positive rate by Benjamini–Hochberg correction. In addition, multivariate regression showed that the accuracy of a meta-analysis increased significantly with more included datasets even when controlling for sample size. PMID:27634930
A fast learning method for large scale and multi-class samples of SVM
NASA Astrophysics Data System (ADS)
Fan, Yu; Guo, Huiming
2017-06-01
A multi-class classification SVM(Support Vector Machine) fast learning method based on binary tree is presented to solve its low learning efficiency when SVM processing large scale multi-class samples. This paper adopts bottom-up method to set up binary tree hierarchy structure, according to achieved hierarchy structure, sub-classifier learns from corresponding samples of each node. During the learning, several class clusters are generated after the first clustering of the training samples. Firstly, central points are extracted from those class clusters which just have one type of samples. For those which have two types of samples, cluster numbers of their positive and negative samples are set respectively according to their mixture degree, secondary clustering undertaken afterwards, after which, central points are extracted from achieved sub-class clusters. By learning from the reduced samples formed by the integration of extracted central points above, sub-classifiers are obtained. Simulation experiment shows that, this fast learning method, which is based on multi-level clustering, can guarantee higher classification accuracy, greatly reduce sample numbers and effectively improve learning efficiency.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hay, M.; Coleman, C.; Diprete, D.
SRNL analyzed samples from Tank 38H and Tank 43H to support ECP and CCP. The total uranium in the Tank 38H surface sample was 41.3 mg/L while the sub-surface sample was 43.5 mg/L. The Tank 43H samples contained total uranium concentrations of 28.5 mg/L in the surface sample and 28.1 mg/L in the sub-surface sample. The U-235 percentage ranged from 0.62% to 0.63% for the Tank 38H samples and Tank 43H samples. The total uranium and percent U-235 results in the table appear slightly lower than recent Tank 38H and Tank 43H uranium measurements. The plutonium results in the tablemore » show a large difference between the surface and sub-surface sample concentrations for Tank 38H. The Tank 43H plutonium results closely match the range of values measured on previous samples. The Cs-137 results for the Tank 38H surface and sub-surface samples show similar concentrations slightly higher than the concentrations measured in recent samples. The Cs-137 results for the two Tank 43H samples also show similar concentrations within the range of values measured on previous samples. The four samples show silicon concentrations somewhat lower than the previous samples with values ranging from 124 to 168 mg/L.« less
Computer Use and Factors Related to Computer Use in Large Independent Secondary School Libraries.
ERIC Educational Resources Information Center
Currier, Heidi F.
Survey results about the use of computers in independent secondary school libraries are reported, and factors related to the presence of computers are identified. Data are from 104 librarians responding to a questionnaire sent to a sample of 136 large (over 400 students) independent secondary schools. Data are analyzed descriptively to show the…
Laibson, David; Mollerstrom, Johanna
2012-01-01
Bernanke (2005) hypothesized that a “global savings glut” was causing large trade imbalances. However, we show that the global savings rates did not show a robust upward trend during the relevant period. Moreover, if there had been a global savings glut there should have been a large investment boom in the countries that imported capital. Instead, those countries experienced consumption booms. National asset bubbles explain the international imbalances. The bubbles raised consumption, resulting in large trade deficits. In a sample of 18 OECD countries plus China, movements in home prices alone explain half of the variation in trade deficits. PMID:23750045
Widespread White Matter Differences in Children and Adolescents with Autism Spectrum Disorder
ERIC Educational Resources Information Center
Vogan, V. M.; Morgan, B. R.; Leung, R. C.; Anagnostou, E.; Doyle-Thomas, K.; Taylor, M. J.
2016-01-01
Diffusion tensor imaging studies show white matter (WM) abnormalities in children with autism spectrum disorder (ASD). However, investigations are often limited by small samples, particularly problematic given the heterogeneity of ASD. We explored WM using DTI in a large sample of 130 children and adolescents (7-15 years) with and without ASD,…
Characterization of HgCdTe and Related Materials For Third Generation Infrared Detectors
NASA Astrophysics Data System (ADS)
Vaghayenegar, Majid
Hg1-xCdxTe (MCT) has historically been the primary material used for infrared detectors. Recently, alternative substrates for MCT growth such as Si, as well as alternative infrared materials such as Hg1-xCdxSe, have been explored. This dissertation involves characterization of Hg-based infrared materials for third generation infrared detectors using a wide range of transmission electron microscopy (TEM) techniques. A microstructural study on HgCdTe/CdTe heterostructures grown by MBE on Si (211) substrates showed a thin ZnTe layer grown between CdTe and Si to mediate the large lattice mismatch of 19.5%. Observations showed large dislocation densities at the CdTe/ZnTe/Si (211) interfaces, which dropped off rapidly away from the interface. Growth of a thin HgTe buffer layer between HgCdTe and CdTe layers seemed to improve the HgCdTe layer quality by blocking some defects. A second study investigated the correlation of etch pits and dislocations in as-grown and thermal-cycle-annealed (TCA) HgCdTe (211) films. For as-grown samples, pits with triangular and fish-eye shapes were associated with Frank partial and perfect dislocations, respectively. Skew pits were determined to have a more complex nature. TCA reduced the etch-pit density by 72%. Although TCA processing eliminated the fish-eye pits, dislocations reappeared in shorter segments in the TCA samples. Large pits were observed in both as-grown and TCA samples, but the nature of any defects associated with these pits in the as-grown samples is unclear. Microstructural studies of HgCdSe revealed large dislocation density at ZnTe/Si(211) interfaces, which dropped off markedly with ZnTe thickness. Atomic-resolution STEM images showed that the large lattice mismatch at the ZnTe/Si interface was accommodated through {111}-type stacking faults. A detailed analysis showed that the stacking faults were inclined at angles of 19.5 and 90 degrees at both ZnTe/Si and HgCdSe/ZnTe interfaces. These stacking faults were associated with Shockley and Frank partial dislocations, respectively. Initial attempts to delineate individual dislocations by chemical etching revealed that while the etchants successfully attacked defective areas, many defects in close proximity to the pits were unaffected.
Doppler Global Velocimeter Development for the Large Wind Tunnels at Ames Research Center
NASA Technical Reports Server (NTRS)
Reinath, Michael S.
1997-01-01
Development of an optical, laser-based flow-field measurement technique for large wind tunnels is described. The technique uses laser sheet illumination and charged coupled device detectors to rapidly measure flow-field velocity distributions over large planar regions of the flow. Sample measurements are presented that illustrate the capability of the technique. An analysis of measurement uncertainty, which focuses on the random component of uncertainty, shows that precision uncertainty is not dependent on the measured velocity magnitude. For a single-image measurement, the analysis predicts a precision uncertainty of +/-5 m/s. When multiple images are averaged, this uncertainty is shown to decrease. For an average of 100 images, for example, the analysis shows that a precision uncertainty of +/-0.5 m/s can be expected. Sample applications show that vectors aligned with an orthogonal coordinate system are difficult to measure directly. An algebraic transformation is presented which converts measured vectors to the desired orthogonal components. Uncertainty propagation is then used to show how the uncertainty propagates from the direct measurements to the orthogonal components. For a typical forward-scatter viewing geometry, the propagation analysis predicts precision uncertainties of +/-4, +/-7, and +/-6 m/s, respectively, for the U, V, and W components at 68% confidence.
Ground-water quality beneath irrigated agriculture in the central High Plains aquifer, 1999-2000
Bruce, Breton W.; Becker, Mark F.; Pope, Larry M.; Gurdak, Jason J.
2003-01-01
In 1999 and 2000, 30 water-quality monitoring wells were installed in the central High Plains aquifer to evaluate the quality of recently recharged ground water in areas of irrigated agriculture and to identify the factors affecting ground-water quality. Wells were installed adjacent to irrigated agricultural fields with 10- or 20-foot screened intervals placed near the water table. Each well was sampled once for about 100 waterquality constituents associated with agricultural practices. Water samples from 70 percent of the wells (21 of 30 sites) contained nitrate concentrations larger than expected background concentrations (about 3 mg/L as N) and detectable pesticides. Atrazine or its metabolite, deethylatrazine, were detected with greater frequency than other pesticides and were present in all 21 samples where pesticides were detected. The 21 samples with detectable pesticides also contained tritium concentrations large enough to indicate that at least some part of the water sample had been recharged within about the last 50 years. These 21 ground-water samples are considered to show water-quality effects related to irrigated agriculture. The remaining 9 groundwater samples contained no pesticides, small tritium concentrations, and nitrate concentrations less than 3.45 milligrams per liter as nitrogen. These samples are considered unaffected by the irrigated agricultural land-use setting. Nitrogen isotope ratios indicate that commercial fertilizer was the dominant source of nitrate in 13 of the 21 samples affected by irrigated agriculture. Nitrogen isotope ratios for 4 of these 21 samples were indicative of an animal waste source. Dissolved-solids concentrations were larger in samples affected by irrigated agriculture, with large sulfate concentrations having strong correlation with large dissolved solids concentrations in these samples. A strong statistical correlation is shown between samples affected by irrigated agriculture and sites with large rates of pesticide and nitrogen applications and shallow depths to ground water.
Geochemical and radiological characterization of soils from former radium processing sites.
Landa, E R
1984-02-01
Soil samples were collected from former radium processing sites in Denver, CO, and East Orange, NJ. Particle-size separations and radiochemical analyses of selected samples showed that while the greatest contents of both 226Ra and U were generally found in the finest (less than 45 micron) fraction, the pattern was not always of progressive increase in radionuclide content with decreasing particle size. Leaching tests on these samples showed a large portion of the 226Ra and U to be soluble in dilute hydrochloric acid. Radon-emanation coefficients measured for bulk samples of contaminated soil were about 20%. Recovery of residual uranium and vanadium, as an adjunct to any remedial action program, appears unlikely due to economic considerations.
Mäkinen, Outi E; Zannini, Emanuele; Arendt, Elke K
2015-09-01
Heat-denaturation of quinoa protein isolate (QPI) at alkali pH and its influence on the physicochemical and cold gelation properties was investigated. Heating QPI at pH 8.5 led to increased surface hydrophobicity and decreases in free and bound sulfhydryl group contents. Heating at pH 10.5 caused a lesser degree of changes in sulfhydryl groups and surface hydrophobicity, and the resulting solutions showed drastically increased solubility. SDS PAGE revealed the presence of large aggregates only in the sample heated at pH 8.5, suggesting that any aggregates present in the sample heated at pH 10.5 were non-covalently bound and disintegrated in the presence of SDS. Reducing conditions partially dissolved the aggregates in the pH 8.5 heated sample indicating the occurrence of disulphide bonding, but caused no major alterations in the separation pattern of the pH 10.5 heated sample. Denaturation pH influenced the cold gelation properties greatly. Solutions heated at pH 8.5 formed a coarse coagulum with maximum G' of 5 Pa. Heat-denaturation at 10.5 enabled the proteins to form a finer and regularly structured gel with a maximum G' of 1140 Pa. Particle size analysis showed that the pH 10.5 heated sample contained a higher level of very small particles (0.1-2 μm), and these readily aggregated into large particles (30-200 μm) when pH was lowered to 5.5. Differences in the nature of aggregates formed during heating may explain the large variation in gelation properties.
Gibbs sampling on large lattice with GMRF
NASA Astrophysics Data System (ADS)
Marcotte, Denis; Allard, Denis
2018-02-01
Gibbs sampling is routinely used to sample truncated Gaussian distributions. These distributions naturally occur when associating latent Gaussian fields to category fields obtained by discrete simulation methods like multipoint, sequential indicator simulation and object-based simulation. The latent Gaussians are often used in data assimilation and history matching algorithms. When the Gibbs sampling is applied on a large lattice, the computing cost can become prohibitive. The usual practice of using local neighborhoods is unsatisfying as it can diverge and it does not reproduce exactly the desired covariance. A better approach is to use Gaussian Markov Random Fields (GMRF) which enables to compute the conditional distributions at any point without having to compute and invert the full covariance matrix. As the GMRF is locally defined, it allows simultaneous updating of all points that do not share neighbors (coding sets). We propose a new simultaneous Gibbs updating strategy on coding sets that can be efficiently computed by convolution and applied with an acceptance/rejection method in the truncated case. We study empirically the speed of convergence, the effect of choice of boundary conditions, of the correlation range and of GMRF smoothness. We show that the convergence is slower in the Gaussian case on the torus than for the finite case studied in the literature. However, in the truncated Gaussian case, we show that short scale correlation is quickly restored and the conditioning categories at each lattice point imprint the long scale correlation. Hence our approach enables to realistically apply Gibbs sampling on large 2D or 3D lattice with the desired GMRF covariance.
Preparation of highly multiplexed small RNA sequencing libraries.
Persson, Helena; Søkilde, Rolf; Pirona, Anna Chiara; Rovira, Carlos
2017-08-01
MicroRNAs (miRNAs) are ~22-nucleotide-long small non-coding RNAs that regulate the expression of protein-coding genes by base pairing to partially complementary target sites, preferentially located in the 3´ untranslated region (UTR) of target mRNAs. The expression and function of miRNAs have been extensively studied in human disease, as well as the possibility of using these molecules as biomarkers for prognostication and treatment guidance. To identify and validate miRNAs as biomarkers, their expression must be screened in large collections of patient samples. Here, we develop a scalable protocol for the rapid and economical preparation of a large number of small RNA sequencing libraries using dual indexing for multiplexing. Combined with the use of off-the-shelf reagents, more samples can be sequenced simultaneously on large-scale sequencing platforms at a considerably lower cost per sample. Sample preparation is simplified by pooling libraries prior to gel purification, which allows for the selection of a narrow size range while minimizing sample variation. A comparison with publicly available data from benchmarking of miRNA analysis platforms showed that this method captures absolute and differential expression as effectively as commercially available alternatives.
NASA Astrophysics Data System (ADS)
Prothero, Donald R.; Syverson, Valerie J.; Raymond, Kristina R.; Madan, Meena; Molina, Sarah; Fragomeni, Ashley; DeSantis, Sylvana; Sutyagina, Anastasiya; Gage, Gina L.
2012-11-01
Conventional neo-Darwinian theory views organisms as infinitely sensitive and responsive to their environments, and considers them able to readily change size or shape when they adapt to selective pressures. Yet since 1863 it has been well known that Pleistocene animals and plants do not show much morphological change or speciation in response to the glacial-interglacial climate cycles. We tested this hypothesis with all of the common birds (condors, golden and bald eagles, turkeys, caracaras) and mammals (dire wolves, saber-toothed cats, giant lions, horses, camels, bison, and ground sloths) from Rancho La Brea tar pits in Los Angeles, California, which preserves large samples of many bones from many well-dated pits spanning the 35,000 years of the Last Glacial-Interglacial cycle. Pollen evidence showed the climate changed from chaparral/oaks 35,000 years ago to snowy piñon-juniper forests at the peak glacial 20,000 years ago, then back to the modern chaparral since the glacial-interglacial transition. Based on Bergmann's rule, we would expect peak glacial specimens to have larger body sizes, and based on Allen's rule, peak glacial samples should have shorter and more robust limbs. Yet statistical analysis (ANOVA for parametric samples; Kruskal-Wallis test for non-parametric samples) showed that none of the Pleistocene pit samples is statistically distinct from the rest, indicating complete stasis from 35 ka to 9 ka. The sole exception was the Pit 13 sample of dire wolves (16 ka), which was significantly smaller than the rest, but this did not occur in response to climate change. We also performed a time series analysis of the pit samples. None showed directional change; all were either static or showed a random walk. Thus, the data show that birds and mammals at Rancho La Brea show complete stasis and were unresponsive to the major climate change that occurred at 20 ka, consistent with other studies of Pleistocene animals and plants. Most explanations for such stasis (stabilizing selection, canalization) fail in this setting where climate is changing. One possible explanation is that most large birds and mammals are very broadly adapted and relatively insensitive to changes in their environments, although even the small mammals of the Pleistocene show stasis during climate change, too.
Excellent field emission properties of vertically oriented CuO nanowire films
NASA Astrophysics Data System (ADS)
Feng, Long; Yan, Hui; Li, Heng; Zhang, Rukang; Li, Zhe; Chi, Rui; Yang, Shuaiyu; Ma, Yaya; Fu, Bin; Liu, Jiwen
2018-04-01
Oriented CuO nanowire films were synthesized on a large scale using simple method of direct heating copper grids in air. The field emission properties of the sample can be enhanced by improving the aspect ratio of the nanowires just through a facile method of controlling the synthesis conditions. Although the density of the nanowires is large enough, the screen effect is not an important factor in this field emission process because few nanowires sticking out above the rest. Benefiting from the unique geometrical and structural features, the CuO nanowire samples show excellent field emission (FE) properties. The FE measurements of CuO nanowire films illustrate that the sample synthesized at 500 °C for 8 h has a comparatively low turn-on field of 0.68 V/μm, a low threshold field of 1.1 V/μm, and a large field enhancement factor β of 16782 (a record high value for CuO nanostructures, to the best of our knowledge), indicating that the samples are promising candidates for field emission applications.
The functional spectrum of low-frequency coding variation.
Marth, Gabor T; Yu, Fuli; Indap, Amit R; Garimella, Kiran; Gravel, Simon; Leong, Wen Fung; Tyler-Smith, Chris; Bainbridge, Matthew; Blackwell, Tom; Zheng-Bradley, Xiangqun; Chen, Yuan; Challis, Danny; Clarke, Laura; Ball, Edward V; Cibulskis, Kristian; Cooper, David N; Fulton, Bob; Hartl, Chris; Koboldt, Dan; Muzny, Donna; Smith, Richard; Sougnez, Carrie; Stewart, Chip; Ward, Alistair; Yu, Jin; Xue, Yali; Altshuler, David; Bustamante, Carlos D; Clark, Andrew G; Daly, Mark; DePristo, Mark; Flicek, Paul; Gabriel, Stacey; Mardis, Elaine; Palotie, Aarno; Gibbs, Richard
2011-09-14
Rare coding variants constitute an important class of human genetic variation, but are underrepresented in current databases that are based on small population samples. Recent studies show that variants altering amino acid sequence and protein function are enriched at low variant allele frequency, 2 to 5%, but because of insufficient sample size it is not clear if the same trend holds for rare variants below 1% allele frequency. The 1000 Genomes Exon Pilot Project has collected deep-coverage exon-capture data in roughly 1,000 human genes, for nearly 700 samples. Although medical whole-exome projects are currently afoot, this is still the deepest reported sampling of a large number of human genes with next-generation technologies. According to the goals of the 1000 Genomes Project, we created effective informatics pipelines to process and analyze the data, and discovered 12,758 exonic SNPs, 70% of them novel, and 74% below 1% allele frequency in the seven population samples we examined. Our analysis confirms that coding variants below 1% allele frequency show increased population-specificity and are enriched for functional variants. This study represents a large step toward detecting and interpreting low frequency coding variation, clearly lays out technical steps for effective analysis of DNA capture data, and articulates functional and population properties of this important class of genetic variation.
Classical boson sampling algorithms with superior performance to near-term experiments
NASA Astrophysics Data System (ADS)
Neville, Alex; Sparrow, Chris; Clifford, Raphaël; Johnston, Eric; Birchall, Patrick M.; Montanaro, Ashley; Laing, Anthony
2017-12-01
It is predicted that quantum computers will dramatically outperform their conventional counterparts. However, large-scale universal quantum computers are yet to be built. Boson sampling is a rudimentary quantum algorithm tailored to the platform of linear optics, which has sparked interest as a rapid way to demonstrate such quantum supremacy. Photon statistics are governed by intractable matrix functions, which suggests that sampling from the distribution obtained by injecting photons into a linear optical network could be solved more quickly by a photonic experiment than by a classical computer. The apparently low resource requirements for large boson sampling experiments have raised expectations of a near-term demonstration of quantum supremacy by boson sampling. Here we present classical boson sampling algorithms and theoretical analyses of prospects for scaling boson sampling experiments, showing that near-term quantum supremacy via boson sampling is unlikely. Our classical algorithm, based on Metropolised independence sampling, allowed the boson sampling problem to be solved for 30 photons with standard computing hardware. Compared to current experiments, a demonstration of quantum supremacy over a successful implementation of these classical methods on a supercomputer would require the number of photons and experimental components to increase by orders of magnitude, while tackling exponentially scaling photon loss.
Bhaskar, Anand; Wang, Y X Rachel; Song, Yun S
2015-02-01
With the recent increase in study sample sizes in human genetics, there has been growing interest in inferring historical population demography from genomic variation data. Here, we present an efficient inference method that can scale up to very large samples, with tens or hundreds of thousands of individuals. Specifically, by utilizing analytic results on the expected frequency spectrum under the coalescent and by leveraging the technique of automatic differentiation, which allows us to compute gradients exactly, we develop a very efficient algorithm to infer piecewise-exponential models of the historical effective population size from the distribution of sample allele frequencies. Our method is orders of magnitude faster than previous demographic inference methods based on the frequency spectrum. In addition to inferring demography, our method can also accurately estimate locus-specific mutation rates. We perform extensive validation of our method on simulated data and show that it can accurately infer multiple recent epochs of rapid exponential growth, a signal that is difficult to pick up with small sample sizes. Lastly, we use our method to analyze data from recent sequencing studies, including a large-sample exome-sequencing data set of tens of thousands of individuals assayed at a few hundred genic regions. © 2015 Bhaskar et al.; Published by Cold Spring Harbor Laboratory Press.
Pliocene large-mammal assemblages from northern Chad: sampling and ecological structure
NASA Astrophysics Data System (ADS)
Fara, Emmanuel; Likius, Andossa; Mackaye, Hassane T.; Vignaud, Patrick; Brunet, Michel
2005-11-01
Numerous Pliocene large-mammal assemblages have been discovered in Chad over the last decade. They offer a unique opportunity to understand the settings in which important chapters of Hominid evolution took place in Central Africa. However, it is crucial to first investigate both sampling and taxonomic homogeneity for these Chadian assemblages because they occur over large sectors in a sandy desert that offers virtually no stratigraphic section. Using cluster analysis and ordination techniques, we show that the three Pliocene sectors from Chad are homogeneous and adequate sampling units. Previous stable isotope analyses on these assemblages have indicated that the environment became richer in C4 plants between approximately 5.3 and 3.5 3 Ma. To test whether this environmental change has affected the structure of palaeo-communities, we assigned body mass, trophic and locomotor eco-variables to mammal species from the three sectors. Statistical analysis shows that the overall ecological structure of the assemblages is not linked with the opening of the plant cover, and eco-variables show no temporal trend from the oldest sector to the youngest. For example, there is no significant change in the relative diversity of grazing and browsing taxa, although mixed feeders are less diversified in the youngest sector than in the preceding one. This pattern apparently does not result from potential biases such as methodological artefacts or taphonomic imprint. Instead, it seems that local heterogeneous environmental factors have played a major role in shaping the ecological spectrum of Chadian mammal palaeo-communities during the Pliocene.
The Surface Chemical Composition of Lunar Samples and Its Significance for Optical Properties
NASA Technical Reports Server (NTRS)
Gold, T.; Bilson, E.; Baron, R. L.
1976-01-01
The surface iron, titanium, calcium, and silicon concentration in numerous lunar soil and rock samples was determined by Auger electron spectroscopy. All soil samples show a large increase in the iron to oxygen ratio compared with samples of pulverized rock or with results of the bulk chemical analysis. A solar wind simulation experiment using 2 keV energy alpha -particles showed that an ion dose corresponding to approximately 30,000 years of solar wind increased the iron concentration on the surface of the pulverized Apollo 14 rock sample 14310 to the concentration measured in the Apollo 14 soil sample 14163, and the albedo of the pulverized rock decreased from 0.36 to 0.07. The low albedo of the lunar soil is related to the iron + titanium concentration on its surface. A solar wind sputter reduction mechanism is discussed as a possible cause for both the surface chemical and optical properties of the soil.
Estimating Animal Abundance in Ground Beef Batches Assayed with Molecular Markers
Hu, Xin-Sheng; Simila, Janika; Platz, Sindey Schueler; Moore, Stephen S.; Plastow, Graham; Meghen, Ciaran N.
2012-01-01
Estimating animal abundance in industrial scale batches of ground meat is important for mapping meat products through the manufacturing process and for effectively tracing the finished product during a food safety recall. The processing of ground beef involves a potentially large number of animals from diverse sources in a single product batch, which produces a high heterogeneity in capture probability. In order to estimate animal abundance through DNA profiling of ground beef constituents, two parameter-based statistical models were developed for incidence data. Simulations were applied to evaluate the maximum likelihood estimate (MLE) of a joint likelihood function from multiple surveys, showing superiority in the presence of high capture heterogeneity with small sample sizes, or comparable estimation in the presence of low capture heterogeneity with a large sample size when compared to other existing models. Our model employs the full information on the pattern of the capture-recapture frequencies from multiple samples. We applied the proposed models to estimate animal abundance in six manufacturing beef batches, genotyped using 30 single nucleotide polymorphism (SNP) markers, from a large scale beef grinding facility. Results show that between 411∼1367 animals were present in six manufacturing beef batches. These estimates are informative as a reference for improving recall processes and tracing finished meat products back to source. PMID:22479559
Ferrari, Ulisse
2016-08-01
Maximum entropy models provide the least constrained probability distributions that reproduce statistical properties of experimental datasets. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show how the steepest descent dynamics is not optimal as it is slowed down by the inhomogeneous curvature of the model parameters' space. We then provide a way for rectifying this space which relies only on dataset properties and does not require large computational efforts. We conclude by solving the long-time limit of the parameters' dynamics including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework, rather than converging to a fixed point, the dynamics reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We sum up all these insights in a "rectified" data-driven algorithm that is fast and by sampling from the parameters' posterior avoids both under- and overfitting along all the directions of the parameters' space. Through the learning of pairwise Ising models from the recording of a large population of retina neurons, we show how our algorithm outperforms the steepest descent method.
Parent-Reported Feeding and Feeding Problems in a Sample of Dutch Toddlers
ERIC Educational Resources Information Center
de Moor, Jan; Didden, Robert; Korzilius, Hubert
2007-01-01
Little is known about the feeding behaviors and problems with feeding in toddlers. In the present questionnaire study, data were collected on the feeding behaviors and feeding problems in a relatively large (n = 422) sample of Dutch healthy toddlers (i.e. 18-36 months old) who lived at home with their parents. Results show that three meals a day…
Potential genotoxic effects of melted snow from an urban area revealed by the Allium cepa test.
Blagojević, Jelena; Stamenković, Gorana; Vujosević, Mladen
2009-09-01
The presence of well-known atmospheric pollutants is regularly screened for in large towns but knowledge about the effects of mixtures of different pollutants and especially their genotoxic potential is largely missing. Since falling snow collects pollutants from the air, melted snow samples could be suitable for evaluating potential genotoxicity. For this purpose the Allium cepa anaphase-telophase test was used to analyse melted snow samples from Belgrade, the capital city of Serbia. Samples of snow were taken at two sites, characterized by differences in pollution intensity, in three successive years. At the more polluted site the analyses showed a very high degree of both toxicity and genotoxicity in the first year of the study corresponding to the effects of the known mutagen used as the positive control. At the other site the situation was much better but not without warning signals. The results showed that standard analyses for the presence of certain contaminants in the air do not give an accurate picture of the possible consequences of urban air pollution because the genotoxic potential remains hidden. The A. cepa test has been demonstrated to be very convenient for evaluation of air pollution through analyses of melted snow samples.
Puzzle Imaging: Using Large-Scale Dimensionality Reduction Algorithms for Localization.
Glaser, Joshua I; Zamft, Bradley M; Church, George M; Kording, Konrad P
2015-01-01
Current high-resolution imaging techniques require an intact sample that preserves spatial relationships. We here present a novel approach, "puzzle imaging," that allows imaging a spatially scrambled sample. This technique takes many spatially disordered samples, and then pieces them back together using local properties embedded within the sample. We show that puzzle imaging can efficiently produce high-resolution images using dimensionality reduction algorithms. We demonstrate the theoretical capabilities of puzzle imaging in three biological scenarios, showing that (1) relatively precise 3-dimensional brain imaging is possible; (2) the physical structure of a neural network can often be recovered based only on the neural connectivity matrix; and (3) a chemical map could be reproduced using bacteria with chemosensitive DNA and conjugative transfer. The ability to reconstruct scrambled images promises to enable imaging based on DNA sequencing of homogenized tissue samples.
NASA Astrophysics Data System (ADS)
Berkovits, Richard
2018-03-01
The properties of the low-lying eigenvalues of the entanglement Hamiltonian and their relation to the localization length of a disordered interacting one-dimensional many-particle system are studied. The average of the first entanglement Hamiltonian level spacing is proportional to the ground-state localization length and shows the same dependence on the disorder and interaction strength as the localization length. This is the result of the fact that entanglement is limited to distances of order of the localization length. The distribution of the first entanglement level spacing shows a Gaussian-type behavior as expected for level spacings much larger than the disorder broadening. For weakly disordered systems (localization length larger than sample length), the distribution shows an additional peak at low-level spacings. This stems from rare regions in some samples which exhibit metalliclike behavior of large entanglement and large particle-number fluctuations. These intermediate microemulsion metallic regions embedded in the insulating phase are discussed.
Evaluation of the Biological Sampling Kit (BiSKit) for Large-Area Surface Sampling
Buttner, Mark P.; Cruz, Patricia; Stetzenbach, Linda D.; Klima-Comba, Amy K.; Stevens, Vanessa L.; Emanuel, Peter A.
2004-01-01
Current surface sampling methods for microbial contaminants are designed to sample small areas and utilize culture analysis. The total number of microbes recovered is low because a small area is sampled, making detection of a potential pathogen more difficult. Furthermore, sampling of small areas requires a greater number of samples to be collected, which delays the reporting of results, taxes laboratory resources and staffing, and increases analysis costs. A new biological surface sampling method, the Biological Sampling Kit (BiSKit), designed to sample large areas and to be compatible with testing with a variety of technologies, including PCR and immunoassay, was evaluated and compared to other surface sampling strategies. In experimental room trials, wood laminate and metal surfaces were contaminated by aerosolization of Bacillus atrophaeus spores, a simulant for Bacillus anthracis, into the room, followed by settling of the spores onto the test surfaces. The surfaces were sampled with the BiSKit, a cotton-based swab, and a foam-based swab. Samples were analyzed by culturing, quantitative PCR, and immunological assays. The results showed that the large surface area (1 m2) sampled with the BiSKit resulted in concentrations of B. atrophaeus in samples that were up to 10-fold higher than the concentrations obtained with the other methods tested. A comparison of wet and dry sampling with the BiSKit indicated that dry sampling was more efficient (efficiency, 18.4%) than wet sampling (efficiency, 11.3%). The sensitivities of detection of B. atrophaeus on metal surfaces were 42 ± 5.8 CFU/m2 for wet sampling and 100.5 ± 10.2 CFU/m2 for dry sampling. These results demonstrate that the use of a sampling device capable of sampling larger areas results in higher sensitivity than that obtained with currently available methods and has the advantage of sampling larger areas, thus requiring collection of fewer samples per site. PMID:15574898
DOE Office of Scientific and Technical Information (OSTI.GOV)
Iwamoto, A.; Mito, T.; Takahata, K.
Heat transfer of large copper plates (18 x 76 mm) in liquid helium has been measured as a function of orientation and treatment of the heat transfer surface. The results relate to applications of large scale superconductors. In order to clarify the influence of the area where the surface treatment peels off, the authors studied five types of heat transfer surface areas including: (a) 100% polished copper sample, (b) and (c) two 50% oxidized copper samples having different patterns of oxidation, (d) 75% oxidized copper sample, (e) 90% oxidized copper sample, and (f) 100% oxidized copper sample. They observed thatmore » the critical heat flux depends on the heat transfer surface orientation. The critical heat flux is a maximum at angles of 0{degrees} - 30{degrees} and decreases monotonically with increasing angles above 30{degrees}, where the angle is taken in reference to the horizontal axis. On the other hand, the minimum heat flux is less dependent on the surface orientation. More than 75% oxidation on the surface makes the critical heat flux increase. The minimum heat fluxes of the 50 and 90% oxidized Cu samples approximately agree with that of the 100% oxidized Cu sample. Experiments and calculations show that the critical and the minimum heat fluxes are a bilinear function of the fraction of oxidized surface area.« less
Roeder, Peter; Gofton, Emma; Thornber, Carl
2006-01-01
The volume %, distribution, texture and composition of coexisting olivine, Cr-spinel and glass has been determined in quenched lava samples from Hawaii, Iceland and mid-oceanic ridges. The volume ratio of olivine to spinel varies from 60 to 2800 and samples with >0·02% spinel have a volume ratio of olivine to spinel of approximately 100. A plot of wt % MgO vs ppm Cr for natural and experimental basaltic glasses suggests that the general trend of the glasses can be explained by the crystallization of a cotectic ratio of olivine to spinel of about 100. One group of samples has an olivine to spinel ratio of approximately 100, with skeletal olivine phenocrysts and small (<50 μm) spinel crystals that tend to be spatially associated with the olivine phenocrysts. The large number of spinel crystals included within olivine phenocrysts is thought to be due to skeletal olivine phenocrysts coming into physical contact with spinel by synneusis during the chaotic conditions of ascent and extrusion. A second group of samples tend to have large olivine phenocrysts relatively free of included spinel, a few large (>100 μm) spinel crystals that show evidence of two stages of growth, and a volume ratio of olivine to spinel of 100 to well over 1000. The olivine and spinel in this group have crystallized more slowly with little physical interaction, and show evidence that they have accumulated in a magma chamber.
Solution-based circuits enable rapid and multiplexed pathogen detection.
Lam, Brian; Das, Jagotamoy; Holmes, Richard D; Live, Ludovic; Sage, Andrew; Sargent, Edward H; Kelley, Shana O
2013-01-01
Electronic readout of markers of disease provides compelling simplicity, sensitivity and specificity in the detection of small panels of biomarkers in clinical samples; however, the most important emerging tests for disease, such as infectious disease speciation and antibiotic-resistance profiling, will need to interrogate samples for many dozens of biomarkers. Electronic readout of large panels of markers has been hampered by the difficulty of addressing large arrays of electrode-based sensors on inexpensive platforms. Here we report a new concept--solution-based circuits formed on chip--that makes highly multiplexed electrochemical sensing feasible on passive chips. The solution-based circuits switch the information-carrying signal readout channels and eliminate all measurable crosstalk from adjacent, biomolecule-specific microsensors. We build chips that feature this advance and prove that they analyse unpurified samples successfully, and accurately classify pathogens at clinically relevant concentrations. We also show that signature molecules can be accurately read 2 minutes after sample introduction.
Simulation of Particle Size Effect on Dynamic Properties and Fracture of PTFE-W-Al Composites
NASA Astrophysics Data System (ADS)
Herbold, Eric; Cai, Jing; Benson, David; Nesterenko, Vitali
2007-06-01
Recent investigations of the dynamic compressive strength of cold isostatically pressed (CIP) composites of polytetrafluoroethylene (PTFE), tungsten and aluminum powders show significant differences depending on the size of metallic particles. PTFE and aluminum mixtures are known to be energetic under dynamic and thermal loading. The addition of tungsten increases density and overall strength of the sample. Multi-material Eulerian and arbitrary Lagrangian-Eulerian methods were used for the investigation due to the complexity of the microstructure, relatively large deformations and the ability to handle the formation of free surfaces in a natural manner. The calculations indicate that the observed dependence of sample strength on particle size is due to the formation of force chains under dynamic loading in samples with small particle sizes even at larger porosity in comparison with samples with large grain size and larger density.
Some connections between importance sampling and enhanced sampling methods in molecular dynamics.
Lie, H C; Quer, J
2017-11-21
In molecular dynamics, enhanced sampling methods enable the collection of better statistics of rare events from a reference or target distribution. We show that a large class of these methods is based on the idea of importance sampling from mathematical statistics. We illustrate this connection by comparing the Hartmann-Schütte method for rare event simulation (J. Stat. Mech. Theor. Exp. 2012, P11004) and the Valsson-Parrinello method of variationally enhanced sampling [Phys. Rev. Lett. 113, 090601 (2014)]. We use this connection in order to discuss how recent results from the Monte Carlo methods literature can guide the development of enhanced sampling methods.
Some connections between importance sampling and enhanced sampling methods in molecular dynamics
NASA Astrophysics Data System (ADS)
Lie, H. C.; Quer, J.
2017-11-01
In molecular dynamics, enhanced sampling methods enable the collection of better statistics of rare events from a reference or target distribution. We show that a large class of these methods is based on the idea of importance sampling from mathematical statistics. We illustrate this connection by comparing the Hartmann-Schütte method for rare event simulation (J. Stat. Mech. Theor. Exp. 2012, P11004) and the Valsson-Parrinello method of variationally enhanced sampling [Phys. Rev. Lett. 113, 090601 (2014)]. We use this connection in order to discuss how recent results from the Monte Carlo methods literature can guide the development of enhanced sampling methods.
The ESO Diffuse Interstellar Band Large Exploration Survey (EDIBLES)
NASA Astrophysics Data System (ADS)
Cami, J.; Cox, N. L.; Farhang, A.; Smoker, J.; Elyajouri, M.; Lallement, R.; Bacalla, X.; Bhatt, N. H.; Bron, E.; Cordiner, M. A.; de Koter, A..; Ehrenfreund, P.; Evans, C.; Foing, B. H.; Javadi, A.; Joblin, C.; Kaper, L.; Khosroshahi, H. G.; Laverick, M.; Le Petit, F..; Linnartz, H.; Marshall, C. C.; Monreal-Ibero, A.; Mulas, G.; Roueff, E.; Royer, P.; Salama, F.; Sarre, P. J.; Smith, K. T.; Spaans, M.; van Loon, J. T..; Wade, G.
2018-03-01
The ESO Diffuse Interstellar Band Large Exploration Survey (EDIBLES) is a Large Programme that is collecting high-signal-to-noise (S/N) spectra with UVES of a large sample of O and B-type stars covering a large spectral range. The goal of the programme is to extract a unique sample of high-quality interstellar spectra from these data, representing different physical and chemical environments, and to characterise these environments in great detail. An important component of interstellar spectra is the diffuse interstellar bands (DIBs), a set of hundreds of unidentified interstellar absorption lines. With the detailed line-of-sight information and the high-quality spectra, EDIBLES will derive strong constraints on the potential DIB carrier molecules. EDIBLES will thus guide the laboratory experiments necessary to identify these interstellar “mystery molecules”, and turn DIBs into powerful diagnostics of their environments in our Milky Way Galaxy and beyond. We present some preliminary results showing the unique capabilities of the EDIBLES programme.
Stellato, Giuseppina; La Storia, Antonietta; De Filippis, Francesca; Borriello, Giorgia; Villani, Francesco; Ercolini, Danilo
2016-07-01
Microbial contamination in food processing plants can play a fundamental role in food quality and safety. The aims of this study were to learn more about the possible influence of the meat processing environment on initial fresh meat contamination and to investigate the differences between small-scale retail distribution (SD) and large-scale retail distribution (LD) facilities. Samples were collected from butcheries (n = 20), including LD (n = 10) and SD (n = 10) facilities, over two sampling campaigns. Samples included fresh beef and pork cuts and swab samples from the knife, the chopping board, and the butcher's hand. The microbiota of both meat samples and environmental swabs were very complex, including more than 800 operational taxonomic units (OTUs) collapsed at the species level. The 16S rRNA sequencing analysis showed that core microbiota were shared by 80% of the samples and included Pseudomonas spp., Streptococcus spp., Brochothrix spp., Psychrobacter spp., and Acinetobacter spp. Hierarchical clustering of the samples based on the microbiota showed a certain separation between meat and environmental samples, with higher levels of Proteobacteria in meat. In particular, levels of Pseudomonas and several Enterobacteriaceae members were significantly higher in meat samples, while Brochothrix, Staphylococcus, lactic acid bacteria, and Psychrobacter prevailed in environmental swab samples. Consistent clustering was also observed when metabolic activities were considered by predictive metagenomic analysis of the samples. An increase in carbohydrate metabolism was predicted for the environmental swabs and was consistently linked to Firmicutes, while increases in pathways related to amino acid and lipid metabolism were predicted for the meat samples and were positively correlated with Proteobacteria Our results highlighted the importance of the processing environment in contributing to the initial microbial levels of meat and clearly showed that the type of retail facility (LD or SD) did not apparently affect the contamination. The study provides an in-depth description of the microbiota of meat and meat processing environments. It highlights the importance of the environment as a contamination source of spoilage bacteria, and it shows that the size of the retail facility does not affect the level and type of contamination. Copyright © 2016, American Society for Microbiology. All Rights Reserved.
Replicability and Robustness of GWAS for Behavioral Traits
Rietveld, Cornelius A.; Conley, Dalton; Eriksson, Nicholas; Esko, Tõnu; Medland, Sarah E.; Vinkhuyzen, Anna A.E.; Yang, Jian; Boardman, Jason D.; Chabris, Christopher F.; Dawes, Christopher T.; Domingue, Benjamin W.; Hinds, David A.; Johannesson, Magnus; Kiefer, Amy K.; Laibson, David; Magnusson, Patrik K. E.; Mountain, Joanna L.; Oskarsson, Sven; Rostapshova, Olga; Teumer, Alexander; Tung, Joyce Y.; Visscher, Peter M.; Benjamin, Daniel J.; Cesarini, David; Koellinger, Philipp D.
2015-01-01
A recent genome-wide association study (GWAS) of educational attainment identified three single-nucleotide polymorphisms (SNPs) that, despite their small effect sizes (each R2 ≈ 0.02%), reached genome-wide significance (p < 5×10−8) in a large discovery sample and replicated in an independent sample (p < 0.05). The study also reported associations between educational attainment and indices of SNPs called “polygenic scores.” We evaluate the robustness of these findings. Study 1 finds that all three SNPs replicate in another large (N = 34,428) independent sample. We also find that the scores remain predictive (R2 ≈ 2%) with stringent controls for stratification (Study 2) and in new within-family analyses (Study 3). Our results show that large and therefore well-powered GWASs can identify replicable genetic associations with behavioral traits. The small effect sizes of individual SNPs are likely to be a major contributing explanation for the striking contrast between our results and the disappointing replication record of most candidate gene studies. PMID:25287667
Large strain cruciform biaxial testing for FLC detection
NASA Astrophysics Data System (ADS)
Güler, Baran; Efe, Mert
2017-10-01
Selection of proper test method, specimen design and analysis method are key issues for studying formability of sheet metals and detection of their forming limit curves (FLC). Materials with complex microstructures may need an additional micro-mechanical investigation and accurate modelling. Cruciform biaxial test stands as an alternative to standard tests as it achieves frictionless, in-plane, multi-axial stress states with a single sample geometry. In this study, we introduce a small-scale (less than 10 cm) cruciform sample allowing micro-mechanical investigation at stress states ranging from plane strain to equibiaxial. With successful specimen design and surface finish, large forming limit strains are obtained at the test region of the sample. The large forming limit strains obtained by experiments are compared to the values obtained from Marciniak-Kuczynski (M-K) local necking model and Cockroft-Latham damage model. This comparison shows that the experimental limiting strains are beyond the theoretical values, approaching to the fracture strain of the two test materials: Al-6061-T6 aluminum alloy and DC-04 high formability steel.
Cooperative investigation of precision and accuracy: In chemical analysis of silicate rocks
Schlecht, W.G.
1951-01-01
This is the preliminary report of the first extensive program ever organized to study the analysis of igneous rocks, a study sponsored by the United States Geological Survey, the Massachusetts Institute of Technology, and the Geophysical Laboratory of the Carnegie Institution of Washington. Large samples of two typical igneous rocks, a granite and a diabase, were carefully prepared and divided. Small samples (about 70 grams) of each were sent to 25 rock-analysis laboratories throughout the world; analyses of one or both samples were reported by 34 analysts in these laboratories. The results, which showed rather large discrepancies, are presented in histograms. The great discordance in results reflects the present unsatisfactory state of rock analysis. It is hoped that the ultimate establishment of standard samples and procedures will contribute to the improvement of quality of analyses. The two rock samples have also been thoroughly studied spectrographically and petrographically. Detailed reports of all the studies will be published.
Calculating p-values and their significances with the Energy Test for large datasets
NASA Astrophysics Data System (ADS)
Barter, W.; Burr, C.; Parkes, C.
2018-04-01
The energy test method is a multi-dimensional test of whether two samples are consistent with arising from the same underlying population, through the calculation of a single test statistic (called the T-value). The method has recently been used in particle physics to search for samples that differ due to CP violation. The generalised extreme value function has previously been used to describe the distribution of T-values under the null hypothesis that the two samples are drawn from the same underlying population. We show that, in a simple test case, the distribution is not sufficiently well described by the generalised extreme value function. We present a new method, where the distribution of T-values under the null hypothesis when comparing two large samples can be found by scaling the distribution found when comparing small samples drawn from the same population. This method can then be used to quickly calculate the p-values associated with the results of the test.
Novikov, I; Fund, N; Freedman, L S
2010-01-15
Different methods for the calculation of sample size for simple logistic regression (LR) with one normally distributed continuous covariate give different results. Sometimes the difference can be large. Furthermore, some methods require the user to specify the prevalence of cases when the covariate equals its population mean, rather than the more natural population prevalence. We focus on two commonly used methods and show through simulations that the power for a given sample size may differ substantially from the nominal value for one method, especially when the covariate effect is large, while the other method performs poorly if the user provides the population prevalence instead of the required parameter. We propose a modification of the method of Hsieh et al. that requires specification of the population prevalence and that employs Schouten's sample size formula for a t-test with unequal variances and group sizes. This approach appears to increase the accuracy of the sample size estimates for LR with one continuous covariate.
DAQ application of PC oscilloscope for chaos fiber-optic fence system based on LabVIEW
NASA Astrophysics Data System (ADS)
Lu, Manman; Fang, Nian; Wang, Lutang; Huang, Zhaoming; Sun, Xiaofei
2011-12-01
In order to obtain simultaneously high sample rate and large buffer in data acquisition (DAQ) for a chaos fiber-optic fence system, we developed a double-channel high-speed DAQ application of a digital oscilloscope of PicoScope 5203 based on LabVIEW. We accomplished it by creating call library function (CLF) nodes to call the DAQ functions in the two dynamic link libraries (DLLs) of PS5000.dll and PS5000wrap.dll provided by Pico Technology Company. The maximum real-time sample rate of the DAQ application can reach 1GS/s. We can control the resolutions of the application at the sample time and data amplitudes by changing their units in the block diagram, and also control the start and end times of the sampling operations. The experimental results show that the application has enough high sample rate and large buffer to meet the demanding DAQ requirements of the chaos fiber-optic fence system.
The variation of corrosion potential with time for coated metal surfaces
NASA Technical Reports Server (NTRS)
Danford, M. D.; Knockemus, W. W.
1986-01-01
The variation of corrosion potential (EsubCORR) with time has been measured for 4130 steel coated with a preservative compound and for primer coated 2219-T87 aluminum. The data for coated steel samples show a great deal of scatter, and a smoothing procedure has been developed to enable proper interpretation of the data. The EsubCORR-time curves for coated steel exhibit a maximum, in agreement with the results of previous studies, where the data were the average of those for a large number of samples, while the present data were obtained from a single sample. In contrast, the EsubCORR-time curves for primer coated 2219-T87 aluminum samples show no significant variations, although considerable activity is indicated by the resistance-time and corrosion rate-time curves.
NASA Astrophysics Data System (ADS)
Hamidi, S. M.; Behjati, S.
2018-02-01
Here we introduce large area plasmonic touching triangular dimers by angle controlled colloidal nanolithography to use them as an efficient multi channel absorber and also high figure of merit sensors. For this purpose, we coated gold thin films onto nanometric and also micrometric polystyrene hexagonal closed packed masks in different deposition angles and also diverse substrate polar angles. Our prepared samples, after remove masks, show large area touching triangular pattern with different inter particle distances in greater polar angles. To get more sense about optical response of the samples such as transmittance and also electric field distribution, we use finite difference time domain method in simulation part. The transmittance plot shows one narrow or multi-channel adjustable deep depend on inter-particle distances which can be controlled by azimuthally angle in nano lithography process. Also, due to the isoelliptical points in the transmittance spectra; we can see the bright and dark plasmon modes coupling and thus the Fano like resonance takes place in the optical spectral region which is very useful for refractive index measurement.
Self-Esteem Development across the Life Span: A Longitudinal Study with a Large Sample from Germany
ERIC Educational Resources Information Center
Orth, Ulrich; Maes, Jürgen; Schmitt, Manfred
2015-01-01
The authors examined the development of self-esteem across the life span. Data came from a German longitudinal study with 3 assessments across 4 years of a sample of 2,509 individuals ages 14 to 89 years. The self-esteem measure used showed strong measurement invariance across assessments and birth cohorts. Latent growth curve analyses indicated…
ERIC Educational Resources Information Center
Mattern, Krista D.; Patterson, Brian F.
2011-01-01
This report presents the findings from a replication of the analyses from the report, "Is Performance on the SAT Related to College Retention?" (Mattern & Patterson, 2009). The tables presented herein are based on the 2007 sample and the findings are largely the same as those presented in the original report, and show SAT scores are…
NASA Astrophysics Data System (ADS)
Schreiber, G. A.; Leffke, A.; Mager, M.; Helle, N.; Bögl, K. W.
1994-11-01
Forty-nine pepper samples were taken from retail food stores of different cities in Germany. Most of the black and all white pepper samples showed high viscosity values after jellification in alkaline solution. After irradiation with a γ-ray dose of 6 kGy, viscosity was largely reduced in each case. Some black pepper samples showed a low viscosity level already before irradiation. However, thermoluminescence analysis did not reveal any sign for irradiation treatment prior to examination. Furthermore, the low viscosity level of these samples could not be correlated with a low starch content. It is concluded that the viscosity levels of irradiated white pepper samples clearly reveal high dose irradiation treatment. In case of black peppers it is judged that the method can be used to screen for irradiated samples since it is fast, easy and cheap. However, a positive result should be confirmed by another technique, e.g. thermoluminescence.
Large-area synthesis of high-quality and uniform monolayer WS2 on reusable Au foils
Gao, Yang; Liu, Zhibo; Sun, Dong-Ming; Huang, Le; Ma, Lai-Peng; Yin, Li-Chang; Ma, Teng; Zhang, Zhiyong; Ma, Xiu-Liang; Peng, Lian-Mao; Cheng, Hui-Ming; Ren, Wencai
2015-01-01
Large-area monolayer WS2 is a desirable material for applications in next-generation electronics and optoelectronics. However, the chemical vapour deposition (CVD) with rigid and inert substrates for large-area sample growth suffers from a non-uniform number of layers, small domain size and many defects, and is not compatible with the fabrication process of flexible devices. Here we report the self-limited catalytic surface growth of uniform monolayer WS2 single crystals of millimetre size and large-area films by ambient-pressure CVD on Au. The weak interaction between the WS2 and Au enables the intact transfer of the monolayers to arbitrary substrates using the electrochemical bubbling method without sacrificing Au. The WS2 shows high crystal quality and optical and electrical properties comparable or superior to mechanically exfoliated samples. We also demonstrate the roll-to-roll/bubbling production of large-area flexible films of uniform monolayer, double-layer WS2 and WS2/graphene heterostructures, and batch fabrication of large-area flexible monolayer WS2 film transistor arrays. PMID:26450174
Large-Angular-Scale Clustering as a Clue to the Source of UHECRs
NASA Astrophysics Data System (ADS)
Berlind, Andreas A.; Farrar, Glennys R.
We explore what can be learned about the sources of UHECRs from their large-angular-scale clustering (referred to as their "bias" by the cosmology community). Exploiting the clustering on large scales has the advantage over small-scale correlations of being insensitive to uncertainties in source direction from magnetic smearing or measurement error. In a Cold Dark Matter cosmology, the amplitude of large-scale clustering depends on the mass of the system, with more massive systems such as galaxy clusters clustering more strongly than less massive systems such as ordinary galaxies or AGN. Therefore, studying the large-scale clustering of UHECRs can help determine a mass scale for their sources, given the assumption that their redshift depth is as expected from the GZK cutoff. We investigate the constraining power of a given UHECR sample as a function of its cutoff energy and number of events. We show that current and future samples should be able to distinguish between the cases of their sources being galaxy clusters, ordinary galaxies, or sources that are uncorrelated with the large-scale structure of the universe.
Implicit Social Biases in People with Autism
Birmingham, Elina; Stanley, Damian; Nair, Remya; Adolphs, Ralph
2015-01-01
Implicit social biases are ubiquitous and are known to influence social behavior. A core diagnostic criterion of Autism Spectrum Disorder (ASD) is abnormal social behavior. Here we investigated the extent to which individuals with ASD might show a specific attenuation of implicit social biases, using the Implicit Association Test (IAT) across Social (gender, race) and Nonsocial (flowers/insect, shoes) categories. High-functioning adults with ASD showed intact but reduced IAT effects relative to healthy controls. Importantly, we observed no selective attenuation of implicit social (vs. nonsocial) biases in our ASD population. To extend these results, we collected data from a large online sample of the general population, and explored correlations between autistic traits and IAT effects. No associations were found between autistic traits and IAT effects for any of the categories tested in our online sample. Taken together, these results suggest that implicit social biases, as measured by the IAT, are largely intact in ASD. PMID:26386014
Large magnetoresistance in antiferromagnetic CaMnO3-δ
NASA Astrophysics Data System (ADS)
Zeng, Z.; Greenblatt, M.; Croft, M.
1999-04-01
CaMnO3-δ with δ=0, 0.06, and 0.11 was prepared by the Pechini citrate gel process at 1100 °C. Oxygen defects were created by quenching the sample from high temperature. Chemical analysis and x-ray absorption show that the formal valence of Mn in CaMnO3 is close to 4+, and that Mn(III) is created in the quenched samples. Moreover the x-ray absorption near-edge spectra results support the creation of two Mn(III) five coordinate sites for each O vacancy. CaMnO3-δ (δ=0-0.11) are n-type semiconductors and order antiferromagnatically with Néel temperatures close to 125 K. The activation energy decreases with increasing δ. A relatively large (~40%) negative magnetoresistance (MR) is observed for CaMnO2.89. This result shows that a substantial MR can occur in these G-type antiferromagnetic materials.
Jahandar Lashaki, Masoud; Ziaei-Azad, Hessam; Sayari, Abdelhamid
2017-10-23
The hydrothermal stability of triamine-grafted, large-pore SBA-15 CO 2 adsorbents was studied by using steam stripping. Following two 3 h cycles of steam regeneration, lower CO 2 uptakes, lower CO 2 /N ratios, and slower adsorption kinetics were observed relative to fresh samples, particularly at the lowest adsorption temperature (25 °C). CO 2 adsorption measurements for a selected sample exposed to 48 h of steam stripping depicted that after the initial loss during the first exposure to steam (3-6 h), the adsorptive properties stabilized. For higher adsorption temperatures (i.e., 50 and 75 °C), however, all adsorptive properties remained almost unchanged after steaming, indicating the significance of diffusional limitations. Thermogravimetric analysis and FTIR spectroscopy on grafted samples before and after steam stripping showed no amine leaching and no change in the chemical nature of the amine groups, respectively. Also, a six-cycle CO 2 adsorption/desorption experiment under dry conditions showed no thermal degradation. However, N 2 adsorption measurement at 77 K showed significant reductions in the BET surface area of the grafted samples following steaming. Based on the pore size distribution of calcined, grafted samples before and after steaming, it is proposed that exposure to steam restructured the grafted materials, causing mass transfer resistance. It is inferred that triamine-grafted, large-pore SBA-15 adsorbents are potential candidates for CO 2 capture at relatively high temperatures (50-75 °C; for example, flue gas) combined with steam regeneration. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Evaluating information content of SNPs for sample-tagging in re-sequencing projects.
Hu, Hao; Liu, Xiang; Jin, Wenfei; Hilger Ropers, H; Wienker, Thomas F
2015-05-15
Sample-tagging is designed for identification of accidental sample mix-up, which is a major issue in re-sequencing studies. In this work, we develop a model to measure the information content of SNPs, so that we can optimize a panel of SNPs that approach the maximal information for discrimination. The analysis shows that as low as 60 optimized SNPs can differentiate the individuals in a population as large as the present world, and only 30 optimized SNPs are in practice sufficient in labeling up to 100 thousand individuals. In the simulated populations of 100 thousand individuals, the average Hamming distances, generated by the optimized set of 30 SNPs are larger than 18, and the duality frequency, is lower than 1 in 10 thousand. This strategy of sample discrimination is proved robust in large sample size and different datasets. The optimized sets of SNPs are designed for Whole Exome Sequencing, and a program is provided for SNP selection, allowing for customized SNP numbers and interested genes. The sample-tagging plan based on this framework will improve re-sequencing projects in terms of reliability and cost-effectiveness.
Statistical inference involving binomial and negative binomial parameters.
García-Pérez, Miguel A; Núñez-Antón, Vicente
2009-05-01
Statistical inference about two binomial parameters implies that they are both estimated by binomial sampling. There are occasions in which one aims at testing the equality of two binomial parameters before and after the occurrence of the first success along a sequence of Bernoulli trials. In these cases, the binomial parameter before the first success is estimated by negative binomial sampling whereas that after the first success is estimated by binomial sampling, and both estimates are related. This paper derives statistical tools to test two hypotheses, namely, that both binomial parameters equal some specified value and that both parameters are equal though unknown. Simulation studies are used to show that in small samples both tests are accurate in keeping the nominal Type-I error rates, and also to determine sample size requirements to detect large, medium, and small effects with adequate power. Additional simulations also show that the tests are sufficiently robust to certain violations of their assumptions.
Puzzle Imaging: Using Large-Scale Dimensionality Reduction Algorithms for Localization
Glaser, Joshua I.; Zamft, Bradley M.; Church, George M.; Kording, Konrad P.
2015-01-01
Current high-resolution imaging techniques require an intact sample that preserves spatial relationships. We here present a novel approach, “puzzle imaging,” that allows imaging a spatially scrambled sample. This technique takes many spatially disordered samples, and then pieces them back together using local properties embedded within the sample. We show that puzzle imaging can efficiently produce high-resolution images using dimensionality reduction algorithms. We demonstrate the theoretical capabilities of puzzle imaging in three biological scenarios, showing that (1) relatively precise 3-dimensional brain imaging is possible; (2) the physical structure of a neural network can often be recovered based only on the neural connectivity matrix; and (3) a chemical map could be reproduced using bacteria with chemosensitive DNA and conjugative transfer. The ability to reconstruct scrambled images promises to enable imaging based on DNA sequencing of homogenized tissue samples. PMID:26192446
Dynamical properties of the brain tissue under oscillatory shear stresses at large strain range
NASA Astrophysics Data System (ADS)
Boudjema, F.; Khelidj, B.; Lounis, M.
2017-01-01
In this experimental work, we study the viscoelastic behaviour of in vitro brain tissue, particularly the white matter, under oscillatory shear strain. The selective vulnerability of this tissue is the anisotropic mechanical properties of theirs different regions lead to a sensitivity to the angular shear rate and magnitude of strain. For this aim, shear storage modulus (G‧) and loss modulus (G″) were measured over a range of frequencies (1 to 100 Hz), for different levels of strain (1 %, to 50 %). The mechanical responses of the brain matter samples showed a viscoelastic behaviour that depend on the correlated strain level and frequency range and old age sample. The samples have been showed evolution behaviour by increasing then decreasing the strain level. Also, the stiffness anisotropy of brain matter was showed between regions and species.
David, Victor; Galaon, Toma; Aboul-Enein, Hassan Y
2014-01-03
Recent studies showed that injection of large volume of hydrophobic solvents used as sample diluents could be applied in reversed-phase liquid chromatography (RP-LC). This study reports a systematic research focused on the influence of a series of aliphatic alcohols (from methanol to 1-octanol) on the retention process in RP-LC, when large volumes of sample are injected on the column. Several model analytes with low hydrophobic character were studied by RP-LC process, for mobile phases containing methanol or acetonitrile as organic modifiers in different proportions with aqueous component. It was found that starting with 1-butanol, the aliphatic alcohols can be used as sample solvents and they can be injected in high volumes, but they may influence the retention factor and peak shape of the dissolved solutes. The dependence of the retention factor of the studied analytes on the injection volume of these alcohols is linear, with a decrease of its value as the sample volume is increased. The retention process in case of injecting up to 200μL of upper alcohols is dependent also on the content of the organic modifier (methanol or acetonitrile) in mobile phase. Copyright © 2013 Elsevier B.V. All rights reserved.
High-speed adaptive contact-mode atomic force microscopy imaging with near-minimum-force
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, Juan; Zou, Qingze, E-mail: qzzou@rci.rutgers.edu
In this paper, an adaptive contact-mode imaging approach is proposed to replace the traditional contact-mode imaging by addressing the major concerns in both the speed and the force exerted to the sample. The speed of the traditional contact-mode imaging is largely limited by the need to maintain precision tracking of the sample topography over the entire imaged sample surface, while large image distortion and excessive probe-sample interaction force occur during high-speed imaging. In this work, first, the image distortion caused by the topography tracking error is accounted for in the topography quantification. Second, the quantified sample topography is utilized inmore » a gradient-based optimization method to adjust the cantilever deflection set-point for each scanline closely around the minimal level needed for maintaining stable probe-sample contact, and a data-driven iterative feedforward control that utilizes a prediction of the next-line topography is integrated to the topography feeedback loop to enhance the sample topography tracking. The proposed approach is demonstrated and evaluated through imaging a calibration sample of square pitches at both high speeds (e.g., scan rate of 75 Hz and 130 Hz) and large sizes (e.g., scan size of 30 μm and 80 μm). The experimental results show that compared to the traditional constant-force contact-mode imaging, the imaging speed can be increased by over 30 folds (with the scanning speed at 13 mm/s), and the probe-sample interaction force can be reduced by more than 15% while maintaining the same image quality.« less
High-speed adaptive contact-mode atomic force microscopy imaging with near-minimum-force.
Ren, Juan; Zou, Qingze
2014-07-01
In this paper, an adaptive contact-mode imaging approach is proposed to replace the traditional contact-mode imaging by addressing the major concerns in both the speed and the force exerted to the sample. The speed of the traditional contact-mode imaging is largely limited by the need to maintain precision tracking of the sample topography over the entire imaged sample surface, while large image distortion and excessive probe-sample interaction force occur during high-speed imaging. In this work, first, the image distortion caused by the topography tracking error is accounted for in the topography quantification. Second, the quantified sample topography is utilized in a gradient-based optimization method to adjust the cantilever deflection set-point for each scanline closely around the minimal level needed for maintaining stable probe-sample contact, and a data-driven iterative feedforward control that utilizes a prediction of the next-line topography is integrated to the topography feeedback loop to enhance the sample topography tracking. The proposed approach is demonstrated and evaluated through imaging a calibration sample of square pitches at both high speeds (e.g., scan rate of 75 Hz and 130 Hz) and large sizes (e.g., scan size of 30 μm and 80 μm). The experimental results show that compared to the traditional constant-force contact-mode imaging, the imaging speed can be increased by over 30 folds (with the scanning speed at 13 mm/s), and the probe-sample interaction force can be reduced by more than 15% while maintaining the same image quality.
Analysing home-ownership of couples: the effect of selecting couples at the time of the survey.
Mulder, C H
1996-09-01
"The analysis of events encountered by couple and family households may suffer from sample selection bias when data are restricted to couples existing at the moment of interview. The paper discusses the effect of sample selection bias on event history analyses of buying a home [in the Netherlands] by comparing analyses performed on a sample of existing couples with analyses of a more complete sample including past as well as current partner relationships. The results show that, although home-buying in relationships that have ended differs clearly from behaviour in existing relationships, sample selection bias is not alarmingly large." (SUMMARY IN FRE) excerpt
Cyclic Oxidation of FeCrAlY/Al2O3 Composites
NASA Technical Reports Server (NTRS)
Nesbitt, James A.; Draper, Susan L.; Barrett, Charles A.
1999-01-01
Three-ply FeCrAlY/Al2O3 composites and FeCrAlY matrix-only samples were cyclically oxidized at 1000 C and 1100 C for up to 1000 1-hr cycles. Fiber ends were exposed at the ends of the composite samples. Following cyclic oxidation, cracks running parallel to and perpendicular to the fibers were observed on the large surface of the composite. In addition, there was evidence of increased scale damage and spallation around the exposed fiber ends, particularly around the middle ply fibers. This damage was more pronounced at the higher temperature. The exposed fiber ends showed cracking between fibers in the outer plies, occasionally with Fe and Cr-rich oxides growing out of the cracks. Large gaps developed at the fiber/matrix interface around many of the fibers, especially those in the outer plies. Oxygen penetrated many of these gaps resulting in significant oxide formation at the fiber/matrix interface far within the composite sample. Around several fibers, the matrix was also internally oxidized showing Al2O3 precipitates in a radial band around the fibers. The results show that these composites have poor cyclic oxidation resistance due to the CTE mismatch and inadequate fiber/matrix bond strength at temperatures of 1000 C and above.
Eichelsheim, Veroni I; Buist, Kirsten L; Deković, Maja; Wissink, Inge B; Frijns, Tom; van Lier, Pol A C; Koot, Hans M; Meeus, Wim H J
2010-03-01
The aim of the present study is to examine whether the patterns of association between the quality of the parent-adolescent relationship on the one hand, and aggression and delinquency on the other hand, are the same for boys and girls of Dutch and Moroccan origin living in the Netherlands. Since inconsistent results have been found previously, the present study tests the replicability of the model of associations in two different Dutch samples of adolescents. Study 1 included 288 adolescents (M age = 14.9, range 12-17 years) all attending lower secondary education. Study 2 included 306 adolescents (M age = 13.2, range = 12-15 years) who were part of a larger community sample with oversampling of at risk adolescents. Multigroup structural analyses showed that neither in Study 1 nor in Study 2 ethnic or gender differences were found in the patterns of associations between support, autonomy, disclosure, and negativity in the parent-adolescent relationship and aggression and delinquency. The patterns were largely similar for both studies. Mainly negative quality of the relationship in both studies was found to be strongly related to both aggression and delinquency. Results show that family processes that affect adolescent development, show a large degree of universality across gender and ethnicity.
Comparing Ancient DNA Preservation in Petrous Bone and Tooth Cementum
Margaryan, Ashot; Stenderup, Jesper; Lynnerup, Niels; Willerslev, Eske; Allentoft, Morten E.
2017-01-01
Large-scale genomic analyses of ancient human populations have become feasible partly due to refined sampling methods. The inner part of petrous bones and the cementum layer in teeth roots are currently recognized as the best substrates for such research. We present a comparative analysis of DNA preservation in these two substrates obtained from the same human skulls, across a range of different ages and preservation environments. Both substrates display significantly higher endogenous DNA content (average of 16.4% and 40.0% for teeth and petrous bones, respectively) than parietal skull bone (average of 2.2%). Despite sample-to-sample variation, petrous bone overall performs better than tooth cementum (p = 0.001). This difference, however, is driven largely by a cluster of viking skeletons from one particular locality, showing relatively poor molecular tooth preservation (<10% endogenous DNA). In the remaining skeletons there is no systematic difference between the two substrates. A crude preservation (good/bad) applied to each sample prior to DNA-extraction predicted the above/below 10% endogenous DNA threshold in 80% of the cases. Interestingly, we observe signficantly higher levels of cytosine to thymine deamination damage and lower proportions of mitochondrial/nuclear DNA in petrous bone compared to tooth cementum. Lastly, we show that petrous bones from ancient cremated individuals contain no measurable levels of authentic human DNA. Based on these findings we discuss the pros and cons of sampling the different elements. PMID:28129388
NASA Astrophysics Data System (ADS)
Cao, Feidao; Zhao, Huaici; Liu, Pengfei
2017-11-01
Generative adversarial networks (GANs) has achieved success in many fields. However, there are some samples generated by many GAN-based works, whose structure is ambiguous. In this work, we propose Structure Guided GANs that introduce structural similar into GANs to overcome the problem. In order to achieve our goal, we introduce an encoder and a decoder into a generator to design a new generator and take real samples as part of the input of a generator. And we modify the loss function of the generator accordingly. By comparison with WGAN, experimental results show that our proposed method overcomes largely sample structure ambiguous and can generate higher quality samples.
Effects of core retrieval, handling, and preservation on hydrate-bearing samples
NASA Astrophysics Data System (ADS)
Kneafsey, T. J.; Lu, H.; Winters, W. J.; Hunter, R. B.
2009-12-01
Recovery, preservation, storage, and transport of samples containing natural gas hydrate cause changes in the stress conditions, temperature, pressure, and hydrate saturation of samples. Sample handling at the ground surface and sample preservation, either by freezing in liquid nitrogen (LN) or repressurization using methane, provides additional time and driving forces for sample alteration. The extent to which these disturbances alter the properties of the hydrate bearing sediments (HBS) depend on specific sample handling techniques, as well as on the sample itself. HBS recovered during India’s National Gas Hydrate Program (NGHP) Expedition 01 and the 2007 BP Exploration Alaska - Department of Energy - U.S. Geological Survey (BP-DOE-USGS) Mount Elbert (ME) gas hydrate well on the Alaskan North Slope provide comparisons of sample alterations induced by multiple handling techniques. HBS samples from the NGHP and the ME projects were examined using x-ray computed tomography. Mount Elbert sand samples initially preserved in LN have non-uniform short “crack-like” low-density zones in the center that probably do not extend to the outside perimeter. Samples initially preserved by repressurization show fewer “crack-like” features and higher densities. Two samples were analyzed in detail by Lu and coworkers showing reduced hydrate saturations approaching the outer surface, while substantial hydrate remained in the central region. Non-pressure cored NGHP samples show relatively large altered regions approaching the core surface, while pressure-cored-liquid-nitrogen preserved samples have much less alteration.
Predicting sample lifetimes in creep fracture of heterogeneous materials
NASA Astrophysics Data System (ADS)
Koivisto, Juha; Ovaska, Markus; Miksic, Amandine; Laurson, Lasse; Alava, Mikko J.
2016-08-01
Materials flow—under creep or constant loads—and, finally, fail. The prediction of sample lifetimes is an important and highly challenging problem because of the inherently heterogeneous nature of most materials that results in large sample-to-sample lifetime fluctuations, even under the same conditions. We study creep deformation of paper sheets as one heterogeneous material and thus show how to predict lifetimes of individual samples by exploiting the "universal" features in the sample-inherent creep curves, particularly the passage to an accelerating creep rate. Using simulations of a viscoelastic fiber bundle model, we illustrate how deformation localization controls the shape of the creep curve and thus the degree of lifetime predictability.
Processing and performance of self-healing materials
NASA Astrophysics Data System (ADS)
Tan, P. S.; Zhang, M. Q.; Bhattacharyya, D.
2009-08-01
Two self-healing methods were implemented into composite materials with self-healing capabilities, using hollow glass fibres (HGF) and microencapsulated epoxy resin with mercaptan as the hardener. For the HGF approach, two perpendicular layers of HGF were put into an E-glass/epoxy composite, and were filled with coloured epoxy resin and hardener. The HGF samples had a novel ball indentation test method done on them. The samples were analysed using micro-CT scanning, confocal microscopy and penetrant dye. Micro-CT and confocal microscopy produced limited success, but their viability was established. Penetrant dye images showed resin obstructing flow of dye through damage regions, suggesting infiltration of resin into cracks. Three-point bend tests showed that overall performance could be affected by the flaws arising from embedding HGF in the material. For the microcapsule approach, samples were prepared for novel double-torsion tests used to generate large cracks. The samples were compared with pure resin samples by analysing them using photoelastic imaging and scanning electron microscope (SEM) on crack surfaces. Photoelastic imaging established the consolidation of cracks while SEM showed a wide spread of microcapsules with their distribution being affected by gravity. Further double-torsion testing showed that healing recovered approximately 24% of material strength.
Zhang, Qi; Yang, Xiong; Hu, Qinglei; Bai, Ke; Yin, Fangfang; Li, Ning; Gang, Yadong; Wang, Xiaojun; Zeng, Shaoqun
2017-01-01
To resolve fine structures of biological systems like neurons, it is required to realize microscopic imaging with sufficient spatial resolution in three dimensional systems. With regular optical imaging systems, high lateral resolution is accessible while high axial resolution is hard to achieve in a large volume. We introduce an imaging system for high 3D resolution fluorescence imaging of large volume tissues. Selective plane illumination was adopted to provide high axial resolution. A scientific CMOS working in sub-array mode kept the imaging area in the sample surface, which restrained the adverse effect of aberrations caused by inclined illumination. Plastic embedding and precise mechanical sectioning extended the axial range and eliminated distortion during the whole imaging process. The combination of these techniques enabled 3D high resolution imaging of large tissues. Fluorescent bead imaging showed resolutions of 0.59 μm, 0.47μm, and 0.59 μm in the x, y, and z directions, respectively. Data acquired from the volume sample of brain tissue demonstrated the applicability of this imaging system. Imaging of different depths showed uniform performance where details could be recognized in either the near-soma area or terminal area, and fine structures of neurons could be seen in both the xy and xz sections. PMID:29296503
NASA Astrophysics Data System (ADS)
Ferrari, Ulisse
A maximal entropy model provides the least constrained probability distribution that reproduces experimental averages of an observables set. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show how the steepest descent dynamics is not optimal as it is slowed down by the inhomogeneous curvature of the model parameters space. We then provide a way for rectifying this space which relies only on dataset properties and does not require large computational efforts. We conclude by solving the long-time limit of the parameters dynamics including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework, rather than converging to a fixed point, the dynamics reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We sum up all these insights in a ``rectified'' Data-Driven algorithm that is fast and by sampling from the parameters posterior avoids both under- and over-fitting along all the directions of the parameters space. Through the learning of pairwise Ising models from the recording of a large population of retina neurons, we show how our algorithm outperforms the steepest descent method. This research was supported by a Grant from the Human Brain Project (HBP CLAP).
Tillmann, J; Ashwood, K; Absoud, M; Bölte, S; Bonnet-Brilhault, F; Buitelaar, J K; Calderoni, S; Calvo, R; Canal-Bedia, R; Canitano, R; De Bildt, A; Gomot, M; Hoekstra, P J; Kaale, A; McConachie, H; Murphy, D G; Narzisi, A; Oosterling, I; Pejovic-Milovancevic, M; Persico, A M; Puig, O; Roeyers, H; Rommelse, N; Sacco, R; Scandurra, V; Stanfield, A C; Zander, E; Charman, T
2018-02-21
Research on sex-related differences in Autism Spectrum Disorder (ASD) has been impeded by small samples. We pooled 28 datasets from 18 sites across nine European countries to examine sex differences in the ASD phenotype on the ADI-R (376 females, 1763 males) and ADOS (233 females, 1187 males). On the ADI-R, early childhood restricted and repetitive behaviours were lower in females than males, alongside comparable levels of social interaction and communication difficulties in females and males. Current ADI-R and ADOS scores showed no sex differences for ASD severity. There were lower socio-communicative symptoms in older compared to younger individuals. This large European ASD sample adds to the literature on sex and age variations of ASD symptomatology.
Size dependent exchange bias in single-phase Zn0.3Ni0.7Fe2O4 ferrite nanoparticles
NASA Astrophysics Data System (ADS)
Mohan, Rajendra; Ghosh, Mritunjoy Prasad; Mukherjee, Samrat
2018-07-01
We report the microstructural and magnetic characterization of single phase nanocrystalline partially inverted Zn0.3Ni0.7Fe2O4 mixed spinel ferrite. The samples were annealed at 200 °C, 400 °C, 600 °C, 800 °C and 1000 °C. X-ray diffraction results indicate phase purity of all the samples and application of Debye- Scherrer yielded a crystallite size variation from 5 nm to 33 nm for the different samples. Magnetic measurements have revealed the freezing of interfacial spins which were the cause of the large horizontal M-H loop shift causing large exchange bias with high anisotropy. The magnetic measurements show a hysteresis loop with high effective anisotropy constant due to highly magnetically disordered surface spin at 5 K.
Cr3+ and Nb5+ co-doped Ti2Nb10O29 materials for high-performance lithium-ion storage
NASA Astrophysics Data System (ADS)
Yang, Chao; Yu, Shu; Ma, Yu; Lin, Chunfu; Xu, Zhihao; Zhao, Hua; Wu, Shunqing; Zheng, Peng; Zhu, Zi-Zhong; Li, Jianbao; Wang, Ning
2017-08-01
Ti2Nb10O29 is an advanced anode material for lithium-ion batteries due to its large specific capacity and high safety. However, its poor electronic/ionic conductivity significantly limits its rate capability. To tackle this issue, a Cr3+-Nb5+ co-doping is employed, and a series of CrxTi2-2xNb10+xO29 compounds are prepared. The co-doping does not change the Wadsley-Roth shear structure but increases the unit-cell volume and decreases the particle size. Due to the increased unit-cell volumes, the co-doped samples show increased Li+-ion diffusion coefficients. Experimental data and first-principle calculations reveal significantly increased electronic conductivities arising from the formation of impurity bands after the co-doping. The improvements of the electronic/ionic conductivities and the smaller particle sizes in the co-doped samples significantly contribute to improving their electrochemical properties. During the first cycle at 0.1 C, the optimized Cr0.6Ti0.8Nb10.6O29 sample delivers a large reversible capacity of 322 mAh g-1 with a large first-cycle Coulombic efficiency of 94.7%. At 10 C, it retains a large capacity of 206 mAh g-1, while that of Ti2Nb10O29 is only 80 mAh g-1. Furthermore, Cr0.6Ti0.8Nb10.6O29 shows high cyclic stability as demonstrated in over 500 cycles at 10 C with tiny capacity loss of only 0.01% per cycle.
A summary of phase analysis on Apollo 14 samples
NASA Technical Reports Server (NTRS)
Fredriksson, K.; Nelen, J.; Noonan, A.; Kraut, F.
1971-01-01
The results of an analysis of lunar samples from Apollo 14 are presented. The large number of breccias returned from the Fra Mauro formation show that impact events are an important rock forming mechanism on the moon. Larger rocks as well as micro breccias bear structural resemblance to brecciated chondrites and terrestrial impactites. Many show evidence of repetitious events of break-up and accumulation welding. The surface of the regolith has become thoroughly mixed by this process. Most components however appear locally derived from basalts rich in feldspar and Kreep components, similar to rocks such as 14310.
A cautionary note on Bayesian estimation of population size by removal sampling with diffuse priors.
Bord, Séverine; Bioche, Christèle; Druilhet, Pierre
2018-05-01
We consider the problem of estimating a population size by removal sampling when the sampling rate is unknown. Bayesian methods are now widespread and allow to include prior knowledge in the analysis. However, we show that Bayes estimates based on default improper priors lead to improper posteriors or infinite estimates. Similarly, weakly informative priors give unstable estimators that are sensitive to the choice of hyperparameters. By examining the likelihood, we show that population size estimates can be stabilized by penalizing small values of the sampling rate or large value of the population size. Based on theoretical results and simulation studies, we propose some recommendations on the choice of the prior. Then, we applied our results to real datasets. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Davis, T M; Parsons, C M; Utterback, P L; Kirstein, D
2015-05-01
Sixteen meat and bone meal (MBM) samples were obtained and selected from various company plants to provide a wide range in pepsin nitrogen digestibility values. Pepsin digestibility was determined using either 0.02 or 0.002% pepsin. Amino acid (AA) digestibility of the 16 MBM samples was then determined using a precision-fed cecectomized rooster assay. The 0.02% pepsin digestibility values were numerically higher than the 0.002% pepsin values. The values varied from 77 to 93% for 0.02% pepsin and from 67 to 91% for 0.002% pepsin. The rooster AA digestibility results showed a wide range of values among MBM samples mostly due to the 4 samples having lowest and highest AA digestibility. A precision-fed broiler chick ileal AA digestibility assay confirmed that there were large differences in AA digestibility among the MBM samples having the lowest and highest rooster digestibility values. Correlation analyses between pepsin and AA digestibility values showed that the correlation values (r) were highly significant (P < 0.0001) for all AA when all 16 MBM samples were included in the analysis. However, when the MBM samples with the 2 lowest and the 2 highest rooster digestibility values were not included in the correlation analyses, the correlation coefficient values (r) were generally very low and not significant (P > 0.05). The results indicated that the pepsin nitrogen digestibility assay is only useful for detecting large differences in AA digestibility among MBM. There also was no advantage for using 0.02 versus 0.002% pepsin. © 2015 Poultry Science Association Inc.
ERIC Educational Resources Information Center
Hunter, Barbara; MacLean, Sarah; Berends, Lynda
2012-01-01
The purpose of this paper is to show how "realist synthesis" methodology (Pawson, 2002) was adapted to review a large sample of community based projects addressing alcohol and drug use problems. Our study drew on a highly varied sample of 127 projects receiving funding from a national non-government organisation in Australia between 2002…
NASA Astrophysics Data System (ADS)
Wang, Z.
2015-12-01
For decades, distributed and lumped hydrological models have furthered our understanding of hydrological system. The development of hydrological simulation in large scale and high precision elaborated the spatial descriptions and hydrological behaviors. Meanwhile, the new trend is also followed by the increment of model complexity and number of parameters, which brings new challenges of uncertainty quantification. Generalized Likelihood Uncertainty Estimation (GLUE) has been widely used in uncertainty analysis for hydrological models referring to Monte Carlo method coupled with Bayesian estimation. However, the stochastic sampling method of prior parameters adopted by GLUE appears inefficient, especially in high dimensional parameter space. The heuristic optimization algorithms utilizing iterative evolution show better convergence speed and optimality-searching performance. In light of the features of heuristic optimization algorithms, this study adopted genetic algorithm, differential evolution, shuffled complex evolving algorithm to search the parameter space and obtain the parameter sets of large likelihoods. Based on the multi-algorithm sampling, hydrological model uncertainty analysis is conducted by the typical GLUE framework. To demonstrate the superiority of the new method, two hydrological models of different complexity are examined. The results shows the adaptive method tends to be efficient in sampling and effective in uncertainty analysis, providing an alternative path for uncertainty quantilization.
Laboratory-based x-ray phase-contrast tomography enables 3D virtual histology
NASA Astrophysics Data System (ADS)
Töpperwien, Mareike; Krenkel, Martin; Quade, Felix; Salditt, Tim
2016-09-01
Due to the large penetration depth and small wavelength hard x-rays offer a unique potential for 3D biomedical and biological imaging, combining capabilities of high resolution and large sample volume. However, in classical absorption-based computed tomography, soft tissue only shows a weak contrast, limiting the actual resolution. With the advent of phase-contrast methods, the much stronger phase shift induced by the sample can now be exploited. For high resolution, free space propagation behind the sample is particularly well suited to make the phase shift visible. Contrast formation is based on the self-interference of the transmitted beam, resulting in object-induced intensity modulations in the detector plane. As this method requires a sufficiently high degree of spatial coherence, it was since long perceived as a synchrotron-based imaging technique. In this contribution we show that by combination of high brightness liquid-metal jet microfocus sources and suitable sample preparation techniques, as well as optimized geometry, detection and phase retrieval, excellent three-dimensional image quality can be obtained, revealing the anatomy of a cobweb spider in high detail. This opens up new opportunities for 3D virtual histology of small organisms. Importantly, the image quality is finally augmented to a level accessible to automatic 3D segmentation.
Stellar population of the superbubble N 206 in the LMC. I. Analysis of the Of-type stars
NASA Astrophysics Data System (ADS)
Ramachandran, Varsha; Hainich, R.; Hamann, W.-R.; Oskinova, L. M.; Shenar, T.; Sander, A. A. C.; Todt, H.; Gallagher, J. S.
2018-01-01
Context. Massive stars severely influence their environment by their strong ionizing radiation and by the momentum and kinetic energy input provided by their stellar winds and supernovae. Quantitative analyses of massive stars are required to understand how their feedback creates and shapes large scale structures of the interstellar medium. The giant H II region N 206 in the Large Magellanic Cloud contains an OB association that powers a superbubble filled with hot X-ray emitting gas, serving as an ideal laboratory in this context. Aims: We aim to estimate stellar and wind parameters of all OB stars in N 206 by means of quantitative spectroscopic analyses. In this first paper, we focus on the nine Of-type stars located in this region. We determine their ionizing flux and wind mechanical energy. The analysis of nitrogen abundances in our sample probes rotational mixing. Methods: We obtained optical spectra with the multi-object spectrograph FLAMES at the ESO-VLT. When possible, the optical spectroscopy was complemented by UV spectra from the HST, IUE, and FUSE archives. Detailed spectral classifications are presented for our sample Of-type stars. For the quantitative spectroscopic analysis we used the Potsdam Wolf-Rayet model atmosphere code. We determined the physical parameters and nitrogen abundances of our sample stars by fitting synthetic spectra to the observations. Results: The stellar and wind parameters of nine Of-type stars, which are largely derived from spectral analysis are used to construct wind momentum - luminosity relationship. We find that our sample follows a relation close to the theoretical prediction, assuming clumped winds. The most massive star in the N 206 association is an Of supergiant that has a very high mass-loss rate. Two objects in our sample reveal composite spectra, showing that the Of primaries have companions of late O subtype. All stars in our sample have an evolutionary age of less than 4 million yr, with the O2-type star being the youngest. All these stars show a systematic discrepancy between evolutionary and spectroscopic masses. All stars in our sample are nitrogen enriched. Nitrogen enrichment shows a clear correlation with increasing projected rotational velocities. Conclusions: The mechanical energy input from the Of stars alone is comparable to the energy stored in the N 206 superbubble as measured from the observed X-ray and Hα emission.
Enantiomerically enriched, polycrystalline molecular sieves
Brand, Stephen K.; Schmidt, Joel E.; Deem, Michael W.; ...
2017-05-01
Zeolite and zeolite-like molecular sieves are being used in a large number of applications such as adsorption and catalysis. Achievement of the long-standing goal of creating a chiral, polycrystalline molecular sieve with bulk enantioenrichment would enable these materials to perform enantioselective functions. Here, we report the synthesis of enantiomerically enriched samples of a molecular sieve. For this study, enantiopure organic structure directing agents are designed with the assistance of computational methods and used to synthesize enantioenriched, polycrystalline molecular sieve samples of either enantiomer. Computational results correctly predicted which enantiomer is obtained, and enantiomeric enrichment is proven by high-resolution transmission electronmore » microscopy. The enantioenriched and racemic samples of the molecular sieves are tested as adsorbents and heterogeneous catalysts. The enantioenriched molecular sieves show enantioselectivity for the ring opening reaction of epoxides and enantioselective adsorption of 2-butanol (the R enantiomer of the molecular sieve shows opposite and approximately equal enantioselectivity compared with the S enantiomer of the molecular sieve, whereas the racemic sample of the molecular sieve shows no enantioselectivity).« less
Enantiomerically enriched, polycrystalline molecular sieves
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brand, Stephen K.; Schmidt, Joel E.; Deem, Michael W.
Zeolite and zeolite-like molecular sieves are being used in a large number of applications such as adsorption and catalysis. Achievement of the long-standing goal of creating a chiral, polycrystalline molecular sieve with bulk enantioenrichment would enable these materials to perform enantioselective functions. Here, we report the synthesis of enantiomerically enriched samples of a molecular sieve. For this study, enantiopure organic structure directing agents are designed with the assistance of computational methods and used to synthesize enantioenriched, polycrystalline molecular sieve samples of either enantiomer. Computational results correctly predicted which enantiomer is obtained, and enantiomeric enrichment is proven by high-resolution transmission electronmore » microscopy. The enantioenriched and racemic samples of the molecular sieves are tested as adsorbents and heterogeneous catalysts. The enantioenriched molecular sieves show enantioselectivity for the ring opening reaction of epoxides and enantioselective adsorption of 2-butanol (the R enantiomer of the molecular sieve shows opposite and approximately equal enantioselectivity compared with the S enantiomer of the molecular sieve, whereas the racemic sample of the molecular sieve shows no enantioselectivity).« less
Phytoforensics—Using trees to find contamination
Wilson, Jordan L.
2017-09-28
The water we drink, air we breathe, and soil we come into contact with have the potential to adversely affect our health because of contaminants in the environment. Environmental samples can characterize the extent of potential contamination, but traditional methods for collecting water, air, and soil samples below the ground (for example, well drilling or direct-push soil sampling) are expensive and time consuming. Trees are closely connected to the subsurface and sampling tree trunks can indicate subsurface pollutants, a process called phytoforensics. Scientists at the Missouri Water Science Center were among the first to use phytoforensics to screen sites for contamination before using traditional sampling methods, to guide additional sampling, and to show the large cost savings associated with tree sampling compared to traditional methods.
Palhol, Fabien; Lamoureux, Catherine; Naulet, Norbert
2003-06-01
In this study the (15)N/(14)N isotopic ratios of 43 samples of 3,4-methylenedioxymethamphetamine (MDMA) samples were measured using Gas Chromatography-Combustion-Isotope-Ratio Mass Spectrometry (GC-C-IRMS). The results show a large discrimination between samples with a range of delta(15)N values between -16 and +19 per thousand. Comparison between delta(15)N values and other physical and chemical parameters shows a strong relationship between delta(15)N and brand logo or composition. Thus, it could be assumed that tablets from different seizures probably originated from the same clandestine manufacturing source. Hence, (15)N isotopic parameters provide an important additional tool to establish common origins between seizures of clandestine synthetic drugs.
Reduction and analysis of VLA maps for 281 radio-loud quasars using the UNLV Cray Y-MP supercomputer
NASA Technical Reports Server (NTRS)
Ding, Ailian; Hintzen, Paul; Weistrop, Donna; Owen, Frazer
1993-01-01
The identification of distorted radio-loud quasars provides a potentially very powerful tool for basic cosmological studies. If large morphological distortions are correlated with membership of the quasars in rich clusters of galaxies, optical observations can be used to identify rich clusters of galaxies at large redshifts. Hintzen, Ulvestad, and Owen (1983, HUO) undertook a VLA A array snapshot survey at 20 cm of 123 radio-loud quasars, and they found that among triple sources in their sample, 17 percent had radio axes which were bent more than 20 deg and 5 percent were bent more than 40 deg. Their subsequent optical observations showed that excess galaxy densities within 30 arcsec of 6 low-redshift distorted quasars were on average 3 times as great as those around undistorted quasars (Hintzen 1984). At least one of the distorted quasars observed, 3C275.1, apparently lies in the first-ranked galaxy at the center of a rich cluster of galaxies (Hintzen and Romanishin, 1986). Although their sample was small, these results indicated that observations of distorted quasars could be used to identify clusters of galaxies at large redshifts. The purpose of this project is to increase the available sample of distorted quasars to allow optical detection of a significant sample of quasar-associated clusters of galaxies at large redshifts.
Romppel, Matthias; Hinz, Andreas; Finck, Carolyn; Young, Jeremy; Brähler, Elmar; Glaesmer, Heide
2017-12-01
While the General Health Questionnaire, 12-item version (GHQ-12) has been widely used in cross-cultural comparisons, rigorous tests of the measurement equivalence of different language versions are still lacking. Thus, our study aims at investigating configural, metric and scalar invariance across the German and the Spanish version of the GHQ-12 in two population samples. The GHQ-12 was applied in two large-scale population-based samples in Germany (N = 1,977) and Colombia (N = 1,500). To investigate measurement equivalence, confirmatory factor analyses were conducted in both samples. In the German sample mean GHQ-12 total scores were higher than in the Colombian sample. A one-factor model including response bias on the negatively worded items showed superior fit in the German and the Colombian sample; thus both versions of the GHQ-12 showed configural invariance. Factor loadings and intercepts were not equal across both samples; thus GHQ-12 showed no metric and scalar invariance. As both versions of the GHQ-12 did not show measurement equivalence, it is not recommendable to compare both measures and to conclude that mental distress is higher in the German sample, although we do not know if the differences are attributable to measurement problems or represent a real difference in mental distress. The study underlines the importance of measurement equivalence in cross-cultural comparisons. Copyright © 2017 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Singh, Dharmendra; Rao, P. Nageswara; Jayaganthan, R.
2013-08-01
The influence of rolling at liquid nitrogen temperature and annealing on the microstructure and mechanical properties of Al 5083 alloy was studied in this paper. Cryorolled samples of Al 5083 show significant improvements in strength and hardness. The ultimate tensile strength increases up to 340 MPa and 390 MPa for the 30% and 50% cryorolled samples, respectively. The cryorolled samples, with 30% and 50% reduction, were subjected to Charpy impact testing at various temperatures from -190°C to 100°C. It is observed that increasing the percentage of reduction of samples during cryorolling has significant effect on decreasing impact toughness at all temperatures by increasing yield strength and decreasing ductility. Annealing of samples after cryorolling shows remarkable increment in impact toughness through recovery and recrystallization. The average grain size of the 50% cryorolled sample (14 μm) after annealing at 350°C for 1 h is found to be finer than that of the 30% cryorolled sample (25 μm). The scanning electron microscopy (SEM) analysis of fractured surfaces shows a large-size dimpled morphology, resembling the ductile fracture mechanism in the starting material and fibrous structure with very fine dimples in cryorolled samples corresponding to the brittle fracture mechanism.
Mondol, Samrat; Navya, R; Athreya, Vidya; Sunagar, Kartik; Selvaraj, Velu Mani; Ramakrishnan, Uma
2009-12-04
Leopards are the most widely distributed of the large cats, ranging from Africa to the Russian Far East. Because of habitat fragmentation, high human population densities and the inherent adaptability of this species, they now occupy landscapes close to human settlements. As a result, they are the most common species involved in human wildlife conflict in India, necessitating their monitoring. However, their elusive nature makes such monitoring difficult. Recent advances in DNA methods along with non-invasive sampling techniques can be used to monitor populations and individuals across large landscapes including human dominated ones. In this paper, we describe a DNA-based method for leopard individual identification where we used fecal DNA samples to obtain genetic material. Further, we apply our methods to non-invasive samples collected in a human-dominated landscape to estimate the minimum number of leopards in this human-leopard conflict area in Western India. In this study, 25 of the 29 tested cross-specific microsatellite markers showed positive amplification in 37 wild-caught leopards. These loci revealed varied levels of polymorphism (four-12 alleles) and heterozygosity (0.05-0.79). Combining data on amplification success (including non-invasive samples) and locus specific polymorphisms, we showed that eight loci provide a sibling probability of identity of 0.0005, suggesting that this panel can be used to discriminate individuals in the wild. When this microsatellite panel was applied to fecal samples collected from a human-dominated landscape, we identified 7 individuals, with a sibling probability of identity of 0.001. Amplification success of field collected scats was up to 72%, and genotype error ranged from 0-7.4%. Our results demonstrated that the selected panel of eight microsatellite loci can conclusively identify leopards from various kinds of biological samples. Our methods can be used to monitor leopards over small and large landscapes to assess population trends, as well as could be tested for population assignment in forensic applications.
Khan, Rishi L; Gonye, Gregory E; Gao, Guang; Schwaber, James S
2006-01-01
Background Using microarrays by co-hybridizing two samples labeled with different dyes enables differential gene expression measurements and comparisons across slides while controlling for within-slide variability. Typically one dye produces weaker signal intensities than the other often causing signals to be undetectable. In addition, undetectable spots represent a large problem for two-color microarray designs and most arrays contain at least 40% undetectable spots even when labeled with reference samples such as Stratagene's Universal Reference RNAs™. Results We introduce a novel universal reference sample that produces strong signal for all spots on the array, increasing the average fraction of detectable spots to 97%. Maximizing detectable spots on the reference image channel also decreases the variability of microarray data allowing for reliable detection of smaller differential gene expression changes. The reference sample is derived from sequence contained in the parental EST clone vector pT7T3D-Pac and is called vector RNA (vRNA). We show that vRNA can also be used for quality control of microarray printing and PCR product quality, detection of hybridization anomalies, and simplification of spot finding and segmentation tasks. This reference sample can be made inexpensively in large quantities as a renewable resource that is consistent across experiments. Conclusion Results of this study show that vRNA provides a useful universal reference that yields high signal for almost all spots on a microarray, reduces variation and allows for comparisons between experiments and laboratories. Further, it can be used for quality control of microarray printing and PCR product quality, detection of hybridization anomalies, and simplification of spot finding and segmentation tasks. This type of reference allows for detection of small changes in differential expression while reference designs in general allow for large-scale multivariate experimental designs. vRNA in combination with reference designs enable systems biology microarray experiments of small physiologically relevant changes. PMID:16677381
2009-01-01
Background Leopards are the most widely distributed of the large cats, ranging from Africa to the Russian Far East. Because of habitat fragmentation, high human population densities and the inherent adaptability of this species, they now occupy landscapes close to human settlements. As a result, they are the most common species involved in human wildlife conflict in India, necessitating their monitoring. However, their elusive nature makes such monitoring difficult. Recent advances in DNA methods along with non-invasive sampling techniques can be used to monitor populations and individuals across large landscapes including human dominated ones. In this paper, we describe a DNA-based method for leopard individual identification where we used fecal DNA samples to obtain genetic material. Further, we apply our methods to non-invasive samples collected in a human-dominated landscape to estimate the minimum number of leopards in this human-leopard conflict area in Western India. Results In this study, 25 of the 29 tested cross-specific microsatellite markers showed positive amplification in 37 wild-caught leopards. These loci revealed varied levels of polymorphism (four-12 alleles) and heterozygosity (0.05-0.79). Combining data on amplification success (including non-invasive samples) and locus specific polymorphisms, we showed that eight loci provide a sibling probability of identity of 0.0005, suggesting that this panel can be used to discriminate individuals in the wild. When this microsatellite panel was applied to fecal samples collected from a human-dominated landscape, we identified 7 individuals, with a sibling probability of identity of 0.001. Amplification success of field collected scats was up to 72%, and genotype error ranged from 0-7.4%. Conclusion Our results demonstrated that the selected panel of eight microsatellite loci can conclusively identify leopards from various kinds of biological samples. Our methods can be used to monitor leopards over small and large landscapes to assess population trends, as well as could be tested for population assignment in forensic applications. PMID:19961605
Coluccia, Emanuele; Gamboz, Nadia; Brandimonte, Maria A
2011-12-01
The present study aimed to provide normative data on a large sample of the elderly Italian population (N = 464; range of age = 49-94; range of education = 3-25) on both the word and the picture versions of a battery of free recall, cued recall, and recognition tests of memory. Results from multiple regression analyses showed that both age and education were significant predictors of performance. Therefore, norms were calculated taking into account these demographic variables. The availability of normative data based on a large sample will allow a more reliable use of the battery for clinical assessment in Italian-speaking dementia population.
On incomplete sampling under birth-death models and connections to the sampling-based coalescent.
Stadler, Tanja
2009-11-07
The constant rate birth-death process is used as a stochastic model for many biological systems, for example phylogenies or disease transmission. As the biological data are usually not fully available, it is crucial to understand the effect of incomplete sampling. In this paper, we analyze the constant rate birth-death process with incomplete sampling. We derive the density of the bifurcation events for trees on n leaves which evolved under this birth-death-sampling process. This density is used for calculating prior distributions in Bayesian inference programs and for efficiently simulating trees. We show that the birth-death-sampling process can be interpreted as a birth-death process with reduced rates and complete sampling. This shows that joint inference of birth rate, death rate and sampling probability is not possible. The birth-death-sampling process is compared to the sampling-based population genetics model, the coalescent. It is shown that despite many similarities between these two models, the distribution of bifurcation times remains different even in the case of very large population sizes. We illustrate these findings on an Hepatitis C virus dataset from Egypt. We show that the transmission times estimates are significantly different-the widely used Gamma statistic even changes its sign from negative to positive when switching from the coalescent to the birth-death process.
Lamb, Charles Edwin; Downing, D.J.
1979-01-01
Ground-water levels in most wells did not change significantly from 1974 to 1977 in the Stovepipe Wells Hotel area, California. The average water-level decline was less than 0.10 foot between August 1974 and August 1977 in 10 observation wells. Water-level contours show a depression centered on the two pumping wells, but this depression existed before the National Park Service started pumping its well. The chemical quality of the ground water is poor. Dissolved-solids concentrations in water samples ranged from 2,730 to 6,490 milligrams per liter. Analyses of water samples from two wells showed large changes in some constituents from 1976 to 1977. Streamflow in Salt Creek has been monitored since February 1974. Base flow is seasonal, being 0.10 to 0.20 cubic foot per second during the summer and as much as three times that amount during the winter. Two chemical analyses of water from Salt Creek, representing summer and winter flow conditions, show large differences for many constituents. (Woodard-USGS)
Dielectric studies on PEG-LTMS based polymer composites
NASA Astrophysics Data System (ADS)
Patil, Ravikumar V.; Praveen, D.; Damle, R.
2018-02-01
PEG LTMS based polymer composites were prepared and studied for dielectric constant variation with frequency and temperature as a potential candidate with better dielectric properties. Solution cast technique is used for the preparation of polymer composite with five different compositions. Samples show variation in dielectric constant with frequency and temperature. Dielectric constant is large at low frequencies and higher temperatures. Samples with larger space charges have shown larger dielectric constant. The highest dielectric constant observed was about 29244 for PEG25LTMS sample at 100Hz and 312 K.
Interfacial contribution to the dielectric response in semiconducting LaBiMn4/3Co2/3O6
NASA Astrophysics Data System (ADS)
Filippi, M.; Kundys, B.; Ranjith, R.; Kundu, Asish K.; Prellier, W.
2008-05-01
Impedance measurements have been performed on a sintered polycrystalline sample of the perovskite LaBiMn4/3Co2/3O6. Colossal dielectric permittivity is often measured in this class of semiconducting materials as a result of extrinsic factors. Our results show that a large offset in the capacitance, measured on a series of samples with different thickness, is due to the interfacial polarization. This contribution can then be removed from the data, creating a general procedure for dielectric measurements in semiconducting samples.
Effect of Nd Doping on Dielectric and Impedance Properties of PZT Nanoceramics
NASA Astrophysics Data System (ADS)
Kour, P.; Pradhan, S. K.; Kumar, Pawan; Sinha, S. K.; Kar, Manoranjan
2018-02-01
Neodymium-doped lead zirconate tianate, i.e. Pb1-x Nd x Zr0.52Ti0.48O3 (PNZT) ceramics, with x = 0-10 mol.% has been prepared by the sol-gel process. X-ray diffraction pattern at room temperature shows the pyrochlore free phase for all samples. The structural analysis suggests the coexistence of both rhombohedral (R3m space group) and tetragonal (P4 mm space group) crystal symmetries. Scanning electron micrographs of the samples show uniform distribution of grain and grain boundaries. Dielectric constant increases with the increase in neodymium concentration in the crystal lattice. Degree of diffuse phase transition increases with the increase in Nd3+ concentration in the sample. Nd3+ incorporation into the lead zirconatetitanate (PZT) lattice enhances the spreading factor. Interaction between neighbouring dipoles decreases with the increase of Nd3+ in PZT lattice. The conduction mechanism of the sample can be attributed to the overlapping large polar tunnelling. Second-order dielectric phase transition has been observed at the Curie temperature. The electrical properties of the sample can be explained by considering grain and grain boundaries contributions. All the samples show the poly-dispersive non-Debye type relaxation.
Comparison of temporal trends in VOCs as measured with PDB samplers and low-flow sampling methods
Harte, P.T.
2002-01-01
Analysis of temporal trends in tetrachloroethylene (PCE) concentration determined by two sample techniques showed that passive diffusion bag (pdb) samplers adequately sample the large variation in PCE concentrations at the site. The slopes of the temporal trends in concentrations were comparable between the two techniques, and the pdb sample concentration generally reflected the instantaneous concentration sampled by the low-flow technique. Thus, the pdb samplers provided an appropriate sampling technique for PCE at these wells. One or two wells did not make the case for widespread application of pdb samples at all sites. However, application of pdb samples in some circumstances was appropriate for evaluating temporal and spatial variations in VOC concentrations, thus, should be considered as a useful tool in hydrogeology.
NASA Astrophysics Data System (ADS)
Mackey, A. D.; Gilmore, G. F.
2003-01-01
We have compiled a pseudo-snapshot data set of two-colour observations from the Hubble Space Telescope archive for a sample of 53 rich LMC clusters with ages of 106-1010 yr. We present surface brightness profiles for the entire sample, and derive structural parameters for each cluster, including core radii, and luminosity and mass estimates. Because we expect the results presented here to form the basis for several further projects, we describe in detail the data reduction and surface brightness profile construction processes, and compare our results with those of previous ground-based studies. The surface brightness profiles show a large amount of detail, including irregularities in the profiles of young clusters (such as bumps, dips and sharp shoulders), and evidence for both double clusters and post-core-collapse (PCC) clusters. In particular, we find power-law profiles in the inner regions of several candidate PCC clusters, with slopes of approximately -0.7, but showing considerable variation. We estimate that 20 +/- 7 per cent of the old cluster population of the Large Magellanic Cloud (LMC) has entered PCC evolution, a similar fraction to that for the Galactic globular cluster system. In addition, we examine the profile of R136 in detail and show that it is probably not a PCC cluster. We also observe a trend in core radius with age that has been discovered and discussed in several previous publications by different authors. Our diagram has better resolution, however, and appears to show a bifurcation at several hundred Myr. We argue that this observed relationship reflects true physical evolution in LMC clusters, with some experiencing small-scale core expansion owing to mass loss, and others large-scale expansion owing to some unidentified characteristic or physical process.
NASA Astrophysics Data System (ADS)
Sun, Chengjun; Jiang, Fenghua; Gao, Wei; Li, Xiaoyun; Yu, Yanzhen; Yin, Xiaofei; Wang, Yong; Ding, Haibing
2017-01-01
Detection of sulfur-oxidizing bacteria has largely been dependent on targeted gene sequencing technology or traditional cell cultivation, which usually takes from days to months to carry out. This clearly does not meet the requirements of analysis for time-sensitive samples and/or complicated environmental samples. Since energy-dispersive X-ray spectrometry (EDS) can be used to simultaneously detect multiple elements in a sample, including sulfur, with minimal sample treatment, this technology was applied to detect sulfur-oxidizing bacteria using their high sulfur content within the cell. This article describes the application of scanning electron microscopy imaging coupled with EDS mapping for quick detection of sulfur oxidizers in contaminated environmental water samples, with minimal sample handling. Scanning electron microscopy imaging revealed the existence of dense granules within the bacterial cells, while EDS identified large amounts of sulfur within them. EDS mapping localized the sulfur to these granules. Subsequent 16S rRNA gene sequencing showed that the bacteria detected in our samples belonged to the genus Chromatium, which are sulfur oxidizers. Thus, EDS mapping made it possible to identify sulfur oxidizers in environmental samples based on localized sulfur within their cells, within a short time (within 24 h of sampling). This technique has wide ranging applications for detection of sulfur bacteria in environmental water samples.
Quality variations in black musli (curculigo orchioides gaertn.).
Mathew, P P Joy Samuel; Savithri, K E; Skaria, Baby P; Kurien, Kochurani
2004-07-01
Black musli (Curculigo orchioides Gaertn.) one of the ayurvedic dasapushpa and a rejuvenating and aphrodisiac drug. Is on the verge of extinction and needs to be conserved and cultivated. Large variations are also observed in the quality of the crude drug available in the market. Study on the quality of C. orchioides in natural habitat, under cultivation and in trade in south India showed that there was considerable variation with biotypes and habitats. Drugs collected form the natural habitat was superior in quality to that produced by cultivation. Among the market samples collected from the various Zones of kerala, those from the High Ranges were superior in most of the quality parameters, which indicated its superiority for high quality drug formulation. Among the southern states, Tamil Nadu samples ranked next to High Range samples in this respect. There exists large variability in the market samples and there is felt-need for proper standardization of the crude drug for ensuring quality in the drug formulations.
NASA Astrophysics Data System (ADS)
Nagaoka, Hiroshi; Takeda, Hiroshi; Karouji, Yuzuru; Ohtake, Makiko; Yamaguchi, Akira; Yoneda, Shigekazu; Hasebe, Nobuyuki
2014-12-01
Remote observation by the reflectance spectrometers onboard the Japanese lunar explorer Kaguya (SELENE) showed the purest anorthosite (PAN) spots (>98% plagioclase) at some large craters. Mineralogical and petrologic investigations on the feldspathic lunar meteorites, Dhofar 489 and Dhofar 911, revealed the presence of several pure anorthosite clasts. A comparison with Apollo nearside samples of ferroan anorthosite (FAN) indicated that of the FAN samples returned by the Apollo missions, sample 60015 is the largest anorthosite with the highest plagioclase abundance and homogeneous mafic mineral compositions. These pure anorthosites (>98% plagioclase) have large chemical variations in Mg number (Mg# = molar 100 × Mg/(Mg + Fe)) of each coexisting mafic mineral. The variations imply that these pure anorthosites underwent complex formation processes and were not formed by simple flotation of plagioclase. The lunar highland samples with pure anorthosite and the PAN observed by Kaguya suggest that pure anorthosite is widely distributed as lunar crust lithology over the entire Moon.
Large- and Very-Large-Scale Motions in Katabatic Flows Over Steep Slopes
NASA Astrophysics Data System (ADS)
Giometto, M. G.; Fang, J.; Salesky, S.; Parlange, M. B.
2016-12-01
Evidence of large- and very-large-scale motions populating the boundary layer in katabatic flows over steep slopes is presented via direct numerical simulations (DNSs). DNSs are performed at a modified Reynolds number (Rem = 967), considering four sloping angles (α = 60°, 70°, 80° and 90°). Large coherent structures prove to be strongly dependent on the inclination of the underlying surface. Spectra and co-spectra consistently show signatures of large-scale motions (LSMs), with streamwise extension on the order of the boundary layer thickness. A second low-wavenumber mode characterizes pre-multiplied spectra and co-spectra when the slope angle is below 70°, indicative of very-large-scale motions (VLSMs). In addition, conditional sampling and averaging shows how LSMs and VLSMs are induced by counter-rotating roll modes, in agreement with findings from canonical wall-bounded flows. VLSMs contribute to the stream-wise velocity variance and shear stress in the above-jet regions up to 30% and 45% respectively, whereas both LSMs and VLSMs are inactive in the near-wall regions.
Xiao, Sanhua; Luo, Lan; Qiao, Qian; Lü, Xuemin; Wang, Yanhui; Feng, Lin; Tang, Fei; Wang, Haiyong; Bie, Nana; Wang, Yuehong
2017-05-01
To understand the occurrence and change of mutagencity of water samples in the process of drinking water treatment and distribution in a waterworks taking Yangtze River as its water source in Jiangsu Province. Large volume of inlet water, finished water and tap water samples were extracted by XAD-2 resin. Mutagencities were assessed by Ames test and a mutation ratio( MR) of 2 or greater was judged as a positive result. Compared with the samples with S9, samples without S9 presented more positive results( P = 0. 005). That water treatment elevated MR values( P = 0. 007) while the pipe transport made MR values down( P = 0. 038) was observed in samples without S9. The tap water showed weaker mutagenicities than the raw water in samples with S9( P = 0. 008). Compared to the raw water samples, the finished water samples showed more positive results(-S9) and lower MR values( + S9, P =0. 002). Significant mutagenicities of water samples from the Yangtze Riverand its processed water were presented, and frame shit and direct mutagens deserved special concern.
NASA Astrophysics Data System (ADS)
Bertolini, G.; Marques, J. C.; Hartley, A. J.; Scherer, C.; Macdonald, D.; Hole, M.; Stipp Basei, M. A. A. S.; Frantz, J. C.; Rosa, A. A. S.
2017-12-01
Large desert basins (>1.000.000 km²) are likely to contain sediment derived from different sources due to variations in factors such as wind direction, sand availability, and sediment influx. Provenance analysis is key to determining sediment sources and to constrain the nature of the sediment fill in desert basins. The Cretaceous Botucatu Desert dunefield extended across a large area of the interior of the SW Gondwana and was then buried by extensive lava flows that covered the active erg. The onset of volcanic activity triggered climatic and topographic variations that changed the depositional setting, however, the aeolian system remained active during this time period. Twenty samples were collected along the southern border of the basin (Brazil and Uruguay). Heavy mineral (HM) and petrographic analyses indicate very mature sediment, with a high ZTR index and quartz dominated sandstones. Despite the regularity of high ZTR index, garnet input occurs in eastern samples. Ten samples were selected for MC-LA-ICP-MS zircon dating with the aim of comparing pre- and syn-volcanic sandstones. More than 800 detrital zircons (DZ) were analyzed and the results allowed the identification of 5 relevant peaks interpreted as: 1) Choiyoi volcanism; 2) Famatian Cycle; 3) Brazilian Cycle (BC); 4) Grenvillian Cycle (GC); 5) Transamazonic Cycle. The DZ ages from the pre and syn-volcanic sandstones show no significant variation. However, when comparing the provenance between the eastern and western areas, samples from the eastern border show a major BC contribution (61%), while the western samples contain 40%. The GC contribution is more significant in the western part of the basin (>18%), contrasting with 6% in eastern samples. The main conclusions are: 1) the DZ record reveals a distinct signature for sedimentary sources; 2) climatic and topographic changes caused by the onset of volcanic activity had no impact on DZ populations; 3) heavy mineral types are very similar in all samples, but the local presence of garnet suggests a more restricted contribution in eastern samples; 4) eastern samples also show differences in the DZ population with a more significant BC contribution. HM and DZ results show that proximal sources can modify sediment input character and changing provenance signatures in desert aeolian systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hay, M.
2016-10-24
SRNL analyzed samples from Tank 38H and Tank 43H to support ECP and CCP. The total uranium in the Tank 38H surface sample was 57.6 mg/L, while the sub-surface sample was 106 mg/L. The Tank 43H samples ranged from 50.0 to 51.9 mg/L total uranium. The U-235 percentage was consistent for all four samples at 0.62%. The total uranium and percent U-235 results appear consistent with recent Tank 38H and Tank 43H uranium measurements. The Tank 38H plutonium results show a large difference between the surface and sub-surface sample concentrations and somewhat higher concentrations than previous samples. The Pu-238 concentrationmore » is more than forty times higher in the Tank 38H sub-surface sample than the surface sample. The surface and sub-surface Tank 43H samples contain similar plutonium concentrations and are within the range of values measured on previous samples. The four samples analyzed show silicon concentrations somewhat higher than the previous sample with values ranging from 104 to 213 mg/L.« less
Discriminant WSRC for Large-Scale Plant Species Recognition.
Zhang, Shanwen; Zhang, Chuanlei; Zhu, Yihai; You, Zhuhong
2017-01-01
In sparse representation based classification (SRC) and weighted SRC (WSRC), it is time-consuming to solve the global sparse representation problem. A discriminant WSRC (DWSRC) is proposed for large-scale plant species recognition, including two stages. Firstly, several subdictionaries are constructed by dividing the dataset into several similar classes, and a subdictionary is chosen by the maximum similarity between the test sample and the typical sample of each similar class. Secondly, the weighted sparse representation of the test image is calculated with respect to the chosen subdictionary, and then the leaf category is assigned through the minimum reconstruction error. Different from the traditional SRC and its improved approaches, we sparsely represent the test sample on a subdictionary whose base elements are the training samples of the selected similar class, instead of using the generic overcomplete dictionary on the entire training samples. Thus, the complexity to solving the sparse representation problem is reduced. Moreover, DWSRC is adapted to newly added leaf species without rebuilding the dictionary. Experimental results on the ICL plant leaf database show that the method has low computational complexity and high recognition rate and can be clearly interpreted.
Scalable metagenomic taxonomy classification using a reference genome database
Ames, Sasha K.; Hysom, David A.; Gardner, Shea N.; Lloyd, G. Scott; Gokhale, Maya B.; Allen, Jonathan E.
2013-01-01
Motivation: Deep metagenomic sequencing of biological samples has the potential to recover otherwise difficult-to-detect microorganisms and accurately characterize biological samples with limited prior knowledge of sample contents. Existing metagenomic taxonomic classification algorithms, however, do not scale well to analyze large metagenomic datasets, and balancing classification accuracy with computational efficiency presents a fundamental challenge. Results: A method is presented to shift computational costs to an off-line computation by creating a taxonomy/genome index that supports scalable metagenomic classification. Scalable performance is demonstrated on real and simulated data to show accurate classification in the presence of novel organisms on samples that include viruses, prokaryotes, fungi and protists. Taxonomic classification of the previously published 150 giga-base Tyrolean Iceman dataset was found to take <20 h on a single node 40 core large memory machine and provide new insights on the metagenomic contents of the sample. Availability: Software was implemented in C++ and is freely available at http://sourceforge.net/projects/lmat Contact: allen99@llnl.gov Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23828782
Nanodispersed Suspensions of Zeolite Catalysts for Converting Dimethyl Ether into Olefins
NASA Astrophysics Data System (ADS)
Kolesnichenko, N. V.; Yashina, O. V.; Ezhova, N. N.; Bondarenko, G. N.; Khadzhiev, S. N.
2018-01-01
Nanodispersed suspensions that are effective in DME conversion and stable in the reaction zone in a three-phase system (slurry reactor) are obtained from MFI zeolite commercial samples (TsVM, IK-17-1, and CBV) in liquid media via ultrasonic treatment (UST). It is found that the dispersion medium, in which ultrasound affects zeolite commercial sample, has a large influence on particle size in the suspension. UST in the aqueous medium produces zeolite nanoparticles smaller than 50 nm, while larger particles of MFI zeolite samples form in silicone or hydrocarbon oils. Spectral and adsorption data show that when zeolites undergo UST in an aqueous medium, the acid sites are redistributed on the zeolite surface and the specific surface area of the mesopores increases. Preliminary UST in aqueous media of zeolite commercial samples (TsVM, IK-17-1, and CBV) affects the catalytic properties of MFI zeolite nanodispersed suspensions. The selectivity of samples when paraffins and olefins form is largely due to superacid sites consisting of OH groups of hydroxonium ion H3O+.
Harada, Sei; Hirayama, Akiyoshi; Chan, Queenie; Kurihara, Ayako; Fukai, Kota; Iida, Miho; Kato, Suzuka; Sugiyama, Daisuke; Kuwabara, Kazuyo; Takeuchi, Ayano; Akiyama, Miki; Okamura, Tomonori; Ebbels, Timothy M D; Elliott, Paul; Tomita, Masaru; Sato, Asako; Suzuki, Chizuru; Sugimoto, Masahiro; Soga, Tomoyoshi; Takebayashi, Toru
2018-01-01
Cohort studies with metabolomics data are becoming more widespread, however, large-scale studies involving 10,000s of participants are still limited, especially in Asian populations. Therefore, we started the Tsuruoka Metabolomics Cohort Study enrolling 11,002 community-dwelling adults in Japan, and using capillary electrophoresis-mass spectrometry (CE-MS) and liquid chromatography-mass spectrometry. The CE-MS method is highly amenable to absolute quantification of polar metabolites, however, its reliability for large-scale measurement is unclear. The aim of this study is to examine reproducibility and validity of large-scale CE-MS measurements. In addition, the study presents absolute concentrations of polar metabolites in human plasma, which can be used in future as reference ranges in a Japanese population. Metabolomic profiling of 8,413 fasting plasma samples were completed using CE-MS, and 94 polar metabolites were structurally identified and quantified. Quality control (QC) samples were injected every ten samples and assessed throughout the analysis. Inter- and intra-batch coefficients of variation of QC and participant samples, and technical intraclass correlation coefficients were estimated. Passing-Bablok regression of plasma concentrations by CE-MS on serum concentrations by standard clinical chemistry assays was conducted for creatinine and uric acid. In QC samples, coefficient of variation was less than 20% for 64 metabolites, and less than 30% for 80 metabolites out of the 94 metabolites. Inter-batch coefficient of variation was less than 20% for 81 metabolites. Estimated technical intraclass correlation coefficient was above 0.75 for 67 metabolites. The slope of Passing-Bablok regression was estimated as 0.97 (95% confidence interval: 0.95, 0.98) for creatinine and 0.95 (0.92, 0.96) for uric acid. Compared to published data from other large cohort measurement platforms, reproducibility of metabolites common to the platforms was similar to or better than in the other studies. These results show that our CE-MS platform is suitable for conducting large-scale epidemiological studies.
Yu, Qiang; Wei, Dingbang; Huo, Hongwei
2018-06-18
Given a set of t n-length DNA sequences, q satisfying 0 < q ≤ 1, and l and d satisfying 0 ≤ d < l < n, the quorum planted motif search (qPMS) finds l-length strings that occur in at least qt input sequences with up to d mismatches and is mainly used to locate transcription factor binding sites in DNA sequences. Existing qPMS algorithms have been able to efficiently process small standard datasets (e.g., t = 20 and n = 600), but they are too time consuming to process large DNA datasets, such as ChIP-seq datasets that contain thousands of sequences or more. We analyze the effects of t and q on the time performance of qPMS algorithms and find that a large t or a small q causes a longer computation time. Based on this information, we improve the time performance of existing qPMS algorithms by selecting a sample sequence set D' with a small t and a large q from the large input dataset D and then executing qPMS algorithms on D'. A sample sequence selection algorithm named SamSelect is proposed. The experimental results on both simulated and real data show (1) that SamSelect can select D' efficiently and (2) that the qPMS algorithms executed on D' can find implanted or real motifs in a significantly shorter time than when executed on D. We improve the ability of existing qPMS algorithms to process large DNA datasets from the perspective of selecting high-quality sample sequence sets so that the qPMS algorithms can find motifs in a short time in the selected sample sequence set D', rather than take an unfeasibly long time to search the original sequence set D. Our motif discovery method is an approximate algorithm.
Biological tissue imaging with a position and time sensitive pixelated detector.
Jungmann, Julia H; Smith, Donald F; MacAleese, Luke; Klinkert, Ivo; Visser, Jan; Heeren, Ron M A
2012-10-01
We demonstrate the capabilities of a highly parallel, active pixel detector for large-area, mass spectrometric imaging of biological tissue sections. A bare Timepix assembly (512 × 512 pixels) is combined with chevron microchannel plates on an ion microscope matrix-assisted laser desorption time-of-flight mass spectrometer (MALDI TOF-MS). The detector assembly registers position- and time-resolved images of multiple m/z species in every measurement frame. We prove the applicability of the detection system to biomolecular mass spectrometry imaging on biologically relevant samples by mass-resolved images from Timepix measurements of a peptide-grid benchmark sample and mouse testis tissue slices. Mass-spectral and localization information of analytes at physiologic concentrations are measured in MALDI-TOF-MS imaging experiments. We show a high spatial resolution (pixel size down to 740 × 740 nm(2) on the sample surface) and a spatial resolving power of 6 μm with a microscope mode laser field of view of 100-335 μm. Automated, large-area imaging is demonstrated and the Timepix' potential for fast, large-area image acquisition is highlighted.
Trace Elements in Cretaceous-Tertiary Boundary Clay at Gubbio, Italy
NASA Astrophysics Data System (ADS)
Ebihara, M.; Miura, T.
1992-07-01
In 1980, Alvarez et al. reported high Ir concentrations for the Cretaceous-Tertiary (hereafter, K/T) boundary layer, suggesting an impact of extraterrestrial material as a possible cause of the sudden mass extinction at the end of the Cretaceous period. Since then, high Ir abundances have been reported for K/T layers all over the world. Iridium enrichments were alternatively explained in terms of volcanic eruptions (Officer and Drake, 1982) or sedimentation (Zoller et al, 1982). Thus, abundances of Ir only cannot be critical in explaining the cause of the mass extinctions at the K/T boundary. In contrast to the fairly large number of Ir data for K/T boundary geological materials, only limited data are available for other siderophile elements. Relative abundances of siderophiles must be more informative in considering the causes of extinction, and provide further data on the type of extraterrestrial material of the projectile if siderophile abundances are in favor of an impact as the cause of the mass extinction at the K/T boundary. Thus, we analyzed additional K/T boundary materials for trace elements, including some of the siderophiles. A total of 7 samples collected from the K/T boundary near Gubbio, Italy (three from Bottaccione, four from Contessa) were analyzed. For comparison, we analyzed three additional samples, one from a Cretaceous sediment layer and the remaining two from a Tertiary layer. Four siderophile elements (Ir, Pt, Au, and Pd) were measured by RNAA and more than 25 elements, including 9 lanthanoids, were measured by INAA. The siderophiles listed above and Ni were found to be present in all of the boundary clay samples. They have C1-normalized abundances of 0.02 for Ni, Ir, and Pt, 0.04 for Pd, and Au was exceptionally depleted at 0.005. Both Ni and Ir show fairly small variations in abundances among the clay samples, whereas the other three elements show quite large variations, exceeding error limits. We believe that similar enrichments for these siderophiles in the K/T boundary clays were caused by an impact of extraterrestrial material having siderophiles that have not been largely fractionated. Similar abundance patterns of REE were confirmed not only for clay samples but also for the Cretaceous and Tertiary sediments. This suggests that sedimentation continued in similar circumstances without a large disturbance at the K/T boundary. We confirmed excellent correlations among Ir, As, and Sb abundances in the K/T samples, suggesting that they had a similar solution chemistry when sedimentation occurred. Both As and Sb show similar abundances, even for the Cretaceous as well as the Tertiary sediments, while Ir does not. Neither Pd nor Pt shows any correlation with these elements or with each other. This suggests that Ir was trapped into the clay together with As and Sb, but not with Pd or Pt. It is highly unlikely that these siderophiles were supplied only from sea water, and were eventually greatly enriched in clay materials, with the relative elemental abundances coinciding with those in chondrites. Thus, our data strongly suggest that a large impact of extraterrestrial material (chondritic?) caused the enrichment of siderophiles at K/T boundary. Acknowledgment. We are indebted to M. Ozima and S. Amari for samples analyzed in this work. References Alvarez, L.W., Alvarez, W., Asaro, F., and Michel, H.V. (1980) Science 208, 1095-1108. Officer, C.B. and Drake, C.L. (1982) Science 219, 1383-1390. Zoller, W.H., Parrington, J.R., and Kotra, J.M.P. (1983) Science 222, 1118-1120.
Inception of Snapover and Gas Induced Glow Discharges
NASA Technical Reports Server (NTRS)
Galofaro, J. T.; Vayner, B. V.; Degroot, W. A.; Ferguson, D. C.; Thomson, C. D.; Dennison, J. R.; Davies, R. E.
2000-01-01
Ground based experiments of the snapover phenomenon were conducted in the large vertical simulation chamber at the Glenn Research Center (GRC) Plasma Interaction Facility (PIF). Two Penning sources provided both argon and xenon plasmas for the experiments. The sources were used to simulate a variety of ionospheric densities pertaining to a spacecraft in a Low Earth Orbital (LEO) environment. Secondary electron emission is believed responsible for dielectric surface charging, and all subsequent snapover phenomena observed. Voltage sweeps of conductor potentials versus collected current were recorded in order to examine the specific charging history of each sample. The average time constant for sample charging was estimated between 25 and 50 seconds for all samples. It appears that current drops off by approximately a factor of 3 over the charging time of the sample. All samples charged in the forward and reverse bias directions, demonstrated hysteresis. Current jumps were only observed in the forward or positive swept voltage direction. There is large dispersion in tile critical snapover potential when repeating sweeps on any one sample. The current ratio for the first snapover region jumps between 2 and 4.6 times, with a standard deviation less than 1.6. Two of the samples showed even larger current ratios. It is believed the second large snapover region is due to sample outgassing. Under certain preset conditions, namely at the higher neutral gas background pressures, a perceptible blue-green glow was observed around the conductor. The glow is believed to be a result of secondary electrons undergoing collisions with an expelled tenuous cloud of gas, that is outgassed from the sample. Spectroscopic measurements of the glow discharge were made in an attempt to identify specific lines contributing to the observed glow.
Presolar Materials in a Giant Cluster IDP of Probable Cometary Origin
NASA Technical Reports Server (NTRS)
Messenger, S.; Brownlee, D. E.; Joswiak, D. J.; Nguyen, A. N.
2015-01-01
Chondritic porous interplanetary dust particles (CP-IDPs) have been linked to comets by their fragile structure, primitive mineralogy, dynamics, and abundant interstellar materials. But differences have emerged between 'cometary' CP-IDPs and comet 81P/Wild 2 Stardust Mission samples. Particles resembling Ca-Al-rich inclusions (CAIs), chondrules, and amoeboid olivine aggregates (AOAs) in Wild 2 samples are rare in CP-IDPs. Unlike IDPs, presolar materials are scarce in Wild 2 samples. These differences may be due to selection effects, such as destruction of fine grained (presolar) components during the 6 km/s aerogel impact collection of Wild 2 samples. Large refractory grains observed in Wild 2 samples are also unlikely to be found in most (less than 30 micrometers) IDPs. Presolar materials provide a measure of primitive-ness of meteorites and IDPs. Organic matter in IDPs and chondrites shows H and N isotopic anomalies attributed to low-T interstellar or protosolar disk chemistry, where the largest anomalies occur in the most primitive samples. Presolar silicates are abundant in meteorites with low levels of aqueous alteration (Acfer 094 approximately 200 ppm) and scarce in altered chondrites (e.g. Semarkona approximately 20 ppm). Presolar silicates in minimally altered CP-IDPs range from approximately 400 ppm to 15,000 ppm, possibly reflecting variable levels of destruction in the solar nebula or statistical variations due to small sample sizes. Here we present preliminary isotopic and mineralogical studies of a very large CP-IDP. The goals of this study are to more accurately determine the abundances of presolar components of CP-IDP material for comparison with comet Wild 2 samples and meteorites. The large mass of this IDP presents a unique opportunity to accurately determine the abundance of pre-solar grains in a likely cometary sample.
Analysis of the research sample collections of Uppsala biobank.
Engelmark, Malin T; Beskow, Anna H
2014-10-01
Uppsala Biobank is the joint and only biobank organization of the two principals, Uppsala University and Uppsala University Hospital. Biobanks are required to have updated registries on sample collection composition and management in order to fulfill legal regulations. We report here the results from the first comprehensive and overall analysis of the 131 research sample collections organized in the biobank. The results show that the median of the number of samples in the collections was 700 and that the number of samples varied from less than 500 to over one million. Blood samples, such as whole blood, serum, and plasma, were included in the vast majority, 84.0%, of the research sample collections. Also, as much as 95.5% of the newly collected samples within healthcare included blood samples, which further supports the concept that blood samples have fundamental importance for medical research. Tissue samples were also commonly used and occurred in 39.7% of the research sample collections, often combined with other types of samples. In total, 96.9% of the 131 sample collections included samples collected for healthcare, showing the importance of healthcare as a research infrastructure. Of the collections that had accessed existing samples from healthcare, as much as 96.3% included tissue samples from the Department of Pathology, which shows the importance of pathology samples as a resource for medical research. Analysis of different research areas shows that the most common of known public health diseases are covered. Collections that had generated the most publications, up to over 300, contained a large number of samples collected systematically and repeatedly over many years. More knowledge about existing biobank materials, together with public registries on sample collections, will support research collaborations, improve transparency, and bring us closer to the goals of biobanks, which is to save and prolong human lives and improve health and quality of life.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahn, Surl-Hee; Grate, Jay W.; Darve, Eric F.
Molecular dynamics (MD) simulations are useful in obtaining thermodynamic and kinetic properties of bio-molecules but are limited by the timescale barrier, i.e., we may be unable to efficiently obtain properties because we need to run microseconds or longer simulations using femtoseconds time steps. While there are several existing methods to overcome this timescale barrier and efficiently sample thermodynamic and/or kinetic properties, problems remain in regard to being able to sample un- known systems, deal with high-dimensional space of collective variables, and focus the computational effort on slow timescales. Hence, a new sampling method, called the “Concurrent Adaptive Sampling (CAS) algorithm,”more » has been developed to tackle these three issues and efficiently obtain conformations and pathways. The method is not constrained to use only one or two collective variables, unlike most reaction coordinate-dependent methods. Instead, it can use a large number of collective vari- ables and uses macrostates (a partition of the collective variable space) to enhance the sampling. The exploration is done by running a large number of short simula- tions, and a clustering technique is used to accelerate the sampling. In this paper, we introduce the new methodology and show results from two-dimensional models and bio-molecules, such as penta-alanine and triazine polymer« less
Data is presented showing the progress made towards the development of a new automated system combining solid phase extraction (SPE) with gas chromatography/mass spectrometry for the single run analysis of water samples containing a broad range of acid, base and neutral compounds...
K-Nearest Neighbor Algorithm Optimization in Text Categorization
NASA Astrophysics Data System (ADS)
Chen, Shufeng
2018-01-01
K-Nearest Neighbor (KNN) classification algorithm is one of the simplest methods of data mining. It has been widely used in classification, regression and pattern recognition. The traditional KNN method has some shortcomings such as large amount of sample computation and strong dependence on the sample library capacity. In this paper, a method of representative sample optimization based on CURE algorithm is proposed. On the basis of this, presenting a quick algorithm QKNN (Quick k-nearest neighbor) to find the nearest k neighbor samples, which greatly reduces the similarity calculation. The experimental results show that this algorithm can effectively reduce the number of samples and speed up the search for the k nearest neighbor samples to improve the performance of the algorithm.
A large-scale cryoelectronic system for biological sample banking
NASA Astrophysics Data System (ADS)
Shirley, Stephen G.; Durst, Christopher H. P.; Fuchs, Christian C.; Zimmermann, Heiko; Ihmig, Frank R.
2009-11-01
We describe a polymorphic electronic infrastructure for managing biological samples stored over liquid nitrogen. As part of this system we have developed new cryocontainers and carrier plates attached to Flash memory chips to have a redundant and portable set of data at each sample. Our experimental investigations show that basic Flash operation and endurance is adequate for the application down to liquid nitrogen temperatures. This identification technology can provide the best sample identification, documentation and tracking that brings added value to each sample. The first application of the system is in a worldwide collaborative research towards the production of an AIDS vaccine. The functionality and versatility of the system can lead to an essential optimization of sample and data exchange for global clinical studies.
NASA Astrophysics Data System (ADS)
Hanasaki, Itsuo; Kawano, Satoyuki
2013-11-01
Motility of bacteria is usually recognized in the trajectory data and compared with Brownian motion, but the diffusion coefficient is insufficient to evaluate it. In this paper, we propose a method based on the large deviation principle. We show that it can be used to evaluate the non-Gaussian characteristics of model Escherichia coli motions and to distinguish combinations of the mean running duration and running speed that lead to the same diffusion coefficient. Our proposed method does not require chemical stimuli to induce the chemotaxis in a specific direction, and it is applicable to various types of self-propelling motions for which no a priori information of, for example, threshold parameters for run and tumble or head/tail direction is available. We also address the issue of the finite-sample effect on the large deviation quantities, but we propose to make use of it to characterize the nature of motility.
Quantitative nanoscopy: Tackling sampling limitations in (S)TEM imaging of polymers and composites.
Gnanasekaran, Karthikeyan; Snel, Roderick; de With, Gijsbertus; Friedrich, Heiner
2016-01-01
Sampling limitations in electron microscopy questions whether the analysis of a bulk material is representative, especially while analyzing hierarchical morphologies that extend over multiple length scales. We tackled this problem by automatically acquiring a large series of partially overlapping (S)TEM images with sufficient resolution, subsequently stitched together to generate a large-area map using an in-house developed acquisition toolbox (TU/e Acquisition ToolBox) and stitching module (TU/e Stitcher). In addition, we show that quantitative image analysis of the large scale maps provides representative information that can be related to the synthesis and process conditions of hierarchical materials, which moves electron microscopy analysis towards becoming a bulk characterization tool. We demonstrate the power of such an analysis by examining two different multi-phase materials that are structured over multiple length scales. Copyright © 2015 Elsevier B.V. All rights reserved.
Zhou, Juntuo; Liu, Huiying; Liu, Yang; Liu, Jia; Zhao, Xuyang; Yin, Yuxin
2016-04-19
Recent advances in mass spectrometers which have yielded higher resolution and faster scanning speeds have expanded their application in metabolomics of diverse diseases. Using a quadrupole-Orbitrap LC-MS system, we developed an efficient large-scale quantitative method targeting 237 metabolites involved in various metabolic pathways using scheduled, parallel reaction monitoring (PRM). We assessed the dynamic range, linearity, reproducibility, and system suitability of the PRM assay by measuring concentration curves, biological samples, and clinical serum samples. The quantification performances of PRM and MS1-based assays in Q-Exactive were compared, and the MRM assay in QTRAP 6500 was also compared. The PRM assay monitoring 237 polar metabolites showed greater reproducibility and quantitative accuracy than MS1-based quantification and also showed greater flexibility in postacquisition assay refinement than the MRM assay in QTRAP 6500. We present a workflow for convenient PRM data processing using Skyline software which is free of charge. In this study we have established a reliable PRM methodology on a quadrupole-Orbitrap platform for evaluation of large-scale targeted metabolomics, which provides a new choice for basic and clinical metabolomics study.
Species collapse via hybridization in Darwin's tree finches.
Kleindorfer, Sonia; O'Connor, Jody A; Dudaniec, Rachael Y; Myers, Steven A; Robertson, Jeremy; Sulloway, Frank J
2014-03-01
Species hybridization can lead to fitness costs, species collapse, and novel evolutionary trajectories in changing environments. Hybridization is predicted to be more common when environmental conditions change rapidly. Here, we test patterns of hybridization in three sympatric tree finch species (small tree finch Camarhynchus parvulus, medium tree finch Camarhynchus pauper, and large tree finch: Camarhynchus psittacula) that are currently recognized on Floreana Island, Galápagos Archipelago. Genetic analysis of microsatellite data from contemporary samples showed two genetic populations and one hybrid cluster in both 2005 and 2010; hybrid individuals were derived from genetic population 1 (small morph) and genetic population 2 (large morph). Females of the large and rare species were more likely to pair with males of the small common species. Finch populations differed in morphology in 1852-1906 compared with 2005/2010. An unsupervised clustering method showed (a) support for three morphological clusters in the historical tree finch sample (1852-1906), which is consistent with current species recognition; (b) support for two or three morphological clusters in 2005 with some (19%) hybridization; and (c) support for just two morphological clusters in 2010 with frequent (41%) hybridization. We discuss these findings in relation to species demarcations of Camarhynchus tree finches on Floreana Island.
Biases in the OSSOS Detection of Large Semimajor Axis Trans-Neptunian Objects
NASA Astrophysics Data System (ADS)
Gladman, Brett; Shankman, Cory; OSSOS Collaboration
2017-10-01
The accumulating but small set of large semimajor axis trans-Neptunian objects (TNOs) shows an apparent clustering in the orientations of their orbits. This clustering must either be representative of the intrinsic distribution of these TNOs, or else have arisen as a result of observation biases and/or statistically expected variations for such a small set of detected objects. The clustered TNOs were detected across different and independent surveys, which has led to claims that the detections are therefore free of observational bias. This apparent clustering has led to the so-called “Planet 9” hypothesis that a super-Earth currently resides in the distant solar system and causes this clustering. The Outer Solar System Origins Survey (OSSOS) is a large program that ran on the Canada-France-Hawaii Telescope from 2013 to 2017, discovering more than 800 new TNOs. One of the primary design goals of OSSOS was the careful determination of observational biases that would manifest within the detected sample. We demonstrate the striking and non-intuitive biases that exist for the detection of TNOs with large semimajor axes. The eight large semimajor axis OSSOS detections are an independent data set, of comparable size to the conglomerate samples used in previous studies. We conclude that the orbital distribution of the OSSOS sample is consistent with being detected from a uniform underlying angular distribution.
OSSOS. VI. Striking Biases in the Detection of Large Semimajor Axis Trans-Neptunian Objects
NASA Astrophysics Data System (ADS)
Shankman, Cory; Kavelaars, J. J.; Bannister, Michele T.; Gladman, Brett J.; Lawler, Samantha M.; Chen, Ying-Tung; Jakubik, Marian; Kaib, Nathan; Alexandersen, Mike; Gwyn, Stephen D. J.; Petit, Jean-Marc; Volk, Kathryn
2017-08-01
The accumulating but small set of large semimajor axis trans-Neptunian objects (TNOs) shows an apparent clustering in the orientations of their orbits. This clustering must either be representative of the intrinsic distribution of these TNOs, or else have arisen as a result of observation biases and/or statistically expected variations for such a small set of detected objects. The clustered TNOs were detected across different and independent surveys, which has led to claims that the detections are therefore free of observational bias. This apparent clustering has led to the so-called “Planet 9” hypothesis that a super-Earth currently resides in the distant solar system and causes this clustering. The Outer Solar System Origins Survey (OSSOS) is a large program that ran on the Canada–France–Hawaii Telescope from 2013 to 2017, discovering more than 800 new TNOs. One of the primary design goals of OSSOS was the careful determination of observational biases that would manifest within the detected sample. We demonstrate the striking and non-intuitive biases that exist for the detection of TNOs with large semimajor axes. The eight large semimajor axis OSSOS detections are an independent data set, of comparable size to the conglomerate samples used in previous studies. We conclude that the orbital distribution of the OSSOS sample is consistent with being detected from a uniform underlying angular distribution.
Ibrahim, Mohamed; Wickenhauser, Patrick; Rautek, Peter; Reina, Guido; Hadwiger, Markus
2018-01-01
Molecular dynamics (MD) simulations are crucial to investigating important processes in physics and thermodynamics. The simulated atoms are usually visualized as hard spheres with Phong shading, where individual particles and their local density can be perceived well in close-up views. However, for large-scale simulations with 10 million particles or more, the visualization of large fields-of-view usually suffers from strong aliasing artifacts, because the mismatch between data size and output resolution leads to severe under-sampling of the geometry. Excessive super-sampling can alleviate this problem, but is prohibitively expensive. This paper presents a novel visualization method for large-scale particle data that addresses aliasing while enabling interactive high-quality rendering. We introduce the novel concept of screen-space normal distribution functions (S-NDFs) for particle data. S-NDFs represent the distribution of surface normals that map to a given pixel in screen space, which enables high-quality re-lighting without re-rendering particles. In order to facilitate interactive zooming, we cache S-NDFs in a screen-space mipmap (S-MIP). Together, these two concepts enable interactive, scale-consistent re-lighting and shading changes, as well as zooming, without having to re-sample the particle data. We show how our method facilitates the interactive exploration of real-world large-scale MD simulation data in different scenarios.
Imchen, Madangchanok; Kumavath, Ranjith; Barh, Debmalya; Azevedo, Vasco; Ghosh, Preetam; Viana, Marcus; Wattam, Alice R
2017-08-18
In this study, we categorize the microbial community in mangrove sediment samples from four different locations within a vast mangrove system in Kerala, India. We compared this data to other samples taken from the other known mangrove data, a tropical rainforest, and ocean sediment. An examination of the microbial communities from a large mangrove forest that stretches across southwestern India showed strong similarities across the higher taxonomic levels. When ocean sediment and a single isolate from a tropical rain forest were included in the analysis, a strong pattern emerged with Bacteria from the phylum Proteobacteria being the prominent taxon among the forest samples. The ocean samples were predominantly Archaea, with Euryarchaeota as the dominant phylum. Principal component and functional analyses grouped the samples isolated from forests, including those from disparate mangrove forests and the tropical rain forest, from the ocean. Our findings show similar patterns in samples were isolated from forests, and these were distinct from the ocean sediment isolates. The taxonomic structure was maintained to the level of class, and functional analysis of the genes present also displayed these similarities. Our report for the first time shows the richness of microbial diversity in the Kerala coast and its differences with tropical rain forest and ocean microbiome.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Ray -Bing; Wang, Weichung; Jeff Wu, C. F.
A numerical method, called OBSM, was recently proposed which employs overcomplete basis functions to achieve sparse representations. While the method can handle non-stationary response without the need of inverting large covariance matrices, it lacks the capability to quantify uncertainty in predictions. We address this issue by proposing a Bayesian approach which first imposes a normal prior on the large space of linear coefficients, then applies the MCMC algorithm to generate posterior samples for predictions. From these samples, Bayesian credible intervals can then be obtained to assess prediction uncertainty. A key application for the proposed method is the efficient construction ofmore » sequential designs. Several sequential design procedures with different infill criteria are proposed based on the generated posterior samples. As a result, numerical studies show that the proposed schemes are capable of solving problems of positive point identification, optimization, and surrogate fitting.« less
Hard X-ray tests of the unified model for an ultraviolet-detected sample of Seyfert 2 galaxies
NASA Technical Reports Server (NTRS)
Mulchaey, John S.; Myshotzky, Richard F.; Weaver, Kimberly A.
1992-01-01
An ultraviolet-detected sample of Seyfert 2 galaxies shows heavy photoelectric absorption in the hard X-ray band. The presence of UV emission combined with hard X-ray absorption argues strongly for a special geometry which must have the general properties of the Antonucci and Miller unified model. The observations of this sample are consistent with the picture in which the hard X-ray photons are viewed directly through the obscuring matter (molecular torus?) and the optical, UV, and soft X-ray continuum are seen in scattered light. The large range in X-ray column densities implies that there must be a large variation in intrinsic thicknesses of molecular tori, an assumption not found in the simplest of unified models. Furthermore, constraints based on the cosmic X-ray background suggest that some of the underlying assumptions of the unified model are wrong.
Chen, Ray -Bing; Wang, Weichung; Jeff Wu, C. F.
2017-04-12
A numerical method, called OBSM, was recently proposed which employs overcomplete basis functions to achieve sparse representations. While the method can handle non-stationary response without the need of inverting large covariance matrices, it lacks the capability to quantify uncertainty in predictions. We address this issue by proposing a Bayesian approach which first imposes a normal prior on the large space of linear coefficients, then applies the MCMC algorithm to generate posterior samples for predictions. From these samples, Bayesian credible intervals can then be obtained to assess prediction uncertainty. A key application for the proposed method is the efficient construction ofmore » sequential designs. Several sequential design procedures with different infill criteria are proposed based on the generated posterior samples. As a result, numerical studies show that the proposed schemes are capable of solving problems of positive point identification, optimization, and surrogate fitting.« less
Characterization of air contaminants formed by the interaction of lava and sea water.
Kullman, G J; Jones, W G; Cornwell, R J; Parker, J E
1994-05-01
We made environmental measurements to characterize contaminants generated when basaltic lava from Hawaii's Kilauea volcano enters sea water. This interaction of lava with sea water produces large clouds of mist (LAZE). Island winds occasionally directed the LAZE toward the adjacent village of Kalapana and the Hawaii Volcanos National Park, creating health concerns. Environmental samples were taken to measure airborne concentrations of respirable dust, crystalline silica and other mineral compounds, fibers, trace metals, inorganic acids, and organic and inorganic gases. The LAZE contained quantifiable concentrations of hydrochloric acid (HCl) and hydrofluoric acid (HF); HCl was predominant. HCl and HF concentrations were highest in dense plumes of LAZE near the sea. The HCl concentration at this sampling location averaged 7.1 ppm; this exceeds the current occupational exposure ceiling of 5 ppm. HF was detected in nearly half the samples, but all concentrations were <1 ppm Sulfur dioxide was detected in one of four short-term indicator tube samples at approximately 1.5 ppm. Airborne particulates were composed largely of chloride salts (predominantly sodium chloride). Crystalline silica concentrations were below detectable limits, less than approximately 0.03 mg/m3 of air. Settled dust samples showed a predominance of glass flakes and glass fibers. Airborne fibers were detected at quantifiable levels in 1 of 11 samples. These fibers were composed largely of hydrated calcium sulfate. These findings suggest that individuals should avoid concentrated plumes of LAZE near its origin to prevent over exposure to inorganic acids, specifically HCl.
Hannett, George E.; Stone, Ward B.; Davis, Stephen W.; Wroblewski, Danielle
2011-01-01
The genetic relatedness of Clostridium botulinum type E isolates associated with an outbreak of wildlife botulism was studied using random amplification of polymorphic DNA (RAPD). Specimens were collected from November 2000 to December 2008 during a large outbreak of botulism affecting birds and fish living in and around Lake Erie and Lake Ontario. In our present study, a total of 355 wildlife samples were tested for the presence of botulinum toxin and/or organisms. Type E botulinum toxin was detected in 110 samples from birds, 12 samples from fish, and 2 samples from mammals. Sediment samples from Lake Erie were also examined for the presence of C. botulinum. Fifteen of 17 sediment samples were positive for the presence of C. botulinum type E. Eighty-one C. botulinum isolates were obtained from plants, animals, and sediments; of these isolates, 44 C. botulinum isolates produced type E toxin, as determined by mouse bioassay, while the remaining 37 isolates were not toxic for mice. All toxin-producing isolates were typed by RAPD; that analysis showed 12 different RAPD types and multiple subtypes. Our study thus demonstrates that multiple genetically distinct strains of C. botulinum were involved in the present outbreak of wildlife botulism. We found that C. botulinum type E is present in the sediments of Lake Erie and that a large range of bird and fish species is affected. PMID:21115703
Repeat sample intraocular pressure variance in induced and naturally ocular hypertensive monkeys.
Dawson, William W; Dawson, Judyth C; Hope, George M; Brooks, Dennis E; Percicot, Christine L
2005-12-01
To compare repeat-sample means variance of laser induced ocular hypertension (OH) in rhesus monkeys with the repeat-sample mean variance of natural OH in age-range matched monkeys of similar and dissimilar pedigrees. Multiple monocular, retrospective, intraocular pressure (IOP) measures were recorded repeatedly during a short sampling interval (SSI, 1-5 months) and a long sampling interval (LSI, 6-36 months). There were 5-13 eyes in each SSI and LSI subgroup. Each interval contained subgroups from the Florida with natural hypertension (NHT), induced hypertension (IHT1) Florida monkeys, unrelated (Strasbourg, France) induced hypertensives (IHT2), and Florida age-range matched controls (C). Repeat-sample individual variance means and related IOPs were analyzed by a parametric analysis of variance (ANOV) and results compared to non-parametric Kruskal-Wallis ANOV. As designed, all group intraocular pressure distributions were significantly different (P < or = 0.009) except for the two (Florida/Strasbourg) induced OH groups. A parametric 2 x 4 design ANOV for mean variance showed large significant effects due to treatment group and sampling interval. Similar results were produced by the nonparametric ANOV. Induced OH sample variance (LSI) was 43x the natural OH sample variance-mean. The same relationship for the SSI was 12x. Laser induced ocular hypertension in rhesus monkeys produces large IOP repeat-sample variance mean results compared to controls and natural OH.
Effects of Sample Selection Bias on the Accuracy of Population Structure and Ancestry Inference
Shringarpure, Suyash; Xing, Eric P.
2014-01-01
Population stratification is an important task in genetic analyses. It provides information about the ancestry of individuals and can be an important confounder in genome-wide association studies. Public genotyping projects have made a large number of datasets available for study. However, practical constraints dictate that of a geographical/ethnic population, only a small number of individuals are genotyped. The resulting data are a sample from the entire population. If the distribution of sample sizes is not representative of the populations being sampled, the accuracy of population stratification analyses of the data could be affected. We attempt to understand the effect of biased sampling on the accuracy of population structure analysis and individual ancestry recovery. We examined two commonly used methods for analyses of such datasets, ADMIXTURE and EIGENSOFT, and found that the accuracy of recovery of population structure is affected to a large extent by the sample used for analysis and how representative it is of the underlying populations. Using simulated data and real genotype data from cattle, we show that sample selection bias can affect the results of population structure analyses. We develop a mathematical framework for sample selection bias in models for population structure and also proposed a correction for sample selection bias using auxiliary information about the sample. We demonstrate that such a correction is effective in practice using simulated and real data. PMID:24637351
Thermoelectric Performance of Na-Doped GeSe
2017-01-01
Recently, hole-doped GeSe materials have been predicted to exhibit extraordinary thermoelectric performance owing largely to extremely low thermal conductivity. However, experimental research on the thermoelectric properties of GeSe has received less attention. Here, we have synthesized polycrystalline Na-doped GeSe compounds, characterized their crystal structure, and measured their thermoelectric properties. The Seebeck coefficient decreases with increasing Na content up to x = 0.01 due to an increase in the hole carrier concentration and remains roughly constant at higher concentrations of Na, consistent with the electrical resistivity variation. However, the electrical resistivity is large for all samples, leading to low power factors. Powder X-ray diffraction and scanning electron microscopy/energy-dispersive spectrometry results show the presence of a ternary impurity phase within the GeSe matrix for all doped samples, which suggests that the optimal carrier concentration cannot be reached by doping with Na. Nevertheless, the lattice thermal conductivity and carrier mobility of GeSe is similar to those of polycrystalline samples of the leading thermoelectric material SnSe, leading to quality factors of comparable magnitude. This implies that GeSe shows promise as a thermoelectric material if a more suitable dopant can be found. PMID:29302637
Flexible conformable hydrophobized surfaces for turbulent flow drag reduction
NASA Astrophysics Data System (ADS)
Brennan, Joseph C.; Geraldi, Nicasio R.; Morris, Robert H.; Fairhurst, David J.; McHale, Glen; Newton, Michael I.
2015-05-01
In recent years extensive work has been focused onto using superhydrophobic surfaces for drag reduction applications. Superhydrophobic surfaces retain a gas layer, called a plastron, when submerged underwater in the Cassie-Baxter state with water in contact with the tops of surface roughness features. In this state the plastron allows slip to occur across the surface which results in a drag reduction. In this work we report flexible and relatively large area superhydrophobic surfaces produced using two different methods: Large roughness features were created by electrodeposition on copper meshes; Small roughness features were created by embedding carbon nanoparticles (soot) into Polydimethylsiloxane (PDMS). Both samples were made into cylinders with a diameter under 12 mm. To characterize the samples, scanning electron microscope (SEM) images and confocal microscope images were taken. The confocal microscope images were taken with each sample submerged in water to show the extent of the plastron. The hydrophobized electrodeposited copper mesh cylinders showed drag reductions of up to 32% when comparing the superhydrophobic state with a wetted out state. The soot covered cylinders achieved a 30% drag reduction when comparing the superhydrophobic state to a plain cylinder. These results were obtained for turbulent flows with Reynolds numbers 10,000 to 32,500.
Diversity of epothilone producers among Sorangium strains in producer-positive soil habitats.
Li, Shu-Guang; Zhao, Lin; Han, Kui; Li, Peng-Fei; Li, Zhi-Feng; Hu, Wei; Liu, Hong; Wu, Zhi-Hong; Li, Yue-Zhong
2014-03-01
Large-scale surveys show that the anti-tumour compounds known as epothilones are produced by only a small proportion of Sorangium strains, thereby greatly hampering the research and development of these valuable compounds. In this study, to investigate the niche diversity of epothilone-producing Sorangium strains, we re-surveyed four soil samples where epothilone producers were previously found. Compared with the < 2.5% positive strains collected from different places, epothilone producers comprised 25.0-75.0% of the Sorangium isolates in these four positive soil samples. These sympatric epothilone producers differed not only in their 16S rRNA gene sequences and morphologies but also in their production of epothilones and biosynthesis genes. A further exploration of 14 soil samples collected from a larger area around a positive site showed a similar high positive ratio of epothilone producers among the Sorangium isolates. The present results suggest that, in an area containing epothilone producers, the long-term genetic variations and refinements resulting from selective pressure form a large reservoir of epothilone-producing Sorangium strains with diverse genetic compositions. © 2013 The Authors. Microbial Biotechnology published by John Wiley & Sons Ltd and Society for Applied Microbiology.
Large Scale Structure Studies: Final Results from a Rich Cluster Redshift Survey
NASA Astrophysics Data System (ADS)
Slinglend, K.; Batuski, D.; Haase, S.; Hill, J.
1995-12-01
The results from the COBE satellite show the existence of structure on scales on the order of 10% or more of the horizon scale of the universe. Rich clusters of galaxies from the Abell-ACO catalogs show evidence of structure on scales of 100 Mpc and hold the promise of confirming structure on the scale of the COBE result. Unfortunately, until now, redshift information has been unavailable for a large percentage of these clusters, so present knowledge of their three dimensional distribution has quite large uncertainties. Our approach in this effort has been to use the MX multifiber spectrometer on the Steward 2.3m to measure redshifts of at least ten galaxies in each of 88 Abell cluster fields with richness class R>= 1 and mag10 <= 16.8 (estimated z<= 0.12) and zero or one measured redshifts. This work has resulted in a deeper, 95% complete and more reliable sample of 3-D positions of rich clusters. The primary intent of this survey has been to constrain theoretical models for the formation of the structure we see in the universe today through 2-pt. spatial correlation function and other analyses of the large scale structures traced by these clusters. In addition, we have obtained enough redshifts per cluster to greatly improve the quality and size of the sample of reliable cluster velocity dispersions available for use in other studies of cluster properties. This new data has also allowed the construction of an updated and more reliable supercluster candidate catalog. Our efforts have resulted in effectively doubling the volume traced by these clusters. Presented here is the resulting 2-pt. spatial correlation function, as well as density plots and several other figures quantifying the large scale structure from this much deeper and complete sample. Also, with 10 or more redshifts in most of our cluster fields, we have investigated the extent of projection effects within the Abell catalog in an effort to quantify and understand how this may effect the Abell sample.
Mechanical Twinning and Microstructures in Experimentally Stressed Quartzite
NASA Astrophysics Data System (ADS)
Minor, A.; Sintubin, M.; Wenk, H. R.; Rybacki, E.
2015-12-01
Since Dauphiné twins in quartz have been identified as a stress-related intracrystalline microstructure, several electron backscatter diffraction (EBSD) studies revealed that Dauphiné twins are present in naturally deformed quartz-bearing rocks in a wide range of tectono-metamorphic conditions. EBSD studies on experimentally stressed quartzite showed that crystals with particular crystallographic orientations contain many Dauphiné twin boundaries, while neighboring crystals with different orientations are largely free of twin boundaries. To understand the relationship between stress direction and orientation of Dauphiné twinned quartz crystals, a detailed EBSD study was performed on experimentally stressed quartzite samples and compared with an undeformed reference sample. We stressed 4 cylindrical samples in triaxial compression in a Paterson type gas deformation apparatus at GFZ Potsdam. Experimental conditions were 300MPa confining pressure, 500°C temperature and axial stresses of 145MPa, 250MPa and 460MPa for about 30 hours, resulting in a minor strain <0.04%. EBSD scans were obtained with a Zeiss Evo scanning electron microscope and TSL software at UC Berkeley. The EBSD maps show that Dauphiné twinning is present in the starting material as well as in experimentally stressed samples. Pole figures of the bulk orientation of the reference sample compared with stressed samples show a significant difference regarding the distribution for the r and z directions. The reference sample shows an indistinct maximum for r and z, whereas the stressed samples show a maximum for r poles and a minimum for z poles in the axial stress direction. EBSD scans of the reference and stressed samples were further analyzed manually to identify the orientations of single grains, which are free of twin boundaries and those, which contain twin boundaries. This analysis aims to quantify the relationship of crystal orientation and stress magnitude to initiate mechanical twinning.
Rohde, Alexander; Hammerl, Jens Andre; Appel, Bernd; Dieckmann, Ralf; Al Dahouk, Sascha
2015-01-01
Efficient preparation of food samples, comprising sampling and homogenization, for microbiological testing is an essential, yet largely neglected, component of foodstuff control. Salmonella enterica spiked chicken breasts were used as a surface contamination model whereas salami and meat paste acted as models of inner-matrix contamination. A systematic comparison of different homogenization approaches, namely, stomaching, sonication, and milling by FastPrep-24 or SpeedMill, revealed that for surface contamination a broad range of sample pretreatment steps is applicable and loss of culturability due to the homogenization procedure is marginal. In contrast, for inner-matrix contamination long treatments up to 8 min are required and only FastPrep-24 as a large-volume milling device produced consistently good recovery rates. In addition, sampling of different regions of the spiked sausages showed that pathogens are not necessarily homogenously distributed throughout the entire matrix. Instead, in meat paste the core region contained considerably more pathogens compared to the rim, whereas in the salamis the distribution was more even with an increased concentration within the intermediate region of the sausages. Our results indicate that sampling and homogenization as integral parts of food microbiology and monitoring deserve more attention to further improve food safety.
Rohde, Alexander; Hammerl, Jens Andre; Appel, Bernd; Dieckmann, Ralf; Al Dahouk, Sascha
2015-01-01
Efficient preparation of food samples, comprising sampling and homogenization, for microbiological testing is an essential, yet largely neglected, component of foodstuff control. Salmonella enterica spiked chicken breasts were used as a surface contamination model whereas salami and meat paste acted as models of inner-matrix contamination. A systematic comparison of different homogenization approaches, namely, stomaching, sonication, and milling by FastPrep-24 or SpeedMill, revealed that for surface contamination a broad range of sample pretreatment steps is applicable and loss of culturability due to the homogenization procedure is marginal. In contrast, for inner-matrix contamination long treatments up to 8 min are required and only FastPrep-24 as a large-volume milling device produced consistently good recovery rates. In addition, sampling of different regions of the spiked sausages showed that pathogens are not necessarily homogenously distributed throughout the entire matrix. Instead, in meat paste the core region contained considerably more pathogens compared to the rim, whereas in the salamis the distribution was more even with an increased concentration within the intermediate region of the sausages. Our results indicate that sampling and homogenization as integral parts of food microbiology and monitoring deserve more attention to further improve food safety. PMID:26539462
Coorevits, L; Heytens, S; Boelens, J; Claeys, G
2017-04-01
The workup and interpretation of urine cultures is not always clear-cut, especially for midstream samples contaminated with commensals. Standard urine culture (SUC) protocols are designed in favor of growth of uropathogens at the expense of commensals. In selected clinical situations, however, it is essential to trace fastidious or new uropathogens by expanding the urine culture conditions (EUC). The aim of our study was to map the microflora in midstream urine specimens from healthy controls by means of EUC, in view of the interpretation of bacterial culture results in symptomatic patients. Midstream urine specimens from 101 healthy controls (86 females and 15 males) were examined using both SUC and EUC. Whilst 73 % of samples examined by SUC showed no growth at 10 3 colony-forming units (CFU)/mL, 91 % of samples examined by EUC grew bacterial species in large numbers (≥10 4 CFU/mL). Asymptomatic bacteriuria, as defined by the European guidelines for urinalysis, was detected in six samples with both protocols. EUC revealed 98 different species, mostly Lactobacillus, Staphylococcus, Streptococcus, and Corynebacterium. None of the samples grew Staphylococcus saprophyticus, Corynebacterium urealyticum, or Aerococcus urinae. Samples from females contained higher bacterial loads and showed higher bacterial diversity compared to males. Midstream urine of healthy controls contains large communities of living bacteria that comprise a resident microflora, only revealed by EUC. Hence, the use of EUC instead of SUC in a routine setting would result in more sensitive but less specific results, requiring critical interpretation. In our view, EUC should be reserved for limited indications.
Buesing, Lars; Bill, Johannes; Nessler, Bernhard; Maass, Wolfgang
2011-01-01
The organization of computations in networks of spiking neurons in the brain is still largely unknown, in particular in view of the inherently stochastic features of their firing activity and the experimentally observed trial-to-trial variability of neural systems in the brain. In principle there exists a powerful computational framework for stochastic computations, probabilistic inference by sampling, which can explain a large number of macroscopic experimental data in neuroscience and cognitive science. But it has turned out to be surprisingly difficult to create a link between these abstract models for stochastic computations and more detailed models of the dynamics of networks of spiking neurons. Here we create such a link and show that under some conditions the stochastic firing activity of networks of spiking neurons can be interpreted as probabilistic inference via Markov chain Monte Carlo (MCMC) sampling. Since common methods for MCMC sampling in distributed systems, such as Gibbs sampling, are inconsistent with the dynamics of spiking neurons, we introduce a different approach based on non-reversible Markov chains that is able to reflect inherent temporal processes of spiking neuronal activity through a suitable choice of random variables. We propose a neural network model and show by a rigorous theoretical analysis that its neural activity implements MCMC sampling of a given distribution, both for the case of discrete and continuous time. This provides a step towards closing the gap between abstract functional models of cortical computation and more detailed models of networks of spiking neurons. PMID:22096452
Cautionary Notes on Cosmogenic W-182 and Other Nuclei in Lunar Samples
NASA Technical Reports Server (NTRS)
Yin, Qingzhu; Jacobsen, Stein B.; Wasserburg, G. J.
2003-01-01
Leya et al. (2000) showed that neutron capture on Ta-181 results in a production rate of Ta-182 (decays with a half-life of 114 days to W-182) sufficiently high to cause significant shifts in W-182 abundances considering the neutron fluences due to the cosmic ray cascade that were known to occur near the lunar surface. Leya et al. concluded that this cosmogenic production of W-182 may explain the large positive epsilon(sub W-182) values that Lee et al. (1997) had reported in some lunar samples rather than being produced from decay of now extinct Hf-182 (bar tau = 13 x 10(exp 6) yr). If the large range in epsilon(sub W-182) of lunar samples (0 to +11 in whole rock samples) was due to decay of now extinct Hf-182, it would require a very early time of formation and differentiation of the lunar crust-mantle system (with high Hf/W ratios) during the earliest stages of Earth s accretion. This result was both surprising and difficult to understand. The ability to explain these results by a more plausible mechanism is therefore very attractive. In a recent report Lee et al. (2002) showed that there were excesses of W-182 and that epsilon(sub W-182) was correlated with the Ta/W ratios in the mineral phases of individual lunar rock samples. This is in accord with W-182 variations in lunar samples being produced by cosmic-ray induced neutron capture on Ta-182.
Metadynamics for training neural network model chemistries: A competitive assessment
NASA Astrophysics Data System (ADS)
Herr, John E.; Yao, Kun; McIntyre, Ryker; Toth, David W.; Parkhill, John
2018-06-01
Neural network model chemistries (NNMCs) promise to facilitate the accurate exploration of chemical space and simulation of large reactive systems. One important path to improving these models is to add layers of physical detail, especially long-range forces. At short range, however, these models are data driven and data limited. Little is systematically known about how data should be sampled, and "test data" chosen randomly from some sampling techniques can provide poor information about generality. If the sampling method is narrow, "test error" can appear encouragingly tiny while the model fails catastrophically elsewhere. In this manuscript, we competitively evaluate two common sampling methods: molecular dynamics (MD), normal-mode sampling, and one uncommon alternative, Metadynamics (MetaMD), for preparing training geometries. We show that MD is an inefficient sampling method in the sense that additional samples do not improve generality. We also show that MetaMD is easily implemented in any NNMC software package with cost that scales linearly with the number of atoms in a sample molecule. MetaMD is a black-box way to ensure samples always reach out to new regions of chemical space, while remaining relevant to chemistry near kbT. It is a cheap tool to address the issue of generalization.
[Syagrus romanzoffiana (Arecaceae) seed utilization by ants in a secondary forest in South Brazil].
Silva, Fernanda R; Begnini, Romualdo M; Klier, Vinícius A; Scherer, Karla Z; Lopes, Benedito C; Castellani, Tânia T
2009-01-01
Ants can nest in a wide variety of substracts. This paper shows Syagrus romanzoffiana seed utilization by ants in an Atlantic secondary forest. We report 29 seeds occupied by small-bodied ants, with 27 of them showing at least two ant development stages. Although a large number of seeds were sampled, a low level of ant occupation was observed.
The cost of large numbers of hypothesis tests on power, effect size and sample size.
Lazzeroni, L C; Ray, A
2012-01-01
Advances in high-throughput biology and computer science are driving an exponential increase in the number of hypothesis tests in genomics and other scientific disciplines. Studies using current genotyping platforms frequently include a million or more tests. In addition to the monetary cost, this increase imposes a statistical cost owing to the multiple testing corrections needed to avoid large numbers of false-positive results. To safeguard against the resulting loss of power, some have suggested sample sizes on the order of tens of thousands that can be impractical for many diseases or may lower the quality of phenotypic measurements. This study examines the relationship between the number of tests on the one hand and power, detectable effect size or required sample size on the other. We show that once the number of tests is large, power can be maintained at a constant level, with comparatively small increases in the effect size or sample size. For example at the 0.05 significance level, a 13% increase in sample size is needed to maintain 80% power for ten million tests compared with one million tests, whereas a 70% increase in sample size is needed for 10 tests compared with a single test. Relative costs are less when measured by increases in the detectable effect size. We provide an interactive Excel calculator to compute power, effect size or sample size when comparing study designs or genome platforms involving different numbers of hypothesis tests. The results are reassuring in an era of extreme multiple testing.
Robust estimation of microbial diversity in theory and in practice
Haegeman, Bart; Hamelin, Jérôme; Moriarty, John; Neal, Peter; Dushoff, Jonathan; Weitz, Joshua S
2013-01-01
Quantifying diversity is of central importance for the study of structure, function and evolution of microbial communities. The estimation of microbial diversity has received renewed attention with the advent of large-scale metagenomic studies. Here, we consider what the diversity observed in a sample tells us about the diversity of the community being sampled. First, we argue that one cannot reliably estimate the absolute and relative number of microbial species present in a community without making unsupported assumptions about species abundance distributions. The reason for this is that sample data do not contain information about the number of rare species in the tail of species abundance distributions. We illustrate the difficulty in comparing species richness estimates by applying Chao's estimator of species richness to a set of in silico communities: they are ranked incorrectly in the presence of large numbers of rare species. Next, we extend our analysis to a general family of diversity metrics (‘Hill diversities'), and construct lower and upper estimates of diversity values consistent with the sample data. The theory generalizes Chao's estimator, which we retrieve as the lower estimate of species richness. We show that Shannon and Simpson diversity can be robustly estimated for the in silico communities. We analyze nine metagenomic data sets from a wide range of environments, and show that our findings are relevant for empirically-sampled communities. Hence, we recommend the use of Shannon and Simpson diversity rather than species richness in efforts to quantify and compare microbial diversity. PMID:23407313
Wu, Baolin
2006-02-15
Differential gene expression detection and sample classification using microarray data have received much research interest recently. Owing to the large number of genes p and small number of samples n (p > n), microarray data analysis poses big challenges for statistical analysis. An obvious problem owing to the 'large p small n' is over-fitting. Just by chance, we are likely to find some non-differentially expressed genes that can classify the samples very well. The idea of shrinkage is to regularize the model parameters to reduce the effects of noise and produce reliable inferences. Shrinkage has been successfully applied in the microarray data analysis. The SAM statistics proposed by Tusher et al. and the 'nearest shrunken centroid' proposed by Tibshirani et al. are ad hoc shrinkage methods. Both methods are simple, intuitive and prove to be useful in empirical studies. Recently Wu proposed the penalized t/F-statistics with shrinkage by formally using the (1) penalized linear regression models for two-class microarray data, showing good performance. In this paper we systematically discussed the use of penalized regression models for analyzing microarray data. We generalize the two-class penalized t/F-statistics proposed by Wu to multi-class microarray data. We formally derive the ad hoc shrunken centroid used by Tibshirani et al. using the (1) penalized regression models. And we show that the penalized linear regression models provide a rigorous and unified statistical framework for sample classification and differential gene expression detection.
Thomson, William Murray; Malden, Penelope Elizabeth
2011-09-01
To examine the properties, validity and responsiveness of the Family Impact Scale in a consecutive clinical sample of patients undergoing dental treatment under general anaesthesia. A consecutive clinical sample of parents/caregivers of children receiving dental treatment under general anaesthesia provided data using the Family Impact Scale (FIS) component of the COHQOL(©) Questionnaire. The first questionnaire was completed before treatment, the follow-up questionnaire 1-4 weeks afterward. Treatment-associated changes in the FIS and its components were determined by comparing baseline and follow-up data. Baseline and follow-up data were obtained for 202 and 130 participants, respectively (64.4% follow-up). All FIS items showed large relative decreases in prevalence, the greatest seen in those relating to having sleep disrupted, blaming others, being upset, the child requiring more attention, financial difficulties and having to take time off work. Factor analysis largely confirmed the underlying factor structure, with three sub-scales (parental/family, parental emotions and family conflict) identified. The parental/family and parental emotions sub-scales showed the greatest treatment-associated improvement, with large effect sizes. There was a moderate improvement in scores on the family conflict sub-scale. The overall FIS showed a large improvement. Treating children with severe caries under general anaesthesia results in OHRQoL improvements for the family. Severe dental caries is not merely a restorative and preventive challenge for those who treat children; it has far-reaching effects on those who share the household and care for the affected child.
Minimal-assumption inference from population-genomic data
NASA Astrophysics Data System (ADS)
Weissman, Daniel; Hallatschek, Oskar
Samples of multiple complete genome sequences contain vast amounts of information about the evolutionary history of populations, much of it in the associations among polymorphisms at different loci. Current methods that take advantage of this linkage information rely on models of recombination and coalescence, limiting the sample sizes and populations that they can analyze. We introduce a method, Minimal-Assumption Genomic Inference of Coalescence (MAGIC), that reconstructs key features of the evolutionary history, including the distribution of coalescence times, by integrating information across genomic length scales without using an explicit model of recombination, demography or selection. Using simulated data, we show that MAGIC's performance is comparable to PSMC' on single diploid samples generated with standard coalescent and recombination models. More importantly, MAGIC can also analyze arbitrarily large samples and is robust to changes in the coalescent and recombination processes. Using MAGIC, we show that the inferred coalescence time histories of samples of multiple human genomes exhibit inconsistencies with a description in terms of an effective population size based on single-genome data.
NASA Astrophysics Data System (ADS)
Starobor, Aleksey; Palashov, Oleg
2018-04-01
Thermal effects in terbium aluminum garnet (TAG) ceramics (thermal lens and thermally induced depolarization) doped with silicon and titanium were investigated in temperature range of 79-293K. Samples with low dopant concentrations shows decreasing of negative thermal effects with cooling to 79 K. However for most part of samples thermal depolarization starts increasing after initial decreasing with cooling. Apparently it is connected with defects in media. Best sample (0.4 at% of Si) as pure TAG shows monotonous decreasing of thermally induced depolarization and 3.5 times Verdet constant increasing with cooling to 79 K, that leads to 1.8-times advantage over common magnetooptical media - terbium gallium garnet. It allows to provide an isolation of 30 dB at a radiation power of more than 6 kW as estimated. However, the procedure for creating ceramics samples obviously needs improvement because of the large scatter in the quality of the samples.
Baird, Andrew J; Haslam, Roger A
2013-12-01
Beliefs, cognitions, and behaviors relating to pain can be associated with a range of negative outcomes. In patients, certain beliefs are associated with increased levels of pain and related disability. There are few data, however, showing the extent to which beliefs of patients differ from those of the general population. This study explored pain beliefs in a large nonclinical population and a chronic low back pain (CLBP) sample using the Pain Beliefs Questionnaire (PBQ) to identify differences in scores and factor structures between and within the samples. This was a cross-sectional study. The samples comprised patients attending a rehabilitation program and respondents to a workplace survey. Pain beliefs were assessed using the PBQ, which incorporates 2 scales: organic and psychological. Exploratory factor analysis was used to explore variations in factor structure within and between samples. The relationship between the 2 scales also was examined. Patients reported higher organic scores and lower psychological scores than the nonclinical sample. Within the nonclinical sample, those who reported frequent pain scored higher on the organic scale than those who did not. Factor analysis showed variations in relation to the presence of pain. The relationship between scales was stronger in those not reporting frequent pain. This was a cross-sectional study; therefore, no causal inferences can be made. Patients experiencing CLBP adopt a more biomedical perspective on pain than nonpatients. The presence of pain is also associated with increased biomedical thinking in a nonclinical sample. However, the impact is not only on the strength of beliefs, but also on the relationship between elements of belief and the underlying belief structure.
Low Zika virus seroprevalence among pregnant women in North Central Nigeria, 2016.
Mathé, Philipp; Egah, Daniel Z; Müller, Janis A; Shehu, Nathan Y; Obishakin, Emmanuel T; Shwe, David D; Pam, Victor C; Okolo, Mark O; Yilgwan, Christopher; Gomerep, Simji S; Fuchs, Jonas; Abok, Ibrahim; Onyedibe, Kenneth I; Olugbo, Ewa J; Isa, Samson E; Machunga-Mambula, Salamatu S; Attah, Caleb J; Münch, Jan; Oguche, Stephen; Panning, Marcus
2018-05-26
Zika virus (ZIKV) has been known for decades in Africa but contemporary data is lacking at large. To describe the seroepidemiology of ZIKV in North Central Nigeria. We performed a cross-sectional study at six health care facilities in North Central Nigeria from January to December 2016. Detection of ZIKV antibodies was done using an anti-ZIKV recombinant non-structural protein 1 (NS1)-based ELISA. A colorimetric assay to detect ZIKV neutralizing antibodies was used on ELISA reactive and randomly selected ELISA non-reactive samples. ZIKV real-time RT-PCR was done on a subset of samples. A total of 468 individual samples were included with almost 60% from pregnant women. Using NS1-based ELISA, an anti-ZIKV positive rate of 6% for IgM and 4% for IgG was found. Pregnant women showed anti-ZIKV positive rates of 4% for IgM and 3% for IgG. None of the ZIKV antibody positive samples tested ZIKV RT-PCR positive. An association with male sex was found for anti-ZIKV IgG ELISA positivity (prevalence ratio 3.49; 95% confidence interval: 1.48-8.25; p = .004). No association with pregnancy, yellow fever vaccination or malaria was found for anti-ZIKV IgM or IgG positivity. ZIKV neutralizing antibodies were detected in 17/18 (94%) anti-ZIKV NS1 positive/borderline samples and in one sample without detectable ZIKV NS1 antibodies. Partial ZIKV E gene sequence was retrieved in one sample without ZIKV antibodies, which clustered within the West African ZIKV lineage. Our results show a largely ZIKV immunologically naïve population and reinforce the importance of ZIKV surveillance in Africa. Copyright © 2018 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Shang, H.; Chen, L.; Bréon, F.-M.; Letu, H.; Li, S.; Wang, Z.; Su, L.
2015-07-01
The principles of the Polarization and Directionality of the Earth's Reflectance (POLDER) cloud droplet size retrieval requires that clouds are horizontally homogeneous. Nevertheless, the retrieval is applied by combining all measurements from an area of 150 km × 150 km to compensate for POLDER's insufficient directional sampling. Using the POLDER-like data simulated with the RT3 model, we investigate the impact of cloud horizontal inhomogeneity and directional sampling on the retrieval, and then analyze which spatial resolution is potentially accessible from the measurements. Case studies show that the sub-scale variability in droplet effective radius (CDR) can mislead both the CDR and effective variance (EV) retrievals. Nevertheless, the sub-scale variations in EV and cloud optical thickness (COT) only influence the EV retrievals and not the CDR estimate. In the directional sampling cases studied, the retrieval is accurate using limited observations and is largely independent of random noise. Several improvements have been made to the original POLDER droplet size retrieval. For example, the measurements in the primary rainbow region (137-145°) are used to ensure accurate large droplet (> 15 μm) retrievals and reduce the uncertainties caused by cloud heterogeneity. We apply the improved method using the POLDER global L1B data for June 2008, the new CDR results are compared with the operational CDRs. The comparison show that the operational CDRs tend to be underestimated for large droplets. The reason is that the cloudbow oscillations in the scattering angle region of 145-165° are weak for cloud fields with CDR > 15 μm. Lastly, a sub-scale retrieval case is analyzed, illustrating that a higher resolution, e.g., 42 km × 42 km, can be used when inverting cloud droplet size parameters from POLDER measurements.
"A Richness Study of 14 Distant X-Ray Clusters from the 160 Square Degree Survey"
NASA Technical Reports Server (NTRS)
Jones, Christine; West, Donald (Technical Monitor)
2001-01-01
We have measured the surface density of galaxies toward 14 X-ray-selected cluster candidates at redshifts z(sub i) 0.46, and we show that they are associated with rich galaxy concentrations. These clusters, having X-ray luminosities of Lx(0.5-2 keV) approx. (0.5 - 2.6) x 10(exp 44) ergs/ sec are among the most distant and luminous in our 160 deg(exp 2) ROSAT Position Sensitive Proportional Counter cluster survey. We find that the clusters range between Abell richness classes 0 and 2 and have a most probable richness class of 1. We compare the richness distribution of our distant clusters to those for three samples of nearby clusters with similar X-ray luminosities. We find that the nearby and distant samples have similar richness distributions, which shows that clusters have apparently not evolved substantially in richness since redshift z=0.5. There is, however, a marginal tendency for the distant clusters to be slightly poorer than nearby clusters, although deeper multicolor data for a large sample would be required to confirm this trend. We compare the distribution of distant X-ray clusters in the L(sub X)-richness plane to the distribution of optically selected clusters from the Palomar Distant Cluster Survey. The optically selected clusters appear overly rich for their X-ray luminosities, when compared to X-ray-selected clusters. Apparently, X-ray and optical surveys do not necessarily sample identical mass concentrations at large redshifts. This may indicate the existence of a population of optically rich clusters with anomalously low X-ray emission, More likely, however, it reflects the tendency for optical surveys to select unvirialized mass concentrations, as might be expected when peering along large-scale filaments.
NASA Technical Reports Server (NTRS)
Peters, B. C., Jr.; Walker, H. F.
1978-01-01
This paper addresses the problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, we introduce a general iterative procedure, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. We show that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. We also show that the step-size which yields optimal local convergence rates for large samples is determined in a sense by the 'separation' of the component normal densities and is bounded below by a number between 1 and 2.
Temporally-stable active precision mount for large optics.
Reinlein, Claudia; Damm, Christoph; Lange, Nicolas; Kamm, Andreas; Mohaupt, Matthias; Brady, Aoife; Goy, Matthias; Leonhard, Nina; Eberhardt, Ramona; Zeitner, Uwe; Tünnermann, Andreas
2016-06-13
We present a temporally-stable active mount to compensate for manufacturing-induced deformations of reflective optical components. In this paper, we introduce the design of the active mount, and its evaluation results for two sample mirrors: a quarter mirror of 115 × 105 × 9 mm3, and a full mirror of 228 × 210 × 9 mm3. The quarter mirror with 20 actuators shows a best wavefront error rms of 10 nm. Its installation position depending deformations are addressed by long-time measurements over 14 weeks indicating no significance of the orientation. Size-induced differences of the mount are studied by a full mirror with 80 manual actuators arranged in the same actuator pattern as the quarter mirror. This sample shows a wavefront error rms of (27±2) nm over a measurement period of 46 days. We conclude that the developed mount is suitable to compensate for manufacturing-induced deformations of large reflective optics, and likely to be included in the overall systems alignment procedure.
Minimum Sobolev norm interpolation of scattered derivative data
NASA Astrophysics Data System (ADS)
Chandrasekaran, S.; Gorman, C. H.; Mhaskar, H. N.
2018-07-01
We study the problem of reconstructing a function on a manifold satisfying some mild conditions, given data of the values and some derivatives of the function at arbitrary points on the manifold. While the problem of finding a polynomial of two variables with total degree ≤n given the values of the polynomial and some of its derivatives at exactly the same number of points as the dimension of the polynomial space is sometimes impossible, we show that such a problem always has a solution in a very general situation if the degree of the polynomials is sufficiently large. We give estimates on how large the degree should be, and give explicit constructions for such a polynomial even in a far more general case. As the number of sampling points at which the data is available increases, our polynomials converge to the target function on the set where the sampling points are dense. Numerical examples in single and double precision show that this method is stable, efficient, and of high-order.
Niama, Fabien Roch; Vidal, Nicole; Diop-Ndiaye, Halimatou; Nguimbi, Etienne; Ahombo, Gabriel; Diakabana, Philippe; Bayonne Kombo, Édith Sophie; Mayengue, Pembe Issamou; Kobawila, Simon-Charles; Parra, Henri Joseph; Toure-Kane, Coumba
2017-07-05
In this work, we investigated the genetic diversity of HIV-1 and the presence of mutations conferring antiretroviral drug resistance in 50 drug-naïve infected persons in the Republic of Congo (RoC). Samples were obtained before large-scale access to HAART in 2002 and 2004. To assess the HIV-1 genetic recombination, the sequencing of the pol gene encoding a protease and partial reverse transcriptase was performed and analyzed with updated references, including newly characterized CRFs. The assessment of drug resistance was conducted according to the WHO protocol. Among the 50 samples analyzed for the pol gene, 50% were classified as intersubtype recombinants, charring complex structures inside the pol fragment. Five samples could not be classified (noted U). The most prevalent subtypes were G with 10 isolates and D with 11 isolates. One isolate of A, J, H, CRF05, CRF18 and CRF37 were also found. Two samples (4%) harboring the mutations M230L and Y181C associated with the TAMs M41L and T215Y, respectively, were found. This first study in the RoC, based on WHO classification, shows that the threshold of transmitted drug resistance before large-scale access to antiretroviral therapy is 4%.
Li, Quanfeng; Wang, Qi; Hou, Yubin; Lu, Qingyou
2012-04-01
We present a home-built 18/20 T high magnetic field scanning tunneling microscope (STM) featuring fully low voltage (lower than ±15 V) operability in low temperatures, large scale searching ability, and 20 fA high current resolution (measured by using a 100 GOhm dummy resistor to replace the tip-sample junction) with a bandwidth of 3.03 kHz. To accomplish low voltage operation which is important in achieving high precision, low noise, and low interference with the strong magnetic field, the coarse approach is implemented with an inertial slider driven by the lateral bending of a piezoelectric scanner tube (PST) whose inner electrode is axially split into two for enhanced bending per volt. The PST can also drive the same sliding piece to inertial slide in the other bending direction (along the sample surface) of the PST, which realizes the large area searching ability. The STM head is housed in a three segment tubular chamber, which is detachable near the STM head for the convenience of sample and tip changes. Atomic resolution images of a graphite sample taken under 17.6 T and 18.0001 T are presented to show its performance. © 2012 American Institute of Physics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taran, Subhrangsu, E-mail: ami.subhra@gmail.com; Sun, C. P.; Yang, H. D.
A detail study of transport and magnetic properties of La{sub 1-x}Li{sub x}MnO{sub 3+δ} (0.05 ≤ x ≤ 0.3) system synthesized by wet-chemical mixing route has been done. The room temperature x-ray powder diffraction (XRD) data show single phase behavior of all samples except x = 0.3. Rietveld refinement of XRD data shows structural transition from rhombohedral (R3-C) to orthorhombic (Pnma) symmetry occurs at the Li-doping level x > 0.2 with both the lattice parameter and unit-cell volume decrease with increase of ‘x’. All the samples show ferromagnetic (FM) behavior while metallic behavior are shown by the samples up to Li-concentrationmore » x = 0.2. With further Li doping i.e. for x = 0.25, the sample shows insulating behavior accompanied by charge-order transition around T ~ 225 K. Metallic part of the resistivity data of the samples is best fitted with an expression ρ(T) = ρ{sub 0} + ρ{sub 4.5}T{sup 4.5} + C/ sinh{sup 2}(hv{sub s}/2k{sub B}T) containing small-polaron contribution (last term). Most interesting finding in the present study is the observation of large anomalous decrease in thermoelectric power (S) below 100 K shown by the sample with x = 0.25. Probable mechanisms responsible for the observed colossal thermoelectric power have been discussed.« less
The multiple imputation method: a case study involving secondary data analysis.
Walani, Salimah R; Cleland, Charles M
2015-05-01
To illustrate with the example of a secondary data analysis study the use of the multiple imputation method to replace missing data. Most large public datasets have missing data, which need to be handled by researchers conducting secondary data analysis studies. Multiple imputation is a technique widely used to replace missing values while preserving the sample size and sampling variability of the data. The 2004 National Sample Survey of Registered Nurses. The authors created a model to impute missing values using the chained equation method. They used imputation diagnostics procedures and conducted regression analysis of imputed data to determine the differences between the log hourly wages of internationally educated and US-educated registered nurses. The authors used multiple imputation procedures to replace missing values in a large dataset with 29,059 observations. Five multiple imputed datasets were created. Imputation diagnostics using time series and density plots showed that imputation was successful. The authors also present an example of the use of multiple imputed datasets to conduct regression analysis to answer a substantive research question. Multiple imputation is a powerful technique for imputing missing values in large datasets while preserving the sample size and variance of the data. Even though the chained equation method involves complex statistical computations, recent innovations in software and computation have made it possible for researchers to conduct this technique on large datasets. The authors recommend nurse researchers use multiple imputation methods for handling missing data to improve the statistical power and external validity of their studies.
Golemba, Marcelo D; Culasso, Andrés C A; Villamil, Federico G; Bare, Patricia; Gadano, Adrián; Ridruejo, Ezequiel; Martinez, Alfredo; Di Lello, Federico A; Campos, Rodolfo H
2013-01-01
The estimated prevalence of HCV infection in Argentina is around 2%. However, higher rates of infection have been described in population studies of small urban and rural communities. The aim of this work was to compare the origin and diversification of HCV-1b in samples from two different epidemiological scenarios: Buenos Aires, a large cosmopolitan city, and O'Brien, a small rural town with a high prevalence of HCV infection. The E1/E2 and NS5B regions of the viral genome from 83 patients infected with HCV-1b were sequenced. Phylogenetic analysis and Bayesian Coalescent methods were used to study the origin and diversification of HCV-1b in both patient populations. Samples from Buenos Aires showed a polyphyletic behavior with a tMRCA around 1887-1900 and a time of spread of infection approximately 60 years ago. In contrast, samples from ÓBrien showed a monophyletic behavior with a tMRCA around 1950-1960 and a time of spread of infection more recent than in Buenos Aires, around 20-30 years ago. Phylogenetic and coalescence analysis revealed a different behavior in the epidemiological histories of Buenos Aires and ÓBrien. HCV infection in Buenos Aires shows a polyphyletic behavior and an exponential growth in two phases, whereas that in O'Brien shows a monophyletic cluster and an exponential growth in one single step with a more recent tMRCA. The polyphyletic origin and the probability of encountering susceptible individuals in a large cosmopolitan city like Buenos Aires are in agreement with a longer period of expansion. In contrast, in less populated areas such as O'Brien, the chances of HCV transmission are strongly restricted. Furthermore, the monophyletic character and the most recent time of emergence suggest that different HCV-1b ancestors (variants) that were in expansion in Buenos Aires had the opportunity to colonize and expand in O'Brien.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Varadwaj, K.S.K.; Panigrahi, M.K.; Ghose, J.
2004-11-01
Diol capped {gamma}-Fe{sub 2}O{sub 3} nanoparticles are prepared from ferric nitrate by refluxing in 1,4-butanediol (9.5nm) and 1,5-pentanediol (15nm) and uncapped particles are prepared by refluxing in 1,2-propanediol followed by sintering the alkoxide formed. X-ray diffraction (XRD) shows that all the samples have the spinel phase. Raman spectroscopy shows that the samples prepared in 1,4-butanediol and 1,5-pentanediol and 1,2-propanediol (sintered at 573 and 673K) are {gamma}-Fe{sub 2}O{sub 3} and the 773K-sintered sample is Fe{sub 3}O{sub 4}. Raman laser studies carried out at various laser powers show that all the samples undergo laser-induced degradation to {alpha}-Fe{sub 2}O{sub 3} at higher lasermore » power. The capped samples are however, found more stable to degradation than the uncapped samples. The stability of {gamma}-Fe{sub 2}O{sub 3} sample with large particle size (15.4nm) is more than the sample with small particle size (10.2nm). Fe{sub 3}O{sub 4} having a particle size of 48nm is however less stable than the smaller {gamma}-Fe{sub 2}O{sub 3} nanoparticles.« less
Moreno, Teresa; Merolla, Luciano; Gibbons, Wes; Greenwell, Leona; Jones, Tim; Richards, Roy
2004-10-15
Atmospheric aerosol samples were collected during different prevailing wind directions from a site located close to a busy motorway, a major steelworks, and the town of Port Talbot (Wales, UK). A high-volume collector was used (1100 l/min), enabling relatively large amounts of particulate matter (PM(10-2.5) and PM(2.5)) samples to be obtained on a polyurethane foam [PUF, H(2)N-C(O)O-CH(2)CH(3)] substrate over periods of 2-7 days. Four samples were chosen to exemplify different particle mixtures: SE- and NE-derived samples for particles moving along and across the motorway, a NW-derived sample from the town, and a mixed SW/SE-derived sample containing a mixture of particles from both steelworks and motorway. The latter sample showed the highest average collection rate (0.9 mg/h, 13 microg/m(3)) and included a prominent pollution episode when rainy winds were blowing from the direction of the steelworks. Both NW and SE samples were collected under dry conditions and show the same collection rate (0.7 mg/h, 10 microg/m(3)), whereas the NE sample was collected during wetter weather and shows the lowest rate (0.3 mg/h, 5 microg/m(3)). Scanning electron microscopy (SEM) and energy-dispersive X-ray microanalysis system (EDX) analyses show all samples are dominated by elemental and organic carbon compounds (EOCC) and nitrates, with lesser amounts of sulphates, felsic silicates, chlorides and metals. ICP-MS analyses show the SW/SE sample to be richest in metals, especially Fe, Zn, Ni, and Mn, these being attributed to an origin from the steelworks. The SE sample, blown along the motorway corridor, shows enhanced levels of Pb, V, Ti, As, and Ce, these metals being interpreted as defining a traffic-related chemical fingerprint. The NW sample shows a very low metal content. DNA plasmid assay data on the samples show TM(50) values varying from 66 to 175 microg/ml for the adjusted whole sample and 89 to 203 microg/ml for the soluble fraction. The SW/SE-mixed metalliferous sample is the most bioreactive (both whole and soluble) and the soluble fraction of the metal-depleted NW sample is the least bioreactive. The metal content of the aerosol samples, especially soluble metals such as Zn, is suggested to be the primary component responsible for oxidative damage of the DNA, and therefore most implicated in any health effects arising from the inhalation of these particulate cocktails.
NASA Astrophysics Data System (ADS)
Sung, S.; Kim, H. G.; Lee, D. K.; Park, J. H.; Mo, Y.; Kil, S.; Park, C.
2016-12-01
The impact of climate change has been observed throughout the globe. The ecosystem experiences rapid changes such as vegetation shift, species extinction. In these context, Species Distribution Model (SDM) is one of the popular method to project impact of climate change on the ecosystem. SDM basically based on the niche of certain species with means to run SDM present point data is essential to find biological niche of species. To run SDM for plants, there are certain considerations on the characteristics of vegetation. Normally, to make vegetation data in large area, remote sensing techniques are used. In other words, the exact point of presence data has high uncertainties as we select presence data set from polygons and raster dataset. Thus, sampling methods for modeling vegetation presence data should be carefully selected. In this study, we used three different sampling methods for selection of presence data of vegetation: Random sampling, Stratified sampling and Site index based sampling. We used one of the R package BIOMOD2 to access uncertainty from modeling. At the same time, we included BioCLIM variables and other environmental variables as input data. As a result of this study, despite of differences among the 10 SDMs, the sampling methods showed differences in ROC values, random sampling methods showed the lowest ROC value while site index based sampling methods showed the highest ROC value. As a result of this study the uncertainties from presence data sampling methods and SDM can be quantified.
Bejarano, Adriana C; Michel, Jacqueline
2010-05-01
A large-scale assessment of polycyclic aromatic hydrocarbons (PAHs) from the 1991 Gulf War oil spill was performed for 2002-2003 sediment samples (n = 1679) collected from habitats along the shoreline of Saudi Arabia. Benthic sediment toxicity was characterized using the Equilibrium Partitioning Sediment Benchmark Toxic Unit approach for 43 PAHs (ESBTU(FCV,43)). Samples were assigned to risk categories according to ESBTU(FCV,43) values: no-risk (< or = 1), low (>1 - < or = 2), low-medium (>2 - < or = 3), medium (>3 - < or = 5) and high-risk (>5). Sixty seven percent of samples had ESBTU(FCV,43) > 1 indicating potential adverse ecological effects. Sediments from the 0-30 cm layer from tidal flats, and the >30 - <60 cm layer from heavily oiled halophytes and mangroves had high frequency of high-risk samples. No-risk samples were characterized by chrysene enrichment and depletion of lighter molecular weight PAHs, while high-risk samples showed little oil weathering and PAH patterns similar to 1993 samples. North of Safaniya sediments were not likely to pose adverse ecological effects contrary to sediments south of Tanaqib. Landscape and geomorphology has played a role on the distribution and persistence in sediments of oil from the Gulf War. Copyright 2009 Elsevier Ltd. All rights reserved.
Wood, Henry M; Belvedere, Ornella; Conway, Caroline; Daly, Catherine; Chalkley, Rebecca; Bickerdike, Melissa; McKinley, Claire; Egan, Phil; Ross, Lisa; Hayward, Bruce; Morgan, Joanne; Davidson, Leslie; MacLennan, Ken; Ong, Thian K; Papagiannopoulos, Kostas; Cook, Ian; Adams, David J; Taylor, Graham R; Rabbitts, Pamela
2010-08-01
The use of next-generation sequencing technologies to produce genomic copy number data has recently been described. Most approaches, however, reply on optimal starting DNA, and are therefore unsuitable for the analysis of formalin-fixed paraffin-embedded (FFPE) samples, which largely precludes the analysis of many tumour series. We have sought to challenge the limits of this technique with regards to quality and quantity of starting material and the depth of sequencing required. We confirm that the technique can be used to interrogate DNA from cell lines, fresh frozen material and FFPE samples to assess copy number variation. We show that as little as 5 ng of DNA is needed to generate a copy number karyogram, and follow this up with data from a series of FFPE biopsies and surgical samples. We have used various levels of sample multiplexing to demonstrate the adjustable resolution of the methodology, depending on the number of samples and available resources. We also demonstrate reproducibility by use of replicate samples and comparison with microarray-based comparative genomic hybridization (aCGH) and digital PCR. This technique can be valuable in both the analysis of routine diagnostic samples and in examining large repositories of fixed archival material.
Speckle in the diffraction patterns of Hendricks-Teller and icosahedral glass models
NASA Technical Reports Server (NTRS)
Garg, Anupam; Levine, Dov
1988-01-01
It is shown that the X-ray diffraction patterns from the Hendricks-Teller model for layered systems and the icosahedral glass models for the icosahedral phases show large fluctuations between nearby scattering wave vectors and from sample to sample, that are quite analogous to laser speckle. The statistics of these fluctuations are studied analytically for the first model and via computer simulations for the second. The observability of these effects is discussed briefly.
NASA Astrophysics Data System (ADS)
Peyton, S. L.; Reiners, P. W.
2007-12-01
We dated borehole and surface samples from the Wind River and Beartooth Laramide-age, basement-cored uplifts of the Rocky Mountain foreland using the apatite (U-Th)/He (AHe) system. Comparison of these results to previously published apatite fission-track (AFT) data along with the incorporation of new He diffusion models (Shuster et al., 2006), reveals several new insights into, and poses new interpretational challenges for, the shallow exhumation histories of these ranges. Deep (2.2-2.8 km below surface) borehole samples from the Wind River Range have AHe ages of 9-12 Ma, and suggest at least 600 m of rapid exhumation during the Miocene. Shallower samples range from 35-66 Ma and are consistent with exhumation of a fossil partial retention zone. Previously-published apatite fission track (AFT) data from the same borehole show at least 2 km of rapid exhumation at ~45-38 Ma at depths where AHe ages are 9-50 Ma. This contrasts with the AHe ages which show slow exhumation between 12-66 Ma and have a trend on an age-elevation plot that appears to cut across the AFT age trend. Forward modeling of the cooling ages of these data using well-constrained thermal histories and conventional Durango apatite He diffusion data cannot explain these coupled AFT-AHe age-elevation relationships. However, modeling using diffusion kinetics of the Shuster et al. radiation-damage trapping model can explain the observed age trends, including the apparent presence of a 45-38 Ma exhumation event in the AFT data and its absence in the AHe data. In the model the shallow samples do not reach high enough temperatures for annealing of accumulated radiation damage, so He is trapped and ages are much older than predicted by conventional diffusion models. Previously-published AFT data from the Beartooth Range also show a large Laramide-age exhumation event, dated at 57-52 Ma. Similar to our observations from the Wind River Range, this event is not represented in our AHe results from borehole samples, which instead show slow cooling between at least 63-10 Ma. The trapping model predicts that the observed AHe age of a single apatite grain will be proportional to its effective Uranium content (eU), a proxy for radiation damage. Multiple single-grain replicates from a sample from the Wind River borehole are consistent with this, showing a strong correlation with eU. Although the trapping-diffusion model explains the coupled AFT-AHe data of borehole samples, surface samples from the Fremont Peak area in the Wind River Range have AHe ages that are older than the corresponding previously-published AFT ages over the 1.2 km elevation traverse sampled. AFT ages show ~1 km of rapid exhumation at ~62-58 Ma; corresponding AHe ages are as much as 20 Myr older. Although the radiation damage trapping model predicts that some AHe ages may be older than the corresponding AFT ages, thermal- diffusion forward models cannot explain these large age differences over such a large sampling interval, even if trapping model kinetic parameters are varied by 5%. Thus, discrepancies in AFT and AHe ages of these surficial samples remain problematic. The thermal histories required to approximate the borehole data require burial up to the end of the Cretaceous of ~3-4 km followed by at least two phases of cooling and exhumation. The first and larger cooling event of several tens of degrees (~3-4 km of exhumation) occurred during the Paleocene-Eocene, followed by a smaller cooling event of a few tens of degrees (~1 km of exhumation) during the Miocene.
Whole genome amplification and real-time PCR in forensic casework
Giardina, Emiliano; Pietrangeli, Ilenia; Martone, Claudia; Zampatti, Stefania; Marsala, Patrizio; Gabriele, Luciano; Ricci, Omero; Solla, Gianluca; Asili, Paola; Arcudi, Giovanni; Spinella, Aldo; Novelli, Giuseppe
2009-01-01
Background WGA (Whole Genome Amplification) in forensic genetics can eliminate the technical limitations arising from low amounts of genomic DNA (gDNA). However, it has not been used to date because any amplification bias generated may complicate the interpretation of results. Our aim in this paper was to assess the applicability of MDA to forensic SNP genotyping by performing a comparative analysis of genomic and amplified DNA samples. A 26-SNPs TaqMan panel specifically designed for low copy number (LCN) and/or severely degraded genomic DNA was typed on 100 genomic as well as amplified DNA samples. Results Aliquots containing 1, 0.1 and 0.01 ng each of 100 DNA samples were typed for a 26-SNPs panel. Similar aliquots of the same DNA samples underwent multiple displacement amplification (MDA) before being typed for the same panel. Genomic DNA samples showed 0% PCR failure rate for all three dilutions, whilst the PCR failure rate of the amplified DNA samples was 0% for the 1 ng and 0.1 ng dilutions and 0.077% for the 0.01 ng dilution. The genotyping results of both the amplified and genomic DNA samples were also compared with reference genotypes of the same samples obtained by direct sequencing. The genomic DNA samples showed genotype concordance rates of 100% for all three dilutions while the concordance rates of the amplified DNA samples were 100% for the 1 ng and 0.1 ng dilutions and 99.923% for the 0.01 ng dilution. Moreover, ten artificially-degraded DNA samples, which gave no results when analyzed by current forensic methods, were also amplified by MDA and genotyped with 100% concordance. Conclusion We investigated the suitability of MDA material for forensic SNP typing. Comparative analysis of amplified and genomic DNA samples showed that a large number of SNPs could be accurately typed starting from just 0.01 ng of template. We found that the MDA genotyping call and accuracy rates were only slightly lower than those for genomic DNA. Indeed, when 10 pg of input DNA was used in MDA, we obtained 99.923% concordance, indicating a genotyping error rate of 1/1299 (7.7 × 10-4). This is quite similar to the genotyping error rate of STRs used in current forensic analysis. Such efficiency and accuracy of SNP typing of amplified DNA suggest that MDA can also generate large amounts of genome-equivalent DNA from a minimal amount of input DNA. These results show for the first time that MDA material is suitable for SNP-based forensic protocols and in general when samples fail to give interpretable STR results. PMID:19366436
Onga, Chie; Nakashima, Satoru
2014-01-01
Visible darkfield reflectance spectroscopy equipped with a color mapping system has been developed and applied to a brown-colored Rokko granite sample. Sample reflectance spectra converted to Kubelka-Munk (KM) spectra show similar features to goethite and lepidocrocite. Raman microspectroscopy on the granite sample surface confirms the presence of these minerals. Here, L*a*b* color values (second Commission Internationale d'Eclairage [CIELab] 1976 color space) were determined from the sample reflection spectra. Grey, yellow, and brown zones of the granite show different L*, a*, and b* values. In the a*-b* diagram, a* and b* values in the grey and brown zones are on the lepidocrocite/ferrihydrite trends, but their values in the brown zone are larger than those in the grey zone. The yellow zone shows data points close to the goethite trend. Iron (hydr)oxide-rich areas can be visualized by means of large a* and b* values in the L*, a*, and b* maps. Although the present method has some problems and limitations, the visible darkfield reflectance spectroscopy can be a useful method for colored-material characterization.
NASA Astrophysics Data System (ADS)
Xu, Xiang; Mi, Gaoyang; Luo, Yuanqing; Jiang, Ping; Shao, Xinyu; Wang, Chunming
2017-07-01
Laser metal deposition (LMD) with a filler has been demonstrated to be an effective method for additive manufacturing because of its high material deposition efficiency, improved surface quality, reduced material wastage, and cleaner process environment without metal dust pollution. In this study, single beads and samples with ten layers were successfully deposited on a 316 L stainless steel surface under optimized conditions using a 4000 W continuous wave fibre laser and an arc welding machine. The results showed that satisfactory layered samples with a large deposition height and smooth side surface could be achieved under appropriate parameters. The uniform structures had fine cellular and network austenite grains with good metallurgical bonding between layers, showing an austenite solidification mode. Precipitated ferrite at the grain boundaries showed a subgrain structure with fine uniform grain size. A higher microhardness (205-226 HV) was detected in the middle of the deposition area, while the tensile strength of the 50 layer sample reached 669 MPa. In addition, ductile fracturing was proven by the emergence of obvious dimples at the fracture surface.
Relapsed neuroblastomas show frequent RAS-MAPK pathway mutations | Office of Cancer Genomics
The majority of patients with neuroblastoma have tumors that initially respond to chemotherapy, but a large proportion will experience therapy-resistant relapses. The molecular basis of this aggressive phenotype is unknown. Whole-genome sequencing of 23 paired diagnostic and relapse neuroblastomas showed clonal evolution from the diagnostic tumor, with a median of 29 somatic mutations unique to the relapse sample. Eighteen of the 23 relapse tumors (78%) showed mutations predicted to activate the RAS-MAPK pathway.
Effect of sampling rate and record length on the determination of stability and control derivatives
NASA Technical Reports Server (NTRS)
Brenner, M. J.; Iliff, K. W.; Whitman, R. K.
1978-01-01
Flight data from five aircraft were used to assess the effects of sampling rate and record length reductions on estimates of stability and control derivatives produced by a maximum likelihood estimation method. Derivatives could be extracted from flight data with the maximum likelihood estimation method even if there were considerable reductions in sampling rate and/or record length. Small amplitude pulse maneuvers showed greater degradation of the derivative maneuvers than large amplitude pulse maneuvers when these reductions were made. Reducing the sampling rate was found to be more desirable than reducing the record length as a method of lessening the total computation time required without greatly degrading the quantity of the estimates.
Volpi, Barbara; Marzilli, Eleonora; Tambelli, Renata
2018-01-01
Adolescents are the main users of new technologies and their main purpose of use is social interaction. Although new technologies are useful to teenagers, in addressing their developmental tasks, recent studies have shown that they may be an obstacle in their growth. Research shows that teenagers with Internet addiction experience lower quality in their relationships with parents and more individual difficulties. However, limited research is available on the role played by adolescents' attachment to parents and peers, considering their psychological profiles. We evaluated in a large community sample of adolescents (N = 1105) the Internet use/abuse, the adolescents' attachment to parents and peers, and their psychological profiles. Hierarchical regression analyses were conducted to verify the influence of parental and peer attachment on Internet use/abuse, considering the moderating effect of adolescents' psychopathological risk. Results showed that adolescents' attachment to parents had a significant effect on Internet use. Adolescents' psychopathological risk had a moderating effect on the relationship between attachment to mothers and Internet use. Our study shows that further research is needed, taking into account both individual and family variables. PMID:29707572
Weak antilocalization effect due to topological surface states in Bi2Se2.1Te0.9
NASA Astrophysics Data System (ADS)
Shrestha, K.; Graf, D.; Marinova, V.; Lorenz, B.; Chu, C. W.
2017-10-01
We have investigated the weak antilocalization (WAL) effect in the p-type Bi2Se2.1Te0.9 topological system. The magnetoconductance shows a cusp-like feature at low magnetic fields, indicating the presence of the WAL effect. The WAL curves measured at different tilt angles merge together when they are plotted as a function of the normal field components, showing that surface states dominate the magnetoconductance in the Bi2Se2.1Te0.9 crystal. We have calculated magnetoconductance per conduction channel and applied the Hikami-Larkin-Nagaoka formula to determine the physical parameters that characterize the WAL effect. The number of conduction channels and the phase coherence length do not change with temperature up to T = 5 K. In addition, the sample shows a large positive magnetoresistance that reaches 1900% under a magnetic field of 35 T at T = 0.33 K with no sign of saturation. The magnetoresistance value decreases with both increasing temperature and tilt angle of the sample surface with respect to the magnetic field. The large magnetoresistance of topological insulators can be utilized in future technology such as sensors and memory devices.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Puls, R.W.; Powell, R.M.
R.S. Kerr Environmental Research Laboratory (RSKERL) personnel have evaluated sampling procedures for the collection of representative, accurate, and reproducible ground water quality samples for metals for the past four years. Intensive sampling research at three different field sites has shown that the method by which samples are collected has a greater impact on sample quality, accuracy, and reproducibility than whether the samples are filtered or not. In particular, sample collection practices that induce artifically high levels of turbidity have been shown to have the greatest negative impacts on sample quality. Results indicated the ineffectiveness of bailer for collection of representativemore » metal samples. Inconsistent operator usage together with excessive purging generally resulted in excessive turbidity and large differences in filtered and unfiltered metal samples. The use of low flow rate purging and sampling consistently produced filtered and unfiltered samples that showed no significant differences in concentrations. Turbidity levels were generally less than 5 NTUs, even in fine-textured glacial till. The authors recommend the use of low flow rates, during both purging and sampling.« less
Ryan, John Jake; Rawn, Dorothea F K
2014-09-01
Human milk samples were collected from individuals residing in various regions across Canada mostly in the years 1992 to 2005. These included five large cities in southern Canada as well as samples from Nunavik in northern Quebec. Comparative samples were also collected from residents of Austin, Texas, USA in 2002 and 2004. More than 300 milk samples were analysed for the brominated flame retardants (BFRs), PBDEs and HBCD, by extraction, purification and quantification using either isotope dilution gas chromatography-mass spectrometry (GC-MS) or liquid chromatography-MS. The Canadian total PBDE values in the years 2002-2005 show median levels of about 20μg/kg on a lipid basis; a value significantly higher than in the 1980s and 1990s. Milk samples from Inuit donors in the northern region of Nunavik were slightly lower in PBDE concentrations than those from populated regions in the south of Quebec. Milk samples from Ontario contained slightly lower amounts of PBDEs in two time periods than those from Texas. HBCD levels in most milk samples were usually less than 1ppb milk lipid and dominated by the α-isomer. This large data set of BFRs in Canadian human milk demonstrates an increase in the last few decades in human exposure to BFRs which now appears to have stabilized. Crown Copyright © 2014. Published by Elsevier Ltd. All rights reserved.
Nonlocal nonlinear refraction in Hibiscus sabdariffa with large phase shifts.
Ramírez-Martínez, D; Alvarado-Méndez, E; Trejo-Durán, M; Vázquez-Guevara, M A
2014-10-20
In this work we present a study of nonlinear optical properties in organic materials (hibiscus sabdariffa). Our results demonstrate that the medium exhibits a highly nonlocal nonlinear response. We show preliminary numerical results of the transmittance as nonlocal response by considering, simultaneously, the nonlinear absorption and refraction in media. Numerical results are accord to measurement obtained by Z- scan technique where we observe large phase shifts. We also analyze the far field diffraction ring patterns of the sample.
He, Hongbin; Argiro, Laurent; Dessein, Helia; Chevillard, Christophe
2007-01-01
FTA technology is a novel method designed to simplify the collection, shipment, archiving and purification of nucleic acids from a wide variety of biological sources. The number of punches that can normally be obtained from a single specimen card are often however, insufficient for the testing of the large numbers of loci required to identify genetic factors that control human susceptibility or resistance to multifactorial diseases. In this study, we propose an improved technique to perform large-scale SNP genotyping. We applied a whole genome amplification method to amplify DNA from buccal cell samples stabilized using FTA technology. The results show that using the improved technique it is possible to perform up to 15,000 genotypes from one buccal cell sample. Furthermore, the procedure is simple. We consider this improved technique to be a promising methods for performing large-scale SNP genotyping because the FTA technology simplifies the collection, shipment, archiving and purification of DNA, while whole genome amplification of FTA card bound DNA produces sufficient material for the determination of thousands of SNP genotypes.
Thermoelectic properties of CVD grown large area graphene
NASA Astrophysics Data System (ADS)
Sherehiy, Andriy
This thesis is based on experimental work on thermoelectric properties of CVD grown large area graphene. The thermoelectric power (TEP) of CVD (Chemical Vapor Deposition) grown large area graphene transferred onto a Si/SiO 2_substrate was measured by simply attaching two miniature thermocouples and a resistive heater. Availability of such large area graphene facilitates straight forward TEP measurement without the use of any microfabrication processes. All investigated graphene samples showed a positive TEP S ≈ 20 mVK in ambient conditions and saturated at a negative value as low as S ≈ -50 mVK after vacuum-annealing at 500 K in a vacuum of 10-7 Torr. The observed p-type behavior under ambient conditions is attributed to the oxygen doping, while the n-type behavior under degassed conditions is due to electron doping from SiO2 surface states. It was observed that the sign of the TEP switched from negative to positive for the degassed graphene when exposed to acceptor gases. Conversely, the TEP of vacuum-annealed graphene exposed to the donor gases became even more negative than the TEP of vacuum-annealed sample.
An analysis of the first two years of GASP data. [Global Atmospheric Sampling Program
NASA Technical Reports Server (NTRS)
Holdeman, J. D.; Nastrom, G. D.; Falconer, P. D.
1978-01-01
Distributions of mean ozone levels from the first two years of data from the NASA Global Atmospheric Sampling Program (GASP) show spatial and temporal variations in agreement with previous measurements. The standard deviations of these distributions reflect the large natural variability of ozone levels in the altitude range of the GASP measurements. Monthly mean levels of ozone below the tropopause show an annual cycle with a spring maximum which is believed to result from transport from the stratosphere. Correlations of ozone with independent meteorological parameters, and meteorological parameters obtained by the GASP systems show that this transport occurs primarily through cyclogenesis at mid-latitudes. The GASP water vapor data, analyzed with respect to the location of the tropopause, correlates well with the simultaneously obtained ozone and cloud data.
Authoritarian Parenting and Asian Adolescent School Performance: Insights from the US and Taiwan
Pong, Suet-ling; Johnston, Jamie; Chen, Vivien
2014-01-01
Our study re-examines the relationship between parenting and school performance among Asian students. We use two sources of data: wave I of the Adolescent Health Longitudinal Survey (Add Health), and waves I and II of the Taiwan Educational Panel Survey (TEPS). Analysis using Add Health reveals that the Asian-American/European-American difference in the parenting–school performance relationship is due largely to differential sample sizes. When we select a random sample of European-American students comparable to the sample size of Asian-American students, authoritarian parenting also shows no effect for European-American students. Furthermore, analysis of TEPS shows that authoritarian parenting is negatively associated with children's school achievement, while authoritative parenting is positively associated. This result for Taiwanese Chinese students is similar to previous results for European-American students in the US. PMID:24850978
Oh, Junghoon; Chang, Yun Hee; Kim, Yong-Hyun; Park, Sungjin
2016-04-28
Photocatalysts use sustainable solar light energy to trigger various catalytic reactions. Metal-free nanomaterials have been suggested as cost-effective and environmentally friendly photocatalysts. In this work, we propose thickness-controlled graphite oxide (GO) as a metal-free photocatalyst, which is produced by exfoliating thick GO particles via stirring and sonication. All GO samples exhibit photocatalytic activity for degrading an organic pollutant, rhodamine B under visible light, and the thickest sample shows the best catalytic performance. UV-vis-NIR diffuse reflectance absorption spectra indicate that thicker GO samples absorb more vis-NIR light than thinner ones. Density-functional theory calculations show that GO has a much smaller band gap than that of single-layer graphene oxide, and thus suggest that the largely-reduced band gap is responsible for this trend of light absorption.
Authoritarian Parenting and Asian Adolescent School Performance: Insights from the US and Taiwan.
Pong, Suet-Ling; Johnston, Jamie; Chen, Vivien
2010-01-01
Our study re-examines the relationship between parenting and school performance among Asian students. We use two sources of data: wave I of the Adolescent Health Longitudinal Survey (Add Health), and waves I and II of the Taiwan Educational Panel Survey (TEPS). Analysis using Add Health reveals that the Asian-American/European-American difference in the parenting-school performance relationship is due largely to differential sample sizes. When we select a random sample of European-American students comparable to the sample size of Asian-American students, authoritarian parenting also shows no effect for European-American students. Furthermore, analysis of TEPS shows that authoritarian parenting is negatively associated with children's school achievement, while authoritative parenting is positively associated. This result for Taiwanese Chinese students is similar to previous results for European-American students in the US.
Tripathi, Ashish; McNulty, Ian; Shpyrko, Oleg G
2014-01-27
Ptychographic coherent x-ray diffractive imaging is a form of scanning microscopy that does not require optics to image a sample. A series of scanned coherent diffraction patterns recorded from multiple overlapping illuminated regions on the sample are inverted numerically to retrieve its image. The technique recovers the phase lost by detecting the diffraction patterns by using experimentally known constraints, in this case the measured diffraction intensities and the assumed scan positions on the sample. The spatial resolution of the recovered image of the sample is limited by the angular extent over which the diffraction patterns are recorded and how well these constraints are known. Here, we explore how reconstruction quality degrades with uncertainties in the scan positions. We show experimentally that large errors in the assumed scan positions on the sample can be numerically determined and corrected using conjugate gradient descent methods. We also explore in simulations the limits, based on the signal to noise of the diffraction patterns and amount of overlap between adjacent scan positions, of just how large these errors can be and still be rendered tractable by this method.
Explaining Charter School Effectiveness. NBER Working Paper No. 17332
ERIC Educational Resources Information Center
Angrist, Joshua D.; Pathak, Parag A.; Walters, Christopher R.
2011-01-01
Estimates using admissions lotteries suggest that urban charter schools boost student achievement, while charter schools in other settings do not. We explore student-level and school-level explanations for these differences using a large sample of Massachusetts charter schools. Our results show that urban charter schools boost achievement well…
NASA Astrophysics Data System (ADS)
Vogel, Thomas; Perez, Danny; Junghans, Christoph
2014-03-01
We show direct formal relationships between the Wang-Landau iteration [PRL 86, 2050 (2001)], metadynamics [PNAS 99, 12562 (2002)] and statistical temperature molecular dynamics [PRL 97, 050601 (2006)], the major Monte Carlo and molecular dynamics work horses for sampling from a generalized, multicanonical ensemble. We aim at helping to consolidate the developments in the different areas by indicating how methodological advancements can be transferred in a straightforward way, avoiding the parallel, largely independent, developments tracks observed in the past.
NASA Astrophysics Data System (ADS)
Deng, J.; Lee, K. K. M.; Du, Z.; Benedetti, L. R.
2016-12-01
In situ temperature measurements in the laser-heated diamond-anvil cell (LHDAC) are among the most fundamental experiments undertaken in high-pressure science. Despite its importance, few efforts have been made to examine the alteration of thermal radiation spectra of hot samples by wavelength-dependent absorption of the sample itself together with temperature gradients within samples while laser heating and their influence on temperature measurement. For example, iron-bearing minerals show strong wavelength dependent absorption in the wavelength range used to determine temperature, which, together with temperature gradients can account for largely aliased apparent temperatures (e.g., 1200 K deviation for a 4000 K melting temperature) in some experiments obtained by fitting of detected thermal radiation intensities. As such, conclusions of melting temperatures, phase diagrams and partitioning behavior, may be grossly incorrect for these materials. In general, wavelength-dependent absorption and temperature gradients of samples are two key factors to consider in order to rigorously constrain temperatures, which have been largely ignored in previous LHDAC studies. A reevaluation of temperatures measured in recent high-profile papers will be reviewed.
NASA Astrophysics Data System (ADS)
Piazzi, L.; Bonaviri, C.; Castelli, A.; Ceccherelli, G.; Costa, G.; Curini-Galletti, M.; Langeneck, J.; Manconi, R.; Montefalcone, M.; Pipitone, C.; Rosso, A.; Pinna, S.
2018-07-01
In the Mediterranean Sea, Cystoseira species are the most important canopy-forming algae in shallow rocky bottoms, hosting high biodiverse sessile and mobile communities. A large-scale study has been carried out to investigate the structure of the Cystoseira-dominated assemblages at different spatial scales and to test the hypotheses that alpha and beta diversity of the assemblages, the abundance and the structure of epiphytic macroalgae, epilithic macroalgae, sessile macroinvertebrates and mobile macroinvertebrates associated to Cystoseira beds changed among scales. A hierarchical sampling design in a total of five sites across the Mediterranean Sea (Croatia, Montenegro, Sardinia, Tuscany and Balearic Islands) was used. A total of 597 taxa associated to Cystoseira beds were identified with a mean number per sample ranging between 141.1 ± 6.6 (Tuscany) and 173.9 ± 8.5(Sardinia). A high variability at small (among samples) and large (among sites) scale was generally highlighted, but the studied assemblages showed different patterns of spatial variability. The relative importance of the different scales of spatial variability should be considered to optimize sampling designs and propose monitoring plans of this habitat.
Luminescence petrography of lunar samples
NASA Technical Reports Server (NTRS)
1972-01-01
Light-colored metaclastic rock fragments, mainly anorthositic breccias, are dominant in the lithic clasts of rock 14321 and constitute about 25% of the Apollo 14 soils. Concentration of anorthositic breccias is less in the Apollo 15 soils, but is higher in the Front samples. The Rille edge soils are rich in basalt fragments. The Apollo 15 soils are also rich in green glasses. True anorthosites in the Hadley region were found only at the St. George Crater site. Varying degrees of metamorphism were found in the anorthositic fragments, and luminescence zonations give independent evidence of metamorphism. Compositional zoning verifies the interpretation of luminescence. Rock 14321 gives evidence of modest annealing, but the light metaclastic fragments were metamorphosed before incorporation into the rock. Reaction rimming on plagioclase results in mosaicism and preferentially affects grains. The spectral analysis of luminescence in plagioclase shows that a red-infrared emission band is present in a small fraction of plagioclase grains. Samples from trench bottoms and from beneath a large boulder were compared with surface samples. Large variations in soil composition indicate marked layering in the Apollo 15 soils.
Xu, Man K; Morin, Alexandre J S; Marsh, Herbert W; Richards, Marcus; Jones, Peter B
2016-08-01
The factorial structure of the Parental Bonding Instrument (PBI) has been frequently studied in diverse samples but no study has examined its psychometric properties from large, population-based samples. In particular, important questions have not been addressed such as the measurement invariance properties across parental and offspring gender. We evaluated the PBI based on responses from a large, representative population-based sample, using an exploratory structural equation modeling method appropriate for categorical data. Analysis revealed a three-factor structure representing "care," "overprotection," and "autonomy" parenting styles. In terms of psychometric measurement validity, our results supported the complete invariance of the PBI ratings across sons and daughters for their mothers and fathers. The PBI ratings were also robust in relation to personality and mental health status. In terms of predictive value, paternal care showed a protective effect on mental health at age 43 in sons. The PBI is a sound instrument for capturing perceived parenting styles, and is predictive of mental health in middle adulthood. © The Author(s) 2016.
Zhu, Shanyou; Zhang, Hailong; Liu, Ronggao; Cao, Yun; Zhang, Guixin
2014-01-01
Sampling designs are commonly used to estimate deforestation over large areas, but comparisons between different sampling strategies are required. Using PRODES deforestation data as a reference, deforestation in the state of Mato Grosso in Brazil from 2005 to 2006 is evaluated using Landsat imagery and a nearly synchronous MODIS dataset. The MODIS-derived deforestation is used to assist in sampling and extrapolation. Three sampling designs are compared according to the estimated deforestation of the entire study area based on simple extrapolation and linear regression models. The results show that stratified sampling for strata construction and sample allocation using the MODIS-derived deforestation hotspots provided more precise estimations than simple random and systematic sampling. Moreover, the relationship between the MODIS-derived and TM-derived deforestation provides a precise estimate of the total deforestation area as well as the distribution of deforestation in each block.
Zhu, Shanyou; Zhang, Hailong; Liu, Ronggao; Cao, Yun; Zhang, Guixin
2014-01-01
Sampling designs are commonly used to estimate deforestation over large areas, but comparisons between different sampling strategies are required. Using PRODES deforestation data as a reference, deforestation in the state of Mato Grosso in Brazil from 2005 to 2006 is evaluated using Landsat imagery and a nearly synchronous MODIS dataset. The MODIS-derived deforestation is used to assist in sampling and extrapolation. Three sampling designs are compared according to the estimated deforestation of the entire study area based on simple extrapolation and linear regression models. The results show that stratified sampling for strata construction and sample allocation using the MODIS-derived deforestation hotspots provided more precise estimations than simple random and systematic sampling. Moreover, the relationship between the MODIS-derived and TM-derived deforestation provides a precise estimate of the total deforestation area as well as the distribution of deforestation in each block. PMID:25258742
A robust method of thin plate spline and its application to DEM construction
NASA Astrophysics Data System (ADS)
Chen, Chuanfa; Li, Yanyan
2012-11-01
In order to avoid the ill-conditioning problem of thin plate spline (TPS), the orthogonal least squares (OLS) method was introduced, and a modified OLS (MOLS) was developed. The MOLS of TPS (TPS-M) can not only select significant points, termed knots, from large and dense sampling data sets, but also easily compute the weights of the knots in terms of back-substitution. For interpolating large sampling points, we developed a local TPS-M, where some neighbor sampling points around the point being estimated are selected for computation. Numerical tests indicate that irrespective of sampling noise level, the average performance of TPS-M can advantage with smoothing TPS. Under the same simulation accuracy, the computational time of TPS-M decreases with the increase of the number of sampling points. The smooth fitting results on lidar-derived noise data indicate that TPS-M has an obvious smoothing effect, which is on par with smoothing TPS. The example of constructing a series of large scale DEMs, located in Shandong province, China, was employed to comparatively analyze the estimation accuracies of the two versions of TPS and the classical interpolation methods including inverse distance weighting (IDW), ordinary kriging (OK) and universal kriging with the second-order drift function (UK). Results show that regardless of sampling interval and spatial resolution, TPS-M is more accurate than the classical interpolation methods, except for the smoothing TPS at the finest sampling interval of 20 m, and the two versions of kriging at the spatial resolution of 15 m. In conclusion, TPS-M, which avoids the ill-conditioning problem, is considered as a robust method for DEM construction.
Fast and Accurate Support Vector Machines on Large Scale Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vishnu, Abhinav; Narasimhan, Jayenthi; Holder, Larry
Support Vector Machines (SVM) is a supervised Machine Learning and Data Mining (MLDM) algorithm, which has become ubiquitous largely due to its high accuracy and obliviousness to dimensionality. The objective of SVM is to find an optimal boundary --- also known as hyperplane --- which separates the samples (examples in a dataset) of different classes by a maximum margin. Usually, very few samples contribute to the definition of the boundary. However, existing parallel algorithms use the entire dataset for finding the boundary, which is sub-optimal for performance reasons. In this paper, we propose a novel distributed memory algorithm to eliminatemore » the samples which do not contribute to the boundary definition in SVM. We propose several heuristics, which range from early (aggressive) to late (conservative) elimination of the samples, such that the overall time for generating the boundary is reduced considerably. In a few cases, a sample may be eliminated (shrunk) pre-emptively --- potentially resulting in an incorrect boundary. We propose a scalable approach to synchronize the necessary data structures such that the proposed algorithm maintains its accuracy. We consider the necessary trade-offs of single/multiple synchronization using in-depth time-space complexity analysis. We implement the proposed algorithm using MPI and compare it with libsvm--- de facto sequential SVM software --- which we enhance with OpenMP for multi-core/many-core parallelism. Our proposed approach shows excellent efficiency using up to 4096 processes on several large datasets such as UCI HIGGS Boson dataset and Offending URL dataset.« less
A Novel In-Beam Delayed Neutron Counting Technique for Characterization of Special Nuclear Materials
NASA Astrophysics Data System (ADS)
Bentoumi, G.; Rogge, R. B.; Andrews, M. T.; Corcoran, E. C.; Dimayuga, I.; Kelly, D. G.; Li, L.; Sur, B.
2016-12-01
A delayed neutron counting (DNC) system, where the sample to be analyzed remains stationary in a thermal neutron beam outside of the reactor, has been developed at the National Research Universal (NRU) reactor of the Canadian Nuclear Laboratories (CNL) at Chalk River. The new in-beam DNC is a novel approach for non-destructive characterization of special nuclear materials (SNM) that could enable identification and quantification of fissile isotopes within a large and shielded sample. Despite the orders of magnitude reduction in neutron flux, the in-beam DNC method can be as informative as the conventional in-core DNC for most cases while offering practical advantages and mitigated risk when dealing with large radioactive samples of unknown origin. This paper addresses (1) the qualification of in-beam DNC using a monochromatic thermal neutron beam in conjunction with a proven counting apparatus designed originally for in-core DNC, and (2) application of in-beam DNC to an examination of large sealed capsules containing unknown radioactive materials. Initial results showed that the in-beam DNC setup permits non-destructive analysis of bulky and gamma shielded samples. The method does not lend itself to trace analysis, and at best could only reveal the presence of a few milligrams of 235U via the assay of in-beam DNC total counts. Through analysis of DNC count rates, the technique could be used in combination with other neutron or gamma techniques to quantify isotopes present within samples.
Tammas-Williams, S; Withers, P J; Todd, I; Prangnell, P B
2017-08-04
Without post-manufacture HIPing the fatigue life of electron beam melting (EBM) additively manufactured parts is currently dominated by the presence of porosity, exhibiting large amounts of scatter. Here we have shown that the size and location of these defects is crucial in determining the fatigue life of EBM Ti-6Al-4V samples. X-ray computed tomography has been used to characterise all the pores in fatigue samples prior to testing and to follow the initiation and growth of fatigue cracks. This shows that the initiation stage comprises a large fraction of life (>70%). In these samples the initiating defect was often some way from being the largest (merely within the top 35% of large defects). Using various ranking strategies including a range of parameters, we found that when the proximity to the surface and the pore aspect ratio were included the actual initiating defect was within the top 3% of defects ranked most harmful. This lays the basis for considering how the deposition parameters can be optimised to ensure that the distribution of pores is tailored to the distribution of applied stresses in additively manufactured parts to maximise the fatigue life for a given loading cycle.
ClustENM: ENM-Based Sampling of Essential Conformational Space at Full Atomic Resolution
Kurkcuoglu, Zeynep; Bahar, Ivet; Doruker, Pemra
2016-01-01
Accurate sampling of conformational space and, in particular, the transitions between functional substates has been a challenge in molecular dynamic (MD) simulations of large biomolecular systems. We developed an Elastic Network Model (ENM)-based computational method, ClustENM, for sampling large conformational changes of biomolecules with various sizes and oligomerization states. ClustENM is an iterative method that combines ENM with energy minimization and clustering steps. It is an unbiased technique, which requires only an initial structure as input, and no information about the target conformation. To test the performance of ClustENM, we applied it to six biomolecular systems: adenylate kinase (AK), calmodulin, p38 MAP kinase, HIV-1 reverse transcriptase (RT), triosephosphate isomerase (TIM), and the 70S ribosomal complex. The generated ensembles of conformers determined at atomic resolution show good agreement with experimental data (979 structures resolved by X-ray and/or NMR) and encompass the subspaces covered in independent MD simulations for TIM, p38, and RT. ClustENM emerges as a computationally efficient tool for characterizing the conformational space of large systems at atomic detail, in addition to generating a representative ensemble of conformers that can be advantageously used in simulating substrate/ligand-binding events. PMID:27494296
Efficient ICCG on a shared memory multiprocessor
NASA Technical Reports Server (NTRS)
Hammond, Steven W.; Schreiber, Robert
1989-01-01
Different approaches are discussed for exploiting parallelism in the ICCG (Incomplete Cholesky Conjugate Gradient) method for solving large sparse symmetric positive definite systems of equations on a shared memory parallel computer. Techniques for efficiently solving triangular systems and computing sparse matrix-vector products are explored. Three methods for scheduling the tasks in solving triangular systems are implemented on the Sequent Balance 21000. Sample problems that are representative of a large class of problems solved using iterative methods are used. We show that a static analysis to determine data dependences in the triangular solve can greatly improve its parallel efficiency. We also show that ignoring symmetry and storing the whole matrix can reduce solution time substantially.
Inorganic separator technology program
NASA Technical Reports Server (NTRS)
Smatko, J. S.; Weaver, R. D.; Kalhammer, F. R.
1973-01-01
Testing and failure analyses of silver zinc cells with largely inorganic separators were performed. The results showed that the wet stand and cycle life objective of the silver-zinc cell development program were essentially accomplished and led to recommendations for cell composition, design, and operation that should yield further improvement in wet and cycle life. A series of advanced inorganic materials was successfully developed and formulated into rigid and semiflexible separator samples. Suitable screening tests for evaluation of largely inorganic separators were selected and modified for application to the separator materials. The results showed that many of these formulations are potentially superior to previously used materials and permitted selection of three promising materials for further evaluation in silver-zinc cells.
Role of space charges on light-induced effects in nematic liquid crystals doped by methyl red.
Lucchetti, L; Simoni, F
2014-03-01
We show that both the extraordinarily large nonlinear response and the light-induced permanent reorientation in liquid crystals doped by the azo dye methyl red originates from the same phenomenon of modification of the charge density on the irradiated surface. The demonstration is done by applying ac voltage to the samples, showing that in this case no permanent anchoring is possible. The measurements confirm the role of photoisomerization that gives a transient contribution to the actual reorientation process only in the high dose regime. This result allows us to draw a picture for light-induced effects that might be applied to a large class of compounds.
Fünfstück, Tillmann; Arandjelovic, Mimi; Morgan, David B.; Sanz, Crickette; Reed, Patricia; Olson, Sarah H.; Cameron, Ken; Ondzie, Alain; Peeters, Martine; Vigilant, Linda
2015-01-01
Populations of an organism living in marked geographical or evolutionary isolation from other populations of the same species are often termed subspecies and expected to show some degree of genetic distinctiveness. The common chimpanzee (Pan troglodytes) is currently described as four geographically delimited subspecies: the western (P. t. verus), the nigerian-cameroonian (P. t. ellioti), the central (P. t. troglodytes) and the eastern (P. t. schweinfurthii) chimpanzees. Although these taxa would be expected to be reciprocally monophyletic, studies have not always consistently resolved the central and eastern chimpanzee taxa. Most studies, however, used data from individuals of unknown or approximate geographic provenance. Thus, genetic data from samples of known origin may shed light on the evolutionary relationship of these subspecies. We generated microsatellite genotypes from noninvasively collected fecal samples of 185 central chimpanzees that were sampled across large parts of their range and analyzed them together with 283 published eastern chimpanzee genotypes from known localities. We observed a clear signal of isolation by distance across both subspecies. Further, we found that a large proportion of comparisons between groups taken from the same subspecies showed higher genetic differentiation than the least differentiated between-subspecies comparison. This proportion decreased substantially when we simulated a more clumped sampling scheme by including fewer groups. Our results support the general concept that the distribution of the sampled individuals can dramatically affect the inference of genetic population structure. With regard to chimpanzees, our results emphasize the close relationship of equatorial chimpanzees from central and eastern equatorial Africa and the difficult nature of subspecies definitions. PMID:25330245
Spatial Variation in Particulate Matter Components over a Large Urban Area
Fruin, Scott; Urman, Robert; Lurmann, Fred; McConnell, Rob; Gauderman, James; Rappaport, Ed; Franklin, Meredith; Gilliland, Frank D.; Shafer, Martin; Gorski, Patrick; Avol, Ed
2014-01-01
To characterize exposures to particulate matter (PM) and its components, we performed a large sampling study of small-scale spatial variation in size-resolved particle mass and composition. PM was collected in size ranges of < 0.2, 0.2-to-2.5, and 2.5-to-10 μm on a scale of 100s to 1000s of meters to capture local sources. Within each of eight Southern California communities, up to 29 locations were sampled for rotating, month-long integrated periods at two different times of the year, six months apart, from Nov 2008 through Dec 2009. Additional sampling was conducted at each community’s regional monitoring station to provide temporal coverage over the sampling campaign duration. Residential sampling locations were selected based on a novel design stratified by high- and low-predicted traffic emissions and locations over- and under-predicted from previous dispersion model and sampling comparisons. Primary vehicle emissions constituents, such as elemental carbon (EC), showed much stronger patterns of association with traffic than pollutants with significant secondary formation, such as PM2.5 or water soluble organic carbon. Associations were also stronger during cooler times of the year (Oct through Mar). Primary pollutants also showed greater within-community spatial variation compared to pollutants with secondary formation contributions. For example, the average cool-season community mean and standard deviation (SD) for EC were 1.1 and 0.17 μg/m3, respectively, giving a coefficient of variation (CV) of 18%. For PM2.5, average mean and SD were 14 and 1.3 μg/m3, respectively, with a CV of 9%. We conclude that within-community spatial differences are important for accurate exposure assessment of traffic-related pollutants. PMID:24578605
Fünfstück, Tillmann; Arandjelovic, Mimi; Morgan, David B; Sanz, Crickette; Reed, Patricia; Olson, Sarah H; Cameron, Ken; Ondzie, Alain; Peeters, Martine; Vigilant, Linda
2015-02-01
Populations of an organism living in marked geographical or evolutionary isolation from other populations of the same species are often termed subspecies and expected to show some degree of genetic distinctiveness. The common chimpanzee (Pan troglodytes) is currently described as four geographically delimited subspecies: the western (P. t. verus), the nigerian-cameroonian (P. t. ellioti), the central (P. t. troglodytes) and the eastern (P. t. schweinfurthii) chimpanzees. Although these taxa would be expected to be reciprocally monophyletic, studies have not always consistently resolved the central and eastern chimpanzee taxa. Most studies, however, used data from individuals of unknown or approximate geographic provenance. Thus, genetic data from samples of known origin may shed light on the evolutionary relationship of these subspecies. We generated microsatellite genotypes from noninvasively collected fecal samples of 185 central chimpanzees that were sampled across large parts of their range and analyzed them together with 283 published eastern chimpanzee genotypes from known localities. We observed a clear signal of isolation by distance across both subspecies. Further, we found that a large proportion of comparisons between groups taken from the same subspecies showed higher genetic differentiation than the least differentiated between-subspecies comparison. This proportion decreased substantially when we simulated a more clumped sampling scheme by including fewer groups. Our results support the general concept that the distribution of the sampled individuals can dramatically affect the inference of genetic population structure. With regard to chimpanzees, our results emphasize the close relationship of equatorial chimpanzees from central and eastern equatorial Africa and the difficult nature of subspecies definitions. Copyright © 2014 Wiley Periodicals, Inc.
Toward Scalable Boson Sampling with Photon Loss
NASA Astrophysics Data System (ADS)
Wang, Hui; Li, Wei; Jiang, Xiao; He, Y.-M.; Li, Y.-H.; Ding, X.; Chen, M.-C.; Qin, J.; Peng, C.-Z.; Schneider, C.; Kamp, M.; Zhang, W.-J.; Li, H.; You, L.-X.; Wang, Z.; Dowling, J. P.; Höfling, S.; Lu, Chao-Yang; Pan, Jian-Wei
2018-06-01
Boson sampling is a well-defined task that is strongly believed to be intractable for classical computers, but can be efficiently solved by a specific quantum simulator. However, an outstanding problem for large-scale experimental boson sampling is the scalability. Here we report an experiment on boson sampling with photon loss, and demonstrate that boson sampling with a few photons lost can increase the sampling rate. Our experiment uses a quantum-dot-micropillar single-photon source demultiplexed into up to seven input ports of a 16 ×16 mode ultralow-loss photonic circuit, and we detect three-, four- and fivefold coincidence counts. We implement and validate lossy boson sampling with one and two photons lost, and obtain sampling rates of 187, 13.6, and 0.78 kHz for five-, six-, and seven-photon boson sampling with two photons lost, which is 9.4, 13.9, and 18.0 times faster than the standard boson sampling, respectively. Our experiment shows an approach to significantly enhance the sampling rate of multiphoton boson sampling.
Toward Scalable Boson Sampling with Photon Loss.
Wang, Hui; Li, Wei; Jiang, Xiao; He, Y-M; Li, Y-H; Ding, X; Chen, M-C; Qin, J; Peng, C-Z; Schneider, C; Kamp, M; Zhang, W-J; Li, H; You, L-X; Wang, Z; Dowling, J P; Höfling, S; Lu, Chao-Yang; Pan, Jian-Wei
2018-06-08
Boson sampling is a well-defined task that is strongly believed to be intractable for classical computers, but can be efficiently solved by a specific quantum simulator. However, an outstanding problem for large-scale experimental boson sampling is the scalability. Here we report an experiment on boson sampling with photon loss, and demonstrate that boson sampling with a few photons lost can increase the sampling rate. Our experiment uses a quantum-dot-micropillar single-photon source demultiplexed into up to seven input ports of a 16×16 mode ultralow-loss photonic circuit, and we detect three-, four- and fivefold coincidence counts. We implement and validate lossy boson sampling with one and two photons lost, and obtain sampling rates of 187, 13.6, and 0.78 kHz for five-, six-, and seven-photon boson sampling with two photons lost, which is 9.4, 13.9, and 18.0 times faster than the standard boson sampling, respectively. Our experiment shows an approach to significantly enhance the sampling rate of multiphoton boson sampling.
[Structure and luminescence properties of MgGa2O4 : Cr3+ with Zn substituted for Mg].
Zhang, Wan-Xin; Wang, Yin-Hai; Li, Hai-Ling; Wang, Xian-Sheng; Zhao, Hui
2013-01-01
A series of red long afterglow phosphors with composition Zn(x) Mg(1-2) Ga2 O4 : Cr3+ (x = 0, 0.2, 0.6, 0.8, 1.0) were synthesized by a high temperature solid-state reaction method. The X-ray diffraction studies show that the phase of the phosphors is face-centered cubic structure. Photoluminescence spectra show that the red emission of Cr3+ originated from the transition of 2E-4A2. Due to the large overlap between absorption band of Cr3+ and emission band of the host. Cr3+ could obtain the excitation energy from the host via the effective energy transfer. The afterglow decay characteristics show that the phosphor samples with different Zn contents have different afterglow time and the afterglow time also changes with the value of x. The measurement of thermoluminescence reveals that the trap depth of the phosphor samples with different Zn contents is different. The samples with deeper traps have longer afterglow time.
NASA Astrophysics Data System (ADS)
Stiefenhofer, Johann; Thurston, Malcolm L.; Bush, David E.
2018-04-01
Microdiamonds offer several advantages as a resource estimation tool, such as access to deeper parts of a deposit which may be beyond the reach of large diameter drilling (LDD) techniques, the recovery of the total diamond content in the kimberlite, and a cost benefit due to the cheaper treatment cost compared to large diameter samples. In this paper we take the first step towards local estimation by showing that micro-diamond samples can be treated as a regionalised variable suitable for use in geostatistical applications and we show examples of such output. Examples of microdiamond variograms are presented, the variance-support relationship for microdiamonds is demonstrated and consistency of the diamond size frequency distribution (SFD) is shown with the aid of real datasets. The focus therefore is on why local microdiamond estimation should be possible, not how to generate such estimates. Data from our case studies and examples demonstrate a positive correlation between micro- and macrodiamond sample grades as well as block estimates. This relationship can be demonstrated repeatedly across multiple mining operations. The smaller sample support size for microdiamond samples is a key difference between micro- and macrodiamond estimates and this aspect must be taken into account during the estimation process. We discuss three methods which can be used to validate or reconcile the estimates against macrodiamond data, either as estimates or in the form of production grades: (i) reconcilliation using production data, (ii) by comparing LDD-based grade estimates against microdiamond-based estimates and (iii) using simulation techniques.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parisi, M. C.; Clariá, J. J.; Marcionni, N.
2015-05-15
We obtained spectra of red giants in 15 Small Magellanic Cloud (SMC) clusters in the region of the Ca ii lines with FORS2 on the Very Large Telescope. We determined the mean metallicity and radial velocity with mean errors of 0.05 dex and 2.6 km s{sup −1}, respectively, from a mean of 6.5 members per cluster. One cluster (B113) was too young for a reliable metallicity determination and was excluded from the sample. We combined the sample studied here with 15 clusters previously studied by us using the same technique, and with 7 clusters whose metallicities determined by other authorsmore » are on a scale similar to ours. This compilation of 36 clusters is the largest SMC cluster sample currently available with accurate and homogeneously determined metallicities. We found a high probability that the metallicity distribution is bimodal, with potential peaks at −1.1 and −0.8 dex. Our data show no strong evidence of a metallicity gradient in the SMC clusters, somewhat at odds with recent evidence from Ca ii triplet spectra of a large sample of field stars. This may be revealing possible differences in the chemical history of clusters and field stars. Our clusters show a significant dispersion of metallicities, whatever age is considered, which could be reflecting the lack of a unique age–metallicity relation in this galaxy. None of the chemical evolution models currently available in the literature satisfactorily represents the global chemical enrichment processes of SMC clusters.« less
Application-Specific Graph Sampling for Frequent Subgraph Mining and Community Detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Purohit, Sumit; Choudhury, Sutanay; Holder, Lawrence B.
Graph mining is an important data analysis methodology, but struggles as the input graph size increases. The scalability and usability challenges posed by such large graphs make it imperative to sample the input graph and reduce its size. The critical challenge in sampling is to identify the appropriate algorithm to insure the resulting analysis does not suffer heavily from the data reduction. Predicting the expected performance degradation for a given graph and sampling algorithm is also useful. In this paper, we present different sampling approaches for graph mining applications such as Frequent Subgrpah Mining (FSM), and Community Detection (CD). Wemore » explore graph metrics such as PageRank, Triangles, and Diversity to sample a graph and conclude that for heterogeneous graphs Triangles and Diversity perform better than degree based metrics. We also present two new sampling variations for targeted graph mining applications. We present empirical results to show that knowledge of the target application, along with input graph properties can be used to select the best sampling algorithm. We also conclude that performance degradation is an abrupt, rather than gradual phenomena, as the sample size decreases. We present the empirical results to show that the performance degradation follows a logistic function.« less
The susceptibility of large river basins to orogenic and climatic drivers
NASA Astrophysics Data System (ADS)
Haedke, Hanna; Wittmann, Hella; von Blanckenburg, Friedhelm
2017-04-01
Large rivers are known to buffer pulses in sediment production driven by changes in climate as sediment is transported through lowlands. Our new dataset of in situ cosmogenic nuclide concentration and chemical composition of 62 sandy bedload samples from the world largest rivers integrates over 25% of Earth's terrestrial surface, distributed over a variety of climatic zones across all continents, and represents the millennial-scale denudation rate of the sediment's source area. We can show that these denudation rates do not respond to climatic forcing, but faithfully record orogenic forcing, when analyzed with respective variables representing orogeny (strain rate, relief, bouguer anomaly, free-air anomaly), and climate (runoff, temperature, precipitation) and basin properties (floodplain response time, drainage area). In contrast to this orogenic forcing of denudation rates, elemental bedload chemistry from the fine-grained portion of the same samples correlates with climate-related variables (precipitation, runoff) and floodplain response times. It is also well-known from previous compilations of river-gauged sediment loads that the short-term basin-integrated sediment export is also climatically controlled. The chemical composition of detrital sediment shows a climate control that can originate in the rivers source area, but this signal is likely overprinted during transfer through the lowlands because we also find correlation with floodplain response times. At the same time, cosmogenic nuclides robustly preserve the orogenic forcing of the source area denudation signal through of the floodplain buffer. Conversely, previous global compilations of cosmogenic nuclides in small river basins show the preservation of climate drivers in their analysis, but these are buffered in large lowland rivers. Hence, we can confirm the assumption that cosmogenic nuclides in large rivers are poorly susceptible to climate changes, but are at the same time highly suited to detect changes in orogenic forcing in their paleo sedimentary records.
Bourdin, C; Busse, A; Kouamou, E; Touafek, F; Bodaghi, B; Le Hoang, P; Mazier, D; Paris, L; Fekkar, A
2014-11-01
PCR detection of Toxoplasma gondii in blood has been suggested as a possibly efficient method for the diagnosis of ocular toxoplasmosis (OT) and furthermore for genotyping the strain involved in the disease. To assess this hypothesis, we performed PCR with 121 peripheral blood samples from 104 patients showing clinical and/or biological evidence of ocular toxoplasmosis and from 284 (258 patients) controls. We tested 2 different extraction protocols, using either 200 μl (small volume) or 2 ml (large volume) of whole blood. Sensitivity was poor, i.e., 4.1% and 25% for the small- and large-volume extractions, respectively. In comparison, PCR with ocular samples yielded 35.9% sensitivity, while immunoblotting and calculation of the Goldmann-Witmer coefficient yielded 47.6% and 72.3% sensitivities, respectively. Performing these three methods together provided 89.4% sensitivity. Whatever the origin of the sample (ocular or blood), PCR provided higher sensitivity for immunocompromised patients than for their immunocompetent counterparts. Consequently, PCR detection of Toxoplasma gondii in blood samples cannot currently be considered a sufficient tool for the diagnosis of OT, and ocular sampling remains necessary for the biological diagnosis of OT. Copyright © 2014, American Society for Microbiology. All Rights Reserved.
BLIND ordering of large-scale transcriptomic developmental timecourses.
Anavy, Leon; Levin, Michal; Khair, Sally; Nakanishi, Nagayasu; Fernandez-Valverde, Selene L; Degnan, Bernard M; Yanai, Itai
2014-03-01
RNA-Seq enables the efficient transcriptome sequencing of many samples from small amounts of material, but the analysis of these data remains challenging. In particular, in developmental studies, RNA-Seq is challenged by the morphological staging of samples, such as embryos, since these often lack clear markers at any particular stage. In such cases, the automatic identification of the stage of a sample would enable previously infeasible experimental designs. Here we present the 'basic linear index determination of transcriptomes' (BLIND) method for ordering samples comprising different developmental stages. The method is an implementation of a traveling salesman algorithm to order the transcriptomes according to their inter-relationships as defined by principal components analysis. To establish the direction of the ordered samples, we show that an appropriate indicator is the entropy of transcriptomic gene expression levels, which increases over developmental time. Using BLIND, we correctly recover the annotated order of previously published embryonic transcriptomic timecourses for frog, mosquito, fly and zebrafish. We further demonstrate the efficacy of BLIND by collecting 59 embryos of the sponge Amphimedon queenslandica and ordering their transcriptomes according to developmental stage. BLIND is thus useful in establishing the temporal order of samples within large datasets and is of particular relevance to the study of organisms with asynchronous development and when morphological staging is difficult.
Garrido, Luis Eduardo; Barrada, Juan Ramón; Aguasvivas, José Armando; Martínez-Molina, Agustín; Arias, Víctor B; Golino, Hudson F; Legaz, Eva; Ferrís, Gloria; Rojo-Moreno, Luis
2018-06-01
During the present decade a large body of research has employed confirmatory factor analysis (CFA) to evaluate the factor structure of the Strengths and Difficulties Questionnaire (SDQ) across multiple languages and cultures. However, because CFA can produce strongly biased estimations when the population cross-loadings differ meaningfully from zero, it may not be the most appropriate framework to model the SDQ responses. With this in mind, the current study sought to assess the factorial structure of the SDQ using the more flexible exploratory structural equation modeling approach. Using a large-scale Spanish sample composed of 67,253 youths aged between 10 and 18 years ( M = 14.16, SD = 1.07), the results showed that CFA provided a severely biased and overly optimistic assessment of the underlying structure of the SDQ. In contrast, exploratory structural equation modeling revealed a generally weak factorial structure, including questionable indicators with large cross-loadings, multiple error correlations, and significant wording variance. A subsequent Monte Carlo study showed that sample sizes greater than 4,000 would be needed to adequately recover the SDQ loading structure. The findings from this study prevent recommending the SDQ as a screening tool and suggest caution when interpreting previous results in the literature based on CFA modeling.
Study of Evaporation Rate of Water in Hydrophobic Confinement using Forward Flux Sampling
NASA Astrophysics Data System (ADS)
Sharma, Sumit; Debenedetti, Pablo G.
2012-02-01
Drying of hydrophobic cavities is of interest in understanding biological self assembly, protein stability and opening and closing of ion channels. Liquid-to-vapor transition of water in confinement is associated with large kinetic barriers which preclude its study using conventional simulation techniques. Using forward flux sampling to study the kinetics of the transition between two hydrophobic surfaces, we show that a) the free energy barriers to evaporation scale linearly with the distance between the two surfaces, d; b) the evaporation rates increase as the lateral size of the surfaces, L increases, and c) the transition state to evaporation for sufficiently large L is a cylindrical vapor cavity connecting the two hydrophobic surfaces. Finally, we decouple the effects of confinement geometry and surface chemistry on the evaporation rates.
Daytime sky polarization calibration limitations
NASA Astrophysics Data System (ADS)
Harrington, David M.; Kuhn, Jeffrey R.; Ariste, Arturo López
2017-01-01
The daytime sky has recently been demonstrated as a useful calibration tool for deriving polarization cross-talk properties of large astronomical telescopes. The Daniel K. Inouye Solar Telescope and other large telescopes under construction can benefit from precise polarimetric calibration of large mirrors. Several atmospheric phenomena and instrumental errors potentially limit the technique's accuracy. At the 3.67-m AEOS telescope on Haleakala, we performed a large observing campaign with the HiVIS spectropolarimeter to identify limitations and develop algorithms for extracting consistent calibrations. Effective sampling of the telescope optical configurations and filtering of data for several derived parameters provide robustness to the derived Mueller matrix calibrations. Second-order scattering models of the sky show that this method is relatively insensitive to multiple-scattering in the sky, provided calibration observations are done in regions of high polarization degree. The technique is also insensitive to assumptions about telescope-induced polarization, provided the mirror coatings are highly reflective. Zemax-derived polarization models show agreement between the functional dependence of polarization predictions and the corresponding on-sky calibrations.
Poss, Jeffrey W; Hirdes, John P; Fries, Brant E; McKillop, Ian; Chase, Mary
2008-04-01
The case-mix system Resource Utilization Groups version III for Home Care (RUG-III/HC) was derived using a modest data sample from Michigan, but to date no comprehensive large scale validation has been done. This work examines the performance of the RUG-III/HC classification using a large sample from Ontario, Canada. Cost episodes over a 13-week period were aggregated from individual level client billing records and matched to assessment information collected using the Resident Assessment Instrument for Home Care, from which classification rules for RUG-III/HC are drawn. The dependent variable, service cost, was constructed using formal services plus informal care valued at approximately one-half that of a replacement worker. An analytic dataset of 29,921 episodes showed a skewed distribution with over 56% of cases falling into the lowest hierarchical level, reduced physical functions. Case-mix index values for formal and informal cost showed very close similarities to those found in the Michigan derivation. Explained variance for a function of combined formal and informal cost was 37.3% (20.5% for formal cost alone), with personal support services as well as informal care showing the strongest fit to the RUG-III/HC classification. RUG-III/HC validates well compared with the Michigan derivation work. Potential enhancements to the present classification should consider the large numbers of undifferentiated cases in the reduced physical function group, and the low explained variance for professional disciplines.
Nondestructive evaluation of loading and fatigue effects in Haynes(R) 230(R) alloy
NASA Astrophysics Data System (ADS)
Saleh, Tarik Adel
Nondestructive evaluation is a useful method for studying the effects of deformation and fatigue. In this dissertation I employed neutron and X-ray diffraction, nonlinear resonant ultrasound spectroscopy (NRUS), and infrared thermography to study the effects of deformation and fatigue on two different nickel based superalloys. The alloys studied were HAYNES 230, a solid solution strengthened alloy with 4% M6C carbides, and secondarily HASTELLOY C-2000 a similar single phase alloy. Using neutron and X-ray diffraction, the deformation behavior of HAYNES 230 was revealed to be composite-like during compression, but unusual in tension, where the carbides provide strengthening until just after the macroscopic yield strength and then they begin to debond and crack, creating a tension-compression asymmetry that is revealed clearly by in situ diffraction. In fatigue of HAYNES 230, the hkl elastic strains changed very little in tension-tension fatigue. However, in situ tension-compression studies showed large changes over the initial stages of fatigue. The HAYNES 230 samples studies had two distinct starting textures, measured by neutron diffraction. Some samples were texture free initially and deformed in tension and compression to fiber textures. Other samples started with a bimodal texture due to cross-rolling and incomplete annealing. The final texture of these bimodal samples is shown through modeling to be a superposition of the initial texture and typical FCC deformation mechanisms. The texture-free samples deformed significantly more macroscopically and in internal elastic strains than the samples with the cross-rolled texture. In contrast to the relative insensitivity of neutron diffraction to the effects of tension-tension fatigue, NRUS revealed large differences between as-received and progressively fatigued samples. This showed that microcracking and void formation are the primary mechanisms responsible for fatigue damage in tension-tension fatigue. NRUS is shown to be a useful complimentary technique to neutron diffraction to evaluate fatigue damage. Finally, infrared thermography is used to show temperature changes over the course of fatigue in HASTELLOY C-2000. Four stages of temperature are shown over the course of a single fatigue test. Both empirical and theoretical relationships between steady state temperature and fatigue life are developed and presented.
Atomic engineering of spin valves using Ag as a surfactant
NASA Astrophysics Data System (ADS)
Yang, David X.; Shashishekar, B.; Chopra, Harsh Deep; Chen, P. J.; Egelhoff, W. F.
2001-06-01
In this study, dc magnetron sputtered NiO (50 nm)/Co (2.5 nm)/Cu(1.5 nm)/Co (3.0 nm) bottom spin valves were studied with and without Ag as a surfactant. At Cu spacer thickness of 1.5 nm, a strong positive coupling >13.92 kA/m (>175 Oe) between NiO-pinned and "free" Co layers leads to a negligible giant magnetoresistance (GMR) effect (<0.7%) in Ag-free samples. In contrast, spin valves deposited in the presence of ≈1 monolayer of surfactant Ag have sufficiently reduced coupling, 5.65 kA/m (71 Oe), which results in an order of magnitude increase in GMR (8.5%). Using transmission electron microscopy (TEM), the large positive coupling in Ag-free samples could directly be attributed to the presence of numerous pinholes. In situ x-ray photoelectron spectroscopy shows that, in Ag-containing samples, the large mobile Ag atoms float out to the surface during successive growth of Co and Cu layers. Detailed TEM studies show that surfactant Ag leaves behind smoother interfaces less prone to pinholes. The use of surfactants also illustrates their efficacy in favorably altering the magnetic characteristics of GMR spin valves, and their potential use in other magnetoelectronics devices and multilayer systems.
Flexible conformable hydrophobized surfaces for turbulent flow drag reduction
Brennan, Joseph C; Geraldi, Nicasio R; Morris, Robert H; Fairhurst, David J; McHale, Glen; Newton, Michael I
2015-01-01
In recent years extensive work has been focused onto using superhydrophobic surfaces for drag reduction applications. Superhydrophobic surfaces retain a gas layer, called a plastron, when submerged underwater in the Cassie-Baxter state with water in contact with the tops of surface roughness features. In this state the plastron allows slip to occur across the surface which results in a drag reduction. In this work we report flexible and relatively large area superhydrophobic surfaces produced using two different methods: Large roughness features were created by electrodeposition on copper meshes; Small roughness features were created by embedding carbon nanoparticles (soot) into Polydimethylsiloxane (PDMS). Both samples were made into cylinders with a diameter under 12 mm. To characterize the samples, scanning electron microscope (SEM) images and confocal microscope images were taken. The confocal microscope images were taken with each sample submerged in water to show the extent of the plastron. The hydrophobized electrodeposited copper mesh cylinders showed drag reductions of up to 32% when comparing the superhydrophobic state with a wetted out state. The soot covered cylinders achieved a 30% drag reduction when comparing the superhydrophobic state to a plain cylinder. These results were obtained for turbulent flows with Reynolds numbers 10,000 to 32,500. PMID:25975704
The isolation of salmonellas from British pork sausages and sausage meat.
Roberts, D.; Boag, K.; Hall, M. L.; Shipp, C. R.
1975-01-01
Between 1969 and 1974, 1467 packets (3309 samples) of pork sausages and sausage meat produced by two large and two medium sized manufacturers and several local butchers were examined for the presence of salmonellas. Of these, 435 packets (786 samples) were found to contain salmonellas, but there was a wide variation in the isolation rates according to the producer. The salmonella incidence in samples from several small and two medium sized producers was low (0-11%) while the results from the two large producers investigated showed a striking difference, the rate of salmonella contamination in the product of one was low (about 2%) and in that of the other consistently high (40-60%). A comparison of liquid enrichment media, incubation temperatures and selective agar media was also carried out to determine the most efficient combination for the isolation of salmonellas from minced meat products. The results showed that (a) incubation of enrichment cultures at 43 degrees C. yielded a consistently greater number of salmonella isolations that at 37 degrees C., regardless of plating medium, (b) tetrathionate broth A (Rolfe) was superior to selenite broth as en enrichment medium at both 37 and 43 degrees C. and (c) brilliant green agar gave better results than deoxycholate citrate sucrose agar and bismuth sulphite agar as a selective medium. PMID:1100710
The isolation of salmonellas from British pork sausages and sausage meat.
Roberts, D; Boag, K; Hall, M L; Shipp, C R
1975-10-01
Between 1969 and 1974, 1467 packets (3309 samples) of pork sausages and sausage meat produced by two large and two medium sized manufacturers and several local butchers were examined for the presence of salmonellas. Of these, 435 packets (786 samples) were found to contain salmonellas, but there was a wide variation in the isolation rates according to the producer. The salmonella incidence in samples from several small and two medium sized producers was low (0-11%) while the results from the two large producers investigated showed a striking difference, the rate of salmonella contamination in the product of one was low (about 2%) and in that of the other consistently high (40-60%). A comparison of liquid enrichment media, incubation temperatures and selective agar media was also carried out to determine the most efficient combination for the isolation of salmonellas from minced meat products. The results showed that (a) incubation of enrichment cultures at 43 degrees C. yielded a consistently greater number of salmonella isolations that at 37 degrees C., regardless of plating medium, (b) tetrathionate broth A (Rolfe) was superior to selenite broth as en enrichment medium at both 37 and 43 degrees C. and (c) brilliant green agar gave better results than deoxycholate citrate sucrose agar and bismuth sulphite agar as a selective medium.
Comparative evaluation of saliva collection methods for proteome analysis.
Golatowski, Claas; Salazar, Manuela Gesell; Dhople, Vishnu Mukund; Hammer, Elke; Kocher, Thomas; Jehmlich, Nico; Völker, Uwe
2013-04-18
Saliva collection devices are widely used for large-scale screening approaches. This study was designed to compare the suitability of three different whole-saliva collection approaches for subsequent proteome analyses. From 9 young healthy volunteers (4 women and 5 men) saliva samples were collected either unstimulated by passive drooling or stimulated using a paraffin gum or Salivette® (cotton swab). Saliva volume, protein concentration and salivary protein patterns were analyzed comparatively. Samples collected using paraffin gum showed the highest saliva volume (4.1±1.5 ml) followed by Salivette® collection (1.8±0.4 ml) and drooling (1.0±0.4 ml). Saliva protein concentrations (average 1145 μg/ml) showed no significant differences between the three sampling schemes. Each collection approach facilitated the identification of about 160 proteins (≥2 distinct peptides) per subject, but collection-method dependent variations in protein composition were observed. Passive drooling, paraffin gum and Salivette® each allows similar coverage of the whole saliva proteome, but the specific proteins observed depended on the collection approach. Thus, only one type of collection device should be used for quantitative proteome analysis in one experiment, especially when performing large-scale cross-sectional or multi-centric studies. Copyright © 2013 Elsevier B.V. All rights reserved.
Body Mass Index and Sex Affect Diverse Microbial Niches within the Gut
Borgo, Francesca; Garbossa, Stefania; Riva, Alessandra; Severgnini, Marco; Luigiano, Carmelo; Benetti, Albero; Pontiroli, Antonio E.; Morace, Giulia; Borghi, Elisa
2018-01-01
Gut microbiota is considered a separate organ with endocrine capabilities, actively contributing to tissue homeostasis. It consists of at least two separate microbial populations, the lumen-associated (LAM) and the mucosa-associated microbiota (MAM). In the present study, we compared LAM and MAM, by collecting stools and sigmoid brush samples of forty adults without large-bowel symptoms, and through a 16S rRNA gene next-generation sequencing (NGS) approach. MAM sample analysis revealed enrichment in aerotolerant Proteobacteria, probably selected by a gradient of oxygen that decreases from tissue to lumen, and in Streptococcus and Clostridium spp., highly fermenting bacteria. On the other hand, LAM microbiota showed an increased abundance in Bacteroides, Prevotella, and Oscillospira, genera able to digest and to degrade biopolymers in the large intestine. Predicted metagenomic analysis showed LAM to be enriched in genes encoding enzymes mostly involved in energy extraction from carbohydrates and lipids, whereas MAM in amino acid and vitamin metabolism. Moreover, LAM and MAM communities seemed to be influenced by different host factors, such as diet and sex. LAM is affected by body mass index (BMI) status. Indeed, BMI negatively correlates with Faecalibacterium prausnitzii and Flavonifractor plautii abundance, putative biomarkers of healthy status. In contrast, MAM microbial population showed a significant grouping according to sex. Female MAM was enriched in Actinobacteria (with an increased trend of the genus Bifidobacterium), and a significant depletion in Veillonellaceae. Interestingly, we found the species Gemmiger formicilis to be associated with male and Bifidobacterium adolescentis, with female MAM samples. In conclusion, our results suggest that gut harbors microbial niches that differ in both composition and host factor susceptibility, and their richness and diversity may be overlooked evaluating only fecal samples. PMID:29491857
Wonnapinij, Passorn; Chinnery, Patrick F.; Samuels, David C.
2010-01-01
In cases of inherited pathogenic mitochondrial DNA (mtDNA) mutations, a mother and her offspring generally have large and seemingly random differences in the amount of mutated mtDNA that they carry. Comparisons of measured mtDNA mutation level variance values have become an important issue in determining the mechanisms that cause these large random shifts in mutation level. These variance measurements have been made with samples of quite modest size, which should be a source of concern because higher-order statistics, such as variance, are poorly estimated from small sample sizes. We have developed an analysis of the standard error of variance from a sample of size n, and we have defined error bars for variance measurements based on this standard error. We calculate variance error bars for several published sets of measurements of mtDNA mutation level variance and show how the addition of the error bars alters the interpretation of these experimental results. We compare variance measurements from human clinical data and from mouse models and show that the mutation level variance is clearly higher in the human data than it is in the mouse models at both the primary oocyte and offspring stages of inheritance. We discuss how the standard error of variance can be used in the design of experiments measuring mtDNA mutation level variance. Our results show that variance measurements based on fewer than 20 measurements are generally unreliable and ideally more than 50 measurements are required to reliably compare variances with less than a 2-fold difference. PMID:20362273
NASA Astrophysics Data System (ADS)
Kang, Dong-Uk; Cho, Minsik; Lee, Dae Hee; Yoo, Hyunjun; Kim, Myung Soo; Bae, Jun Hyung; Kim, Hyoungtaek; Kim, Jongyul; Kim, Hyunduk; Cho, Gyuseong
2012-05-01
Recently, large-size 3-transistors (3-Tr) active pixel complementary metal-oxide silicon (CMOS) image sensors have been being used for medium-size digital X-ray radiography, such as dental computed tomography (CT), mammography and nondestructive testing (NDT) for consumer products. We designed and fabricated 50 µm × 50 µm 3-Tr test pixels having a pixel photodiode with various structures and shapes by using the TSMC 0.25-m standard CMOS process to compare their optical characteristics. The pixel photodiode output was continuously sampled while a test pixel was continuously illuminated by using 550-nm light at a constant intensity. The measurement was repeated 300 times for each test pixel to obtain reliable results on the mean and the variance of the pixel output at each sampling time. The sampling rate was 50 kHz, and the reset period was 200 msec. To estimate the conversion gain, we used the mean-variance method. From the measured results, the n-well/p-substrate photodiode, among 3 photodiode structures available in a standard CMOS process, showed the best performance at a low illumination equivalent to the typical X-ray signal range. The quantum efficiencies of the n+/p-well, n-well/p-substrate, and n+/p-substrate photodiodes were 18.5%, 62.1%, and 51.5%, respectively. From a comparison of pixels with rounded and rectangular corners, we found that a rounded corner structure could reduce the dark current in large-size pixels. A pixel with four rounded corners showed a reduced dark current of about 200fA compared to a pixel with four rectangular corners in our pixel sample size. Photodiodes with round p-implant openings showed about 5% higher dark current, but about 34% higher sensitivities, than the conventional photodiodes.
Neng, N R; Nogueira, J M F
2012-01-01
The combination of bar adsorptive micro-extraction using activated carbon (AC) and polystyrene-divinylbenzene copolymer (PS-DVB) sorbent phases, followed by liquid desorption and large-volume injection gas chromatography coupled to mass spectrometry, under selected ion monitoring mode acquisition, was developed for the first time to monitor pharmaceutical and personal care products (PPCPs) in environmental water matrices. Assays performed on 25 mL water samples spiked (100 ng L(-1)) with caffeine, gemfibrozil, triclosan, propranolol, carbamazepine and diazepam, selected as model compounds, yielded recoveries ranging from 74% to 99% under optimised experimental conditions (equilibrium time, 16 h (1,000 rpm); matrix characteristics: pH 5, 5% NaCl for AC phase; LD: methanol/acetonitrile (1:1), 45 min). The analytical performance showed good precision (RSD < 18%), convenient detection limits (5-20 ng L(-1)) and excellent linear dynamic range (20-800 ng L(-1)) with remarkable determination coefficients (r(2) > 0.99), where the PS-DVB sorbent phase showed a much better efficiency. By using the standard addition methodology, the application of the present analytical approach on tap, ground, sea, estuary and wastewater samples allowed very good performance at the trace level. The proposed method proved to be a suitable sorption-based micro-extraction alternative for the analysis of priority pollutants with medium-polar to polar characteristics, showing to be easy to implement, reliable, sensitive and requiring a low sample volume to monitor PPCPs in water matrices.
Dynamic permeability in fault damage zones induced by repeated coseismic fracturing events
NASA Astrophysics Data System (ADS)
Aben, F. M.; Doan, M. L.; Mitchell, T. M.
2017-12-01
Off-fault fracture damage in upper crustal fault zones change the fault zone properties and affect various co- and interseismic processes. One of these properties is the permeability of the fault damage zone rocks, which is generally higher than the surrounding host rock. This allows large-scale fluid flow through the fault zone that affects fault healing and promotes mineral transformation processes. Moreover, it might play an important role in thermal fluid pressurization during an earthquake rupture. The damage zone permeability is dynamic due to coseismic damaging. It is crucial for earthquake mechanics and for longer-term processes to understand how the dynamic permeability structure of a fault looks like and how it evolves with repeated earthquakes. To better detail coseismically induced permeability, we have performed uniaxial split Hopkinson pressure bar experiments on quartz-monzonite rock samples. Two sample sets were created and analyzed: single-loaded samples subjected to varying loading intensities - with damage varying from apparently intact to pulverized - and samples loaded at a constant intensity but with a varying number of repeated loadings. The first set resembles a dynamic permeability structure created by a single large earthquake. The second set resembles a permeability structure created by several earthquakes. After, the permeability and acoustic velocities were measured as a function of confining pressure. The permeability in both datasets shows a large and non-linear increase over several orders of magnitude (from 10-20 up to 10-14 m2) with an increasing amount of fracture damage. This, combined with microstructural analyses of the varying degrees of damage, suggests a percolation threshold. The percolation threshold does not coincide with the pulverization threshold. With increasing confining pressure, the permeability might drop up to two orders of magnitude, which supports the possibility of large coseismic fluid pulses over relatively large distances along a fault. Also, a relatively small threshold could potentially increase permeability in a large volume of rock, given that previous earthquakes already damaged these rocks.
Role and Mechanism of Structural Variation in Progression of Breast Cancer
2012-09-01
models of CGR genesis, and strongly argue...numbers of tandem duplications, and GBM samples showing numerous large-‐scale rearrangements. We also...higher incidence in GBM (38.9%) relative to the other tumor types (8.7%). This definitively shows
Maintaining and Enhancing Diversity of Sampled Protein Conformations in Robotics-Inspired Methods.
Abella, Jayvee R; Moll, Mark; Kavraki, Lydia E
2018-01-01
The ability to efficiently sample structurally diverse protein conformations allows one to gain a high-level view of a protein's energy landscape. Algorithms from robot motion planning have been used for conformational sampling, and several of these algorithms promote diversity by keeping track of "coverage" in conformational space based on the local sampling density. However, large proteins present special challenges. In particular, larger systems require running many concurrent instances of these algorithms, but these algorithms can quickly become memory intensive because they typically keep previously sampled conformations in memory to maintain coverage estimates. In addition, robotics-inspired algorithms depend on defining useful perturbation strategies for exploring the conformational space, which is a difficult task for large proteins because such systems are typically more constrained and exhibit complex motions. In this article, we introduce two methodologies for maintaining and enhancing diversity in robotics-inspired conformational sampling. The first method addresses algorithms based on coverage estimates and leverages the use of a low-dimensional projection to define a global coverage grid that maintains coverage across concurrent runs of sampling. The second method is an automatic definition of a perturbation strategy through readily available flexibility information derived from B-factors, secondary structure, and rigidity analysis. Our results show a significant increase in the diversity of the conformations sampled for proteins consisting of up to 500 residues when applied to a specific robotics-inspired algorithm for conformational sampling. The methodologies presented in this article may be vital components for the scalability of robotics-inspired approaches.
Applying Active Learning to Assertion Classification of Concepts in Clinical Text
Chen, Yukun; Mani, Subramani; Xu, Hua
2012-01-01
Supervised machine learning methods for clinical natural language processing (NLP) research require a large number of annotated samples, which are very expensive to build because of the involvement of physicians. Active learning, an approach that actively samples from a large pool, provides an alternative solution. Its major goal in classification is to reduce the annotation effort while maintaining the quality of the predictive model. However, few studies have investigated its uses in clinical NLP. This paper reports an application of active learning to a clinical text classification task: to determine the assertion status of clinical concepts. The annotated corpus for the assertion classification task in the 2010 i2b2/VA Clinical NLP Challenge was used in this study. We implemented several existing and newly developed active learning algorithms and assessed their uses. The outcome is reported in the global ALC score, based on the Area under the average Learning Curve of the AUC (Area Under the Curve) score. Results showed that when the same number of annotated samples was used, active learning strategies could generate better classification models (best ALC – 0.7715) than the passive learning method (random sampling) (ALC – 0.7411). Moreover, to achieve the same classification performance, active learning strategies required fewer samples than the random sampling method. For example, to achieve an AUC of 0.79, the random sampling method used 32 samples, while our best active learning algorithm required only 12 samples, a reduction of 62.5% in manual annotation effort. PMID:22127105
Protection of obstetric dimensions in a small-bodied human sample.
Kurki, Helen K
2007-08-01
In human females, the bony pelvis must find a balance between being small (narrow) for efficient bipedal locomotion, and being large to accommodate a relatively large newborn. It has been shown that within a given population, taller/larger-bodied women have larger pelvic canals. This study investigates whether in a population where small body size is the norm, pelvic geometry (size and shape), on average, shows accommodation to protect the obstetric canal. Osteometric data were collected from the pelves, femora, and clavicles (body size indicators) of adult skeletons representing a range of adult body size. Samples include Holocene Later Stone Age (LSA) foragers from southern Africa (n = 28 females, 31 males), Portuguese from the Coimbra-identified skeletal collection (CISC) (n = 40 females, 40 males) and European-Americans from the Hamann-Todd osteological collection (H-T) (n = 40 females, 40 males). Patterns of sexual dimorphism are similar in the samples. Univariate and multivariate analyses of raw and Mosimann shape-variables indicate that compared to the CISC and H-T females, the LSA females have relatively large midplane and outlet canal planes (particularly posterior and A-P lengths). The LSA males also follow this pattern, although with absolutely smaller pelves in multivariate space. The CISC females, who have equally small stature, but larger body mass, do not show the same type of pelvic canal size and shape accommodation. The results suggest that adaptive allometric modeling in at least some small-bodied populations protects the obstetric canal. These findings support the use of population-specific attributes in the clinical evaluation of obstetric risk. (c) 2007 Wiley-Liss, Inc.
Cruz-Motta, Juan José; Miloslavich, Patricia; Palomo, Gabriela; Iken, Katrin; Konar, Brenda; Pohle, Gerhard; Trott, Tom; Benedetti-Cecchi, Lisandro; Herrera, César; Hernández, Alejandra; Sardi, Adriana; Bueno, Andrea; Castillo, Julio; Klein, Eduardo; Guerra-Castro, Edlin; Gobin, Judith; Gómez, Diana Isabel; Riosmena-Rodríguez, Rafael; Mead, Angela; Bigatti, Gregorio; Knowlton, Ann; Shirayama, Yoshihisa
2010-01-01
Assemblages associated with intertidal rocky shores were examined for large scale distribution patterns with specific emphasis on identifying latitudinal trends of species richness and taxonomic distinctiveness. Seventy-two sites distributed around the globe were evaluated following the standardized sampling protocol of the Census of Marine Life NaGISA project (www.nagisa.coml.org). There were no clear patterns of standardized estimators of species richness along latitudinal gradients or among Large Marine Ecosystems (LMEs); however, a strong latitudinal gradient in taxonomic composition (i.e., proportion of different taxonomic groups in a given sample) was observed. Environmental variables related to natural influences were strongly related to the distribution patterns of the assemblages on the LME scale, particularly photoperiod, sea surface temperature (SST) and rainfall. In contrast, no environmental variables directly associated with human influences (with the exception of the inorganic pollution index) were related to assemblage patterns among LMEs. Correlations of the natural assemblages with either latitudinal gradients or environmental variables were equally strong suggesting that neither neutral models nor models based solely on environmental variables sufficiently explain spatial variation of these assemblages at a global scale. Despite the data shortcomings in this study (e.g., unbalanced sample distribution), we show the importance of generating biological global databases for the use in large-scale diversity comparisons of rocky intertidal assemblages to stimulate continued sampling and analyses. PMID:21179546
Boopathi, Thangavelu; Faria, Daphne Georgina; Cheon, Ju-Yong; Youn, Seok Hyun; Ki, Jang-Seu
2015-01-01
The small and large nuclear subunit molecular phylogeny of the genus Prorocentrum demonstrated that the species are dichotomized into two clades. These two clades were significantly different (one-factor ANOVA, p < 0.01) with patterns compatible for both small and large subunit Bayesian phylogenetic trees, and for a larger taxon sampled dinoflagellate phylogeny. Evaluation of the molecular divergence levels showed that intraspecies genetic variations were significantly low (t-test, p < 0.05), than those for interspecies variations (> 2.9% and > 26.8% dissimilarity in the small and large subunit [D1/D2], respectively). Based on the calculated molecular divergence, the genus comprises two genetically distinct groups that should be considered as two separate genera, thereby setting the pace for major systematic changes for the genus Prorocentrum sensu Dodge. Moreover, the information presented in this study would be useful for improving species identification, detection of novel clades from environmental samples. © 2015 The Author(s) Journal of Eukaryotic Microbiology © 2015 International Society of Protistologists.
Efficient Bayesian mixed model analysis increases association power in large cohorts
Loh, Po-Ru; Tucker, George; Bulik-Sullivan, Brendan K; Vilhjálmsson, Bjarni J; Finucane, Hilary K; Salem, Rany M; Chasman, Daniel I; Ridker, Paul M; Neale, Benjamin M; Berger, Bonnie; Patterson, Nick; Price, Alkes L
2014-01-01
Linear mixed models are a powerful statistical tool for identifying genetic associations and avoiding confounding. However, existing methods are computationally intractable in large cohorts, and may not optimize power. All existing methods require time cost O(MN2) (where N = #samples and M = #SNPs) and implicitly assume an infinitesimal genetic architecture in which effect sizes are normally distributed, which can limit power. Here, we present a far more efficient mixed model association method, BOLT-LMM, which requires only a small number of O(MN)-time iterations and increases power by modeling more realistic, non-infinitesimal genetic architectures via a Bayesian mixture prior on marker effect sizes. We applied BOLT-LMM to nine quantitative traits in 23,294 samples from the Women’s Genome Health Study (WGHS) and observed significant increases in power, consistent with simulations. Theory and simulations show that the boost in power increases with cohort size, making BOLT-LMM appealing for GWAS in large cohorts. PMID:25642633
Thermoelectric properties of CVD grown large area graphene
NASA Astrophysics Data System (ADS)
Sherehiy, Andriy; Jayasinghe, Ruwantha; Stallard, Robert; Sumanasekera, Gamini; Sidorov, Anton; Benjamin, Daniel; Jiang, Zhigang; Yu, Qingkai; Wu, Wei; Bao, Jiming; Liu, Zhihong; Pei, Steven; Chen, Yong
2010-03-01
The thermoelectric power (TEP) of CVD (Chemical Vapor Deposition) grown large area graphene transferred onto a Si/SiO2 substrate was measured by simply attaching two miniature thermocouples and a resistive heater. Availability of such large area graphene facilitates straight forward TEP measurement without the use of any microfabrication processes. All investigated graphene samples showed a positive TEP ˜ + 30 μV/K in ambient conditions and saturated at a negative value as low as ˜ -75 μV/K after vacuum-annealing at 500 K in a vacuum of ˜10-7 Torr. The observed p-type behavior under ambient conditions is attributed to the oxygen doping, while the n-type behavior under degassed conditions is due to electron doping from SiO2 surface states. It was observed that the sign of the TEP switched from negative to positive for the degassed graphene when exposed to acceptor gases. Conversely, the TEP of vacuum-annealed graphene exposed to the donor gases became even more negative than the TEP of vacuum-annealed sample.
Gaussian vs. Bessel light-sheets: performance analysis in live large sample imaging
NASA Astrophysics Data System (ADS)
Reidt, Sascha L.; Correia, Ricardo B. C.; Donnachie, Mark; Weijer, Cornelis J.; MacDonald, Michael P.
2017-08-01
Lightsheet fluorescence microscopy (LSFM) has rapidly progressed in the past decade from an emerging technology into an established methodology. This progress has largely been driven by its suitability to developmental biology, where it is able to give excellent spatial-temporal resolution over relatively large fields of view with good contrast and low phototoxicity. In many respects it is superseding confocal microscopy. However, it is no magic bullet and still struggles to image deeply in more highly scattering samples. Many solutions to this challenge have been presented, including, Airy and Bessel illumination, 2-photon operation and deconvolution techniques. In this work, we show a comparison between a simple but effective Gaussian beam illumination and Bessel illumination for imaging in chicken embryos. Whilst Bessel illumination is shown to be of benefit when a greater depth of field is required, it is not possible to see any benefits for imaging into the highly scattering tissue of the chick embryo.
Waks, Zeev; Weissbrod, Omer; Carmeli, Boaz; Norel, Raquel; Utro, Filippo; Goldschmidt, Yaara
2016-12-23
Compiling a comprehensive list of cancer driver genes is imperative for oncology diagnostics and drug development. While driver genes are typically discovered by analysis of tumor genomes, infrequently mutated driver genes often evade detection due to limited sample sizes. Here, we address sample size limitations by integrating tumor genomics data with a wide spectrum of gene-specific properties to search for rare drivers, functionally classify them, and detect features characteristic of driver genes. We show that our approach, CAnceR geNe similarity-based Annotator and Finder (CARNAF), enables detection of potentially novel drivers that eluded over a dozen pan-cancer/multi-tumor type studies. In particular, feature analysis reveals a highly concentrated pool of known and putative tumor suppressors among the <1% of genes that encode very large, chromatin-regulating proteins. Thus, our study highlights the need for deeper characterization of very large, epigenetic regulators in the context of cancer causality.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Das, A.; Chatterjee, S.; Das, D., E-mail: ddas@alpha.iuc.res.in
2016-05-23
TbMn{sub 1-x}Fe{sub x}O{sub 3} nanoparticles (NPs) with x = 0, 0.1 and 0.2 have been prepared by adopting the chemical sol-gel method. Phase identification and particle size estimation are done by XRD analysis. M-H measurements at 5 K indicate a complete ferromagnetic behaviour in the Fe-doped samples with large coercivity whereas the pristine sample shows presence of both ferromagnetic and antiferromagnetic orders. ZFC and FC magnetization curves of all samples show signature of antiferromagnetic ordering of both terbium and manganese magnetic moments along with a systematic shift of ordering temperatures with Fe substitution. {sup 57}Fe Mössbauer spectroscopic measurements of the Fe-dopedmore » samples at room temperature confirm the paramagnetic behaviour and reduction of electric field gradient around Fe probe atoms with increase of Fe concentration.« less
Lead theft--a study of the "uniqueness" of lead from church roofs.
Bond, John W; Hainsworth, Sarah V; Lau, Tien L
2013-07-01
In the United Kingdom, theft of lead is common, particularly from churches and other public buildings with lead roofs. To assess the potential to distinguish lead from different sources, 41 samples of lead from 24 church roofs in Northamptonshire, U.K, have been analyzed for relative abundance of trace elements and isotopes of lead using X-ray fluorescence (XRF) and inductively coupled plasma mass spectrometry, respectively. XRF revealed the overall presence of 12 trace elements with the four most abundant, calcium, phosphorus, silicon, and sulfur, showing a large weight percentage standard error of the mean of all samples suggesting variation in the weight percentage of these elements between different church roofs. Multiple samples from the same roofs, but different lead sheets, showed much lower weight percentage standard errors of the mean suggesting similar trace element concentrations. Lead isotope ratios were similar for all samples. Factors likely to affect the occurrence of these trace elements are discussed. © 2013 American Academy of Forensic Sciences.
Scaling ice microstructures from the laboratory to nature: cryo-EBSD on large samples.
NASA Astrophysics Data System (ADS)
Prior, David; Craw, Lisa; Kim, Daeyeong; Peyroux, Damian; Qi, Chao; Seidemann, Meike; Tooley, Lauren; Vaughan, Matthew; Wongpan, Pat
2017-04-01
Electron backscatter diffraction (EBSD) has extended significantly our ability to conduct detailed quantitative microstructural investigations of rocks, metals and ceramics. EBSD on ice was first developed in 2004. Techniques have improved significantly in the last decade and EBSD is now becoming more common in the microstructural analysis of ice. This is particularly true for laboratory-deformed ice where, in some cases, the fine grain sizes exclude the possibility of using a thin section of the ice. Having the orientations of all axes (rather than just the c-axis as in an optical method) yields important new information about ice microstructure. It is important to examine natural ice samples in the same way so that we can scale laboratory observations to nature. In the case of ice deformation, higher strain rates are used in the laboratory than those seen in nature. These are achieved by increasing stress and/or temperature and it is important to assess that the microstructures produced in the laboratory are comparable with those observed in nature. Natural ice samples are coarse grained. Glacier and ice sheet ice has a grain size from a few mm up to several cm. Sea and lake ice has grain sizes of a few cm to many metres. Thus extending EBSD analysis to larger sample sizes to include representative microstructures is needed. The chief impediments to working on large ice samples are sample exchange, limitations on stage motion and temperature control. Large ice samples cannot be transferred through a typical commercial cryo-transfer system that limits sample sizes. We transfer through a nitrogen glove box that encloses the main scanning electron microscope (SEM) door. The nitrogen atmosphere prevents the cold stage and the sample from becoming covered in frost. Having a long optimal working distance for EBSD (around 30mm for the Otago cryo-EBSD facility) , by moving the camera away from the pole piece, enables the stage to move without crashing into either the EBSD camera or the SEM pole piece (final lens). In theory a sample up to 100mm perpendicular to the tilt axis by 150mm parallel to the tilt axis can be analysed. In practice, the motion of our stage is restricted to maximum dimensions of 100 by 50mm by a conductive copper braid on our cold stage. Temperature control becomes harder as the samples become larger. If the samples become too warm then they will start to sublime and the quality of EBSD data will reduce. Large samples need to be relatively thin ( 5mm or less) so that conduction of heat to the cold stage is more effective at keeping the surface temperature low. In the Otago facility samples of up to 40mm by 40mm present little problem and can be analysed for several hours without significant sublimation. Larger samples need more care, e.g. fast sample transfer to keep the sample very cold. The largest samples we work on routinely are 40 by 60mm in size. We will show examples of EBSD data from glacial ice and sea ice from Antarctica and from large laboratory ice samples.
NASA Astrophysics Data System (ADS)
Siegesmund, S.; Vollbrecht, A.; Pros, Z.
1993-10-01
The complete P-wave velocity distribution, preferred orientation of rock-forming minerals and microcracks of two differently deformed orthogneisses from the Kutna Hora Crystalline Unit were investigated. The complete symmetry of P-wave velocities were determined as a function of confining pressure on the basis of 132 independent propagation directions up to 400 MPa. The two samples are of almost identical mineralogical composition, but exhibit different fabrics which can be related to different positions within a large-scale fold structure. The symmetry of the Vp-diagrams change from nearly transversely isotropic for the sample from the limb area to orthorhombic for the sample from the hinge zone, which shows an additional crenulation cleavage. This change of symmetry is observed at all pressure levels. Reorientation of the main velocity directions ( Vpmin, Vpmax, Kpint) between hinge and limb is controlled by the microcrack fabric and the texture of the rock-forming minerals. This can cause significant differences in reflectivity related to fabric changes within large-scale folds.
Effect of the three-dimensional microstructure on the sound absorption of foams: A parametric study.
Chevillotte, Fabien; Perrot, Camille
2017-08-01
The purpose of this work is to systematically study the effect of the throat and the pore sizes on the sound absorbing properties of open-cell foams. The three-dimensional idealized unit cell used in this work enables to mimic the acoustical macro-behavior of a large class of cellular solid foams. This study is carried out for a normal incidence and also for a diffuse field excitation, with a relatively large range of sample thicknesses. The transport and sound absorbing properties are numerically studied as a function of the throat size, the pore size, and the sample thickness. The resulting diagrams show the ranges of the specific throat sizes and pore sizes where the sound absorption grading is maximized due to the pore morphology as a function of the sample thickness, and how it correlates with the corresponding transport parameters. These charts demonstrate, together with typical examples, how the morphological characteristics of foam could be modified in order to increase the visco-thermal dissipation effects.
Development and Validation of the Minnesota Borderline Personality Disorder Scale (MBPD)
Bornovalova, Marina A.; Hicks, Brian M.; Patrick, Christopher J.; Iacono, William G.; McGue, Matt
2011-01-01
While large epidemiological datasets can inform research on the etiology and development of borderline personality disorder (BPD), they rarely include BPD measures. In some cases, however, proxy measures can be constructed using instruments already in these datasets. In this study we developed and validated a self-report measure of BPD from the Multidimensional Personality Questionnaire (MPQ). Items for the new instrument—the Minnesota BPD scale (MBPD)—were identified and refined using three large samples: undergraduates, community adolescent twins, and urban substance users. We determined the construct validity of the MBPD by examining its association with (1) diagnosed BPD, (2) questionnaire reported BPD symptoms, and (3) clinical variables associated with BPD: suicidality, trauma, disinhibition, internalizing distress, and substance use. We also tested the MBPD in two prison inmate samples. Across samples, the MBPD correlated with BPD indices and external criteria, and showed incremental validity above measures of negative affect, thus supporting its construct validity as a measure of BPD. PMID:21467094
A DNA methylation map of human cancer at single base-pair resolution.
Vidal, E; Sayols, S; Moran, S; Guillaumet-Adkins, A; Schroeder, M P; Royo, R; Orozco, M; Gut, M; Gut, I; Lopez-Bigas, N; Heyn, H; Esteller, M
2017-10-05
Although single base-pair resolution DNA methylation landscapes for embryonic and different somatic cell types provided important insights into epigenetic dynamics and cell-type specificity, such comprehensive profiling is incomplete across human cancer types. This prompted us to perform genome-wide DNA methylation profiling of 22 samples derived from normal tissues and associated neoplasms, including primary tumors and cancer cell lines. Unlike their invariant normal counterparts, cancer samples exhibited highly variable CpG methylation levels in a large proportion of the genome, involving progressive changes during tumor evolution. The whole-genome sequencing results from selected samples were replicated in a large cohort of 1112 primary tumors of various cancer types using genome-scale DNA methylation analysis. Specifically, we determined DNA hypermethylation of promoters and enhancers regulating tumor-suppressor genes, with potential cancer-driving effects. DNA hypermethylation events showed evidence of positive selection, mutual exclusivity and tissue specificity, suggesting their active participation in neoplastic transformation. Our data highlight the extensive changes in DNA methylation that occur in cancer onset, progression and dissemination.
NASA Astrophysics Data System (ADS)
Yoskowitz, Joshua; Clark, Morgan; Labrake, Scott; Vineyard, Michael
2015-10-01
We have developed an external beam facility for the 1.1-MV tandem Pelletron accelerator in the Union College Ion Beam Analysis Laboratory. The beam is extracted from an aluminum pipe through a 1 / 4 ' ' diameter window with a 7.5- μm thick Kapton foil. This external beam facility allows us to perform ion beam analysis on samples that cannot be put under vacuum, including wet samples and samples too large to fit into the scattering chamber. We have commissioned the new facility by performing proton induced X-ray emission (PIXE) analysis of several samples of environmental interest. These include samples of artificial turf, running tracks, and a human tooth with an amalgam filling. A 1.7-MeV external proton beam was incident on the samples positioned 2 cm from the window. The resulting X-rays were measured using a silicon drift detector and were analyzed using GUPIX software to determine the concentrations of elements in the samples. The results on the human tooth indicate that while significant concentrations of Hg, Ag, and Sn are present in the amalgam filling, only trace amounts of Hg appear to have leached into the tooth. The artificial turf and running tracks show rather large concentrations of a broad range of elements and trace amounts of Pb in the turf infill.
Properties and spatial distribution of galaxy superclusters
NASA Astrophysics Data System (ADS)
Liivamägi, Lauri Juhan
2017-01-01
Astronomy is a science that can offer plenty of unforgettable imagery, and the large-scale distribution of galaxies is no exception. Among the first features the viewer's eye is likely to be drawn to, are large concentrations of galaxies - galaxy superclusters, contrasting to the seemingly empty regions beside them. Superclusters can extend from tens to over hundred megaparsecs, they contain from hundreds to thousands of galaxies, and many galaxy groups and clusters. Unlike galaxy clusters, superclusters are clearly unrelaxed systems, not gravitationally bound as crossing times exceed the age of the universe, and show little to no radial symmetry. Superclusters, as part of the large-scale structure, are sensitive to the initial power spectrum and the following evolution. They are massive enough to leave an imprint on the cosmic microwave background radiation. Superclusters can also provide an unique environment for their constituent galaxies and galaxy clusters. In this study we used two different observational and one simulated galaxy samples to create several catalogues of structures that, we think, correspond to what are generally considered galaxy superclusters. Superclusters were delineated as continuous over-dense regions in galaxy luminosity density fields. When calculating density fields several corrections were applied to remove small-scale redshift distortions and distance-dependent selection effects. Resulting catalogues of objects display robust statistical properties, showing that flux-limited galaxy samples can be used to create nearly volume-limited catalogues of superstructures. Generally, large superclusters can be regarded as massive, often branching filamentary structures, that are mainly characterised by their length. Smaller superclusters, on the other hand, can display a variety of shapes. Spatial distribution of superclusters shows large-scale variations, with high-density concentrations often found in semi-regularly spaced groups. Future studies are needed to quantify the relations between superclusters and finer details of the galaxy distribution. Supercluster catalogues from this thesis have already been used in numerous other studies.
Environmental impacts of the Chennai oil spill accident - A case study.
Han, Yuling; Nambi, Indumathi M; Prabhakar Clement, T
2018-06-01
Chennai, a coastal city in India with a population of over 7 million people, was impacted by a major oil spill on January 28th 2017. The spill occurred when two cargo ships collided about two miles away from the Chennai shoreline. The accident released about 75 metric tons of heavy fuel oil into the Bay of Bengal. This case study provides field observations and laboratory characterization data for this oil spill accident. Our field observations show that the seawalls and groins, which were installed along the Chennai shoreline to manage coastal erosion problems, played a significant role in controlling the oil deposition patterns. A large amount of oil was trapped within the relatively stagnant zone near the seawall-groin intersection region. The initial cleanup efforts used manual methods to skim the trapped oil and these efforts indeed helped recover large amount of oil. Our laboratory data show that the Chennai oil spill residues have unique fingerprints of hopanes and steranes which can be used to track the spill. Our weathering experiments show that volatilization processes should have played a significant role in degrading the oil during initial hours. The characterization data show that the source oil contained about 503,000 mg/kg of total petroleum hydrocarbons (TPH) and 17,586 mg/kg of total polycyclic aromatic hydrocarbons (PAHs). The field samples collected 6 and 62 days after the spill contained about 71,000 and 28,000 mg/kg of TPH and 4854 and 4016 mg/kg of total PAHs, respectively. The field samples had a relatively large percentage of heavy PAHs, and most of these PAHs are highly toxic compounds that are difficult to weather and their long-term effects on coastal ecosystems are largely unknown. Therefore, more detailed studies are needed to monitor and track the long term environmental impacts of the Chennai oil spill residues on the Bay of Bengal coastal ecosystem. Copyright © 2018 Elsevier B.V. All rights reserved.
Optimal weighting in fNL constraints from large scale structure in an idealised case
NASA Astrophysics Data System (ADS)
Slosar, Anže
2009-03-01
We consider the problem of optimal weighting of tracers of structure for the purpose of constraining the non-Gaussianity parameter fNL. We work within the Fisher matrix formalism expanded around fiducial model with fNL = 0 and make several simplifying assumptions. By slicing a general sample into infinitely many samples with different biases, we derive the analytic expression for the relevant Fisher matrix element. We next consider weighting schemes that construct two effective samples from a single sample of tracers with a continuously varying bias. We show that a particularly simple ansatz for weighting functions can recover all information about fNL in the initial sample that is recoverable using a given bias observable and that simple division into two equal samples is considerably suboptimal when sampling of modes is good, but only marginally suboptimal in the limit where Poisson errors dominate.
Lourenço de Oliveira, Ricardo; Vazeille, Marie; de Filippis, Ana Maria Bispo; Failloux, Anna-Bella
2003-07-01
We conducted a population genetic analysis of Aedes albopictus collected from 20 sites in Brazil, the United States (Florida, Georgia, and Illinois), and the Cayman Islands. Using isoenzyme analysis, we examined genetic diversity and patterns of gene flow. High genetic differentiation was found among Brazilian samples, and between them and North American samples. Regression analysis of genetic differentiation according to geographic distances indicated that Ae. albopictus samples from Florida were genetically isolated by distance. Infection rates with dengue and yellow fever viruses showed greater differences between two Brazilian samples than between the two North American samples or between a Brazilian sample and a North American sample. Introductions and establishments of new Ae. albopictus populations in the Americas are still in progress, shaping population genetic composition and potentially modifying both dengue and yellow fever transmission patterns.
van Spronsen, F J; van Rijn, M; van Dijk, T; Smit, G P; Reijngoud, D J; Berger, R; Heymans, H S
1993-10-01
To evaluate the adequacy of dietary treatment in patients with phenylketonuria, the monitoring of plasma phenylalanine and tyrosine concentrations is of great importance. The preferable time of blood sampling in relation to the nutritional condition during the day, however, is not known. It was the aim of this study to define guidelines for the timing of blood sampling with a minimal burden for the patient. Plasma concentrations of phenylalanine and tyrosine were measured in nine patients with phenylketonuria who had no clinical evidence of tyrosine deficiency. These values were measured during the day both after a prolonged overnight fast, and before and after breakfast. Phenylalanine showed a small rise during prolonged fasting, while tyrosine decreased slightly. After an individually tailored breakfast, phenylalanine remained stable, while tyrosine showed large fluctuations. It is concluded that the patient's nutritional condition (fasting/postprandial) is not important in the evaluation of the phenylalanine intake. To detect a possible tyrosine deficiency, however, a single blood sample is not sufficient and a combination of a preprandial and postprandial blood sample on the same day is advocated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Treimer, Wolfgang; Ebrahimi, Omid; Karakas, Nursel
Polarized neutron radiography was used to study the three-dimensional magnetic flux distribution inside of single-crystal and polycrystalline Pb cylinders with large (cm3) volume and virtually zero demagnetization. Experiments with single crystals being in the Meissner phase (T
Extreme Quantum Memory Advantage for Rare-Event Sampling
NASA Astrophysics Data System (ADS)
Aghamohammadi, Cina; Loomis, Samuel P.; Mahoney, John R.; Crutchfield, James P.
2018-02-01
We introduce a quantum algorithm for memory-efficient biased sampling of rare events generated by classical memoryful stochastic processes. Two efficiency metrics are used to compare quantum and classical resources for rare-event sampling. For a fixed stochastic process, the first is the classical-to-quantum ratio of required memory. We show for two example processes that there exists an infinite number of rare-event classes for which the memory ratio for sampling is larger than r , for any large real number r . Then, for a sequence of processes each labeled by an integer size N , we compare how the classical and quantum required memories scale with N . In this setting, since both memories can diverge as N →∞ , the efficiency metric tracks how fast they diverge. An extreme quantum memory advantage exists when the classical memory diverges in the limit N →∞ , but the quantum memory has a finite bound. We then show that finite-state Markov processes and spin chains exhibit memory advantage for sampling of almost all of their rare-event classes.
Ar-40/Ar-39 ages and cosmic ray exposure ages of Apollo 14 samples.
NASA Technical Reports Server (NTRS)
Turner, G.; Huneke, J. C.; Podosek, F. A.; Wasserburg, G. J.
1971-01-01
We have used the Ar-40/Ar-39 dating technique on eight samples of Apollo 14 rocks (14053, 14310), breccia fragments (14321), and soil fragments (14001, 14167). The large basalt fragments give reasonable Ar-40/Ar-39 release patterns and yield well defined crystallization ages of 3.89-3.95 aeons. Correlation of the Ar-40/Ar-39 release patterns with Ar-39/Ar-37 patterns showed that the low temperature fractions with high radiogenic argon loss came from K-rich phases. A highly shocked sample and fragments included in the breccia yield complex release patterns with a low temperature peak. The total argon age of these fragments is 3.95 aeons. Cosmic ray exposure ages on these samples are obtained from the ratio of spallogenic Ar-38 to reactor induced Ar-37 and show a distinct grouping of low exposure ages of 26 m.y. correlated with Cone crater. Other samples have exposure ages of more than 260 m.y. and identify material with a more complex integrated cosmic age exposure history.
The relation between statistical power and inference in fMRI
Wager, Tor D.; Yarkoni, Tal
2017-01-01
Statistically underpowered studies can result in experimental failure even when all other experimental considerations have been addressed impeccably. In fMRI the combination of a large number of dependent variables, a relatively small number of observations (subjects), and a need to correct for multiple comparisons can decrease statistical power dramatically. This problem has been clearly addressed yet remains controversial—especially in regards to the expected effect sizes in fMRI, and especially for between-subjects effects such as group comparisons and brain-behavior correlations. We aimed to clarify the power problem by considering and contrasting two simulated scenarios of such possible brain-behavior correlations: weak diffuse effects and strong localized effects. Sampling from these scenarios shows that, particularly in the weak diffuse scenario, common sample sizes (n = 20–30) display extremely low statistical power, poorly represent the actual effects in the full sample, and show large variation on subsequent replications. Empirical data from the Human Connectome Project resembles the weak diffuse scenario much more than the localized strong scenario, which underscores the extent of the power problem for many studies. Possible solutions to the power problem include increasing the sample size, using less stringent thresholds, or focusing on a region-of-interest. However, these approaches are not always feasible and some have major drawbacks. The most prominent solutions that may help address the power problem include model-based (multivariate) prediction methods and meta-analyses with related synthesis-oriented approaches. PMID:29155843
Alternative Gravity Rotation Curves for the LITTLE THINGS Survey
NASA Astrophysics Data System (ADS)
O’Brien, James G.; Chiarelli, Thomas L.; Dentico, Jeremy; Stulge, Modestas; Stefanski, Brian; Moss, Robert; Chaykov, Spasen
2018-01-01
Galactic rotation curves have proven to be the testing ground for dark matter bounds in spiral galaxies of all morphologies. Dwarf galaxies serve as an increasingly interesting case of rotation curve dynamics due to their typically rising rotation curve as opposed to the flattening curve of large spirals. Dwarf galaxies usually vary in galactic structure and mostly terminate at small radial distances. This, coupled with the fact that Cold Dark Matter theories struggle with the universality of galactic rotation curves, allow for exclusive features of alternative gravitational models to be analyzed. Recently, The H I Nearby Galactic Survey (THINGS) has been extended to include a sample of 25 dwarf galaxies now known as the LITTLE THINGS Survey. Here, we show an application of alternative gravitational models to the LITTLE THINGS survey, specifically focusing on conformal gravity (CG) and Modified Newtonian Dynamics (MOND). In this work, we provide an analysis and discussion of the rotation curve predictions of each theory to the sample. Furthermore, we show how these two alternative gravitational models account for the recently observed universal trends in centripetal accelerations in spiral galaxies. This work highlights the similarities and differences of the predictions of the two theories in dwarf galaxies. The sample is not large or diverse enough to strongly favor a single theory, but we posit that both CG and MOND can provide an accurate description of the galactic dynamics in the LITTLE THINGS sample without the need for dark matter.
Protection of surface states in topological nanoparticles
NASA Astrophysics Data System (ADS)
Siroki, Gleb; Haynes, Peter D.; Lee, Derek K. K.; Giannini, Vincenzo
2017-07-01
Topological insulators host protected electronic states at their surface. These states show little sensitivity to disorder. For miniaturization one wants to exploit their robustness at the smallest sizes possible. This is also beneficial for optical applications and catalysis, which favor large surface-to-volume ratios. However, it is not known whether discrete states in particles share the protection of their continuous counterparts in large crystals. Here we study the protection of the states hosted by topological insulator nanoparticles. Using both analytical and tight-binding simulations, we show that the states benefit from the same level of protection as those on a planar surface. The results hold for many shapes and sustain surface roughness which may be useful in photonics, spectroscopy, and chemistry. They complement past studies of large crystals—at the other end of possible length scales. The protection of the nanoparticles suggests that samples of all intermediate sizes also possess protected states.
Large-scale structure after COBE: Peculiar velocities and correlations of cold dark matter halos
NASA Technical Reports Server (NTRS)
Zurek, Wojciech H.; Quinn, Peter J.; Salmon, John K.; Warren, Michael S.
1994-01-01
Large N-body simulations on parallel supercomputers allow one to simultaneously investigate large-scale structure and the formation of galactic halos with unprecedented resolution. Our study shows that the masses as well as the spatial distribution of halos on scales of tens of megaparsecs in a cold dark matter (CDM) universe with the spectrum normalized to the anisotropies detected by Cosmic Background Explorer (COBE) is compatible with the observations. We also show that the average value of the relative pairwise velocity dispersion sigma(sub v) - used as a principal argument against COBE-normalized CDM models-is significantly lower for halos than for individual particles. When the observational methods of extracting sigma(sub v) are applied to the redshift catalogs obtained from the numerical experiments, estimates differ significantly between different observation-sized samples and overlap observational estimates obtained following the same procedure.
Karayanni, Hera; Christaki, Urania; Van Wambeke, France; Dalby, Andrew P
2004-03-01
Ciliated protozoa are potential grazers of primary and bacterial production and act as intermediaries between picoplankton and copepods and other large suspension feeders. Accurate determination of ciliate abundance and feeding mode is crucial in oceanic carbon budget estimations. However, the impact of different fixatives on the abundance and cell volume of ciliates has been investigated in only a few studies using either laboratory cultures or natural populations. Lugol's solution and formalin are the most commonly used fixatives for the preservation of ciliates samples. In the present study, the aim was to compare 0.4% Lugol's solution and 2% borated-formalin fixation and evaluate the need of counting duplicate samples each using a different fixative. For this, a large number of samples (n = 110) from the NE Atlantic was analyzed in the frame of POMME program (Multidisciplinary Mesoscale Ocean Program). We established a statistically significant relationship (p < 0.0001) between Lugol's and formalin fixed samples for both abundance (r2 = 0.50) and biomass (r2 = 0.76) of aloricate ciliates which showed that counts were higher in Lugol's solution by a factor of 2 and a non-taxon specific cell-loss in formalin. However, loricate ciliate abundance in our samples which were represented primarily by Tintinnus spp. did not show any difference between the two treatments. Abundance and biomass of mixotrophic ciliates (chloroplast-bearing cells) were for various reasons underestimated in both treatments. Our results show that unique fixation by formalin may severely underestimate ciliates abundance and biomass although their population may not alter. For this reason, Lugol's solution is best for the estimation of their abundance and biomass. However, for counts of mixotrophs and the evaluation of the ecological role of ciliates in carbon flux, double fixation is essential. Compromises regarding the fixatives have lead to severe underestimations of mixotrophs in studies conducted by now.
Conley, Stephen; Faloona, Ian; Mehrotra, Shobhit; ...
2017-09-13
Airborne estimates of greenhouse gas emissions are becoming more prevalent with the advent of rapid commercial development of trace gas instrumentation featuring increased measurement accuracy, precision, and frequency, and the swelling interest in the verification of current emission inventories. Multiple airborne studies have indicated that emission inventories may underestimate some hydrocarbon emission sources in US oil- and gas-producing basins. Consequently, a proper assessment of the accuracy of these airborne methods is crucial to interpreting the meaning of such discrepancies. We present a new method of sampling surface sources of any trace gas for which fast and precise measurements can be mademore » and apply it to methane, ethane, and carbon dioxide on spatial scales of ~1000 m, where consecutive loops are flown around a targeted source region at multiple altitudes. Using Reynolds decomposition for the scalar concentrations, along with Gauss's theorem, we show that the method accurately accounts for the smaller-scale turbulent dispersion of the local plume, which is often ignored in other average mass balance methods. With the help of large eddy simulations (LES) we further show how the circling radius can be optimized for the micrometeorological conditions encountered during any flight. Furthermore, by sampling controlled releases of methane and ethane on the ground we can ascertain that the accuracy of the method, in appropriate meteorological conditions, is often better than 10 %, with limits of detection below 5 kg h -1 for both methane and ethane. Because of the FAA-mandated minimum flight safe altitude of 150 m, placement of the aircraft is critical to preventing a large portion of the emission plume from flowing underneath the lowest aircraft sampling altitude, which is generally the leading source of uncertainty in these measurements. Finally, we show how the accuracy of the method is strongly dependent on the number of sampling loops and/or time spent sampling the source plume.« less
NASA Astrophysics Data System (ADS)
Conley, Stephen; Faloona, Ian; Mehrotra, Shobhit; Suard, Maxime; Lenschow, Donald H.; Sweeney, Colm; Herndon, Scott; Schwietzke, Stefan; Pétron, Gabrielle; Pifer, Justin; Kort, Eric A.; Schnell, Russell
2017-09-01
Airborne estimates of greenhouse gas emissions are becoming more prevalent with the advent of rapid commercial development of trace gas instrumentation featuring increased measurement accuracy, precision, and frequency, and the swelling interest in the verification of current emission inventories. Multiple airborne studies have indicated that emission inventories may underestimate some hydrocarbon emission sources in US oil- and gas-producing basins. Consequently, a proper assessment of the accuracy of these airborne methods is crucial to interpreting the meaning of such discrepancies. We present a new method of sampling surface sources of any trace gas for which fast and precise measurements can be made and apply it to methane, ethane, and carbon dioxide on spatial scales of ˜ 1000 m, where consecutive loops are flown around a targeted source region at multiple altitudes. Using Reynolds decomposition for the scalar concentrations, along with Gauss's theorem, we show that the method accurately accounts for the smaller-scale turbulent dispersion of the local plume, which is often ignored in other average mass balance
methods. With the help of large eddy simulations (LES) we further show how the circling radius can be optimized for the micrometeorological conditions encountered during any flight. Furthermore, by sampling controlled releases of methane and ethane on the ground we can ascertain that the accuracy of the method, in appropriate meteorological conditions, is often better than 10 %, with limits of detection below 5 kg h-1 for both methane and ethane. Because of the FAA-mandated minimum flight safe altitude of 150 m, placement of the aircraft is critical to preventing a large portion of the emission plume from flowing underneath the lowest aircraft sampling altitude, which is generally the leading source of uncertainty in these measurements. Finally, we show how the accuracy of the method is strongly dependent on the number of sampling loops and/or time spent sampling the source plume.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Conley, Stephen; Faloona, Ian; Mehrotra, Shobhit
Airborne estimates of greenhouse gas emissions are becoming more prevalent with the advent of rapid commercial development of trace gas instrumentation featuring increased measurement accuracy, precision, and frequency, and the swelling interest in the verification of current emission inventories. Multiple airborne studies have indicated that emission inventories may underestimate some hydrocarbon emission sources in US oil- and gas-producing basins. Consequently, a proper assessment of the accuracy of these airborne methods is crucial to interpreting the meaning of such discrepancies. We present a new method of sampling surface sources of any trace gas for which fast and precise measurements can be mademore » and apply it to methane, ethane, and carbon dioxide on spatial scales of ~1000 m, where consecutive loops are flown around a targeted source region at multiple altitudes. Using Reynolds decomposition for the scalar concentrations, along with Gauss's theorem, we show that the method accurately accounts for the smaller-scale turbulent dispersion of the local plume, which is often ignored in other average mass balance methods. With the help of large eddy simulations (LES) we further show how the circling radius can be optimized for the micrometeorological conditions encountered during any flight. Furthermore, by sampling controlled releases of methane and ethane on the ground we can ascertain that the accuracy of the method, in appropriate meteorological conditions, is often better than 10 %, with limits of detection below 5 kg h -1 for both methane and ethane. Because of the FAA-mandated minimum flight safe altitude of 150 m, placement of the aircraft is critical to preventing a large portion of the emission plume from flowing underneath the lowest aircraft sampling altitude, which is generally the leading source of uncertainty in these measurements. Finally, we show how the accuracy of the method is strongly dependent on the number of sampling loops and/or time spent sampling the source plume.« less
Loukas, Christos-Moritz; Mowlem, Matthew C; Tsaloglou, Maria-Nefeli; Green, Nicolas G
2018-05-01
This paper presents a novel portable sample filtration/concentration system, designed for use on samples of microorganisms with very low cell concentrations and large volumes, such as water-borne parasites, pathogens associated with faecal matter, or toxic phytoplankton. The example application used for demonstration was the in-field collection and concentration of microalgae from seawater samples. This type of organism is responsible for Harmful Algal Blooms (HABs), an example of which is commonly referred to as "red tides", which are typically the result of rapid proliferation and high biomass accumulation of harmful microalgal species in the water column or at the sea surface. For instance, Karenia brevis red tides are the cause of aquatic organism mortality and persistent blooms may cause widespread die-offs of populations of other organisms including vertebrates. In order to respond to, and adequately manage HABs, monitoring of toxic microalgae is required and large-volume sample concentrators would be a useful tool for in situ monitoring of HABs. The filtering system presented in this work enables consistent sample collection and concentration from 1 L to 1 mL in five minutes, allowing for subsequent benchtop sample extraction and analysis using molecular methods such as NASBA and IC-NASBA. The microalga Tetraselmis suecica was successfully detected at concentrations ranging from 2 × 10 5 cells/L to 20 cells/L. Karenia brevis was also detected and quantified at concentrations between 10 cells/L and 10 6 cells/L. Further analysis showed that the filter system, which concentrates cells from very large volumes with consequently more reliable sampling, produced samples that were more consistent than the independent non-filtered samples (benchtop controls), with a logarithmic dependency on increasing cell numbers. This filtering system provides simple, rapid, and consistent sample collection and concentration for further analysis, and could be applied to a wide range of different samples and target organisms in situations lacking laboratories. Copyright © 2018. Published by Elsevier B.V.
How are mood and exercise related? Results from the Finnmark study.
Sexton, H; Søgaard, A J; Olstad, R
2001-07-01
Recreational exercise and mood have frequently been correlated in population studies. Although it is often assumed that recreational exercise improves mood, this has not been consistently demonstrated in population studies. The relationship between mood and exercise was studied prospectively in a community sample. A series of synchronous panel models was constructed in two samples (2798 paired observations; sample I = 1219, sample II = 1498) to examine this relationship in the entire population, for women and men separately, for those with sedentary occupations, for those performing physical labour, and for those who initially showed a more dysphoric mood. Although mood and exercise were correlated, the only directional relationship that could be demonstrated was that recreational exercise had an inconsistently positive effect upon mood in those with sedentary occupations. There was no such relationship between doing physical work and mood. Analyses of those who initially showed higher levels of dysphoria did not uncover any directional relationship between mood and exercise. None of the other subgroups showed any directional effects between mood and recreational exercise, nor did the population as a whole. The relationship between exercise and mood in this population sample appears to be largely correlational in nature. This result suggests the need to take a cautious view of the role played by exercise in promoting mood in the general population.
Stability over Time of Different Methods of Estimating School Performance
ERIC Educational Resources Information Center
Dumay, Xavier; Coe, Rob; Anumendem, Dickson Nkafu
2014-01-01
This paper aims to investigate how stability varies with the approach used in estimating school performance in a large sample of English primary schools. The results show that (a) raw performance is considerably more stable than adjusted performance, which in turn is slightly more stable than growth model estimates; (b) schools' performance…
ERIC Educational Resources Information Center
Nie, Youyan; Lau, Shun
2009-01-01
This study examined how classroom management practices--care and behavioral control--were differentially associated with students' engagement, misbehavior, and satisfaction with school, using a large representative sample of 3196 Grade 9 students from 117 classes in Singapore. Results of hierarchical linear modeling showed differential relations.…
Liu, Xiaoming; Fu, Yun-Xin; Maxwell, Taylor J.; Boerwinkle, Eric
2010-01-01
It is known that sequencing error can bias estimation of evolutionary or population genetic parameters. This problem is more prominent in deep resequencing studies because of their large sample size n, and a higher probability of error at each nucleotide site. We propose a new method based on the composite likelihood of the observed SNP configurations to infer population mutation rate θ = 4Neμ, population exponential growth rate R, and error rate ɛ, simultaneously. Using simulation, we show the combined effects of the parameters, θ, n, ɛ, and R on the accuracy of parameter estimation. We compared our maximum composite likelihood estimator (MCLE) of θ with other θ estimators that take into account the error. The results show the MCLE performs well when the sample size is large or the error rate is high. Using parametric bootstrap, composite likelihood can also be used as a statistic for testing the model goodness-of-fit of the observed DNA sequences. The MCLE method is applied to sequence data on the ANGPTL4 gene in 1832 African American and 1045 European American individuals. PMID:19952140
Marsden, J. R.; Dawson, I. M. P.
1974-01-01
Histochemical enzymatic studies were performed on 30 freshly resected large bowel carcinomas, 30 samples of normal colonic epithelium, and six samples of the histologically normal epithelium (so-called transitional epithelium) immediately adjacent to a carcinoma. Five enzymes were studied: nicotine adenine dinucleotide tetrazolium reductase (NADH-TR), glucose-6-phosphate dehydrogenase, succinate dehydrogenase, monoamine oxidase, and acid phosphatase. Quantitative and qualitative differences in enzyme activity were observed between normal, transitional, and carcinomatous mucosa as follows: monoamine oxidase activity was moderate in normal mucosa, high in transitional mucosa, and low in carcinoma. Succinate dehydrogenase activity was high in transitional mucosa and low or moderate in normal and carcinomatous mucosa. Glucose-6-phosphate dehydrogenase activity showed a gradation from low in normal mucosa to high in carcinoma while acid phosphatase showed the reverse of this pattern. The tetrazolium reductase activity was low or moderate in normal and transitional mucosa and high in carcinoma. These differences in enzyme activity and their possible clinical and metabolic significance are discussed. ImagesFig 2Fig 3 PMID:4154840
Near real-time monitoring and mapping of specific conductivity levels across Lake Texoma, USA
Atkinson, S.F.; Mabe, J.A.
2006-01-01
A submersible sonde equipped with a specific conductivity probe, linked with a global positioning satellite receiver was developed, deployed on a small boat, and used to map spatial and temporal variations in specific conductivity in a large reservoir. 7,695 sample points were recorded during 8 sampling trips. Specific conductivity ranged from 442 uS/cm to 3,378 uS/cm over the nine-month study. The data showed five statistically different zones in the reservoir: 2 different riverine zones, 2 different riverine transition zones, and a lacustrine zone (the main lake zone). These data were imported to a geographic information system where they were spatially interpolated to generate 8 maps showing specific conductivity levels across the entire surface of the lake. The highly dynamic nature of water quality, due to the widely differing nature of the rivers that flow into the reservoir and the effect of large inflows of fresh water during winter storms is easily captured and visualized using this approach. ?? Springer Science+Business Media, Inc. 2006.
Mróz, Tomasz; Szufa, Katarzyna; Frontasyeva, Marina V; Tselmovich, Vladimir; Ostrovnaya, Tatiana; Kornaś, Andrzej; Olech, Maria A; Mietelski, Jerzy W; Brudecki, Kamil
2018-01-01
Seven lichens (Usnea antarctica and U. aurantiacoatra) and nine moss samples (Sanionia uncinata) collected in King George Island were analyzed using instrumental neutron activation analysis, and concentration of major and trace elements was calculated. For some elements, the concentrations observed in moss samples were higher than corresponding values reported from other sites in the Antarctica, but in the lichens, these were in the same range of concentrations. Scanning electron microscopy (SEM) and statistical analysis showed large influence of volcanic-origin particles. Also, the interplanetary cosmic particles (ICP) were observed in investigated samples, as mosses and lichens are good collectors of ICP and micrometeorites.
Predicting fecal indicator organism contamination in Oregon coastal streams.
Pettus, Paul; Foster, Eugene; Pan, Yangdong
2015-12-01
In this study, we used publicly available GIS layers and statistical tree-based modeling (CART and Random Forest) to predict pathogen indicator counts at a regional scale using 88 spatially explicit landscape predictors and 6657 samples from non-estuarine streams in the Oregon Coast Range. A total of 532 frequently sampled sites were parsed down to 93 pathogen sampling sites to control for spatial and temporal biases. This model's 56.5% explanation of variance, was comparable to other regional models, while still including a large number of variables. Analysis showed the most important predictors on bacteria counts to be: forest and natural riparian zones, cattle related activities, and urban land uses. This research confirmed linkages to anthropogenic activities, with the research prediction mapping showing increased bacteria counts in agricultural and urban land use areas and lower counts with more natural riparian conditions. Copyright © 2015 Elsevier Ltd. All rights reserved.
Herrmann, Alexander; Haake, Andrea; Ammerpohl, Ole; Martin-Guerrero, Idoia; Szafranski, Karol; Stemshorn, Kathryn; Nothnagel, Michael; Kotsopoulos, Steve K; Richter, Julia; Warner, Jason; Olson, Jeff; Link, Darren R; Schreiber, Stefan; Krawczak, Michael; Platzer, Matthias; Nürnberg, Peter; Siebert, Reiner; Hampe, Jochen
2011-01-01
Cytosine methylation provides an epigenetic level of cellular plasticity that is important for development, differentiation and cancerogenesis. We adopted microdroplet PCR to bisulfite treated target DNA in combination with second generation sequencing to simultaneously assess DNA sequence and methylation. We show measurement of methylation status in a wide range of target sequences (total 34 kb) with an average coverage of 95% (median 100%) and good correlation to the opposite strand (rho = 0.96) and to pyrosequencing (rho = 0.87). Data from lymphoma and colorectal cancer samples for SNRPN (imprinted gene), FGF6 (demethylated in the cancer samples) and HS3ST2 (methylated in the cancer samples) serve as a proof of principle showing the integration of SNP data and phased DNA-methylation information into "hepitypes" and thus the analysis of DNA methylation phylogeny in the somatic evolution of cancer.
A Genealogical Interpretation of Principal Components Analysis
McVean, Gil
2009-01-01
Principal components analysis, PCA, is a statistical method commonly used in population genetics to identify structure in the distribution of genetic variation across geographical location and ethnic background. However, while the method is often used to inform about historical demographic processes, little is known about the relationship between fundamental demographic parameters and the projection of samples onto the primary axes. Here I show that for SNP data the projection of samples onto the principal components can be obtained directly from considering the average coalescent times between pairs of haploid genomes. The result provides a framework for interpreting PCA projections in terms of underlying processes, including migration, geographical isolation, and admixture. I also demonstrate a link between PCA and Wright's fst and show that SNP ascertainment has a largely simple and predictable effect on the projection of samples. Using examples from human genetics, I discuss the application of these results to empirical data and the implications for inference. PMID:19834557
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gould, Andrew; Yee, Jennifer C., E-mail: gould@astronomy.ohio-state.edu, E-mail: jyee@astronomy.ohio-state.edu
While of order of a million asteroids have been discovered, the number in rigorously controlled samples that have precise orbits and rotation periods, as well as well-measured colors, is relatively small. In particular, less than a dozen main-belt asteroids with estimated diameters D < 3 km have excellent rotation periods. We show how existing and soon-to-be-acquired microlensing data can yield a large asteroid sample with precise orbits and rotation periods, which will include roughly 6% of all asteroids with maximum brightness I < 18.1 and lying within 10 Degree-Sign of the ecliptic. This sample will be dominated by small andmore » very small asteroids, down to D {approx} 1 km. We also show how asteroid astrometry could turn current narrow-angle OGLE proper motions of bulge stars into wide-angle proper motions. This would enable one to measure the proper-motion gradient across the Galactic bar.« less
Effect of substrate temperature in the synthesis of BN nanostructures
NASA Astrophysics Data System (ADS)
Sajjad, M.; Zhang, H. X.; Peng, X. Y.; Feng, P. X.
2011-06-01
Boron nitride (BN) nanostructures were grown on molybdenum discs at different substrate temperatures using the short-pulse laser plasma deposition technique. Large numbers of randomly oriented nanorods of fiber-like structures were obtained. The variation in the length and diameter of the nanorods as a function of the substrate temperature was systematically studied. The surface morphologies of the samples were studied using scanning electron microscopy. Energy dispersive x-ray spectroscopy confirmed that both the elements boron and nitrogen are dominant in the nanostructure. The x-ray diffraction (XRD) technique was used to analyse BN phases. The XRD peak that appeared at 26° showed the presence of hexagonal BN phase, whereas the peak at 44° was related to cubic BN content in the samples. Raman spectroscopic analysis showed vibrational modes of sp2- and sp3-type bonding in the sample. The Raman spectra agreed well with XRD results.
NASA Astrophysics Data System (ADS)
Heinemeier, Jan; Jungner, Högne; Lindroos, Alf; Ringbom, Åsa; von Konow, Thorborg; Rud, Niels
1997-03-01
A method for refining lime mortar samples for 14C dating has been developed. It includes mechanical and chemical separation of mortar carbonate with optical control of the purity of the samples. The method has been applied to a large series of AMS datings on lime mortar from three medieval churches on the Åland Islands, Finland. The datings show convincing internal consistency and confine the construction time of the churches to AD 1280-1380 with a most probable date just before AD 1300. We have also applied the method to the controversial Newport Tower, Rhode Island, USA. Our mortar datings confine the building to colonial time in the 17th century and thus refute claims of Viking origin of the tower. For the churches, a parallel series of datings of organic (charcoal) inclusions in the mortar show less reliable results than the mortar samples, which is ascribed to poor association with the construction time.
SRF niobium characterization using SIMS and FIB-TEM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stevie, F. A.
2015-12-04
Our understanding of superconducting radio frequency (SRF) accelerator cavities has been improved by elemental analysis at high depth resolution and by high magnification microscopy. This paper summarizes the technique development and the results obtained on poly-crystalline, large grain, and single crystal SRF niobium. Focused ion beam made possible sample preparation using transmission electron microscopy and the images obtained showed a very uniform oxide layer for all samples analyzed. Secondary ion mass spectrometry indicated the presence of a high concentration of hydrogen and the hydrogen content exhibited a relationship with improvement in performance. Depth profiles of carbon, nitrogen, and oxygen didmore » not show major differences with heat treatment. Niobium oxide less than 10 nm thick was shown to be an effective hydrogen barrier. Niobium with titanium contamination showed unexpected performance improvement.« less
Gloria, E M; Fonseca, H; Calori-Domingues, M A; Souza, I M
1998-01-01
The results of the black light test for aflatoxin-contaminated maize carried out in a large food factory in the State of São Paulo was evaluated against bi-directional thin layer chromatography (TLC) analysis for 286 samples of maize. All 286 samples were accepted by the black light test (< 7 fluorescent points), however, the results from TLC analysis showed that 96 samples were contaminated and 14 showed aflatoxin B1 contamination levels higher than 20 micrograms/kg. There were 14 false negative results and no false positives and out of the 14 samples, six did not show visible fluorescent points. If the rejection criterion of one or more fluorescent points were applied, the six samples would be accepted by the black light test. But, in this case, 95 samples would be rejected and 87 results would be false positives because they did not have contamination levels over 20 micrograms/kg which is the acceptance limit of the black light test. The results indicate that the black light test, as utilized by this factory, was not able to indicate lots with possible contamination and the black light test, as recommended in the literature, would produce a high number of false positives. It is necessary to make more studies on the use of black light as a screening test for possible aflatoxin B1-contaminated maize.
Radiometric 81Kr dating identifies 120,000-year-old ice at Taylor Glacier, Antarctica
Buizert, Christo; Baggenstos, Daniel; Jiang, Wei; Purtschert, Roland; Petrenko, Vasilii V.; Lu, Zheng-Tian; Müller, Peter; Kuhl, Tanner; Lee, James; Severinghaus, Jeffrey P.; Brook, Edward J.
2014-01-01
We present successful 81Kr-Kr radiometric dating of ancient polar ice. Krypton was extracted from the air bubbles in four ∼350-kg polar ice samples from Taylor Glacier in the McMurdo Dry Valleys, Antarctica, and dated using Atom Trap Trace Analysis (ATTA). The 81Kr radiometric ages agree with independent age estimates obtained from stratigraphic dating techniques with a mean absolute age offset of 6 ± 2.5 ka. Our experimental methods and sampling strategy are validated by (i) 85Kr and 39Ar analyses that show the samples to be free of modern air contamination and (ii) air content measurements that show the ice did not experience gas loss. We estimate the error in the 81Kr ages due to past geomagnetic variability to be below 3 ka. We show that ice from the previous interglacial period (Marine Isotope Stage 5e, 130–115 ka before present) can be found in abundance near the surface of Taylor Glacier. Our study paves the way for reliable radiometric dating of ancient ice in blue ice areas and margin sites where large samples are available, greatly enhancing their scientific value as archives of old ice and meteorites. At present, ATTA 81Kr analysis requires a 40–80-kg ice sample; as sample requirements continue to decrease, 81Kr dating of ice cores is a future possibility. PMID:24753606
Radiometric 81Kr dating identifies 120,000-year-old ice at Taylor Glacier, Antarctica.
Buizert, Christo; Baggenstos, Daniel; Jiang, Wei; Purtschert, Roland; Petrenko, Vasilii V; Lu, Zheng-Tian; Müller, Peter; Kuhl, Tanner; Lee, James; Severinghaus, Jeffrey P; Brook, Edward J
2014-05-13
We present successful (81)Kr-Kr radiometric dating of ancient polar ice. Krypton was extracted from the air bubbles in four ∼350-kg polar ice samples from Taylor Glacier in the McMurdo Dry Valleys, Antarctica, and dated using Atom Trap Trace Analysis (ATTA). The (81)Kr radiometric ages agree with independent age estimates obtained from stratigraphic dating techniques with a mean absolute age offset of 6 ± 2.5 ka. Our experimental methods and sampling strategy are validated by (i) (85)Kr and (39)Ar analyses that show the samples to be free of modern air contamination and (ii) air content measurements that show the ice did not experience gas loss. We estimate the error in the (81)Kr ages due to past geomagnetic variability to be below 3 ka. We show that ice from the previous interglacial period (Marine Isotope Stage 5e, 130-115 ka before present) can be found in abundance near the surface of Taylor Glacier. Our study paves the way for reliable radiometric dating of ancient ice in blue ice areas and margin sites where large samples are available, greatly enhancing their scientific value as archives of old ice and meteorites. At present, ATTA (81)Kr analysis requires a 40-80-kg ice sample; as sample requirements continue to decrease, (81)Kr dating of ice cores is a future possibility.
Klomp, Johanna M; Verbruggen, Banut-Sabine M; Korporaal, Hans; Boon, Mathilde E; de Jong, Pauline; Kramer, Gerco C; van Haaften, Maarten; Heintz, A Peter M
2008-05-01
Our objective was to determine the morphotype of the adherent bacteria in liquid-based cytology (LBC) in smears with healthy and disturbed vaginal flora. And to use PCR technology on the same fixed cell sample to establish DNA patterns of the 16S RNA genes of the bacteria in the sample. Thirty samples were randomly selected from a large group of cervical cell samples suspended in a commercial coagulant fixative "(BoonFix)." PCR was used to amplify DNA of five bacterial species: Lactobacillus acidophilus, Lactobacillus crispatus, Lactobacillus jensenii, Gardnerella vaginalis, and Mycoplasma hominis. The LBC slides were then analyzed by light microscopy to estimate bacterial adhesion. DNA of lactobacilli was detected in all cell samples. Seventeen smears showed colonization with Gardnerella vaginalis (range 2.6 x 10(2)-3.0 x 10(5) bacteria/mul BoonFix sample). Two cases were identified as dysbacteriotic with high DNA values for Gardnerella vaginalis and low values for Lactobacillus crispatus. The sample with the highest concentration for Gardnerella vaginalis showed an unequivocal Gardnerella infection. This study indicates that the adherence pattern of a disturbed flora in liquid-based cervical samples can be identified unequivocally, and that these samples are suitable for quantitative PCR analysis. This cultivation independent method reveals a strong inverse relationship between Gardnerella vaginalis and Lactobacillus crispatus in dysbacteriosis and unequivocal Gardnerella infection.
[Generalization of money-handling though training in equivalence relationships].
Vives-Montero, Carmen; Valero-Aguayo, Luis; Ascanio, Lourdes
2011-02-01
This research used a matching-to-sample procedure and equivalence learning process with language and verbal tasks. In the study, an application of the equivalence relationship of money was used with several kinds of euro coins presented. The sample consisted of 16 children (8 in the experimental group and 8 in the control group) aged 5 years. The prerequisite behaviors, the identification of coins and the practical use of different euro coins, were assessed in the pre and post phases for both groups. The children in the experimental group performed an equivalence task using the matching-to-sample procedure. This consisted of a stimulus sample and four matching stimuli, using a series of euro coins with equivalent value in each set. The children in the control group did not undergo this training process. The results showed a large variability in the children's data of the equivalence tests. The experimental group showed the greatest pre and post changes in the statistically significant data. They also showed a greater generalization in the identification of money and in the use of euro coins than the control group. The implications for educational training and the characteristics of the procedure used here for coin equivalence are discussed.
Investigation of spectral analysis techniques for randomly sampled velocimetry data
NASA Technical Reports Server (NTRS)
Sree, Dave
1993-01-01
It is well known that velocimetry (LV) generates individual realization velocity data that are randomly or unevenly sampled in time. Spectral analysis of such data to obtain the turbulence spectra, and hence turbulence scales information, requires special techniques. The 'slotting' technique of Mayo et al, also described by Roberts and Ajmani, and the 'Direct Transform' method of Gaster and Roberts are well known in the LV community. The slotting technique is faster than the direct transform method in computation. There are practical limitations, however, as to how a high frequency and accurate estimate can be made for a given mean sampling rate. These high frequency estimates are important in obtaining the microscale information of turbulence structure. It was found from previous studies that reliable spectral estimates can be made up to about the mean sampling frequency (mean data rate) or less. If the data were evenly samples, the frequency range would be half the sampling frequency (i.e. up to Nyquist frequency); otherwise, aliasing problem would occur. The mean data rate and the sample size (total number of points) basically limit the frequency range. Also, there are large variabilities or errors associated with the high frequency estimates from randomly sampled signals. Roberts and Ajmani proposed certain pre-filtering techniques to reduce these variabilities, but at the cost of low frequency estimates. The prefiltering acts as a high-pass filter. Further, Shapiro and Silverman showed theoretically that, for Poisson sampled signals, it is possible to obtain alias-free spectral estimates far beyond the mean sampling frequency. But the question is, how far? During his tenure under 1993 NASA-ASEE Summer Faculty Fellowship Program, the author investigated from his studies on the spectral analysis techniques for randomly sampled signals that the spectral estimates can be enhanced or improved up to about 4-5 times the mean sampling frequency by using a suitable prefiltering technique. But, this increased bandwidth comes at the cost of the lower frequency estimates. The studies further showed that large data sets of the order of 100,000 points, or more, high data rates, and Poisson sampling are very crucial for obtaining reliable spectral estimates from randomly sampled data, such as LV data. Some of the results of the current study are presented.
Martinez-Maza, Cayetana; Alberdi, Maria Teresa; Nieto-Diaz, Manuel; Prado, José Luis
2014-01-01
Histological analyses of fossil bones have provided clues on the growth patterns and life history traits of several extinct vertebrates that would be unavailable for classical morphological studies. We analyzed the bone histology of Hipparion to infer features of its life history traits and growth pattern. Microscope analysis of thin sections of a large sample of humeri, femora, tibiae and metapodials of Hipparion concudense from the upper Miocene site of Los Valles de Fuentidueña (Segovia, Spain) has shown that the number of growth marks is similar among the different limb bones, suggesting that equivalent skeletochronological inferences for this Hipparion population might be achieved by means of any of the elements studied. Considering their abundance, we conducted a skeletechronological study based on the large sample of third metapodials from Los Valles de Fuentidueña together with another large sample from the Upper Miocene locality of Concud (Teruel, Spain). The data obtained enabled us to distinguish four age groups in both samples and to determine that Hipparion concudense tended to reach skeletal maturity during its third year of life. Integration of bone microstructure and skeletochronological data allowed us to identify ontogenetic changes in bone structure and growth rate and to distinguish three histologic ontogenetic stages corresponding to immature, subadult and adult individuals. Data on secondary osteon density revealed an increase in bone remodeling throughout the ontogenetic stages and a lesser degree thereof in the Concud population, which indicates different biomechanical stresses in the two populations, likely due to environmental differences. Several individuals showed atypical growth patterns in the Concud sample, which may also reflect environmental differences between the two localities. Finally, classification of the specimens’ age within groups enabled us to characterize the age structure of both samples, which is typical of attritional assemblages. PMID:25098950
Characterization of air contaminants formed by the interaction of lava and sea water.
Kullman, G J; Jones, W G; Cornwell, R J; Parker, J E
1994-01-01
We made environmental measurements to characterize contaminants generated when basaltic lava from Hawaii's Kilauea volcano enters sea water. This interaction of lava with sea water produces large clouds of mist (LAZE). Island winds occasionally directed the LAZE toward the adjacent village of Kalapana and the Hawaii Volcanos National Park, creating health concerns. Environmental samples were taken to measure airborne concentrations of respirable dust, crystalline silica and other mineral compounds, fibers, trace metals, inorganic acids, and organic and inorganic gases. The LAZE contained quantifiable concentrations of hydrochloric acid (HCl) and hydrofluoric acid (HF); HCl was predominant. HCl and HF concentrations were highest in dense plumes of LAZE near the sea. The HCl concentration at this sampling location averaged 7.1 ppm; this exceeds the current occupational exposure ceiling of 5 ppm. HF was detected in nearly half the samples, but all concentrations were <1 ppm Sulfur dioxide was detected in one of four short-term indicator tube samples at approximately 1.5 ppm. Airborne particulates were composed largely of chloride salts (predominantly sodium chloride). Crystalline silica concentrations were below detectable limits, less than approximately 0.03 mg/m3 of air. Settled dust samples showed a predominance of glass flakes and glass fibers. Airborne fibers were detected at quantifiable levels in 1 of 11 samples. These fibers were composed largely of hydrated calcium sulfate. These findings suggest that individuals should avoid concentrated plumes of LAZE near its origin to prevent over exposure to inorganic acids, specifically HCl. Images Figure 1. Figure 2. Figure 3. Figure 4. A Figure 4. B Figure 4. C Figure 4. D PMID:8593853
High-Temperature Photoluminescence of CsPbX 3 (X = Cl, Br, I) Nanocrystals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Diroll, Benjamin T.; Nedelcu, Georgian; Kovalenko, Maksym
2017-03-30
Recent synthetic developments have generated intense interest in the use of cesium lead halide perovskite nanocrystals for light-emitting applications. This work presents the photoluminescence (PL) of cesium lead halide perovskite nanocrystals with tunable halide composition recorded as function of temperature from 80 to 550 K. CsPbBr 3 nanocrystals show the highest resilience to temperature while chloride-containing samples show relatively poorer preservation of photoluminescence at elevated temperatures. Thermal cycling experiments show that PL loss of CsPbBr 3 is largely reversible at temperatures below 450 K, but shows irreversible degradation at higher temperatures. Time-resolved measurements of CsPbX 3 samples show an increasemore » in the PL lifetime with temperature elevation, consistent with exciton fission to form free carriers, followed by a decrease in the apparent PL lifetime due to trapping. In conclusion, PL persistence measurements and time-resolved spectroscopies implicate thermally assisted trapping, most likely to halogen vacancy traps, as the mechanism of reversible PL loss.« less
Multispectral photoacoustic microscopy of lipids using a pulsed supercontinuum laser.
Buma, Takashi; Conley, Nicole C; Choi, Sang Won
2018-01-01
We demonstrate optical resolution photoacoustic microscopy (OR-PAM) of lipid-rich tissue between 1050-1714 nm using a pulsed supercontinuum laser based on a large-mode-area photonic crystal fiber. OR-PAM experiments of lipid-rich samples show the expected optical absorption peaks near 1210 and 1720 nm. These results show that pulsed supercontinuum lasers are promising for OR-PAM applications such as label-free histology of lipid-rich tissue and imaging small animal models of disease.
Tagalidou, Nektaria; Loderer, Viola; Distlberger, Eva; Laireiter, Anton-Rupert
2018-01-01
The present study investigates the feasibility of a humor training for a subclinical sample suffering from increased stress, depressiveness, or anxiety. Based on diagnostic interviews, 35 people were invited to participate in a 7-week humor training. Evaluation measures were filled in prior training, after training, and at a 1-month follow-up including humor related outcomes (coping humor and cheerfulness) and mental health-related outcomes (perceived stress, depressiveness, anxiety, and well-being). Outcomes were analyzed using repeated-measures ANOVAs. Within-group comparisons of intention-to-treat analysis showed main effects of time with large effect sizes on all outcomes. Post hoc tests showed medium to large effect sizes on all outcomes from pre to post and results remained stable until follow-up. Satisfaction with the training was high, attrition rate low (17.1%), and participants would highly recommend the training. Summarizing the results, the pilot study showed promising effects for people suffering from subclinical symptoms. All outcomes were positively influenced and showed stability over time. Humor trainings could be integrated more into mental health care as an innovative program to reduce stress whilst promoting also positive emotions. However, as this study was a single-arm pilot study, further research (including also randomized controlled trials) is still needed to evaluate the effects more profoundly. PMID:29740368
Assaraf, Roland
2014-12-01
We show that the recently proposed correlated sampling without reweighting procedure extends the locality (asymptotic independence of the system size) of a physical property to the statistical fluctuations of its estimator. This makes the approach potentially vastly more efficient for computing space-localized properties in large systems compared with standard correlated methods. A proof is given for a large collection of noninteracting fragments. Calculations on hydrogen chains suggest that this behavior holds not only for systems displaying short-range correlations, but also for systems with long-range correlations.
Large-format InGaAs focal plane arrays for SWIR imaging
NASA Astrophysics Data System (ADS)
Hood, Andrew D.; MacDougal, Michael H.; Manzo, Juan; Follman, David; Geske, Jonathan C.
2012-06-01
FLIR Electro Optical Components will present our latest developments in large InGaAs focal plane arrays, which are used for low light level imaging in the short wavelength infrared (SWIR) regime. FLIR will present imaging from their latest small pitch (15 μm) focal plane arrays in VGA and High Definition (HD) formats. FLIR will present characterization of the FPA including dark current measurements as well as the use of correlated double sampling to reduce read noise. FLIR will show imagery as well as FPA-level characterization data.
Large energy absorption in Ni-Mn-Ga/polymer composites
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feuchtwanger, Jorge; Richard, Marc L.; Tang, Yun J.
2005-05-15
Ferromagnetic shape memory alloys can respond to a magnetic field or applied stress by the motion of twin boundaries and hence they show large hysteresis or energy loss. Ni-Mn-Ga particles made by spark erosion have been dispersed and oriented in a polymer matrix to form pseudo 3:1 composites which are studied under applied stress. Loss ratios have been determined from the stress-strain data. The loss ratios of the composites range from 63% to 67% compared to only about 17% for the pure, unfilled polymer samples.
Molecular dynamics based enhanced sampling of collective variables with very large time steps.
Chen, Pei-Yang; Tuckerman, Mark E
2018-01-14
Enhanced sampling techniques that target a set of collective variables and that use molecular dynamics as the driving engine have seen widespread application in the computational molecular sciences as a means to explore the free-energy landscapes of complex systems. The use of molecular dynamics as the fundamental driver of the sampling requires the introduction of a time step whose magnitude is limited by the fastest motions in a system. While standard multiple time-stepping methods allow larger time steps to be employed for the slower and computationally more expensive forces, the maximum achievable increase in time step is limited by resonance phenomena, which inextricably couple fast and slow motions. Recently, we introduced deterministic and stochastic resonance-free multiple time step algorithms for molecular dynamics that solve this resonance problem and allow ten- to twenty-fold gains in the large time step compared to standard multiple time step algorithms [P. Minary et al., Phys. Rev. Lett. 93, 150201 (2004); B. Leimkuhler et al., Mol. Phys. 111, 3579-3594 (2013)]. These methods are based on the imposition of isokinetic constraints that couple the physical system to Nosé-Hoover chains or Nosé-Hoover Langevin schemes. In this paper, we show how to adapt these methods for collective variable-based enhanced sampling techniques, specifically adiabatic free-energy dynamics/temperature-accelerated molecular dynamics, unified free-energy dynamics, and by extension, metadynamics, thus allowing simulations employing these methods to employ similarly very large time steps. The combination of resonance-free multiple time step integrators with free-energy-based enhanced sampling significantly improves the efficiency of conformational exploration.
Molecular dynamics based enhanced sampling of collective variables with very large time steps
NASA Astrophysics Data System (ADS)
Chen, Pei-Yang; Tuckerman, Mark E.
2018-01-01
Enhanced sampling techniques that target a set of collective variables and that use molecular dynamics as the driving engine have seen widespread application in the computational molecular sciences as a means to explore the free-energy landscapes of complex systems. The use of molecular dynamics as the fundamental driver of the sampling requires the introduction of a time step whose magnitude is limited by the fastest motions in a system. While standard multiple time-stepping methods allow larger time steps to be employed for the slower and computationally more expensive forces, the maximum achievable increase in time step is limited by resonance phenomena, which inextricably couple fast and slow motions. Recently, we introduced deterministic and stochastic resonance-free multiple time step algorithms for molecular dynamics that solve this resonance problem and allow ten- to twenty-fold gains in the large time step compared to standard multiple time step algorithms [P. Minary et al., Phys. Rev. Lett. 93, 150201 (2004); B. Leimkuhler et al., Mol. Phys. 111, 3579-3594 (2013)]. These methods are based on the imposition of isokinetic constraints that couple the physical system to Nosé-Hoover chains or Nosé-Hoover Langevin schemes. In this paper, we show how to adapt these methods for collective variable-based enhanced sampling techniques, specifically adiabatic free-energy dynamics/temperature-accelerated molecular dynamics, unified free-energy dynamics, and by extension, metadynamics, thus allowing simulations employing these methods to employ similarly very large time steps. The combination of resonance-free multiple time step integrators with free-energy-based enhanced sampling significantly improves the efficiency of conformational exploration.
Human Finger-Prick Induced Pluripotent Stem Cells Facilitate the Development of Stem Cell Banking
Tan, Hong-Kee; Toh, Cheng-Xu Delon; Ma, Dongrui; Yang, Binxia; Liu, Tong Ming; Lu, Jun; Wong, Chee-Wai; Tan, Tze-Kai; Li, Hu; Syn, Christopher; Tan, Eng-Lee; Lim, Bing; Lim, Yoon-Pin; Cook, Stuart A.
2014-01-01
Induced pluripotent stem cells (iPSCs) derived from somatic cells of patients can be a good model for studying human diseases and for future therapeutic regenerative medicine. Current initiatives to establish human iPSC (hiPSC) banking face challenges in recruiting large numbers of donors with diverse diseased, genetic, and phenotypic representations. In this study, we describe the efficient derivation of transgene-free hiPSCs from human finger-prick blood. Finger-prick sample collection can be performed on a “do-it-yourself” basis by donors and sent to the hiPSC facility for reprogramming. We show that single-drop volumes of finger-prick samples are sufficient for performing cellular reprogramming, DNA sequencing, and blood serotyping in parallel. Our novel strategy has the potential to facilitate the development of large-scale hiPSC banking worldwide. PMID:24646489
Results from the REFLEX Cluster Survey
NASA Astrophysics Data System (ADS)
Bohringer, H.; Guzzo, L.; Collins, C. A.; Neumann, D. M.; Schindler, S.; Schuecker, P.; Cruddace, R.; Chincarini, G.; de Grandi, S.; Edge, A. C.; MacGillivray, H. T.; Shaver, P.; Vettolani, G.; Voges, W.
Based on the ROSAT All-Sky Survey we have conducted a large redshift survey as an ESO key programme to identify and secure redshifts for the X-ray brightest clusters found in the southern hemisphere. We present first results for a highly controlled sample for a flux limit of 3cdot 10^{-12} erg s^{-1} cm^{-2} (0.1 - 2.4 keV) comprising 475 clusters (87% with redshifts). The logN-logS function of the sample shows an almost perfect Euclidian slope and a preliminary X-ray luminosity function is presented.
Scanning tunneling spectroscopy under large current flow through the sample.
Maldonado, A; Guillamón, I; Suderow, H; Vieira, S
2011-07-01
We describe a method to make scanning tunneling microscopy/spectroscopy imaging at very low temperatures while driving a constant electric current up to some tens of mA through the sample. It gives a new local probe, which we term current driven scanning tunneling microscopy/spectroscopy. We show spectroscopic and topographic measurements under the application of a current in superconducting Al and NbSe(2) at 100 mK. Perspective of applications of this local imaging method includes local vortex motion experiments, and Doppler shift local density of states studies.
Lam, S S
2001-02-01
In 1990 Podsakoff, MacKenzie, Moorman, and Fetter developed a scale to measure the five dimensions of organizational citizenship behavior. Test-retest data over 15 weeks are reported for this scale for a sample of 82 female and 32 male Chinese tellers (ages 18 to 54 years) from a large international bank in Hong Kong. Stability was .83, and there was no significant change between Times 1 and 2. Analysis indicated the five-factor structure and showed it to be a reliable measure when used with a nonwestern sample.
Nowcasting and Forecasting the Monthly Food Stamps Data in the US Using Online Search Data
Fantazzini, Dean
2014-01-01
We propose the use of Google online search data for nowcasting and forecasting the number of food stamps recipients. We perform a large out-of-sample forecasting exercise with almost 3000 competing models with forecast horizons up to 2 years ahead, and we show that models including Google search data statistically outperform the competing models at all considered horizons. These results hold also with several robustness checks, considering alternative keywords, a falsification test, different out-of-samples, directional accuracy and forecasts at the state-level. PMID:25369315
Inorganic material profiling using Arn+ cluster: Can we achieve high quality profiles?
NASA Astrophysics Data System (ADS)
Conard, T.; Fleischmann, C.; Havelund, R.; Franquet, A.; Poleunis, C.; Delcorte, A.; Vandervorst, W.
2018-06-01
Retrieving molecular information by sputtering of organic systems has been concretized in the last years due to the introduction of sputtering by large gas clusters which drastically eliminated the compound degradation during the analysis and has led to strong improvements in depth resolution. Rapidly however, a limitation was observed for heterogeneous systems where inorganic layers or structures needed to be profiled concurrently. As opposed to organic material, erosion of the inorganic layer appears very difficult and prone to many artefacts. To shed some light on these problems we investigated a simple system consisting of aluminum delta layer(s) buried in a silicon matrix in order to define the most favorable beam conditions for practical analysis. We show that counterintuitive to the small energy/atom used and unlike monoatomic ion sputtering, the information depth obtained with large cluster ions is typically very large (∼10 nm) and that this can be caused both by a large roughness development at early stages of the sputtering process and by a large mixing zone. As a consequence, a large deformation of the Al intensity profile is observed. Using sample rotation during profiling significantly improves the depth resolution while sample temperature has no significant effect. The determining parameter for high depth resolution still remains the total energy of the cluster instead of the energy per atom in the cluster.
NASA Astrophysics Data System (ADS)
Yilmaz, T. I.; Hess, K. U.; Vasseur, J.; Wadsworth, F. B.; Gilg, H. A.; Nakada, S.; Dingwell, D. B.
2017-12-01
When hot magma intrudes the crust, the surrounding rocks expand. Similarly, the cooling magma contracts. The expansion and contraction of these multiphase materials is not simple and often requires empirical constraint. Therefore, we constrained the thermal expansivity of Unzen dome and conduit samples using a NETZSCH® DIL 402C. Following experiments, those samples were scanned using a Phoenix v|tome|x m to observe the cracks that may have developed during the heating and cooling. The dome samples do not show petrological or chemical signs of alteration. However, the alteration of the conduit dykes is represented by the occurrence of the main secondary phases such as chlorite, sulfides, carbonates, R1 (Reichweite parameter) illite-smectite, and kaolinite. These alteration products indicate an (I) early weak to moderate argillic magmatic alteration, and a (II) second stage weak to moderate propylitic hydrothermal alteration. The linear thermal expansion coefficient aL of the dome material is K-1 between 150° and 800°C and shows a sharp peak of up to K-1 around the alpha-beta-quartz-transition ( 573°C). In contrast, aL of the hydrothermally altered conduit samples starts to increase around 180° and reaches K-1 at 400°C. We interpret this effect as being due to the water content of the kaolinite and the R1 illite-smectite, which induces larger expansions per degree temperature change. Furthermore, the altered conduit samples show a more pronounced increases of aL between 500 and 650°C of up to peaks at K-1, which is generated by the breakdown of chlorite, iron-rich dolomite solid solutions, calcite, and pyrite. We use a 1D conductive model of heat transfer to explore how the country rock around the Unzen conduit zone would heat up after intrusion. In turn, we convert these temperature profiles to thermal stress profiles, assuming the edifice is largely undeformable. We show that these high linear thermal expansion coefficients of the hydrothermally altered conduit rocks may large induce thermal stresses in the surrounding host rock and therefore promotes cracking, which may in turn lead to edifice instability.
Gray, Nicola; Lewis, Matthew R; Plumb, Robert S; Wilson, Ian D; Nicholson, Jeremy K
2015-06-05
A new generation of metabolic phenotyping centers are being created to meet the increasing demands of personalized healthcare, and this has resulted in a major requirement for economical, high-throughput metabonomic analysis by liquid chromatography-mass spectrometry (LC-MS). Meeting these new demands represents an emerging bioanalytical problem that must be solved if metabolic phenotyping is to be successfully applied to large clinical and epidemiological sample sets. Ultraperformance (UP)LC-MS-based metabolic phenotyping, based on 2.1 mm i.d. LC columns, enables comprehensive metabolic phenotyping but, when employed for the analysis of thousands of samples, results in high solvent usage. The use of UPLC-MS employing 1 mm i.d. columns for metabolic phenotyping rather than the conventional 2.1 mm i.d. methodology shows that the resulting optimized microbore method provided equivalent or superior performance in terms of peak capacity, sensitivity, and robustness. On average, we also observed, when using the microbore scale separation, an increase in response of 2-3 fold over that obtained with the standard 2.1 mm scale method. When applied to the analysis of human urine, the 1 mm scale method showed no decline in performance over the course of 1000 analyses, illustrating that microbore UPLC-MS represents a viable alternative to conventional 2.1 mm i.d. formats for routine large-scale metabolic profiling studies while also resulting in a 75% reduction in solvent usage. The modest increase in sensitivity provided by this methodology also offers the potential to either reduce sample consumption or increase the number of metabolite features detected with confidence due to the increased signal-to-noise ratios obtained. Implementation of this miniaturized UPLC-MS method of metabolic phenotyping results in clear analytical, economic, and environmental benefits for large-scale metabolic profiling studies with similar or improved analytical performance compared to conventional UPLC-MS.
Golemba, Marcelo D.; Culasso, Andrés C. A.; Villamil, Federico G.; Bare, Patricia; Gadano, Adrián; Ridruejo, Ezequiel; Martinez, Alfredo; Di Lello, Federico A.; Campos, Rodolfo H.
2013-01-01
Background The estimated prevalence of HCV infection in Argentina is around 2%. However, higher rates of infection have been described in population studies of small urban and rural communities. The aim of this work was to compare the origin and diversification of HCV-1b in samples from two different epidemiological scenarios: Buenos Aires, a large cosmopolitan city, and O'Brien, a small rural town with a high prevalence of HCV infection. Patients and Methods The E1/E2 and NS5B regions of the viral genome from 83 patients infected with HCV-1b were sequenced. Phylogenetic analysis and Bayesian Coalescent methods were used to study the origin and diversification of HCV-1b in both patient populations. Results Samples from Buenos Aires showed a polyphyletic behavior with a tMRCA around 1887–1900 and a time of spread of infection approximately 60 years ago. In contrast, samples from ÓBrien showed a monophyletic behavior with a tMRCA around 1950–1960 and a time of spread of infection more recent than in Buenos Aires, around 20–30 years ago. Conclusion Phylogenetic and coalescence analysis revealed a different behavior in the epidemiological histories of Buenos Aires and ÓBrien. HCV infection in Buenos Aires shows a polyphyletic behavior and an exponential growth in two phases, whereas that in O'Brien shows a monophyletic cluster and an exponential growth in one single step with a more recent tMRCA. The polyphyletic origin and the probability of encountering susceptible individuals in a large cosmopolitan city like Buenos Aires are in agreement with a longer period of expansion. In contrast, in less populated areas such as O'Brien, the chances of HCV transmission are strongly restricted. Furthermore, the monophyletic character and the most recent time of emergence suggest that different HCV-1b ancestors (variants) that were in expansion in Buenos Aires had the opportunity to colonize and expand in O’Brien. PMID:24386322
Bellin, Alberto; Tonina, Daniele
2007-10-30
Available models of solute transport in heterogeneous formations lack in providing complete characterization of the predicted concentration. This is a serious drawback especially in risk analysis where confidence intervals and probability of exceeding threshold values are required. Our contribution to fill this gap of knowledge is a probability distribution model for the local concentration of conservative tracers migrating in heterogeneous aquifers. Our model accounts for dilution, mechanical mixing within the sampling volume and spreading due to formation heterogeneity. It is developed by modeling local concentration dynamics with an Ito Stochastic Differential Equation (SDE) that under the hypothesis of statistical stationarity leads to the Beta probability distribution function (pdf) for the solute concentration. This model shows large flexibility in capturing the smoothing effect of the sampling volume and the associated reduction of the probability of exceeding large concentrations. Furthermore, it is fully characterized by the first two moments of the solute concentration, and these are the same pieces of information required for standard geostatistical techniques employing Normal or Log-Normal distributions. Additionally, we show that in the absence of pore-scale dispersion and for point concentrations the pdf model converges to the binary distribution of [Dagan, G., 1982. Stochastic modeling of groundwater flow by unconditional and conditional probabilities, 2, The solute transport. Water Resour. Res. 18 (4), 835-848.], while it approaches the Normal distribution for sampling volumes much larger than the characteristic scale of the aquifer heterogeneity. Furthermore, we demonstrate that the same model with the spatial moments replacing the statistical moments can be applied to estimate the proportion of the plume volume where solute concentrations are above or below critical thresholds. Application of this model to point and vertically averaged bromide concentrations from the first Cape Cod tracer test and to a set of numerical simulations confirms the above findings and for the first time it shows the superiority of the Beta model to both Normal and Log-Normal models in interpreting field data. Furthermore, we show that assuming a-priori that local concentrations are normally or log-normally distributed may result in a severe underestimate of the probability of exceeding large concentrations.
NASA Astrophysics Data System (ADS)
Karch, J.; Dudák, J.; Žemlička, J.; Vavřík, D.; Kumpová, I.; Kvaček, J.; Heřmanová, Z.; Šoltés, J.; Viererbl, L.; Morgano, M.; Kaestner, A.; Trtík, P.
2017-12-01
Computed tomography provides 3D information of inner structures of investigated objects. The obtained information is, however, strongly dependent on the used radiation type. It is known that as X-rays interact with electron cloud and neutrons with atomic nucleus, the obtained data often provide different contrast of sample structures. In this work we present a set of comparative radiographic and CT measurements of rare fossil plant samples using X-rays and thermal neutrons. The X-ray measurements were performed using large area photon counting detectors Timepix at IEAP CTU in Prague and Perkin Elmer flat-panel detector at Center of Excellence Telč. The neutron CT measurement was carried out at Paul Scherrer Institute using BOA beam-line. Furthermore, neutron radiography of fossil samples, provided by National Museum, were performed using a large-area Timepix detector with a neutron-sensitive converting 6LiF layer at Research Centre Rez, Czech Republic. The obtained results show different capabilities of both imaging approaches. While X-ray micro-CT provides very high resolution and enables visualization of fine cracks or small cavities in the samples neutron imaging provides high contrast of morphological structures of fossil plant samples, where X-ray imaging provides insufficient contrast.
Brouwer, Lieke; van der Sanden, Sabine M G; Calis, Job C J; Bruning, Andrea H L; Wang, Steven; Wildenbeest, Joanne G; Rebers, Sjoerd P H; Phiri, Kamija S; Westerhuis, Brenda M; van Hensbroek, Michaël Boele; Pajkrt, Dasja; Wolthers, Katja C
2018-05-28
Enteroviruses (EVs) are among the most commonly detected viruses infecting humans worldwide. Although the prevalence of EVs is widely studied, the status of EV prevalence in sub-Saharan Africa remains largely unknown. The objective of our present study was therefore to increase our knowledge on EV circulation in sub-Saharan Africa. We obtained 749 fecal samples from a cross-sectional study conducted on Malawian children aged 6 to 60 months. We tested the samples for the presence of EVs using real time PCR, and typed the positive samples based on partial viral protein 1 (VP1) sequences. A large proportion of the samples was EV positive (89.9%). 12.9% of the typed samples belonged to EV species A (EV-A), 48.6% to species B (EV-B) and 38.5% to species C (EV-C). More than half of the EV-C strains (53%) belonged to subgroup C containing, among others, Poliovirus (PV) 1-3. The serotype most frequently isolated in our study was CVA-13, followed by EV-C99. The strains of CVA-13 showed a vast genetic diversity, possibly representing a new cluster, 'F'. The majority of the EV-C99 strains grouped together as cluster B. In conclusion, this study showed a vast circulation of EVs among Malawian children, with an EV prevalence of 89.9%. Identification of prevalences for species EV-C comparable to our study (38.5%) have only previously been reported in sub-Saharan Africa, and EV-C is rarely found outside of this region. The data found in this study are an important contribution to our current knowledge of EV epidemiology within sub-Saharan Africa.
Zhang, Yanhua; Regmi, Rajesh; Liu, Yi; Lawes, Gavin; Brock, Stephanie L
2014-07-22
Small changes in the synthesis of MnAs nanoparticles lead to materials with distinct behavior. Samples prepared by slow heating to 523 K (type-A) exhibit the characteristic magnetostructural transition from the ferromagnetic hexagonal (α) to the paramagnetic orthorhombic (β) phase of bulk MnAs at Tp = 312 K, whereas those prepared by rapid nucleation at 603 K (type-B) adopt the β structure at room temperature and exhibit anomalous magnetic properties. The behavior of type-B nanoparticles is due to P-incorporation (up to 3%), attributed to reaction of the solvent (trioctylphosphine oxide). P-incorporation results in a decrease in the unit cell volume (∼1%) and shifts Tp below room temperature. Temperature-dependent X-ray diffraction reveals a large region of phase-coexistence, up to 90 K, which may reflect small differences in Tp from particle-to-particle within the nearly monodisperse sample. The large coexistence range coupled to the thermal hysteresis results in process-dependent phase mixtures. As-prepared type-B samples exhibiting the β structure at room temperature convert to a mixture of α and β after the sample has been cooled to 77 K and rewarmed to room temperature. This change is reflected in the magnetic response, which shows an increased moment and a shift in the temperature hysteresis loop after cooling. The proportion of α present at room temperature can also be augmented by application of an external magnetic field. Both doped (type-B) and undoped (type-A) MnAs nanoparticles show significant thermal hysteresis narrowing relative to their bulk phases, suggesting that formation of nanoparticles may be an effective method to reduce thermal losses in magnetic refrigeration applications.
NASA Astrophysics Data System (ADS)
Shang, H.; Chen, L.; Bréon, F. M.; Letu, H.; Li, S.; Wang, Z.; Su, L.
2015-11-01
The principles of cloud droplet size retrieval via Polarization and Directionality of the Earth's Reflectance (POLDER) requires that clouds be horizontally homogeneous. The retrieval is performed by combining all measurements from an area of 150 km × 150 km to compensate for POLDER's insufficient directional sampling. Using POLDER-like data simulated with the RT3 model, we investigate the impact of cloud horizontal inhomogeneity and directional sampling on the retrieval and analyze which spatial resolution is potentially accessible from the measurements. Case studies show that the sub-grid-scale variability in droplet effective radius (CDR) can significantly reduce valid retrievals and introduce small biases to the CDR (~ 1.5 μm) and effective variance (EV) estimates. Nevertheless, the sub-grid-scale variations in EV and cloud optical thickness (COT) only influence the EV retrievals and not the CDR estimate. In the directional sampling cases studied, the retrieval using limited observations is accurate and is largely free of random noise. Several improvements have been made to the original POLDER droplet size retrieval. For example, measurements in the primary rainbow region (137-145°) are used to ensure retrievals of large droplet (> 15 μm) and to reduce the uncertainties caused by cloud heterogeneity. We apply the improved method using the POLDER global L1B data from June 2008, and the new CDR results are compared with the operational CDRs. The comparison shows that the operational CDRs tend to be underestimated for large droplets because the cloudbow oscillations in the scattering angle region of 145-165° are weak for cloud fields with CDR > 15 μm. Finally, a sub-grid-scale retrieval case demonstrates that a higher resolution, e.g., 42 km × 42 km, can be used when inverting cloud droplet size distribution parameters from POLDER measurements.
Sample Selection for Training Cascade Detectors.
Vállez, Noelia; Deniz, Oscar; Bueno, Gloria
2015-01-01
Automatic detection systems usually require large and representative training datasets in order to obtain good detection and false positive rates. Training datasets are such that the positive set has few samples and/or the negative set should represent anything except the object of interest. In this respect, the negative set typically contains orders of magnitude more images than the positive set. However, imbalanced training databases lead to biased classifiers. In this paper, we focus our attention on a negative sample selection method to properly balance the training data for cascade detectors. The method is based on the selection of the most informative false positive samples generated in one stage to feed the next stage. The results show that the proposed cascade detector with sample selection obtains on average better partial AUC and smaller standard deviation than the other compared cascade detectors.
Accounting for Sampling Error in Genetic Eigenvalues Using Random Matrix Theory.
Sztepanacz, Jacqueline L; Blows, Mark W
2017-07-01
The distribution of genetic variance in multivariate phenotypes is characterized by the empirical spectral distribution of the eigenvalues of the genetic covariance matrix. Empirical estimates of genetic eigenvalues from random effects linear models are known to be overdispersed by sampling error, where large eigenvalues are biased upward, and small eigenvalues are biased downward. The overdispersion of the leading eigenvalues of sample covariance matrices have been demonstrated to conform to the Tracy-Widom (TW) distribution. Here we show that genetic eigenvalues estimated using restricted maximum likelihood (REML) in a multivariate random effects model with an unconstrained genetic covariance structure will also conform to the TW distribution after empirical scaling and centering. However, where estimation procedures using either REML or MCMC impose boundary constraints, the resulting genetic eigenvalues tend not be TW distributed. We show how using confidence intervals from sampling distributions of genetic eigenvalues without reference to the TW distribution is insufficient protection against mistaking sampling error as genetic variance, particularly when eigenvalues are small. By scaling such sampling distributions to the appropriate TW distribution, the critical value of the TW statistic can be used to determine if the magnitude of a genetic eigenvalue exceeds the sampling error for each eigenvalue in the spectral distribution of a given genetic covariance matrix. Copyright © 2017 by the Genetics Society of America.
A measure of the signal-to-noise ratio of microarray samples and studies using gene correlations.
Venet, David; Detours, Vincent; Bersini, Hugues
2012-01-01
The quality of gene expression data can vary dramatically from platform to platform, study to study, and sample to sample. As reliable statistical analysis rests on reliable data, determining such quality is of the utmost importance. Quality measures to spot problematic samples exist, but they are platform-specific, and cannot be used to compare studies. As a proxy for quality, we propose a signal-to-noise ratio for microarray data, the "Signal-to-Noise Applied to Gene Expression Experiments", or SNAGEE. SNAGEE is based on the consistency of gene-gene correlations. We applied SNAGEE to a compendium of 80 large datasets on 37 platforms, for a total of 24,380 samples, and assessed the signal-to-noise ratio of studies and samples. This allowed us to discover serious issues with three studies. We show that signal-to-noise ratios of both studies and samples are linked to the statistical significance of the biological results. We showed that SNAGEE is an effective way to measure data quality for most types of gene expression studies, and that it often outperforms existing techniques. Furthermore, SNAGEE is platform-independent and does not require raw data files. The SNAGEE R package is available in BioConductor.
The Impact of Accelerating Faster than Exponential Population Growth on Genetic Variation
Reppell, Mark; Boehnke, Michael; Zöllner, Sebastian
2014-01-01
Current human sequencing projects observe an abundance of extremely rare genetic variation, suggesting recent acceleration of population growth. To better understand the impact of such accelerating growth on the quantity and nature of genetic variation, we present a new class of models capable of incorporating faster than exponential growth in a coalescent framework. Our work shows that such accelerated growth affects only the population size in the recent past and thus large samples are required to detect the models’ effects on patterns of variation. When we compare models with fixed initial growth rate, models with accelerating growth achieve very large current population sizes and large samples from these populations contain more variation than samples from populations with constant growth. This increase is driven almost entirely by an increase in singleton variation. Moreover, linkage disequilibrium decays faster in populations with accelerating growth. When we instead condition on current population size, models with accelerating growth result in less overall variation and slower linkage disequilibrium decay compared to models with exponential growth. We also find that pairwise linkage disequilibrium of very rare variants contains information about growth rates in the recent past. Finally, we demonstrate that models of accelerating growth may substantially change estimates of present-day effective population sizes and growth times. PMID:24381333
The impact of accelerating faster than exponential population growth on genetic variation.
Reppell, Mark; Boehnke, Michael; Zöllner, Sebastian
2014-03-01
Current human sequencing projects observe an abundance of extremely rare genetic variation, suggesting recent acceleration of population growth. To better understand the impact of such accelerating growth on the quantity and nature of genetic variation, we present a new class of models capable of incorporating faster than exponential growth in a coalescent framework. Our work shows that such accelerated growth affects only the population size in the recent past and thus large samples are required to detect the models' effects on patterns of variation. When we compare models with fixed initial growth rate, models with accelerating growth achieve very large current population sizes and large samples from these populations contain more variation than samples from populations with constant growth. This increase is driven almost entirely by an increase in singleton variation. Moreover, linkage disequilibrium decays faster in populations with accelerating growth. When we instead condition on current population size, models with accelerating growth result in less overall variation and slower linkage disequilibrium decay compared to models with exponential growth. We also find that pairwise linkage disequilibrium of very rare variants contains information about growth rates in the recent past. Finally, we demonstrate that models of accelerating growth may substantially change estimates of present-day effective population sizes and growth times.
Predicting Hydrologic Function With Aquatic Gene Fragments
NASA Astrophysics Data System (ADS)
Good, S. P.; URycki, D. R.; Crump, B. C.
2018-03-01
Recent advances in microbiology techniques, such as genetic sequencing, allow for rapid and cost-effective collection of large quantities of genetic information carried within water samples. Here we posit that the unique composition of aquatic DNA material within a water sample contains relevant information about hydrologic function at multiple temporal scales. In this study, machine learning was used to develop discharge prediction models trained on the relative abundance of bacterial taxa classified into operational taxonomic units (OTUs) based on 16S rRNA gene sequences from six large arctic rivers. We term this approach "genohydrology," and show that OTU relative abundances can be used to predict river discharge at monthly and longer timescales. Based on a single DNA sample from each river, the average Nash-Sutcliffe efficiency (NSE) for predicted mean monthly discharge values throughout the year was 0.84, while the NSE for predicted discharge values across different return intervals was 0.67. These are considerable improvements over predictions based only on the area-scaled mean specific discharge of five similar rivers, which had average NSE values of 0.64 and -0.32 for seasonal and recurrence interval discharge values, respectively. The genohydrology approach demonstrates that genetic diversity within the aquatic microbiome is a large and underutilized data resource with benefits for prediction of hydrologic function.
Neurons from the adult human dentate nucleus: neural networks in the neuron classification.
Grbatinić, Ivan; Marić, Dušica L; Milošević, Nebojša T
2015-04-07
Topological (central vs. border neuron type) and morphological classification of adult human dentate nucleus neurons according to their quantified histomorphological properties using neural networks on real and virtual neuron samples. In the real sample 53.1% and 14.1% of central and border neurons, respectively, are classified correctly with total of 32.8% of misclassified neurons. The most important result present 62.2% of misclassified neurons in border neurons group which is even greater than number of correctly classified neurons (37.8%) in that group, showing obvious failure of network to classify neurons correctly based on computational parameters used in our study. On the virtual sample 97.3% of misclassified neurons in border neurons group which is much greater than number of correctly classified neurons (2.7%) in that group, again confirms obvious failure of network to classify neurons correctly. Statistical analysis shows that there is no statistically significant difference in between central and border neurons for each measured parameter (p>0.05). Total of 96.74% neurons are morphologically classified correctly by neural networks and each one belongs to one of the four histomorphological types: (a) neurons with small soma and short dendrites, (b) neurons with small soma and long dendrites, (c) neuron with large soma and short dendrites, (d) neurons with large soma and long dendrites. Statistical analysis supports these results (p<0.05). Human dentate nucleus neurons can be classified in four neuron types according to their quantitative histomorphological properties. These neuron types consist of two neuron sets, small and large ones with respect to their perykarions with subtypes differing in dendrite length i.e. neurons with short vs. long dendrites. Besides confirmation of neuron classification on small and large ones, already shown in literature, we found two new subtypes i.e. neurons with small soma and long dendrites and with large soma and short dendrites. These neurons are most probably equally distributed throughout the dentate nucleus as no significant difference in their topological distribution is observed. Copyright © 2015 Elsevier Ltd. All rights reserved.
Measuring salivary analytes from free-ranging monkeys
Higham, James P.; Vitale, Alison; Rivera, Adaris Mas; Ayala, James E.; Maestripieri, Dario
2014-01-01
Studies of large free-ranging mammals have been revolutionized by non-invasive methods for assessing physiology, which usually involve the measurement of fecal or urinary biomarkers. However, such techniques are limited by numerous factors. To expand the range of physiological variables measurable non-invasively from free-ranging primates, we developed techniques for sampling monkey saliva by offering monkeys ropes with oral swabs sewn on the ends. We evaluated different attractants for encouraging individuals to offer samples, and proportions of individuals in different age/sex categories willing to give samples. We tested the saliva samples we obtained in three commercially available assays: cortisol, Salivary Alpha Amylase, and Secretory Immunoglobulin A. We show that habituated free-ranging rhesus macaques will give saliva samples voluntarily without training, with 100% of infants, and over 50% of adults willing to chew on collection devices. Our field methods are robust even for analytes that show poor recovery from cotton, and/or that have concentrations dependent on salivary flow rate. We validated the cortisol and SAA assays for use in rhesus macaques by showing aspects of analytical validation, such as that samples dilute linearly and in parallel to assay standards. We also found that values measured correlated with biologically meaningful characteristics of sampled individuals (age and dominance rank). The SIgA assay tested did not react to samples. Given the wide range of analytes measurable in saliva but not in feces or urine, our methods considerably improve our ability to study physiological aspects of the behavior and ecology of free-ranging primates, and are also potentially adaptable to other mammalian taxa. PMID:20837036
Acoustic Enrichment of Extracellular Vesicles from Biological Fluids.
Ku, Anson; Lim, Hooi Ching; Evander, Mikael; Lilja, Hans; Laurell, Thomas; Scheding, Stefan; Ceder, Yvonne
2018-06-11
Extracellular vesicles (EVs) have emerged as a rich source of biomarkers providing diagnostic and prognostic information in diseases such as cancer. Large-scale investigations into the contents of EVs in clinical cohorts are warranted, but a major obstacle is the lack of a rapid, reproducible, efficient, and low-cost methodology to enrich EVs. Here, we demonstrate the applicability of an automated acoustic-based technique to enrich EVs, termed acoustic trapping. Using this technology, we have successfully enriched EVs from cell culture conditioned media and urine and blood plasma from healthy volunteers. The acoustically trapped samples contained EVs ranging from exosomes to microvesicles in size and contained detectable levels of intravesicular microRNAs. Importantly, this method showed high reproducibility and yielded sufficient quantities of vesicles for downstream analysis. The enrichment could be obtained from a sample volume of 300 μL or less, an equivalent to 30 min of enrichment time, depending on the sensitivity of downstream analysis. Taken together, acoustic trapping provides a rapid, automated, low-volume compatible, and robust method to enrich EVs from biofluids. Thus, it may serve as a novel tool for EV enrichment from large number of samples in a clinical setting with minimum sample preparation.
See, Hong Heng; Hauser, Peter C; Ibrahim, Wan Aini Wan; Sanagi, Mohd Marsin
2010-01-01
Rapid and direct online preconcentration followed by CE with capacitively coupled contactless conductivity detection (CE-C(4)D) is evaluated as a new approach for the determination of glyphosate, glufosinate (GLUF), and aminophosphonic acid (AMPA) in drinking water. Two online preconcentration techniques, namely large volume sample stacking without polarity switching and field-enhanced sample injection, coupled with CE-C(4)D were successfully developed and optimized. Under optimized conditions, LODs in the range of 0.01-0.1 microM (1.7-11.1 microg/L) and sensitivity enhancements of 48- to 53-fold were achieved with the large volume sample stacking-CE-C(4)D method. By performing the field-enhanced sample injection-CE-C(4)D procedure, excellent LODs down to 0.0005-0.02 microM (0.1-2.2 microg/L) as well as sensitivity enhancements of up to 245- to 1002-fold were obtained. Both techniques showed satisfactory reproducibility with RSDs of peak height of better than 10%. The newly established approaches were successfully applied to the analysis of glyphosate, glufosinate, and aminophosphonic acid in spiked tap drinking water.
Nullspace Sampling with Holonomic Constraints Reveals Molecular Mechanisms of Protein Gαs.
Pachov, Dimitar V; van den Bedem, Henry
2015-07-01
Proteins perform their function or interact with partners by exchanging between conformational substates on a wide range of spatiotemporal scales. Structurally characterizing these exchanges is challenging, both experimentally and computationally. Large, diffusional motions are often on timescales that are difficult to access with molecular dynamics simulations, especially for large proteins and their complexes. The low frequency modes of normal mode analysis (NMA) report on molecular fluctuations associated with biological activity. However, NMA is limited to a second order expansion about a minimum of the potential energy function, which limits opportunities to observe diffusional motions. By contrast, kino-geometric conformational sampling (KGS) permits large perturbations while maintaining the exact geometry of explicit conformational constraints, such as hydrogen bonds. Here, we extend KGS and show that a conformational ensemble of the α subunit Gαs of heterotrimeric stimulatory protein Gs exhibits structural features implicated in its activation pathway. Activation of protein Gs by G protein-coupled receptors (GPCRs) is associated with GDP release and large conformational changes of its α-helical domain. Our method reveals a coupled α-helical domain opening motion while, simultaneously, Gαs helix α5 samples an activated conformation. These motions are moderated in the activated state. The motion centers on a dynamic hub near the nucleotide-binding site of Gαs, and radiates to helix α4. We find that comparative NMA-based ensembles underestimate the amplitudes of the motion. Additionally, the ensembles fall short in predicting the accepted direction of the full activation pathway. Taken together, our findings suggest that nullspace sampling with explicit, holonomic constraints yields ensembles that illuminate molecular mechanisms involved in GDP release and protein Gs activation, and further establish conformational coupling between key structural elements of Gαs.
Nullspace Sampling with Holonomic Constraints Reveals Molecular Mechanisms of Protein Gαs
Pachov, Dimitar V.; van den Bedem, Henry
2015-01-01
Proteins perform their function or interact with partners by exchanging between conformational substates on a wide range of spatiotemporal scales. Structurally characterizing these exchanges is challenging, both experimentally and computationally. Large, diffusional motions are often on timescales that are difficult to access with molecular dynamics simulations, especially for large proteins and their complexes. The low frequency modes of normal mode analysis (NMA) report on molecular fluctuations associated with biological activity. However, NMA is limited to a second order expansion about a minimum of the potential energy function, which limits opportunities to observe diffusional motions. By contrast, kino-geometric conformational sampling (KGS) permits large perturbations while maintaining the exact geometry of explicit conformational constraints, such as hydrogen bonds. Here, we extend KGS and show that a conformational ensemble of the α subunit Gαs of heterotrimeric stimulatory protein Gs exhibits structural features implicated in its activation pathway. Activation of protein Gs by G protein-coupled receptors (GPCRs) is associated with GDP release and large conformational changes of its α-helical domain. Our method reveals a coupled α-helical domain opening motion while, simultaneously, Gαs helix α5 samples an activated conformation. These motions are moderated in the activated state. The motion centers on a dynamic hub near the nucleotide-binding site of Gαs, and radiates to helix α4. We find that comparative NMA-based ensembles underestimate the amplitudes of the motion. Additionally, the ensembles fall short in predicting the accepted direction of the full activation pathway. Taken together, our findings suggest that nullspace sampling with explicit, holonomic constraints yields ensembles that illuminate molecular mechanisms involved in GDP release and protein Gs activation, and further establish conformational coupling between key structural elements of Gαs. PMID:26218073
Luykx, Jurjen J.; Bakker, Steven C.; Lentjes, Eef; Boks, Marco P. M.; van Geloven, Nan; Eijkemans, Marinus J. C.; Janson, Esther; Strengman, Eric; de Lepper, Anne M.; Westenberg, Herman; Klopper, Kai E.; Hoorn, Hendrik J.; Gelissen, Harry P. M. M.; Jordan, Julian; Tolenaar, Noortje M.; van Dongen, Eric P. A.; Michel, Bregt; Abramovic, Lucija; Horvath, Steve; Kappen, Teus; Bruins, Peter; Keijzers, Peter; Borgdorff, Paul; Ophoff, Roel A.; Kahn, René S.
2012-01-01
Background Animal studies have revealed seasonal patterns in cerebrospinal fluid (CSF) monoamine (MA) turnover. In humans, no study had systematically assessed seasonal patterns in CSF MA turnover in a large set of healthy adults. Methodology/Principal Findings Standardized amounts of CSF were prospectively collected from 223 healthy individuals undergoing spinal anesthesia for minor surgical procedures. The metabolites of serotonin (5-hydroxyindoleacetic acid, 5-HIAA), dopamine (homovanillic acid, HVA) and norepinephrine (3-methoxy-4-hydroxyphenylglycol, MPHG) were measured using high performance liquid chromatography (HPLC). Concentration measurements by sampling and birth dates were modeled using a non-linear quantile cosine function and locally weighted scatterplot smoothing (LOESS, span = 0.75). The cosine model showed a unimodal season of sampling 5-HIAA zenith in April and a nadir in October (p-value of the amplitude of the cosine = 0.00050), with predicted maximum (PCmax) and minimum (PCmin) concentrations of 173 and 108 nmol/L, respectively, implying a 60% increase from trough to peak. Season of birth showed a unimodal 5-HIAA zenith in May and a nadir in November (p = 0.00339; PCmax = 172 and PCmin = 126). The non-parametric LOESS showed a similar pattern to the cosine in both season of sampling and season of birth models, validating the cosine model. A final model including both sampling and birth months demonstrated that both sampling and birth seasons were independent predictors of 5-HIAA concentrations. Conclusion In subjects without mental illness, 5-HT turnover shows circannual variation by season of sampling as well as season of birth, with peaks in spring and troughs in fall. PMID:22312427
High resolution anatomical and quantitative MRI of the entire human occipital lobe ex vivo at 9.4T.
Sengupta, S; Fritz, F J; Harms, R L; Hildebrand, S; Tse, D H Y; Poser, B A; Goebel, R; Roebroeck, A
2018-03-01
Several magnetic resonance imaging (MRI) contrasts are sensitive to myelin content in gray matter in vivo which has ignited ambitions of MRI-based in vivo cortical histology. Ultra-high field (UHF) MRI, at fields of 7T and beyond, is crucial to provide the resolution and contrast needed to sample contrasts over the depth of the cortex and get closer to layer resolved imaging. Ex vivo MRI of human post mortem samples is an important stepping stone to investigate MRI contrast in the cortex, validate it against histology techniques applied in situ to the same tissue, and investigate the resolutions needed to translate ex vivo findings to in vivo UHF MRI. Here, we investigate key technology to extend such UHF studies to large human brain samples while maintaining high resolution, which allows investigation of the layered architecture of several cortical areas over their entire 3D extent and their complete borders where architecture changes. A 16 channel cylindrical phased array radiofrequency (RF) receive coil was constructed to image a large post mortem occipital lobe sample (~80×80×80mm 3 ) in a wide-bore 9.4T human scanner with the aim of achieving high-resolution anatomical and quantitative MR images. Compared with a human head coil at 9.4T, the maximum Signal-to-Noise ratio (SNR) was increased by a factor of about five in the peripheral cortex. Although the transmit profile with a circularly polarized transmit mode at 9.4T is relatively inhomogeneous over the large sample, this challenge was successfully resolved with parallel transmit using the kT-points method. Using this setup, we achieved 60μm anatomical images for the entire occipital lobe showing increased spatial definition of cortical details compared to lower resolutions. In addition, we were able to achieve sufficient control over SNR, B 0 and B 1 homogeneity and multi-contrast sampling to perform quantitative T 2 * mapping over the same volume at 200μm. Markov Chain Monte Carlo sampling provided maximum posterior estimates of quantitative T 2 * and their uncertainty, allowing delineation of the stria of Gennari over the entire length and width of the calcarine sulcus. We discuss how custom RF receive coil arrays built to specific large post mortem sample sizes can provide a platform for UHF cortical layer-specific quantitative MRI over large fields of view. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Large Stratospheric IDPs: Chemical Compostion and Comparison with Smaller Stratospheric IDPs
NASA Astrophysics Data System (ADS)
Flynn, G. J.; Bajt, S.; Sutton, S. R.; Klock, W.
1995-09-01
Six large stratospheric IDPs, each greater than 35 microns, previously analyzed using the X-Ray Microprobe at the National Synchrotron Light Source showed an average volatile content consistent with CI or CM meteorites [1]. Seven additional large IDPs, ranging from 37x33 to 50x44 microns in size and having chondritic major element abundances, have been analyzed using the same instrument. Each of these 7 IDPs is depleted in Ca compared to CI (Avg. Ca = 0.48xCI), a feature also observed in the first set of 6, suggesting most or all of these IDPs are hydrated. The average trace element content of these 7 large IDPs is similar to the previous set of 6 (see Figure 1), though Mn and Cu are about 70% higher in this set. The average composition of these large IDPs is distinctly different from that of smaller IDPs (generally 10 to 20 microns), which show enrichments of the volatiles Cu, Zn, Ga, Ge, and Se by factors of 1.5 to 3 over CI [2]. This suggests large IDPs which are strong enough to resist fragmentation on collection are chemically different from typical smaller IDPs. This may reflect a difference in the source(s) being sampled by the two types of IDPs. A subgroup of the smaller IDPs (9 of 51 particles) have a composition similar to CI meteorites and these large IDPs [2]. Bromine is enriched in most of these large IDPs. Two Br-rich IDPs (Br >300 ppm) and one Br-poor IDP (Br ~5 ppm) were each analyzed twice. The two Br-rich IDPs showed about a factor of two Br loss between the first and second analyses, presumably due to sample heating during the first analysis. This suggests some of the Br is very weakly bound in these Br-rich IDPs, a possible signature of Br surface contamination. However, the Br contents measured in the second analyses were still ~50xCI. No loss of Cu, Zn, Ga, Ge or Se was detected in these IDPs, suggesting these elements are in more retentive sites. The Br-poor IDP (Br ~1.5xCI) showed no Br loss in the second analysis. Only one of these IDPs, L2008G10, showed a large Zn depletion (Zn/Fe <0.01xCI). This was accompanied by low contents of Ga, Ge and Br (see Figure 1). This pattern of Zn, Ge, Br and Ga depletions was previously seen in smaller IDPs which were severely heated, presumably on atmospheric entry [2]. Sulfur and K are also low in L2008G10, suggesting these elements are also lost during heating, but the Se content is 0.8xCI. A second particle, L2009C8, has a Zn/Fe=0.26xCI, possibly indicating less severe heating. The low fraction of severely heated IDPs, only one in this set of 7 and none in the set of 6 [1] suggests a very low atmospheric entry velocity for these large IDPs [3]. References: [1] Flynn G. J. et al. (1995) LPS XXVI, 407-408. [2] Flynn G. J. et al. (1993) LPS XXIV, 495-496. [3] Flynn G. J., this volume. Figure 1: Average Fe and CI normalized element abundances in 7 large IDPs, 6 different large IDPs [1], 51 smaller IDPs [2], and the single low-Zn IDP, L2008G10, included in the set of 7 large IDPs.
Modelling the light-scattering properties of a planetary-regolith analog sample
NASA Astrophysics Data System (ADS)
Vaisanen, T.; Markkanen, J.; Hadamcik, E.; Levasseur-Regourd, A. C.; Lasue, J.; Blum, J.; Penttila, A.; Muinonen, K.
2017-12-01
Solving the scattering properties of asteroid surfaces can be made cheaper, faster, and more accurate with reliable physics-based electromagnetic scattering programs for large and dense random media. Existing exact methods fail to produce solutions for such large systems and it is essential to develop approximate methods. Radiative transfer (RT) is an approximate method which works for sparse random media such as atmospheres fails when applied to dense media. In order to make the method applicable to dense media, we have developed a radiative-transfer coherent-backscattering method (RT-CB) with incoherent interactions. To show the current progress with the RT-CB, we have modeled a planetary-regolith analog sample. The analog sample is a low-density agglomerate produced by random ballistic deposition of almost equisized silicate spheres studied using the PROGRA2-surf experiment. The scattering properties were then computed with the RT-CB assuming that the silicate spheres were equisized and that there were a Gaussian particle size distribution. The results were then compared to the measured data and the intensity plot is shown below. The phase functions are normalized to unity at the 40-deg phase angle. The tentative intensity modeling shows good match with the measured data, whereas the polarization modeling shows discrepancies. In summary, the current RT-CB modeling is promising, but more work needs to be carried out, in particular, for modeling the polarization. Acknowledgments. Research supported by European Research Council with Advanced Grant No. 320773 SAEMPL, Scattering and Absorption of ElectroMagnetic waves in ParticuLate media. Computational resources provided by CSC - IT Centre for Science Ltd, Finland.
Dupont, A Ö C; Griffiths, R I; Bell, T; Bass, D
2016-06-01
A recent large-scale assessment of bacterial communities across a range of UK soil types showed that bacterial community structure was strongly determined by soil pH. We analysed a data set of eukaryotic 454 sequencing 18S rDNA from the surveyed samples and showed significant differences in eukaryotic assemblages according to pH class, mostly between low pH and higher pH soils. Soil eukaryote communities (per sample) differed most at the taxonomic rank approximating to order level. Taxonomies assigned with the Protist Ribosomal Reference and the Silva 119 databases were taxonomically inconsistent, mostly due to differing 18S annotations, although general structure and composition according to pH were coherent. A relatively small number of lineages, mostly putative parasitic protists and fungi, drive most differences between pH classes, with weaker contributions from bacterivores and autotrophs. Overall, soil parasites included a large diversity of alveolates, in particular apicomplexans. Phylogenetic analysis of alveolate lineages demonstrates a large diversity of unknown gregarines, novel perkinsids, coccidians, colpodellids and uncharacterized alveolates. Other novel and/or divergent lineages were revealed across the eukaryote tree of life. Our study provides an in-depth taxonomic evaluation of micro-eukaryotic diversity, and reveals novel lineages and insights into their relationships with environmental variables across soil gradients. © 2016 Society for Applied Microbiology and John Wiley & Sons Ltd.
Individuals with Autistic-Like Traits Show Reduced Lateralization on a Greyscales Task
ERIC Educational Resources Information Center
English, Michael C. W.; Maybery, Murray T.; Visser, Troy A. W.
2015-01-01
Individuals with autism spectrum conditions attend less to the left side of centrally presented face stimuli compared to neurotypical individuals, suggesting a reduction in right hemisphere activation. We examined whether a similar bias exists for non-facial stimuli in a large sample of neurotypical adults rated above- or below-average on the…
Chemistry Students' Assessment of Their Teachers' Effectiveness in Secondary Schools in Benue State
ERIC Educational Resources Information Center
Aduloju, M. O.; Obinne, A. D. E.
2015-01-01
This study examined the assessment of chemistry teachers' effectiveness by chemistry students. A survey research design was used. Two hundred students were sampled for the study from Benue State. The result showed that students agreed that their teachers cover a large part of the syllabus before the examination. Findings also revealed that there…
Discussing the Flynn Effect: From Causes and Interpretation to Implications
ERIC Educational Resources Information Center
Kanaya, Tomoe
2016-01-01
Clark, Lawlor-Savage, and Goghari (this issue) point out that evidence of IQ rises had been documented decades before it was named the Flynn effect. These previous studies, however, were conducted sporadically and in isolated samples. Flynn (1984, 1987) examined them in a large-scale manner and was able to show their systematic and global nature.…
Size variation in Middle Pleistocene humans.
Arsuaga, J L; Carretero, J M; Lorenzo, C; Gracia, A; Martínez, I; Bermúdez de Castro, J M; Carbonell, E
1997-08-22
It has been suggested that European Middle Pleistocene humans, Neandertals, and prehistoric modern humans had a greater sexual dimorphism than modern humans. Analysis of body size variation and cranial capacity variation in the large sample from the Sima de los Huesos site in Spain showed instead that the sexual dimorphism is comparable in Middle Pleistocene and modern populations.
Standards and Assessment: Coherence from the Teacher's Perspective
ERIC Educational Resources Information Center
Bonner, Sarah M.; Torres Rivera, Camila; Chen, Peggy P.
2018-01-01
We sought to understand how teachers' perspectives on standards-based instructional practices, classroom assessment, and external testing do or do not show coherence and alignment. Based on survey methods (n = 155) and interviews with a sample of secondary school teachers (n = 9) in a large urban district in the USA, we explored general trends and…
State Profiles: Financing Public Higher Education. 1978 to 1998 Trend Data.
ERIC Educational Resources Information Center
Halstead, Kent
This report presents two large tables showing trends in the financing of public higher education since 1977-78. Introductory information explains how to use the tables, the data-time relationship (whether fiscal year, academic year, or calendar year), and includes a sample chart constructed from one state's data. The raw data used for these…
NASA Astrophysics Data System (ADS)
Mateeva, Tsvetomila; Kusznir, Nick; Wolff, George; Wheeler, John; Manatschal, Gianreto
2015-04-01
Evidence from ocean ridge drilling and dredging and from the exhumed Tethyan continental margin in the Alps demonstrates that mantle serpentinization occurs at slow-spreading ocean ridges and magma-poor rifted continental margins. Observations at white smokers suggest that methane produced by serpentinization can support methanotrophic bio-systems which use methane as their only source of carbon. An important question is whether such biosystems are more generally pervasive in their association with serpentinized mantle in the subsurface. The answer to this question has important global implications for the importance of the hidden sub-surface bio-systems, the fate of methane and the carbon cycle. We examine whether serpentinized exhumed mantle at magma-poor rifted continental margins shows evidence for methanotrophy. Fieldwork sampling of km scale exposure of orogenically exhumed serpentinized mantle in the eastern Swiss Alps allows 3D mantle sampling not possible at ocean ridges and has the potential to answer the question regarding localized versus pervasive sub-surface methanotrophic biosystems. The Totalp massif in the eastern Swiss Alps has been chosen for an initial study to investigate the presence or absence of methanotrophic biosystem within serpentinized exhumed mantle in the Tethyan OCT. Totalp has little Alpine deformation and its metamorphism is no more than prehnite-pumpellyite grade. Hands specimens and cores have been taken from the Totalp area in order to sample serpentinization and its lithological diversity in the search for presence or absence of biomarkers. Thin sections analysis reveals multiple serpentinization events. XRD analysis shows complete serpentinization of the olivines and orthopyroxenes. The samples for bio-geochemical analysis were cut and ground to powder, processed by soxhlet extraction and then analysed by GC and GCMS in order to determine the full range of biomarkers. Total carbon and total organic carbon was also determined for the samples. Samples collected from the Totalp area show evidence of organic hydrocarbon in the form of alkanes. The majority of the samples contain n-alkanes in the range C20 - C32. Some samples contain isoprenoids in different concentrations dependent on their lithology, for example pristane and phytane are found in Totalp's sediments. The organic molecular distribution is consistent with the temperature history of the basin. Totalp samples are characterized by TC contents of 0.03% to 12.90% and TOC contents of 0.10% to 1.90%. This large range of values correlates with the large lithological diversity of this area. These first results from Totalp showing evidence for preserved organic matter and biosystems in the serpentinized mantle of the ancient Tethyan OCT are encouraging. Much more work is required to understand whether the organic matter is generated from methane-driven biosystems, and if so whether the methane originated from an organic or inorganic source?
Predictors of Prosocial Behavior among Chinese High School Students in Hong Kong
Siu, Andrew M. H.; Shek, Daniel T. L.; Lai, Frank H. Y.
2012-01-01
This study examined the correlates and predictors of prosocial behavior among Chinese adolescents in Hong Kong. A sample of 518 high school students responded to a questionnaire containing measures of antisocial and prosocial behavior, prosocial norms, pragmatic values, moral reasoning, and empathy. Preliminary analyses showed that there were gender differences in some of the measures. While correlation analyses showed that parental education, prosocial norms, pragmatic values, moral reasoning, and empathy were related to prosocial behavior, regression analyses showed that prosocial norms, pragmatic values, and empathy dimensions (personal distress and empathy) were key predictors of it. The findings are largely consistent with theoretical predictions and previous research findings, other than the negative relationship between personal distress and prosocial behavior. The study also underscores the importance of values and norms in predicting prosocial behavior, which has been largely neglected in previous studies. PMID:22919326
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kou, R. H.; Gao, J.; Wang, G.
2016-02-01
The crystal structure of the CoMnSi compound during zero-field cooling and field cooling from room temperature down to 200 K was studied using the synchrotron radiation X-ray diffraction technique. The results show that the lattice parameters and thermal expansion behavior of the sample are changed by the applied magnetic fields. The lattice contracts along the a axis, but expands along the b and c axes. Due to enlarged and anisotropic changes under a magnetic field of 6 T, the lattice shows an invar-like behavior along all three axes. Critical interatomic distances and bond angles also show large changes under themore » influence of such a high magnetic field. These magnetic field-induced changes of the lattice are discussed with respect to their contributions to the large magnetocaloric effect of the CoMnSi compound.« less
Dynamics of acoustically levitated disk samples.
Xie, W J; Wei, B
2004-10-01
The acoustic levitation force on disk samples and the dynamics of large water drops in a planar standing wave are studied by solving the acoustic scattering problem through incorporating the boundary element method. The dependence of levitation force amplitude on the equivalent radius R of disks deviates seriously from the R3 law predicted by King's theory, and a larger force can be obtained for thin disks. When the disk aspect ratio gamma is larger than a critical value gamma(*) ( approximately 1.9 ) and the disk radius a is smaller than the critical value a(*) (gamma) , the levitation force per unit volume of the sample will increase with the enlargement of the disk. The acoustic levitation force on thin-disk samples ( gamma= gamma(*) ) can be formulated by the shape factor f(gamma,a) when a= a(*) (gamma) . It is found experimentally that a necessary condition of the acoustic field for stable levitation of a large water drop is to adjust the reflector-emitter interval H slightly above the resonant interval H(n) . The simulation shows that the drop is flattened and the central parts of its top and bottom surface become concave with the increase of sound pressure level, which agrees with the experimental observation. The main frequencies of the shape oscillation under different sound pressures are slightly larger than the Rayleigh frequency because of the large shape deformation. The simulated translational frequencies of the vertical vibration under normal gravity condition agree with the theoretical analysis.
Dynamics of acoustically levitated disk samples
NASA Astrophysics Data System (ADS)
Xie, W. J.; Wei, B.
2004-10-01
The acoustic levitation force on disk samples and the dynamics of large water drops in a planar standing wave are studied by solving the acoustic scattering problem through incorporating the boundary element method. The dependence of levitation force amplitude on the equivalent radius R of disks deviates seriously from the R3 law predicted by King’s theory, and a larger force can be obtained for thin disks. When the disk aspect ratio γ is larger than a critical value γ*(≈1.9) and the disk radius a is smaller than the critical value a*(γ) , the levitation force per unit volume of the sample will increase with the enlargement of the disk. The acoustic levitation force on thin-disk samples (γ⩽γ*) can be formulated by the shape factor f(γ,a) when a⩽a*(γ) . It is found experimentally that a necessary condition of the acoustic field for stable levitation of a large water drop is to adjust the reflector-emitter interval H slightly above the resonant interval Hn . The simulation shows that the drop is flattened and the central parts of its top and bottom surface become concave with the increase of sound pressure level, which agrees with the experimental observation. The main frequencies of the shape oscillation under different sound pressures are slightly larger than the Rayleigh frequency because of the large shape deformation. The simulated translational frequencies of the vertical vibration under normal gravity condition agree with the theoretical analysis.
Screening for Dissolved Methane in Groundwater Across Texas Shale Plays
NASA Astrophysics Data System (ADS)
Nicot, J. P.; Mickler, P. J.; Hildenbrand, Z.; Larson, T.; Darvari, R.; Uhlman, K.; Smyth, R. C.; Scanlon, B. R.
2014-12-01
There is considerable interest in methane concentrations in groundwater, particularly as they relate to hydraulic fracturing in shale plays. Recent studies of aquifers in the footprint of several gas plays across the US have shown that (1) dissolved thermogenic methane may or may not be present in the shallow groundwater and (2) shallow thermogenic methane may be naturally occurring and emplaced through mostly vertical migration over geologic time and not necessarily a consequence of recent unconventional gas production. We are currently conducting a large sampling campaign across the state of Texas to characterize shallow methane in fresh-water aquifers overlying shale plays and other tight formations. We collected a total of ~800 water samples, ~500 in the Barnett, ~150 in the Eagle Ford, ~80 in the Haynesville shale plays as well as ~50 in the Delaware Basin of West Texas. Preliminary analytical results suggest that dissolved methane is not widespread in shallow groundwater and that, when present at concentrations exceeding 10 mg/L, it is often of thermogenic origin according to the isotopic signature and to the presence of other light hydrocarbons. The Barnett Shale contains a large methane hotspot (~ 2 miles wide) along the Hood-Parker county line which is likely of natural origin whereas the Eagle Ford and Haynesville shales, neglecting microbial methane, show more distributed methane occurrences. Samples from the Delaware Basin show no methane except close to blowouts.
Trindade, Mirta; Nording, Malin; Nichkova, Mikaela; Spinnel, Erik; Haglund, Peter; Last, Michael S.; Gee, Shirley; Hammock, Bruce; Last, Jerold A.; González-Sapienza, Gualberto; Brena, Beatriz M.
2010-01-01
Uncontrolled combustion due to garbage recycling is a widespread activity among slum dwellers in distressed economy countries and has been indicated as a major source of dioxin contamination. However, because of the high cost and complexity of gas chromatography/high-resolution mass spectrometry (GC-HRMS) analysis, the magnitude of the problem remains largely unknown. The present study describes a first approach toward the use of a dioxin antibody-based enzyme-linked immunosorbent assay (ELISA) as the basis for a sustainable, simple, and low-cost monitoring program to assess the toxicological impact of uncontrolled combustion in slums. A panel of 16 samples was analyzed by GC-HRMS and ELISA on split extracts. Close to 20% of the analyzed samples showed dioxin concentrations up to almost twice the guidance level for residential soil in several countries, pointing out the need for performing a large-scale monitoring program. Despite the potential for variations in dioxin congener distribution due to the mixed nature of the incinerated material, there was a good correlation between the toxic equivalents as determined by GC-HRMS and ELISA. Furthermore, an interlaboratory ELISA validation showed that the capacity to perform the dioxin ELISA was successfully transferred between laboratories. It was concluded that the ELISA method performed very well as a screening tool to prioritize samples for instrumental analysis, which allows cutting down costs significantly. PMID:18522475
VizieR Online Data Catalog: Face-on disk galaxies photometry. I. (de Jong+, 1994)
NASA Astrophysics Data System (ADS)
de Jong, R. S.; van der Kruit, P. C.
1995-07-01
We present accurate surface photometry in the B, V, R, I, H and K passbands of 86 spiral galaxies. The galaxies in this statistically complete sample of undisturbed spirals were selected from the UGC to have minimum diameters of 2' and minor over major axis ratios larger than 0.625. This sample has been selected in such a way that it can be used to represent a volume limited sample. The observation and reduction techniques are described in detail, especially the not often used driftscan technique for CCDs and the relatively new techniques using near-infrared (near-IR) arrays. For each galaxy we present radial profiles of surface brightness. Using these profiles we calculated the integrated magnitudes of the galaxies in the different passbands. We performed internal and external consistency checks for the magnitudes as well as the luminosity profiles. The internal consistency is well within the estimated errors. Comparisons with other authors indicate that measurements from photographic plates can show large deviations in the zero-point magnitude. Our surface brightness profiles agree within the errors with other CCD measurements. The comparison of integrated magnitudes shows a large scatter, but a consistent zero-point. These measurements will be used in a series of forthcoming papers to discuss central surface brightnesses, scalelengths, colors and color gradients of disks of spiral galaxies. (9 data files).
High-Dimensional Function Approximation With Neural Networks for Large Volumes of Data.
Andras, Peter
2018-02-01
Approximation of high-dimensional functions is a challenge for neural networks due to the curse of dimensionality. Often the data for which the approximated function is defined resides on a low-dimensional manifold and in principle the approximation of the function over this manifold should improve the approximation performance. It has been show that projecting the data manifold into a lower dimensional space, followed by the neural network approximation of the function over this space, provides a more precise approximation of the function than the approximation of the function with neural networks in the original data space. However, if the data volume is very large, the projection into the low-dimensional space has to be based on a limited sample of the data. Here, we investigate the nature of the approximation error of neural networks trained over the projection space. We show that such neural networks should have better approximation performance than neural networks trained on high-dimensional data even if the projection is based on a relatively sparse sample of the data manifold. We also find that it is preferable to use a uniformly distributed sparse sample of the data for the purpose of the generation of the low-dimensional projection. We illustrate these results considering the practical neural network approximation of a set of functions defined on high-dimensional data including real world data as well.
Jones, Jana; Mirzaei, Mehdi; Ravishankar, Prathiba; Xavier, Dylan; Lim, Do Seon; Shin, Dong Hoon; Bianucci, Raffaella; Haynes, Paul A
2016-10-28
We performed proteomics analysis on four skin and one muscle tissue samples taken from three ancient Egyptian mummies of the first intermediate period, approximately 4200 years old. The mummies were first dated by radiocarbon dating of the accompany-\\break ing textiles, and morphologically examined by scanning electron microscopy of additional skin samples. Proteins were extracted, separated on SDS-PAGE (sodium dodecyl sulfate polyacrylamide gel electrophoresis) gels, and in-gel digested with trypsin. The resulting peptides were analysed using nanoflow high-performance liquid chromatography-mass spectrometry. We identified a total of 230 unique proteins from the five samples, which consisted of 132 unique protein identifications. We found a large number of collagens, which was confirmed by our microscopy data, and is in agreement with previous studies showing that collagens are very long-lived. As expected, we also found a large number of keratins. We identified numerous proteins that provide evidence of activation of the innate immunity system in two of the mummies, one of which also contained proteins indicating severe tissue inflammation, possibly indicative of an infection that we can speculate may have been related to the cause of death.This article is part of the themed issue 'Quantitative mass spectrometry'. © 2016 The Author(s).
Jones, Jana; Mirzaei, Mehdi; Ravishankar, Prathiba; Xavier, Dylan; Lim, Do Seon; Shin, Dong Hoon; Bianucci, Raffaella
2016-01-01
We performed proteomics analysis on four skin and one muscle tissue samples taken from three ancient Egyptian mummies of the first intermediate period, approximately 4200 years old. The mummies were first dated by radiocarbon dating of the accompany-\\break ing textiles, and morphologically examined by scanning electron microscopy of additional skin samples. Proteins were extracted, separated on SDS–PAGE (sodium dodecyl sulfate polyacrylamide gel electrophoresis) gels, and in-gel digested with trypsin. The resulting peptides were analysed using nanoflow high-performance liquid chromatography–mass spectrometry. We identified a total of 230 unique proteins from the five samples, which consisted of 132 unique protein identifications. We found a large number of collagens, which was confirmed by our microscopy data, and is in agreement with previous studies showing that collagens are very long-lived. As expected, we also found a large number of keratins. We identified numerous proteins that provide evidence of activation of the innate immunity system in two of the mummies, one of which also contained proteins indicating severe tissue inflammation, possibly indicative of an infection that we can speculate may have been related to the cause of death. This article is part of the themed issue ‘Quantitative mass spectrometry’. PMID:27644972
Impact of hindcast length on estimates of seasonal climate predictability.
Shi, W; Schaller, N; MacLeod, D; Palmer, T N; Weisheimer, A
2015-03-16
It has recently been argued that single-model seasonal forecast ensembles are overdispersive, implying that the real world is more predictable than indicated by estimates of so-called perfect model predictability, particularly over the North Atlantic. However, such estimates are based on relatively short forecast data sets comprising just 20 years of seasonal predictions. Here we study longer 40 year seasonal forecast data sets from multimodel seasonal forecast ensemble projects and show that sampling uncertainty due to the length of the hindcast periods is large. The skill of forecasting the North Atlantic Oscillation during winter varies within the 40 year data sets with high levels of skill found for some subperiods. It is demonstrated that while 20 year estimates of seasonal reliability can show evidence of overdispersive behavior, the 40 year estimates are more stable and show no evidence of overdispersion. Instead, the predominant feature on these longer time scales is underdispersion, particularly in the tropics. Predictions can appear overdispersive due to hindcast length sampling errorLonger hindcasts are more robust and underdispersive, especially in the tropicsTwenty hindcasts are an inadequate sample size to assess seasonal forecast skill.
Regional HLA Differences in Poland and Their Effect on Stem Cell Donor Registry Planning
Schmidt, Alexander H.; Solloch, Ute V.; Pingel, Julia; Sauter, Jürgen; Böhme, Irina; Cereb, Nezih; Dubicka, Kinga; Schumacher, Stephan; Wachowiak, Jacek; Ehninger, Gerhard
2013-01-01
Regional HLA frequency differences are of potential relevance for the optimization of stem cell donor recruitment. We analyzed a very large sample (n = 123,749) of registered Polish stem cell donors. Donor figures by 1-digit postal code regions ranged from n = 5,243 (region 9) to n = 19,661 (region 8). Simulations based on region-specific haplotype frequencies showed that donor recruitment in regions 0, 2, 3 and 4 (mainly located in the south-eastern part of Poland) resulted in an above-average increase of matching probabilities for Polish patients. Regions 1, 7, 8, 9 (mainly located in the northern part of Poland) showed an opposite behavior. However, HLA frequency differences between regions were generally small. A strong indication for regionally focused donor recruitment efforts can, therefore, not be derived from our analyses. Results of haplotype frequency estimations showed sample size effects even for sizes between n≈5,000 and n≈20,000. This observation deserves further attention as most published haplotype frequency estimations are based on much smaller samples. PMID:24069237
Crock, J.G.; Severson, R.C.; Gough, L.P.
1992-01-01
Recent investigations on the Kenai Peninsula had two major objectives: (1) to establish elemental baseline concentrations ranges for native vegetation and soils; and, (2) to determine the sampling density required for preparing stable regional geochemical maps for various elements in native plants and soils. These objectives were accomplished using an unbalanced, nested analysis-of-variance (ANOVA) barbell sampling design. Hylocomium splendens (Hedw.) BSG (feather moss, whole plant), Picea glauca (Moench) Voss (white spruce, twigs and needles), and soil horizons (02 and C) were collected and analyzed for major and trace total element concentrations. Using geometric means and geometric deviations, expected baseline ranges for elements were calculated. Results of the ANOVA show that intensive soil or plant sampling is needed to reliably map the geochemistry of the area, due to large local variability. For example, producing reliable element maps of feather moss using a 50 km cell (at 95% probability) would require sampling densities of from 4 samples per cell for Al, Co, Fe, La, Li, and V, to more than 15 samples per cell for Cu, Pb, Se, and Zn.Recent investigations on the Kenai Peninsula had two major objectives: (1) to establish elemental baseline concentrations ranges for native vegetation and soils; and, (2) to determine the sampling density required for preparing stable regional geochemical maps for various elements in native plants and soils. These objectives were accomplished using an unbalanced, nested analysis-of-variance (ANOVA) barbell sampling design. Hylocomium splendens (Hedw.) BSG (feather moss, whole plant), Picea glauca (Moench) Voss (white spruce, twigs and needles), and soil horizons (02 and C) were collected and analyzed for major and trace total element concentrations. Using geometric means and geometric deviations, expected baseline ranges for elements were calculated. Results of the ANOVA show that intensive soil or plant sampling is needed to reliably map the geochemistry of the area, due to large local variability. For example, producing reliable element maps of feather moss using a 50 km cell (at 95% probability) would require sampling densities of from 4 samples per cell Al, Co, Fe, La, Li, and V, to more than 15 samples per cell for Cu, Pb, Se, and Zn.
Robust regression for large-scale neuroimaging studies.
Fritsch, Virgile; Da Mota, Benoit; Loth, Eva; Varoquaux, Gaël; Banaschewski, Tobias; Barker, Gareth J; Bokde, Arun L W; Brühl, Rüdiger; Butzek, Brigitte; Conrod, Patricia; Flor, Herta; Garavan, Hugh; Lemaitre, Hervé; Mann, Karl; Nees, Frauke; Paus, Tomas; Schad, Daniel J; Schümann, Gunter; Frouin, Vincent; Poline, Jean-Baptiste; Thirion, Bertrand
2015-05-01
Multi-subject datasets used in neuroimaging group studies have a complex structure, as they exhibit non-stationary statistical properties across regions and display various artifacts. While studies with small sample sizes can rarely be shown to deviate from standard hypotheses (such as the normality of the residuals) due to the poor sensitivity of normality tests with low degrees of freedom, large-scale studies (e.g. >100 subjects) exhibit more obvious deviations from these hypotheses and call for more refined models for statistical inference. Here, we demonstrate the benefits of robust regression as a tool for analyzing large neuroimaging cohorts. First, we use an analytic test based on robust parameter estimates; based on simulations, this procedure is shown to provide an accurate statistical control without resorting to permutations. Second, we show that robust regression yields more detections than standard algorithms using as an example an imaging genetics study with 392 subjects. Third, we show that robust regression can avoid false positives in a large-scale analysis of brain-behavior relationships with over 1500 subjects. Finally we embed robust regression in the Randomized Parcellation Based Inference (RPBI) method and demonstrate that this combination further improves the sensitivity of tests carried out across the whole brain. Altogether, our results show that robust procedures provide important advantages in large-scale neuroimaging group studies. Copyright © 2015 Elsevier Inc. All rights reserved.
Escott-Price, Valentina; Ghodsi, Mansoureh; Schmidt, Karl Michael
2014-04-01
We evaluate the effect of genotyping errors on the type-I error of a general association test based on genotypes, showing that, in the presence of errors in the case and control samples, the test statistic asymptotically follows a scaled non-central $\\chi ^2$ distribution. We give explicit formulae for the scaling factor and non-centrality parameter for the symmetric allele-based genotyping error model and for additive and recessive disease models. They show how genotyping errors can lead to a significantly higher false-positive rate, growing with sample size, compared with the nominal significance levels. The strength of this effect depends very strongly on the population distribution of the genotype, with a pronounced effect in the case of rare alleles, and a great robustness against error in the case of large minor allele frequency. We also show how these results can be used to correct $p$-values.
Using DNA to track the origin of the largest ivory seizure since the 1989 trade ban.
Wasser, Samuel K; Mailand, Celia; Booth, Rebecca; Mutayoba, Benezeth; Kisamo, Emily; Clark, Bill; Stephens, Matthew
2007-03-06
The illegal ivory trade recently intensified to the highest levels ever reported. Policing this trafficking has been hampered by the inability to reliably determine geographic origin of contraband ivory. Ivory can be smuggled across multiple international borders and along numerous trade routes, making poaching hotspots and potential trade routes difficult to identify. This fluidity also makes it difficult to refute a country's denial of poaching problems. We extend an innovative DNA assignment method to determine the geographic origin(s) of large elephant ivory seizures. A Voronoi tessellation method is used that utilizes genetic similarities across tusks to simultaneously infer the origin of multiple samples that could have one or more common origin(s). We show that this joint analysis performs better than sample-by-sample methods in assigning sample clusters of known origin. The joint method is then used to infer the geographic origin of the largest ivory seizure since the 1989 ivory trade ban. Wildlife authorities initially suspected that this ivory came from multiple locations across forest and savanna Africa. However, we show that the ivory was entirely from savanna elephants, most probably originating from a narrow east-to-west band of southern Africa, centered on Zambia. These findings enabled law enforcement to focus their investigation to a smaller area and fewer trade routes and led to changes within the Zambian government to improve antipoaching efforts. Such outcomes demonstrate the potential of genetic analyses to help combat the expanding wildlife trade by identifying origin(s) of large seizures of contraband ivory. Broader applications to wildlife trade are discussed.
Using DNA to track the origin of the largest ivory seizure since the 1989 trade ban
Wasser, Samuel K.; Mailand, Celia; Booth, Rebecca; Mutayoba, Benezeth; Kisamo, Emily; Clark, Bill; Stephens, Matthew
2007-01-01
The illegal ivory trade recently intensified to the highest levels ever reported. Policing this trafficking has been hampered by the inability to reliably determine geographic origin of contraband ivory. Ivory can be smuggled across multiple international borders and along numerous trade routes, making poaching hotspots and potential trade routes difficult to identify. This fluidity also makes it difficult to refute a country's denial of poaching problems. We extend an innovative DNA assignment method to determine the geographic origin(s) of large elephant ivory seizures. A Voronoi tessellation method is used that utilizes genetic similarities across tusks to simultaneously infer the origin of multiple samples that could have one or more common origin(s). We show that this joint analysis performs better than sample-by-sample methods in assigning sample clusters of known origin. The joint method is then used to infer the geographic origin of the largest ivory seizure since the 1989 ivory trade ban. Wildlife authorities initially suspected that this ivory came from multiple locations across forest and savanna Africa. However, we show that the ivory was entirely from savanna elephants, most probably originating from a narrow east-to-west band of southern Africa, centered on Zambia. These findings enabled law enforcement to focus their investigation to a smaller area and fewer trade routes and led to changes within the Zambian government to improve antipoaching efforts. Such outcomes demonstrate the potential of genetic analyses to help combat the expanding wildlife trade by identifying origin(s) of large seizures of contraband ivory. Broader applications to wildlife trade are discussed. PMID:17360505
Hydrometallurgical Recovery of Metals from Large Printed Circuit Board Pieces.
Jadhav, U; Hocheng, H
2015-09-29
The recovery of precious metals from waste printed circuit boards (PCBs) is an effective recycling process. This paper presents a promising hydrometallurgical process to recover precious metals from waste PCBs. To simplify the metal leaching process, large pieces of PCBs were used instead of a pulverized sample. The chemical coating present on the PCBs was removed by sodium hydroxide (NaOH) treatment prior to the hydrometallurgical treatment. Among the leaching reagents examined, hydrochloric acid (HCl) showed great potential for the recovery of metals. The HCl-mediated leaching of waste PCBs was investigated over a range of conditions. Increasing the acid concentration decreased the time required for complete metal recovery. The shaking speed showed a pronounced positive effect on metal recovery, but the temperature showed an insignificant effect. The results showed that 1 M HCl recovered all of the metals from 4 cm × 4 cm PCBs at room temperature and 150 rpm shaking speed in 22 h.
Hydrometallurgical Recovery of Metals from Large Printed Circuit Board Pieces
Jadhav, U.; Hocheng, H.
2015-01-01
The recovery of precious metals from waste printed circuit boards (PCBs) is an effective recycling process. This paper presents a promising hydrometallurgical process to recover precious metals from waste PCBs. To simplify the metal leaching process, large pieces of PCBs were used instead of a pulverized sample. The chemical coating present on the PCBs was removed by sodium hydroxide (NaOH) treatment prior to the hydrometallurgical treatment. Among the leaching reagents examined, hydrochloric acid (HCl) showed great potential for the recovery of metals. The HCl-mediated leaching of waste PCBs was investigated over a range of conditions. Increasing the acid concentration decreased the time required for complete metal recovery. The shaking speed showed a pronounced positive effect on metal recovery, but the temperature showed an insignificant effect. The results showed that 1 M HCl recovered all of the metals from 4 cm × 4 cm PCBs at room temperature and 150 rpm shaking speed in 22 h. PMID:26415827
The Canadian Geo-location Endeavour Using Isotopes and Trace Elements in Hair
NASA Astrophysics Data System (ADS)
Chartrand, Michelle M. G.; St-Jean, Gilles; Dalpe, Claude; Wojtyk, James
2010-05-01
The Canadian human hair provenance project has two main objectives: 1) to build a Canadian database of isotopes and trace elements from tap water and hair samples, and 2) to assess the extent of temporal effects on these samples. To address objective 1, a cross-Canada sampling campaign has been started to collect hair and tap water samples. In the past two years, our group has collected samples from the eastern part of Canada (Newfoundland, Nova Scotia, New Brunswick, Prince Edward Island, Quebec and Ontario). Water samples are divided into three groups - groundwater, surface water and bottled water. The GIS maps show the isotopic distribution of the tap water sources varies with latitude. Hair is analyzed for carbon (C), nitrogen (N) and hydrogen (H) isotopes. The C and N results show that in general, Canadians eat a typical diet showing a small isotopic variation. However, some cases will be presented which may explain why some people have C and N values outlying the collected sample average. In terms of H isotopes in human hair, GIS maps illustrate the distribution of this isotope in the eastern provinces of Canada. In some cases, a large variation in H was observed for the same locality with no significant difference in human activities and/or consumption. However, based on hair collected from across Canada from previous years, H isotopes in hair show a correlation to water collected from the same locality. To address objective 2, hair and tap water samples were collected at 4 month intervals (to represent different seasons in Canada) from several volunteers residing in two cities located in the province of Ontario (i.e. Sudbury and Ottawa) and one city from the province of Quebec (i.e. Montreal). For all isotopes measured, there was little variation observed over the course of the year in any individual from those small to medium-size cities. On-going sampling efforts will address if any variation may occur on a yearly basis.
Active Search on Carcasses versus Pitfall Traps: a Comparison of Sampling Methods.
Zanetti, N I; Camina, R; Visciarelli, E C; Centeno, N D
2016-04-01
The study of insect succession in cadavers and the classification of arthropods have mostly been done by placing a carcass in a cage, protected from vertebrate scavengers, which is then visited periodically. An alternative is to use specific traps. Few studies on carrion ecology and forensic entomology involving the carcasses of large vertebrates have employed pitfall traps. The aims of this study were to compare both sampling methods (active search on a carcass and pitfall trapping) for each coleopteran family, and to establish whether there is a discrepancy (underestimation and/or overestimation) in the presence of each family by either method. A great discrepancy was found for almost all families with some of them being more abundant in samples obtained through active search on carcasses and others in samples from traps, whereas two families did not show any bias towards a given sampling method. The fact that families may be underestimated or overestimated by the type of sampling technique highlights the importance of combining both methods, active search on carcasses and pitfall traps, in order to obtain more complete information on decomposition, carrion habitat and cadaveric families or species. Furthermore, a hypothesis advanced on the reasons for the underestimation by either sampling method showing biases towards certain families. Information about the sampling techniques indicating which would be more appropriate to detect or find a particular family is provided.
Approximate kernel competitive learning.
Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang
2015-03-01
Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large scale data processing, because (1) it has to calculate and store the full kernel matrix that is too large to be calculated and kept in the memory and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large scale dataset. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL), which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation modelling would work for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates the approximate kernel competitive learning for large scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL can perform comparably as KCL, with a large reduction on computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision against related approximate clustering approaches. Copyright © 2014 Elsevier Ltd. All rights reserved.
Galaxies driven only by secular evolution?
NASA Astrophysics Data System (ADS)
Verdes-Montenegro, Lourdes
2015-03-01
The AMIGA project (Analysis of the interstellar Medium of Isolated GAlaxies, http://amiga.iaa.es) has identified a significant sample of very isolated (T cc (nearest-neighbor) ~2-3Gyr) galaxies in the local Universe and revealed that they have different properties than galaxies in richer environments. Our analysis of a multiwavelength database includes quantification of degree of isolation, morphologies, as well as FIR and radio line/continuum properties. Properties usually regarded as susceptible to interaction enhancement show lower averages in AMIGA-lower than any galaxy sample yet identified. We find lower MIR/FIR measures (Lisenfeld et al. 2007), low levels of radio continuum emission (Leon et al. 2008), no radioexcess above the radioFIR correlation (0%, Sabater et al.2008), a small number of AGN (22%, Sabater et al. 2012), and lower molecular gas content (Lisenfeld et al. 2011). The late-type spiral majority in our sample show very small bulge/total ratios (largely <0.1) and Sersic indices consistent with an absence of classical bulges (Durbala et al. 2008). They show redder g-r colors and lower color dispersion for AMIGA subtypes (Fernandez-Lorenzo et al. 2012) and show the narrowest (gaussian) distribution of HI profile asymmetries of any sample yet studied. This work has been supported by Grant AYA2011-30491-C02-01 co-financed by MICINN and FEDER funds, and the Junta de Andalucia (Spain) grants P08-FQM-4205 and TIC-114.
NASA Astrophysics Data System (ADS)
Noble, P. J.; Van de Vijver, B.; Verleyen, E.; Prygiel, J.; Ivanovsky, A.; Lesven, L.; Billon, G.
2016-12-01
Diatom analysis was conducted on lake sediments in la Chaîne des Lacs (CDL), a shallow eutrophic urban park and storm control system in Villeneuve d'Ascq, France, to address both the present day water quality, and the evolution of this urban system over its 40 year history. The main lake, Lac du Héron (LDH), received recent attention because of water quality problems, including eutrophication, harmful algal blooms, and invasion by the macrophyte Elodea in 2012. A total of 17 sites were collected in CDL, 11 of which were in LDH, to document spatial variability, and a 26cm long core addresses historical changes. The bulk of the diatom assemblage in LDH can be classified as both eutrophic and moderately metal tolerant, using modern national diatom indices developed and used by the French regional water agencies. Surface sediment samples within LDH show large spatial variations in %Cocconeis placentula whose habitat is epiphytic growth on Elodea. Other variation is reflected in the phytoplankton composition both spatially, and interannually. Aulacoseira muzzanensis and Cyclostephanos dubius showed greater abundance in the open water habitats in LDH, whereas sites in CDL outside of LDH had greater Cyclotella meneghiniana. Temporally, Stephanodicsus (largely S. hantzschii), the dominant diatom in early spring, were present in greater abundances in the 2016 surface sediment samples than in any of the 2015 samples. One possible explanation is that the 2016 samples, taken March 30th, preferentially preserved the early spring Stephanodiscus bloom, in contrast to the 2015 samples, which were taken in January. The sediment core provides an historical record, where the uppermost 4cm plot with the bulk of the LDH surface samples and contain abundant Cocconeis, 4 -14cm is phytoplankton-rich, largely Cyclostephanos dubius and Aulacoseira muzzanensis, and represents a less weed-choked environment prior to the 2012 Elodea invasion. The base of the core is dominated by Amphora and Rhoicosphenia abbreviata, is most similar to an outlier site on the Marque River, and represents early conditions after the reservoir was established. Continued work will focus on relating these results to urban development, improvements of the sewage system, and meteorological patterns.
Superradiance in a Large and Dilute Cloud of Cold Atoms in the Linear-Optics Regime.
Araújo, Michelle O; Krešić, Ivor; Kaiser, Robin; Guerin, William
2016-08-12
Superradiance has been extensively studied in the 1970s and 1980s in the regime of superfluorescence, where a large number of atoms are initially excited. Cooperative scattering in the linear-optics regime, or "single-photon superradiance," has been investigated much more recently, and superradiant decay has also been predicted, even for a spherical sample of large extent and low density, where the distance between atoms is much larger than the wavelength. Here, we demonstrate this effect experimentally by directly measuring the decay rate of the off-axis fluorescence of a large and dilute cloud of cold rubidium atoms after the sudden switch off of a low-intensity laser driving the atomic transition. We show that, at large detuning, the decay rate increases with the on-resonance optical depth. In contrast to forward scattering, the superradiant decay of off-axis fluorescence is suppressed near resonance due to attenuation and multiple-scattering effects.
Yiin, Lih-Ming; Millette, James R; Vette, Alan; Ilacqua, Vito; Quan, Chunli; Gorczynski, John; Kendall, Michaela; Chen, Lung Chi; Weisel, Clifford P; Buckley, Brian; Yang, Ill; Lioy, Paul J
2004-05-01
The collapse of the World Trade Center (WTC) on September 11, 2001, generated large amounts of dust and smoke that settled in the surrounding indoor and outdoor environments in southern Manhattan. Sixteen dust samples were collected from undisturbed locations inside two uncleaned buildings that were adjacent to Ground Zero. These samples were analyzed for morphology, metals, and organic compounds, and the results were compared with the previously reported outdoor WTC dust/smoke results. We also analyzed seven additional dust samples provided by residents in the local neighborhoods. The morphologic analyses showed that the indoor WTC dust/smoke samples were similar to the outdoor WTC dust/smoke samples in composition and characteristics but with more than 50% mass in the <53-microm size fraction. This was in contrast to the outdoor samples that contained >50% of mass above >53 microm. Elemental analyses also showed the similarities, but at lower concentrations. Organic compounds present in the outdoor samples were also detected in the indoor samples. Conversely, the resident-provided convenience dust samples were different from either the WTC indoor or outdoor samples in composition and pH, indicating that they were not WTC-affected locations. In summary, the indoor dust/smoke was similar in concentration to the outdoor dust/smoke but had a greater percentage of mass <53 microm in diameter.
Liu, Zhenjiang; Zhang, Zhen; Zhu, Gangbing; Sun, Jianfan; Zou, Bin; Li, Ming; Wang, Jiagao
2016-05-01
A fast and sensitive polyclonal antibody-based enzyme-linked immunosorbent assay (ELISA) was developed for the analysis of flonicamid in environmental and agricultural samples. Two haptens of flonicamid differing in spacer arm length were synthesized and conjugated to proteins to be used as immunogens for the production of polyclonal antibodies. To obtain most sensitive combination of antibody/coating antigen, two antibodies were separately screened by homologous and heterologous assays. After optimization, the flonicamid ELISA showed that the 50% inhibitory concentration (IC50 value) was 3.86mgL(-1), and the limit of detection (IC20 value) was 0.032mgL(-1). There was no cross-reactivity to similar tested compounds. The recoveries obtained after the addition of standard flonicamid to the samples, including water, soil, carrot, apple and tomato, ranged from 79.3% to 116.4%. Moreover, the results of the ELISA for the spiked samples were largely consistent with the gas chromatography (R(2)=0.9891). The data showed that the proposed ELISA is an alternative tool for rapid, sensitive and accurate monitoring of flonicamid in environmental and agricultural samples. Copyright © 2016 Elsevier B.V. All rights reserved.
Andersen, Keld Ejdrup; Bjergegaard, Charlotte; Møller, Peter; Sørensen, Jens Christian; Sørensen, Hilmer
2005-07-13
The contents of raffinose family oligosaccharides (RFO) and sucrose in Brassica, Lupinus, Pisum, and Hordeum species were investigated by chemometric principal component analysis (PCA). Hordeum samples contained sucrose and raffinose, and Brassica samples all contained sucrose, raffinose, and stachyose. In addition to these, the Pisum samples contained verbascose and the Lupinus samples also contained ajugose. High stachyose and low ajugose contents were found in Lupinus albus in contrast to Lupinus angustifolius, having low stachyose and high ajugose contents. Lupinus luteus had average stachyose and ajugose contents, whereas large amounts of verbascose were accumulated in these seeds. Lupinus mutabilis had high stachyose and low ajugose contents, similar to the composition in L. albus but showing higher raffinose content. The Brassica samples also showed compositional RFO variations within the species, and subgroup formations were discovered within the investigated Brassica napus varieties. PCA results indicated compositional variations between the investigated genera and within the various species of value as chemotaxonomic defined parameters and as tools in evaluations of authenticity/falsifications when RFO-containing plants are used as, for example, feed and food additives.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ham, Kathryn J.; Vohra, Yogesh K.; Kono, Yoshio
Multi-angle energy-dispersive X-ray diffraction studies and white-beam X-ray radiography were conducted with a cylindrically shaped (1 mm diameter and 0.7 mm high) high-boron-content borosilicate glass sample (17.6% B 2O 3) to a pressure of 13.7 GPa using a Paris-Edinburgh (PE) press at Beamline 16-BM-B, HPCAT of the Advanced Photon Source. The measured structure factor S(q) to large q = 19 Å –1 is used to determine information about the internuclear bond distances between various species of atoms within the glass sample. Sample pressure was determined with gold as a pressure standard. The sample height as measured by radiography showed anmore » overall uniaxial compression of 22.5% at 13.7 GPa with 10.6% permanent compaction after decompression to ambient conditions. The reduced pair distribution function G(r) was extracted and Si–O, O–O and Si–Si bond distances were measured as a function of pressure. Lastly, Raman spectroscopy of the pressure recovered sample as compared to starting material showed blue-shift and changes in intensity and widths of Raman bands associated with silicate and four-coordinated boron.« less
Ham, Kathryn J.; Vohra, Yogesh K.; Kono, Yoshio; ...
2017-02-06
Multi-angle energy-dispersive X-ray diffraction studies and white-beam X-ray radiography were conducted with a cylindrically shaped (1 mm diameter and 0.7 mm high) high-boron-content borosilicate glass sample (17.6% B 2O 3) to a pressure of 13.7 GPa using a Paris-Edinburgh (PE) press at Beamline 16-BM-B, HPCAT of the Advanced Photon Source. The measured structure factor S(q) to large q = 19 Å –1 is used to determine information about the internuclear bond distances between various species of atoms within the glass sample. Sample pressure was determined with gold as a pressure standard. The sample height as measured by radiography showed anmore » overall uniaxial compression of 22.5% at 13.7 GPa with 10.6% permanent compaction after decompression to ambient conditions. The reduced pair distribution function G(r) was extracted and Si–O, O–O and Si–Si bond distances were measured as a function of pressure. Lastly, Raman spectroscopy of the pressure recovered sample as compared to starting material showed blue-shift and changes in intensity and widths of Raman bands associated with silicate and four-coordinated boron.« less
Composition profiling of seized ecstasy tablets by Raman spectroscopy.
Bell, S E; Burns, D T; Dennis, A C; Matchett, L J; Speers, J S
2000-10-01
Raman spectroscopy with far-red excitation has been investigated as a simple and rapid technique for composition profiling of seized ecstasy (MDMA, N-methyl-3,4-methylenedioxyamphetamine) tablets. The spectra obtained are rich in vibrational bands and allow the active drug and excipient used to bulk the tablets to be identified. Relative band heights can be used to determine drug/excipient ratios and the degree of hydration of the drug while the fact that 50 tablets per hour can be analysed allows large numbers of spectra to be recorded. The ability of Raman spectroscopy to distinguish between ecstasy tablets on the basis of their chemical composition is illustrated here by a sample set of 400 tablets taken from a large seizure of > 50,000 tablets that were found in eight large bags. The tablets are all similar in appearance and carry the same logo. Conventional analysis by GC-MS showed they contained MDMA. Initial Raman studies of samples from each of the eight bags showed that despite some tablet-to-tablet variation within each bag the contents could be classified on the basis of the excipients used. The tablets in five of the bags were sorbitol-based, two were cellulose-based and one bag contained tablets with a glucose excipient. More extensive analysis of 50 tablets from each of a representative series of sample bags have distribution profiles that showed the contents of each bag were approximately normally distributed about a mean value, rather than being mixtures of several discrete types. Two of the sorbitol-containing sample sets were indistinguishable while a third was similar but not identical to these, in that it contained the same excipient and MDMA with the same degree of hydration but had a slightly different MDMA/sorbitol ratio. The cellulose-based samples were badly manufactured and showed considerable tablet-to-tablet variation in their drug/excipient ratio while the glucose-based tablets had a tight distribution in their drug/excipient ratios. The degree of hydration in the MDMA feedstocks used to manufacture the cellulose-, glucose- and sorbitol-based tablets were all different from each other. This study, because it centres on a single seizure of physically similar tablets with the same active drug, highlights the fact that simple physical descriptions coupled with active drug content do not in themselves fully characterize the nature of the seized materials. There is considerable variation in the composition of the tablets within this single seizure and the fact that this variation can be detected from Raman spectra demonstrates that the potential benefits of obtaining highly detailed spectra can indeed translate into information that is not readily available from other methods but would be useful for tracing of drug distribution networks.
Structural, magnetic and electrical properties of a new double-perovskite LaNaMnMoO6 material.
Borchani, Sameh Megdiche; Koubaa, Wissem Cheikh-Rouhou; Megdiche, Makrem
2017-11-01
Structural, magnetic, magnetocaloric, electrical and magnetoresistance properties of an LaNaMnMoO 6 powder sample have been investigated by X-ray diffraction (XRD), magnetic and electrical measurements. Our sample has been synthesized using the ceramic method. Rietveld refinements of the XRD patterns show that our sample is single phase and it crystallizes in the orthorhombic structure with Pnma space group. Magnetization versus temperature in a magnetic applied field of 0.05 T shows that our sample exhibits a paramagnetic-ferromagnetic transition with decreasing temperature. The Curie temperature T C is found to be 320 K. Arrott plots show that all our double-perovskite oxides exhibit a second-order magnetic phase transition. From the measured magnetization data of an LaNaMnMoO 6 sample as a function of the magnetic applied field, the associated magnetic entropy change |-ΔSM| and the relative cooling power (RCP) have been determined. In the vicinity of T C , |-ΔSM| reached, in a magnetic applied field of 8 T, a maximum value of ∼4 J kg -1 K -1 . Our sample undergoes a large magnetocaloric effect at near-room temperature. Resistivity measurements reveal the presence of an insulating-metal transition at Tρ = 180 K. A magnetoresistance of 30% has been observed at room temperature for 6 T, significantly larger than that reported for the A 2 FeMoO 6 (A = Sr, Ba) double-perovskite system.
Myster, Randall W; Malahy, Michael P
2012-09-01
Spatial patterns of tropical trees and shrubs are important to understanding their interaction and the resultant structure of tropical rainforests. To assess this issue, we took advantage of previously collected data, on Neotropical tree and shrub stem identified to species and mapped for spatial coordinates in a 50ha plot, with a frequency of every five years and over a 20 year period. These stems data were first placed into four groups, regardless of species, depending on their location in the vertical strata of the rainforest (shrubs, understory trees, mid-sized trees, tall trees) and then used to generate aggregation patterns for each sampling year. We found shrubs and understory trees clumped at small spatial scales of a few meters for several of the years sampled. Alternatively, mid-sized trees and tall trees did not clump, nor did they show uniform (regular) patterns, during any sampling period. In general (1) groups found higher in the canopy did not show aggregation on the ground and (2) the spatial patterns of all four groups showed similarity among different sampling years, thereby supporting a "shifting mosaic" view of plant communities over large areas. Spatial analysis, such as this one, are critical to understanding and predicting tree spaces, tree-tree replacements and the Neotropical forest patterns, such as biodiversity and those needed for sustainability efforts, they produce.
Improving small-angle X-ray scattering data for structural analyses of the RNA world
Rambo, Robert P.; Tainer, John A.
2010-01-01
Defining the shape, conformation, or assembly state of an RNA in solution often requires multiple investigative tools ranging from nucleotide analog interference mapping to X-ray crystallography. A key addition to this toolbox is small-angle X-ray scattering (SAXS). SAXS provides direct structural information regarding the size, shape, and flexibility of the particle in solution and has proven powerful for analyses of RNA structures with minimal requirements for sample concentration and volumes. In principle, SAXS can provide reliable data on small and large RNA molecules. In practice, SAXS investigations of RNA samples can show inconsistencies that suggest limitations in the SAXS experimental analyses or problems with the samples. Here, we show through investigations on the SAM-I riboswitch, the Group I intron P4-P6 domain, 30S ribosomal subunit from Sulfolobus solfataricus (30S), brome mosaic virus tRNA-like structure (BMV TLS), Thermotoga maritima asd lysine riboswitch, the recombinant tRNAval, and yeast tRNAphe that many problems with SAXS experiments on RNA samples derive from heterogeneity of the folded RNA. Furthermore, we propose and test a general approach to reducing these sample limitations for accurate SAXS analyses of RNA. Together our method and results show that SAXS with synchrotron radiation has great potential to provide accurate RNA shapes, conformations, and assembly states in solution that inform RNA biological functions in fundamental ways. PMID:20106957
Leonard, Russell L.; Gray, Sharon K.; Alvarez, Carlos J.; ...
2015-05-21
In this paper, a fluorochlorozirconate (FCZ) glass-ceramic containing orthorhombic barium chloride crystals doped with divalent europium was evaluated for use as a storage phosphor in gamma-ray imaging. X-ray diffraction and phosphorimetry of the glass-ceramic sample showed the presence of a significant amount of orthorhombic barium chloride crystals in the glass matrix. Transmission electron microscopy and scanning electron microscopy were used to identify crystal size, structure, and morphology. The size of the orthorhombic barium chloride crystals in the FCZ glass matrix was very large, ~0.5–0.7 μm, which can limit image resolution. The FCZ glass-ceramic sample was exposed to 1 MeV gammamore » rays to determine its photostimulated emission characteristics at high energies, which were found to be suitable for imaging applications. Test images were made at 2 MeV energies using gap and step wedge phantoms. Gaps as small as 101.6 μm in a 440 stainless steel phantom were imaged using the sample imaging plate. Analysis of an image created using a depleted uranium step wedge phantom showed that emission is proportional to incident energy at the sample and the estimated absorbed dose. Finally, the results showed that the sample imaging plate has potential for gamma-ray-computed radiography and dosimetry applications.« less
Quality of life in small-scaled homelike nursing homes: an 8-month controlled trial.
Kok, Jeroen S; Nielen, Marjan M A; Scherder, Erik J A
2018-02-27
Quality of life is a clinical highly relevant outcome for residents with dementia. The question arises whether small scaled homelike facilities are associated with better quality of life than regular larger scale nursing homes do. A sample of 145 residents living in a large scale care facility were followed over 8 months. Half of the sample (N = 77) subsequently moved to a small scaled facility. Quality of life aspects were measured with the QUALIDEM and GIP before and after relocation. We found a significant Group x Time interaction on measures of anxiety meaning that residents who moved to small scale units became less anxious than residents who stayed on the regular care large-scale units. No significant differences were found on other aspects of quality of life. This study demonstrates that residents who move from a large scale facility to a small scale environment can improve an aspect of quality of life by showing a reduction in anxiety. Current Controlled Trials ISRCTN11151241 . registration date: 21-06-2017. Retrospectively registered.
Melchardt, Thomas; Hufnagl, Clemens; Weinstock, David M; Kopp, Nadja; Neureiter, Daniel; Tränkenschuh, Wolfgang; Hackl, Hubert; Weiss, Lukas; Rinnerthaler, Gabriel; Hartmann, Tanja N; Greil, Richard; Weigert, Oliver; Egle, Alexander
2016-08-09
Little information is available about the role of certain mutations for clonal evolution and the clinical outcome during relapse in diffuse large B-cell lymphoma (DLBCL). Therefore, we analyzed formalin-fixed-paraffin-embedded tumor samples from first diagnosis, relapsed or refractory disease from 28 patients using next-generation sequencing of the exons of 104 coding genes. Non-synonymous mutations were present in 74 of the 104 genes tested. Primary tumor samples showed a median of 8 non-synonymous mutations (range: 0-24) with the used gene set. Lower numbers of non-synonymous mutations in the primary tumor were associated with a better median OS compared with higher numbers (28 versus 15 months, p=0.031). We observed three patterns of clonal evolution during relapse of disease: large global change, subclonal selection and no or minimal change possibly suggesting preprogrammed resistance. We conclude that targeted re-sequencing is a feasible and informative approach to characterize the molecular pattern of relapse and it creates novel insights into the role of dynamics of individual genes.
Crowdsourcing for Cognitive Science – The Utility of Smartphones
Brown, Harriet R.; Zeidman, Peter; Smittenaar, Peter; Adams, Rick A.; McNab, Fiona; Rutledge, Robb B.; Dolan, Raymond J.
2014-01-01
By 2015, there will be an estimated two billion smartphone users worldwide. This technology presents exciting opportunities for cognitive science as a medium for rapid, large-scale experimentation and data collection. At present, cost and logistics limit most study populations to small samples, restricting the experimental questions that can be addressed. In this study we investigated whether the mass collection of experimental data using smartphone technology is valid, given the variability of data collection outside of a laboratory setting. We presented four classic experimental paradigms as short games, available as a free app and over the first month 20,800 users submitted data. We found that the large sample size vastly outweighed the noise inherent in collecting data outside a controlled laboratory setting, and show that for all four games canonical results were reproduced. For the first time, we provide experimental validation for the use of smartphones for data collection in cognitive science, which can lead to the collection of richer data sets and a significant cost reduction as well as provide an opportunity for efficient phenotypic screening of large populations. PMID:25025865
Wang, Song; Zhou, Ming; Chen, Taolin; Yang, Xun; Chen, Guangxiang; Wang, Meiyun; Gong, Qiyong
2017-04-18
Achievement in school is crucial for students to be able to pursue successful careers and lead happy lives in the future. Although many psychological attributes have been found to be associated with academic performance, the neural substrates of academic performance remain largely unknown. Here, we investigated the relationship between brain structure and academic performance in a large sample of high school students via structural magnetic resonance imaging (S-MRI) using voxel-based morphometry (VBM) approach. The whole-brain regression analyses showed that higher academic performance was related to greater regional gray matter density (rGMD) of the left dorsolateral prefrontal cortex (DLPFC), which is considered a neural center at the intersection of cognitive and non-cognitive functions. Furthermore, mediation analyses suggested that general intelligence partially mediated the impact of the left DLPFC density on academic performance. These results persisted even after adjusting for the effect of family socioeconomic status (SES). In short, our findings reveal a potential neuroanatomical marker for academic performance and highlight the role of general intelligence in explaining the relationship between brain structure and academic performance.
Crowdsourcing for cognitive science--the utility of smartphones.
Brown, Harriet R; Zeidman, Peter; Smittenaar, Peter; Adams, Rick A; McNab, Fiona; Rutledge, Robb B; Dolan, Raymond J
2014-01-01
By 2015, there will be an estimated two billion smartphone users worldwide. This technology presents exciting opportunities for cognitive science as a medium for rapid, large-scale experimentation and data collection. At present, cost and logistics limit most study populations to small samples, restricting the experimental questions that can be addressed. In this study we investigated whether the mass collection of experimental data using smartphone technology is valid, given the variability of data collection outside of a laboratory setting. We presented four classic experimental paradigms as short games, available as a free app and over the first month 20,800 users submitted data. We found that the large sample size vastly outweighed the noise inherent in collecting data outside a controlled laboratory setting, and show that for all four games canonical results were reproduced. For the first time, we provide experimental validation for the use of smartphones for data collection in cognitive science, which can lead to the collection of richer data sets and a significant cost reduction as well as provide an opportunity for efficient phenotypic screening of large populations.
Tyrannosaur paleobiology: new research on ancient exemplar organisms.
Brusatte, Stephen L; Norell, Mark A; Carr, Thomas D; Erickson, Gregory M; Hutchinson, John R; Balanoff, Amy M; Bever, Gabe S; Choiniere, Jonah N; Makovicky, Peter J; Xu, Xing
2010-09-17
Tyrannosaurs, the group of dinosaurian carnivores that includes Tyrannosaurus rex and its closest relatives, are icons of prehistory. They are also the most intensively studied extinct dinosaurs, and thanks to large sample sizes and an influx of new discoveries, have become ancient exemplar organisms used to study many themes in vertebrate paleontology. A phylogeny that includes recently described species shows that tyrannosaurs originated by the Middle Jurassic but remained mostly small and ecologically marginal until the latest Cretaceous. Anatomical, biomechanical, and histological studies of T. rex and other derived tyrannosaurs show that large tyrannosaurs could not run rapidly, were capable of crushing bite forces, had accelerated growth rates and keen senses, and underwent pronounced changes during ontogeny. The biology and evolutionary history of tyrannosaurs provide a foundation for comparison with other dinosaurs and living organisms.
Gyarmathy, V Anna; Johnston, Lisa G; Caplinskiene, Irma; Caplinskas, Saulius; Latkin, Carl A
2014-02-01
Respondent driven sampling (RDS) and incentivized snowball sampling (ISS) are two sampling methods that are commonly used to reach people who inject drugs (PWID). We generated a set of simulated RDS samples on an actual sociometric ISS sample of PWID in Vilnius, Lithuania ("original sample") to assess if the simulated RDS estimates were statistically significantly different from the original ISS sample prevalences for HIV (9.8%), Hepatitis A (43.6%), Hepatitis B (Anti-HBc 43.9% and HBsAg 3.4%), Hepatitis C (87.5%), syphilis (6.8%) and Chlamydia (8.8%) infections and for selected behavioral risk characteristics. The original sample consisted of a large component of 249 people (83% of the sample) and 13 smaller components with 1-12 individuals. Generally, as long as all seeds were recruited from the large component of the original sample, the simulation samples simply recreated the large component. There were no significant differences between the large component and the entire original sample for the characteristics of interest. Altogether 99.2% of 360 simulation sample point estimates were within the confidence interval of the original prevalence values for the characteristics of interest. When population characteristics are reflected in large network components that dominate the population, RDS and ISS may produce samples that have statistically non-different prevalence values, even though some isolated network components may be under-sampled and/or statistically significantly different from the main groups. This so-called "strudel effect" is discussed in the paper. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Determination of the optimal sample size for a clinical trial accounting for the population size.
Stallard, Nigel; Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin
2017-07-01
The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach either explicitly by assuming that the population size is fixed and known, or implicitly through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size for single and two-arm clinical trials in the general case of a clinical trial with a primary endpoint with a distribution of one parameter exponential family form that optimizes a utility function that quantifies the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or expected size, N∗ in the case of geometric discounting, becomes large, the optimal trial size is O(N1/2) or O(N∗1/2). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with responses with Bernoulli and Poisson distributions, showing that the asymptotic approximations can also be reasonable in relatively small sample sizes. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Enhancement of Cloud Cover and Suppression of Nocturnal Drizzle in Stratocumulus Polluted by Haze
NASA Technical Reports Server (NTRS)
Ackerman, Andrew S.; Toon, O. B.; Stevens, D. E.; Coakley, J. A., Jr.; Gore, Warren J. (Technical Monitor)
2002-01-01
Recent satellite observations indicate a significant decrease of cloud water in ship tracks, in contrast to an ensemble of in situ ship-track measurements that show no average change in cloud water relative to the surrounding clouds. We find through large-eddy simulations of stratocumulus that the trend in the satellite data is likely an artifact of sampling only overcast clouds. The simulations instead show cloud cover increasing with droplet concentrations. Our simulations also show that increases in cloud water from drizzle suppression (by increasing droplet concentrations) are favored at night or at extremely low droplet concentrations.
Photometric Determination of Ammonium and Phosphate in Seawater Medium Using a Microplate Reader.
Ruppersberg, Hanna S; Goebel, Maren R; Kleinert, Svea I; Wünsch, Daniel; Trautwein, Kathleen; Rabus, Ralf
2017-01-01
To more efficiently process the large sample numbers for quantitative determination of ammonium (NH4+) and phosphate (orthophosphate, PO43-) generated during comprehensive growth experiments with the marine Roseobacter group member Phaeobacter inhibens DSM 17395, specific colorimetric assays employing a microplate reader (MPR) were established. The NH4+ assay is based on the reaction of NH4+ with hypochlorite and salicylate, yielding a limit of detection of 14 µM, a limit of quantitation of 36 µM, and a linear range for quantitative determination up to 200 µM. The PO43-assay is based on the complex formation of PO43- with ammonium molybdate in the presence of ascorbate and zinc acetate, yielding a limit of detection of 13 µM, a limit of quantitation of 50 µM, and a linear range for quantitative determination up to 1 mM. Both MPR-based assays allowed for fast (significantly lower than 1 h) analysis of 21 samples plus standards for calibration (all measured in triplicates) and showed only low variation across a large collection of biological samples. © 2017 S. Karger AG, Basel.
A DNA methylation map of human cancer at single base-pair resolution
Vidal, E; Sayols, S; Moran, S; Guillaumet-Adkins, A; Schroeder, M P; Royo, R; Orozco, M; Gut, M; Gut, I; Lopez-Bigas, N; Heyn, H; Esteller, M
2017-01-01
Although single base-pair resolution DNA methylation landscapes for embryonic and different somatic cell types provided important insights into epigenetic dynamics and cell-type specificity, such comprehensive profiling is incomplete across human cancer types. This prompted us to perform genome-wide DNA methylation profiling of 22 samples derived from normal tissues and associated neoplasms, including primary tumors and cancer cell lines. Unlike their invariant normal counterparts, cancer samples exhibited highly variable CpG methylation levels in a large proportion of the genome, involving progressive changes during tumor evolution. The whole-genome sequencing results from selected samples were replicated in a large cohort of 1112 primary tumors of various cancer types using genome-scale DNA methylation analysis. Specifically, we determined DNA hypermethylation of promoters and enhancers regulating tumor-suppressor genes, with potential cancer-driving effects. DNA hypermethylation events showed evidence of positive selection, mutual exclusivity and tissue specificity, suggesting their active participation in neoplastic transformation. Our data highlight the extensive changes in DNA methylation that occur in cancer onset, progression and dissemination. PMID:28581523
ALMA observation of high-z extreme star-forming environments discovered by Planck/Herschel
NASA Astrophysics Data System (ADS)
Kneissl, R.
2015-05-01
The Comic Microwave Background satellite Planck with its High Frequency Instrument has surveyed the mm/sub-mm sky in six frequency channels from 100 to 900 GHz. A sample of 228 cold sources of the Cosmic Infrared Background was observed in follow-up with Herschel SPIRE. The majority of sources appear to be over-densities of star-forming galaxies matching the size of high-z proto-cluster regions, while a 3% fraction are individual bright, lensed galaxies. A large observing program is underway with the aim of resolving the regions into the constituent members of the Planck sources. First ALMA data have been received on one Planck/Herschel proto-cluster candidate, showing the expected large over-abundance of bright mm/sub-mm sources within the cluster region. ALMA long baseline data of the brightest lensed galaxy in the sample with > 1 Jy at 350 μm are also forthcoming.
Langbein, J.; Bock, Y.
2004-01-01
A network of 13 continuous GPS stations near Parkfield, California has been converted from 30 second to 1 second sampling with positions of the stations estimated in real-time relative to a master station. Most stations are near the trace of the San Andreas fault, which exhibits creep. The noise spectra of the instantaneous 1 Hz positions show flicker noise at high frequencies and change to frequency independence at low frequencies; the change in character occurs between 6 to 8 hours. Our analysis indicates that 1-second sampled GPS can estimate horizontal displacements of order 6 mm at the 99% confidence level from a few seconds to a few hours. High frequency GPS can augment existing measurements in capturing large creep events and postseismic slip that would exceed the range of existing creepmeters, and can detect large seismic displacements. Copyright 2004 by the American Geophysical Union.
A large-scale study of the random variability of a coding sequence: a study on the CFTR gene.
Modiano, Guido; Bombieri, Cristina; Ciminelli, Bianca Maria; Belpinati, Francesca; Giorgi, Silvia; Georges, Marie des; Scotet, Virginie; Pompei, Fiorenza; Ciccacci, Cinzia; Guittard, Caroline; Audrézet, Marie Pierre; Begnini, Angela; Toepfer, Michael; Macek, Milan; Ferec, Claude; Claustres, Mireille; Pignatti, Pier Franco
2005-02-01
Coding single nucleotide substitutions (cSNSs) have been studied on hundreds of genes using small samples (n(g) approximately 100-150 genes). In the present investigation, a large random European population sample (average n(g) approximately 1500) was studied for a single gene, the CFTR (Cystic Fibrosis Transmembrane conductance Regulator). The nonsynonymous (NS) substitutions exhibited, in accordance with previous reports, a mean probability of being polymorphic (q > 0.005), much lower than that of the synonymous (S) substitutions, but they showed a similar rate of subpolymorphic (q < 0.005) variability. This indicates that, in autosomal genes that may have harmful recessive alleles (nonduplicated genes with important functions), genetic drift overwhelms selection in the subpolymorphic range of variability, making disadvantageous alleles behave as neutral. These results imply that the majority of the subpolymorphic nonsynonymous alleles of these genes are selectively negative or even pathogenic.
Vasconcelos-Raposo, José; Fernandes, Helder Miguel; Teixeira, Carla M
2013-01-01
The purpose of the present study was to assess the factor structure and reliability of the Depression, Anxiety and Stress Scales (DASS-21) in a large Portuguese community sample. Participants were 1020 adults (585 women and 435 men), with a mean age of 36.74 (SD = 11.90) years. All scales revealed good reliability, with Cronbach's alpha values between .80 (anxiety) and .84 (depression). The internal consistency of the total score was .92. Confirmatory factor analysis revealed that the best-fitting model (*CFI = .940, *RMSEA = .038) consisted of a latent component of general psychological distress (or negative affectivity) plus orthogonal depression, anxiety and stress factors. The Portuguese version of the DASS-21 showed good psychometric properties (factorial validity and reliability) and thus can be used as a reliable and valid instrument for measuring depression, anxiety and stress symptoms.
Gains following perceptual learning are closely linked to the initial visual acuity.
Yehezkel, Oren; Sterkin, Anna; Lev, Maria; Levi, Dennis M; Polat, Uri
2016-04-28
The goal of the present study was to evaluate the dependence of perceptual learning gains on initial visual acuity (VA), in a large sample of subjects with a wide range of VAs. A large sample of normally sighted and presbyopic subjects (N = 119; aged 40 to 63) with a wide range of uncorrected near visual acuities (VA, -0.12 to 0.8 LogMAR), underwent perceptual learning. Training consisted of detecting briefly presented Gabor stimuli under spatial and temporal masking conditions. Consistent with previous findings, perceptual learning induced a significant improvement in near VA and reading speed under conditions of limited exposure duration. Our results show that the improvements in VA and reading speed observed following perceptual learning are closely linked to the initial VA, with only a minor fraction of the observed improvement that may be attributed to the additional sessions performed by those with the worse VA.
Haghshenas, Maryam; Akbari, Mohammad Taghi; Karizi, Shohreh Zare; Deilamani, Faravareh Khordadpoor; Nafissi, Shahriar; Salehi, Zivar
2016-06-01
Duchenne and Becker muscular dystrophies (DMD and BMD) are X-linked neuromuscular diseases characterized by progressive muscular weakness and degeneration of skeletal muscles. Approximately two-thirds of the patients have large deletions or duplications in the dystrophin gene and the remaining one-third have point mutations. This study was performed to evaluate point mutations in Iranian DMD/BMD male patients. A total of 29 DNA samples from patients who did not show any large deletion/duplication mutations following multiplex polymerase chain reaction (PCR) and multiplex ligation-dependent probe amplification (MLPA) screening were sequenced for detection of point mutations in exons 50-79. Also exon 44 was sequenced in one sample in which a false positive deletion was detected by MLPA method. Cycle sequencing revealed four nonsense, one frameshift and two splice site mutations as well as two missense variants.
Rearing Environmental Influences on Religiousness: An Investigation of Adolescent Adoptees.
Koenig, Laura B; McGue, Matt; Iacono, William G
2009-10-01
Religiousness is widely considered to be a culturally transmitted trait. However, twin studies suggest that religiousness is genetically influenced in adulthood, although largely environmentally influenced in childhood/adolescence. We examined genetic and environmental influences on a self-report measure of religiousness in a sample consisting of 284 adoptive families (two adopted adolescent siblings and their rearing parents); 208 biological families (two full biological adolescent siblings and their parents); and 124 mixed families (one adopted and one biological adolescent sibling and their parents). A sibling-family model was fit to the data to estimate genetic, shared environmental, and nonshared environmental effects on religiousness, as well as cultural transmission and assortative mating effects. Religiousness showed little evidence of heritability and large environmental effects, which did not vary significantly by gender. This finding is consistent with the results of twin studies of religiousness in adolescent and preadolescent samples.
Rearing Environmental Influences on Religiousness: An Investigation of Adolescent Adoptees
Koenig, Laura B.; McGue, Matt; Iacono, William G.
2009-01-01
Religiousness is widely considered to be a culturally transmitted trait. However, twin studies suggest that religiousness is genetically influenced in adulthood, although largely environmentally influenced in childhood/adolescence. We examined genetic and environmental influences on a self-report measure of religiousness in a sample consisting of 284 adoptive families (two adopted adolescent siblings and their rearing parents); 208 biological families (two full biological adolescent siblings and their parents); and 124 mixed families (one adopted and one biological adolescent sibling and their parents). A sibling-family model was fit to the data to estimate genetic, shared environmental, and nonshared environmental effects on religiousness, as well as cultural transmission and assortative mating effects. Religiousness showed little evidence of heritability and large environmental effects, which did not vary significantly by gender. This finding is consistent with the results of twin studies of religiousness in adolescent and preadolescent samples. PMID:20161346
Detecting a Weak Association by Testing its Multiple Perturbations: a Data Mining Approach
NASA Astrophysics Data System (ADS)
Lo, Min-Tzu; Lee, Wen-Chung
2014-05-01
Many risk factors/interventions in epidemiologic/biomedical studies are of minuscule effects. To detect such weak associations, one needs a study with a very large sample size (the number of subjects, n). The n of a study can be increased but unfortunately only to an extent. Here, we propose a novel method which hinges on increasing sample size in a different direction-the total number of variables (p). We construct a p-based `multiple perturbation test', and conduct power calculations and computer simulations to show that it can achieve a very high power to detect weak associations when p can be made very large. As a demonstration, we apply the method to analyze a genome-wide association study on age-related macular degeneration and identify two novel genetic variants that are significantly associated with the disease. The p-based method may set a stage for a new paradigm of statistical tests.
Capers, Patrice L.; Brown, Andrew W.; Dawson, John A.; Allison, David B.
2015-01-01
Background: Meta-research can involve manual retrieval and evaluation of research, which is resource intensive. Creation of high throughput methods (e.g., search heuristics, crowdsourcing) has improved feasibility of large meta-research questions, but possibly at the cost of accuracy. Objective: To evaluate the use of double sampling combined with multiple imputation (DS + MI) to address meta-research questions, using as an example adherence of PubMed entries to two simple consolidated standards of reporting trials guidelines for titles and abstracts. Methods: For the DS large sample, we retrieved all PubMed entries satisfying the filters: RCT, human, abstract available, and English language (n = 322, 107). For the DS subsample, we randomly sampled 500 entries from the large sample. The large sample was evaluated with a lower rigor, higher throughput (RLOTHI) method using search heuristics, while the subsample was evaluated using a higher rigor, lower throughput (RHITLO) human rating method. Multiple imputation of the missing-completely at-random RHITLO data for the large sample was informed by: RHITLO data from the subsample; RLOTHI data from the large sample; whether a study was an RCT; and country and year of publication. Results: The RHITLO and RLOTHI methods in the subsample largely agreed (phi coefficients: title = 1.00, abstract = 0.92). Compliance with abstract and title criteria has increased over time, with non-US countries improving more rapidly. DS + MI logistic regression estimates were more precise than subsample estimates (e.g., 95% CI for change in title and abstract compliance by year: subsample RHITLO 1.050–1.174 vs. DS + MI 1.082–1.151). As evidence of improved accuracy, DS + MI coefficient estimates were closer to RHITLO than the large sample RLOTHI. Conclusion: Our results support our hypothesis that DS + MI would result in improved precision and accuracy. This method is flexible and may provide a practical way to examine large corpora of literature. PMID:25988135
Extensive Core Microbiome in Drone-Captured Whale Blow Supports a Framework for Health Monitoring
Miller, Carolyn A.; Moore, Michael J.; Durban, John W.; Fearnbach, Holly; Barrett-Lennard, Lance G.
2017-01-01
ABSTRACT The pulmonary system is a common site for bacterial infections in cetaceans, but very little is known about their respiratory microbiome. We used a small, unmanned hexacopter to collect exhaled breath condensate (blow) from two geographically distinct populations of apparently healthy humpback whales (Megaptera novaeangliae), sampled in the Massachusetts coastal waters off Cape Cod (n = 17) and coastal waters around Vancouver Island (n = 9). Bacterial and archaeal small-subunit rRNA genes were amplified and sequenced from blow samples, including many of sparse volume, as well as seawater and other controls, to characterize the associated microbial community. The blow microbiomes were distinct from the seawater microbiomes and included 25 phylogenetically diverse bacteria common to all sampled whales. This core assemblage comprised on average 36% of the microbiome, making it one of the more consistent animal microbiomes studied to date. The closest phylogenetic relatives of 20 of these core microbes were previously detected in marine mammals, suggesting that this core microbiome assemblage is specialized for marine mammals and may indicate a healthy, noninfected pulmonary system. Pathogen screening was conducted on the microbiomes at the genus level, which showed that all blow and few seawater microbiomes contained relatives of bacterial pathogens; no known cetacean respiratory pathogens were detected in the blow. Overall, the discovery of a shared large core microbiome in humpback whales is an important advancement for health and disease monitoring of this species and of other large whales. IMPORTANCE The conservation and management of large whales rely in part upon health monitoring of individuals and populations, and methods generally necessitate invasive sampling. Here, we used a small, unmanned hexacopter drone to noninvasively fly above humpback whales from two populations, capture their exhaled breath (blow), and examine the associated microbiome. In the first extensive examination of the large-whale blow microbiome, we present surprising results about the discovery of a large core microbiome that was shared across individual whales from geographically separated populations in two ocean basins. We suggest that this core microbiome, in addition to other microbiome characteristics, could be a useful feature for health monitoring of large whales worldwide. PMID:29034331
Extensive Core Microbiome in Drone-Captured Whale Blow Supports a Framework for Health Monitoring.
Apprill, Amy; Miller, Carolyn A; Moore, Michael J; Durban, John W; Fearnbach, Holly; Barrett-Lennard, Lance G
2017-01-01
The pulmonary system is a common site for bacterial infections in cetaceans, but very little is known about their respiratory microbiome. We used a small, unmanned hexacopter to collect exhaled breath condensate (blow) from two geographically distinct populations of apparently healthy humpback whales ( Megaptera novaeangliae ), sampled in the Massachusetts coastal waters off Cape Cod ( n = 17) and coastal waters around Vancouver Island ( n = 9). Bacterial and archaeal small-subunit rRNA genes were amplified and sequenced from blow samples, including many of sparse volume, as well as seawater and other controls, to characterize the associated microbial community. The blow microbiomes were distinct from the seawater microbiomes and included 25 phylogenetically diverse bacteria common to all sampled whales. This core assemblage comprised on average 36% of the microbiome, making it one of the more consistent animal microbiomes studied to date. The closest phylogenetic relatives of 20 of these core microbes were previously detected in marine mammals, suggesting that this core microbiome assemblage is specialized for marine mammals and may indicate a healthy, noninfected pulmonary system. Pathogen screening was conducted on the microbiomes at the genus level, which showed that all blow and few seawater microbiomes contained relatives of bacterial pathogens; no known cetacean respiratory pathogens were detected in the blow. Overall, the discovery of a shared large core microbiome in humpback whales is an important advancement for health and disease monitoring of this species and of other large whales. IMPORTANCE The conservation and management of large whales rely in part upon health monitoring of individuals and populations, and methods generally necessitate invasive sampling. Here, we used a small, unmanned hexacopter drone to noninvasively fly above humpback whales from two populations, capture their exhaled breath (blow), and examine the associated microbiome. In the first extensive examination of the large-whale blow microbiome, we present surprising results about the discovery of a large core microbiome that was shared across individual whales from geographically separated populations in two ocean basins. We suggest that this core microbiome, in addition to other microbiome characteristics, could be a useful feature for health monitoring of large whales worldwide.
Foster, Gregory D.; Gates, Paul M.; Foreman, William T.; McKenzie, Stuart W.; Rinella, Frank A.
1993-01-01
Concentrations of pesticides in the dissolved phase of surface water samples from the Yakima River basin, WA, were determined using preconcentration in the Goulden large-sample extractor (GLSE) and gas chromatography/mass spectrometry (GC/MS) analysis. Sample volumes ranging from 10 to 120 L were processed with the GLSE, and the results from the large-sample analyses were compared to those derived from 1-L continuous liquid-liquid extractions Few of the 40 target pesticides were detected in 1-L samples, whereas large-sample preconcentration in the GLSE provided detectable levels for many of the target pesticides. The number of pesticides detected in GLSE processed samples was usually directly proportional to sample volume, although the measured concentrations of the pesticides were generally lower at the larger sample volumes for the same water source. The GLSE can be used to provide lower detection levels relative to conventional liquid-liquid extraction in GC/MS analysis of pesticides in samples of surface water.
Evaluation of residual uranium contamination in the dirt floor of an abandoned metal rolling mill.
Glassford, Eric; Spitz, Henry; Lobaugh, Megan; Spitler, Grant; Succop, Paul; Rice, Carol
2013-02-01
A single, large, bulk sample of uranium-contaminated material from the dirt floor of an abandoned metal rolling mill was separated into different types and sizes of aliquots to simulate samples that would be collected during site remediation. The facility rolled approximately 11,000 tons of hot-forged ingots of uranium metal approximately 60 y ago, and it has not been used since that time. Thirty small mass (≈ 0.7 g) and 15 large mass (≈ 70 g) samples were prepared from the heterogeneously contaminated bulk material to determine how measurements of the uranium contamination vary with sample size. Aliquots of bulk material were also resuspended in an exposure chamber to produce six samples of respirable particles that were obtained using a cascade impactor. Samples of removable surface contamination were collected by wiping 100 cm of the interior surfaces of the exposure chamber with 47-mm-diameter fiber filters. Uranium contamination in each of the samples was measured directly using high-resolution gamma ray spectrometry. As expected, results for isotopic uranium (i.e., U and U) measured with the large-mass and small-mass samples are significantly different (p < 0.001), and the coefficient of variation (COV) for the small-mass samples was greater than for the large-mass samples. The uranium isotopic concentrations measured in the air and on the wipe samples were not significantly different and were also not significantly different (p > 0.05) from results for the large- or small-mass samples. Large-mass samples are more reliable for characterizing heterogeneously distributed radiological contamination than small-mass samples since they exhibit the least variation compared to the mean. Thus, samples should be sufficiently large in mass to insure that the results are truly representative of the heterogeneously distributed uranium contamination present at the facility. Monitoring exposure of workers and the public as a result of uranium contamination resuspended during site remediation should be evaluated using samples of sufficient size and type to accommodate the heterogeneous distribution of uranium in the bulk material.
Fast generation of sparse random kernel graphs
Hagberg, Aric; Lemons, Nathan; Du, Wen -Bo
2015-09-10
The development of kernel-based inhomogeneous random graphs has provided models that are flexible enough to capture many observed characteristics of real networks, and that are also mathematically tractable. We specify a class of inhomogeneous random graph models, called random kernel graphs, that produces sparse graphs with tunable graph properties, and we develop an efficient generation algorithm to sample random instances from this model. As real-world networks are usually large, it is essential that the run-time of generation algorithms scales better than quadratically in the number of vertices n. We show that for many practical kernels our algorithm runs in timemore » at most ο(n(logn)²). As an example, we show how to generate samples of power-law degree distribution graphs with tunable assortativity.« less
Fast Ordered Sampling of DNA Sequence Variants.
Greenberg, Anthony J
2018-05-04
Explosive growth in the amount of genomic data is matched by increasing power of consumer-grade computers. Even applications that require powerful servers can be quickly tested on desktop or laptop machines if we can generate representative samples from large data sets. I describe a fast and memory-efficient implementation of an on-line sampling method developed for tape drives 30 years ago. Focusing on genotype files, I test the performance of this technique on modern solid-state and spinning hard drives, and show that it performs well compared to a simple sampling scheme. I illustrate its utility by developing a method to quickly estimate genome-wide patterns of linkage disequilibrium (LD) decay with distance. I provide open-source software that samples loci from several variant format files, a separate program that performs LD decay estimates, and a C++ library that lets developers incorporate these methods into their own projects. Copyright © 2018 Greenberg.
Simulating and assessing boson sampling experiments with phase-space representations
NASA Astrophysics Data System (ADS)
Opanchuk, Bogdan; Rosales-Zárate, Laura; Reid, Margaret D.; Drummond, Peter D.
2018-04-01
The search for new, application-specific quantum computers designed to outperform any classical computer is driven by the ending of Moore's law and the quantum advantages potentially obtainable. Photonic networks are promising examples, with experimental demonstrations and potential for obtaining a quantum computer to solve problems believed classically impossible. This introduces a challenge: how does one design or understand such photonic networks? One must be able to calculate observables using general methods capable of treating arbitrary inputs, dissipation, and noise. We develop complex phase-space software for simulating these photonic networks, and apply this to boson sampling experiments. Our techniques give sampling errors orders of magnitude lower than experimental correlation measurements for the same number of samples. We show that these techniques remove systematic errors in previous algorithms for estimating correlations, with large improvements in errors in some cases. In addition, we obtain a scalable channel-combination strategy for assessment of boson sampling devices.
Coscollá, Mireia; Gosalbes, María José; Catalán, Vicente; González-Candelas, Fernando
2006-06-01
Legionella pneumophila is associated to recurrent outbreaks in several Comunidad Valenciana (Spain) localities, especially in Alcoi, where social and climatic conditions seem to provide an excellent environment for bacterial growth. We have analysed the nucleotide sequences of three loci from 25 environmental isolates from Alcoi and nearby locations sampled over 3 years. The analysis of these isolates has revealed a substantial level of genetic variation, with consistent patterns of variability across loci, and comparable to that found in a large, European-wide sampling of clinical isolates. Among the tree loci studied, fliC showed the highest level of nucleotide diversity. The analysis of isolates sampled in different years revealed a clear differentiation, with samples from 2001 being significantly distinct from those obtained in 2002 and 2003. Furthermore, although linkage disequilibrium measures indicate a clonal nature for population structure in this sample, the presence of some recombination events cannot be ruled out.
X-ray simulations method for the large field of view
NASA Astrophysics Data System (ADS)
Schelokov, I. A.; Grigoriev, M. V.; Chukalina, M. V.; Asadchikov, V. E.
2018-03-01
In the standard approach, X-ray simulation is usually limited to the step of spatial sampling to calculate the convolution of integrals of the Fresnel type. Explicitly the sampling step is determined by the size of the last Fresnel zone in the beam aperture. In other words, the spatial sampling is determined by the precision of integral convolution calculations and is not connected with the space resolution of an optical scheme. In the developed approach the convolution in the normal space is replaced by computations of the shear strain of ambiguity function in the phase space. The spatial sampling is then determined by the space resolution of an optical scheme. The sampling step can differ in various directions because of the source anisotropy. The approach was used to simulate original images in the X-ray Talbot interferometry and showed that the simulation can be applied to optimize the methods of postprocessing.
Pozzebon, Julie A.; Visser, Beth A.; Ashton, Michael C.; Lee, Kibeom; Goldberg, Lewis R.
2009-01-01
We investigated the psychometric properties of the Oregon Vocational Interest Scales (ORVIS), a brief public-domain alternative to commercial inventories, in a large community sample and in a college sample. In both samples, we examined the factor structure, scale intercorrelations, and personality correlates of the ORVIS, and in the community sample we also examined the correlations of the ORVIS scales with cognitive abilities and with the scales of a longer, proprietary interest survey. In both samples, all eight scales—Leadership, Organization, Altruism, Creativity, Analysis, Producing, Adventuring, and Erudition—showed wide variation in scores, high internal-consistency reliabilities, and a pattern of high convergent and low discriminant correlations with the scales of the proprietary interest survey. Overall, the results support the construct validity of the scales, which are recommended for use in research on vocational interests and other individual differences. PMID:20155566
Pozzebon, Julie A; Visser, Beth A; Ashton, Michael C; Lee, Kibeom; Goldberg, Lewis R
2010-03-01
We investigated the psychometric properties of the Oregon Vocational Interest Scales (ORVIS), a brief public-domain alternative to commercial inventories, in a large community sample and in a college sample. In both samples, we examined the factor structure, scale intercorrelations, and personality correlates of the ORVIS, and in the community sample, we also examined the correlations of the ORVIS scales with cognitive abilities and with the scales of a longer, proprietary interest survey. In both samples, all 8 scales-Leadership, Organization, Altruism, Creativity, Analysis, Producing, Adventuring, and Erudition-showed wide variation in scores, high internal-consistency reliabilities, and a pattern of high convergent and low discriminant correlations with the scales of the proprietary interest survey. Overall, the results support the construct validity of the scales, which are recommended for use in research on vocational interests and other individual differences.
Avelar, Daniel M; Linardi, Pedro M
2010-09-15
The recently developed Multiple Displacement Amplification technique (MDA) allows for the production of a large quantity of high quality genomic DNA from low amounts of the original DNA. The goal of this study was to evaluate the performance of the MDA technique to amplify genomic DNA of siphonapterids that have been stored for long periods in 70% ethanol at room temperature. We subjected each DNA sample to two different methodologies: (1) amplification of mitochondrial 16S sequences without MDA; (2) amplification of 16S after MDA. All the samples obtained from these procedures were then sequenced. Only 4 samples (15.4%) subjected to method 1 showed amplification. In contrast, the application of MDA (method 2) improved the performance substantially, with 24 samples (92.3%) showing amplification, with significant difference. Interestingly, one of the samples successfully amplified with this method was originally collected in 1909. All of the sequenced samples displayed satisfactory results in quality evaluations (Phred ≥ 20) and good similarities, as identified with the BLASTn tool. Our results demonstrate that the use of MDA may be an effective tool in molecular studies involving specimens of fleas that have traditionally been considered inadequately preserved for such purposes.
2010-01-01
The recently developed Multiple Displacement Amplification technique (MDA) allows for the production of a large quantity of high quality genomic DNA from low amounts of the original DNA. The goal of this study was to evaluate the performance of the MDA technique to amplify genomic DNA of siphonapterids that have been stored for long periods in 70% ethanol at room temperature. We subjected each DNA sample to two different methodologies: (1) amplification of mitochondrial 16S sequences without MDA; (2) amplification of 16S after MDA. All the samples obtained from these procedures were then sequenced. Only 4 samples (15.4%) subjected to method 1 showed amplification. In contrast, the application of MDA (method 2) improved the performance substantially, with 24 samples (92.3%) showing amplification, with significant difference. Interestingly, one of the samples successfully amplified with this method was originally collected in 1909. All of the sequenced samples displayed satisfactory results in quality evaluations (Phred ≥ 20) and good similarities, as identified with the BLASTn tool. Our results demonstrate that the use of MDA may be an effective tool in molecular studies involving specimens of fleas that have traditionally been considered inadequately preserved for such purposes. PMID:20840790
Cheng, Lei; Wu, Cheng Hao; Jarry, Angelique; Chen, Wei; Ye, Yifan; Zhu, Junfa; Kostecki, Robert; Persson, Kristin; Guo, Jinghua; Salmeron, Miquel; Chen, Guoying; Doeff, Marca
2015-08-19
The interfacial resistances of symmetrical lithium cells containing Al-substituted Li7La3Zr2O12 (LLZO) solid electrolytes are sensitive to their microstructures and histories of exposure to air. Air exposure of LLZO samples with large grain sizes (∼150 μm) results in dramatically increased interfacial impedances in cells containing them, compared to those with pristine large-grained samples. In contrast, a much smaller difference is seen between cells with small-grained (∼20 μm) pristine and air-exposed LLZO samples. A combination of soft X-ray absorption (sXAS) and Raman spectroscopy, with probing depths ranging from nanometer to micrometer scales, revealed that the small-grained LLZO pellets are more air-stable than large-grained ones, forming far less surface Li2CO3 under both short- and long-term exposure conditions. Surface sensitive X-ray photoelectron spectroscopy (XPS) indicates that the better chemical stability of the small-grained LLZO is related to differences in the distribution of Al and Li at sample surfaces. Density functional theory calculations show that LLZO can react via two different pathways to form Li2CO3. The first, more rapid, pathway involves a reaction with moisture in air to form LiOH, which subsequently absorbs CO2 to form Li2CO3. The second, slower, pathway involves direct reaction with CO2 and is favored when surface lithium contents are lower, as with the small-grained samples. These observations have important implications for the operation of solid-state lithium batteries containing LLZO because the results suggest that the interfacial impedances of these devices is critically dependent upon specific characteristics of the solid electrolyte and how it is prepared.
Wieding, Jan; Fritsche, Andreas; Heinl, Peter; Körner, Carolin; Cornelsen, Matthias; Seitz, Hermann; Mittelmeier, Wolfram; Bader, Rainer
2013-12-16
The repair of large segmental bone defects caused by fracture, tumor or infection remains challenging in orthopedic surgery. The capability of two different bone scaffold materials, sintered tricalciumphosphate and a titanium alloy (Ti6Al4V), were determined by mechanical and biomechanical testing. All scaffolds were fabricated by means of additive manufacturing techniques with identical design and controlled pore geometry. Small-sized sintered TCP scaffolds (10 mm diameter, 21 mm length) were fabricated as dense and open-porous samples and tested in an axial loading procedure. Material properties for titanium alloy were determined by using both tensile (dense) and compressive test samples (open-porous). Furthermore, large-sized open-porous TCP and titanium alloy scaffolds (30 mm in height and diameter, 700 µm pore size) were tested in a biomechanical setup simulating a large segmental bone defect using a composite femur stabilized with an osteosynthesis plate. Static physiologic loads (1.9 kN) were applied within these tests. Ultimate compressive strength of the TCP samples was 11.2 ± 0.7 MPa and 2.2 ± 0.3 MPa, respectively, for the dense and the open-porous samples. Tensile strength and ultimate compressive strength was 909.8 ± 4.9 MPa and 183.3 ± 3.7 MPa, respectively, for the dense and the open-porous titanium alloy samples. Furthermore, the biomechanical results showed good mechanical stability for the titanium alloy scaffolds. TCP scaffolds failed at 30% of the maximum load. Based on recent data, the 3D printed TCP scaffolds tested cannot currently be recommended for high load-bearing situations. Scaffolds made of titanium could be optimized by adapting the biomechanical requirements.
Gyarmathy, V. Anna; Johnston, Lisa G.; Caplinskiene, Irma; Caplinskas, Saulius; Latkin, Carl A.
2014-01-01
Background Respondent driven sampling (RDS) and Incentivized Snowball Sampling (ISS) are two sampling methods that are commonly used to reach people who inject drugs (PWID). Methods We generated a set of simulated RDS samples on an actual sociometric ISS sample of PWID in Vilnius, Lithuania (“original sample”) to assess if the simulated RDS estimates were statistically significantly different from the original ISS sample prevalences for HIV (9.8%), Hepatitis A (43.6%), Hepatitis B (Anti-HBc 43.9% and HBsAg 3.4%), Hepatitis C (87.5%), syphilis (6.8%) and Chlamydia (8.8%) infections and for selected behavioral risk characteristics. Results The original sample consisted of a large component of 249 people (83% of the sample) and 13 smaller components with 1 to 12 individuals. Generally, as long as all seeds were recruited from the large component of the original sample, the simulation samples simply recreated the large component. There were no significant differences between the large component and the entire original sample for the characteristics of interest. Altogether 99.2% of 360 simulation sample point estimates were within the confidence interval of the original prevalence values for the characteristics of interest. Conclusions When population characteristics are reflected in large network components that dominate the population, RDS and ISS may produce samples that have statistically non-different prevalence values, even though some isolated network components may be under-sampled and/or statistically significantly different from the main groups. This so-called “strudel effect” is discussed in the paper. PMID:24360650
Magnetostrictive performance of additively manufactured CoFe rods using the LENS (TM) system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, Nicholas J.; Yoo, Jin-Hyeong; Ott, Ryan T.
Magnetostrictive materials exhibit a strain in the presence of a variable magnetic field. While they normally require large, highly oriented crystallographic grains for high strain values, metal additive manufacturing (3D printing) may be able to produce highly textured polycrystalline rods, with properties comparable to those manufactured using the more demanding free standing zone melting (FSZM) technique. Rods of Co 75.8Fe 24.2 and Co 63.7Fe 36.3 have been fabricated using the Laser engineered net shaping (LENS TM) system to evaluate the performance of additively manufactured magnetic and magnetostrictive materials. The 76% Co sample showed an average magnetostriction (λ) of 86 ppmmore » at a stress of 124 MPa; in contrast, the 64% Co sample showed only 27 ppm at the same stress. For direct comparison, a Co 67Fe 33 single crystal disk, also measured as part of this study, exhibited a magnetostriction value of 131 and 91 microstrain in the [100] and [111] directions, respectively, with a calculated polycrystalline value (λ s) of 107 microstrain. Electron back scattered diffraction (EBSD) has been used to qualitatively link the performance with crystallographic orientation and phase information, showing only the BCC phase in the 76% Co sample, but three different phases (BCC, FCC, and HCP) in the 64% Co sample.« less
Magnetostrictive performance of additively manufactured CoFe rods using the LENSTM system
NASA Astrophysics Data System (ADS)
Jones, Nicholas J.; Yoo, Jin-Hyeong; Ott, Ryan T.; Lambert, Paul K.; Petculescu, Gabriela; Simsek, Emrah; Schlagel, Deborah; Lograsso, Thomas A.
2018-05-01
Magnetostrictive materials exhibit a strain in the presence of a variable magnetic field. While they normally require large, highly oriented crystallographic grains for high strain values, metal additive manufacturing (3D printing) may be able to produce highly textured polycrystalline rods, with properties comparable to those manufactured using the more demanding free standing zone melting (FSZM) technique. Rods of Co75.8Fe24.2 and Co63.7Fe36.3 have been fabricated using the Laser engineered net shaping (LENSTM) system to evaluate the performance of additively manufactured magnetic and magnetostrictive materials. The 76% Co sample showed an average magnetostriction (λ) of 86 ppm at a stress of 124 MPa; in contrast, the 64% Co sample showed only 27 ppm at the same stress. For direct comparison, a Co67Fe33 single crystal disk, also measured as part of this study, exhibited a magnetostriction value of 131 and 91 microstrain in the [100] and [111] directions, respectively, with a calculated polycrystalline value (λs) of 107 microstrain. Electron back scattered diffraction (EBSD) has been used to qualitatively link the performance with crystallographic orientation and phase information, showing only the BCC phase in the 76% Co sample, but three different phases (BCC, FCC, and HCP) in the 64% Co sample.
Magnetostrictive performance of additively manufactured CoFe rods using the LENS (TM) system
Jones, Nicholas J.; Yoo, Jin-Hyeong; Ott, Ryan T.; ...
2018-05-01
Magnetostrictive materials exhibit a strain in the presence of a variable magnetic field. While they normally require large, highly oriented crystallographic grains for high strain values, metal additive manufacturing (3D printing) may be able to produce highly textured polycrystalline rods, with properties comparable to those manufactured using the more demanding free standing zone melting (FSZM) technique. Rods of Co 75.8Fe 24.2 and Co 63.7Fe 36.3 have been fabricated using the Laser engineered net shaping (LENS TM) system to evaluate the performance of additively manufactured magnetic and magnetostrictive materials. The 76% Co sample showed an average magnetostriction (λ) of 86 ppmmore » at a stress of 124 MPa; in contrast, the 64% Co sample showed only 27 ppm at the same stress. For direct comparison, a Co 67Fe 33 single crystal disk, also measured as part of this study, exhibited a magnetostriction value of 131 and 91 microstrain in the [100] and [111] directions, respectively, with a calculated polycrystalline value (λ s) of 107 microstrain. Electron back scattered diffraction (EBSD) has been used to qualitatively link the performance with crystallographic orientation and phase information, showing only the BCC phase in the 76% Co sample, but three different phases (BCC, FCC, and HCP) in the 64% Co sample.« less
Magnesium and cadmium containing Heusler phases REPd2Mg, REPd2Cd, REAg2Mg, REAu2Mg and REAu2Cd
NASA Astrophysics Data System (ADS)
Johnscher, Michael; Stein, Sebastian; Niehaus, Oliver; Benndorf, Christopher; Heletta, Lukas; Kersting, Marcel; Höting, Christoph; Eckert, Hellmut; Pöttgen, Rainer
2016-02-01
Twenty-eight new Heusler phases REPd2Mg, REPd2Cd, REAg2Mg, REAu2Mg and REAu2Cd with different rare earth elements were synthesized from the elements in sealed niobium ampoules in a water-cooled sample chamber of an induction furnace. The samples were characterized by powder X-ray diffraction. The cell volumes show the expected lanthanide contraction. The structures of YPd2Cd, GdPd2Cd, GdAu2Cd, Y1.12Ag2Mg0.88 and GdAg2Mg were refined based on single crystal diffractometer data. The magnetic properties were determined for fifteen phase pure samples. LuAu2Mg is a weak Pauli paramagnet with a susceptibility of 1.0(2) × 10-5 emu mol-1 at room temperature. The remaining samples show stable trivalent rare earth ions and most of them order magnetically at low temperatures. The ferromagnet GdAg2Mg shows the highest ordering temperature of TC = 98.3 K. 113Cd and 89Y MAS NMR spectra of YAu2Cd and YPd2Cd confirm the presence of unique crystallographic sites. The resonances are characterized by large Knight shifts, whose magnitude can be correlated with electronegativity trends.
STT Doubles with Large Delta M - Part VII: Andromeda, Pisces, Auriga
NASA Astrophysics Data System (ADS)
Knapp, Wilfried; Nanson, John
2017-01-01
The results of visual double star observing sessions suggested a pattern for STT doubles with large DM of being harder to resolve than would be expected based on the WDS catalog data. It was felt this might be a problem with expectations on one hand, and on the other might be an indication of a need for new precise measurements, so we decided to take a closer look at a selected sample of STT doubles and do some research. Similar to the other objects covered so far several of the components show parameters quite different from the current WDS data.
Chromium isotopic anomalies in the Allende meteorite
NASA Technical Reports Server (NTRS)
Papanastassiou, D. A.
1986-01-01
Abundances of the chromium isotopes in terrestrial and bulk meteorite samples are identical to 0.01 percent. However, Ca-Al-rich inclusions from the Allende meteorite show endemic isotopic anomalies in chromium which require at least three nucleosynthetic components. Large anomalies at Cr-54 in a special class of inclusions are correlated with large anomalies at Ca-48 and Ti-50 and provide strong support for a component reflecting neutron-rich nucleosynthesis at nuclear statistical equilibrium. This correlation suggests that materials from very near the core of an exploding massive star may be injected into the interstellar medium.
Statistical detection of systematic election irregularities
Klimek, Peter; Yegorov, Yuri; Hanel, Rudolf; Thurner, Stefan
2012-01-01
Democratic societies are built around the principle of free and fair elections, and that each citizen’s vote should count equally. National elections can be regarded as large-scale social experiments, where people are grouped into usually large numbers of electoral districts and vote according to their preferences. The large number of samples implies statistical consequences for the polling results, which can be used to identify election irregularities. Using a suitable data representation, we find that vote distributions of elections with alleged fraud show a kurtosis substantially exceeding the kurtosis of normal elections, depending on the level of data aggregation. As an example, we show that reported irregularities in recent Russian elections are, indeed, well-explained by systematic ballot stuffing. We develop a parametric model quantifying the extent to which fraudulent mechanisms are present. We formulate a parametric test detecting these statistical properties in election results. Remarkably, this technique produces robust outcomes with respect to the resolution of the data and therefore, allows for cross-country comparisons. PMID:23010929
Happy guys finish last: the impact of emotion expressions on sexual attraction.
Tracy, Jessica L; Beall, Alec T
2011-12-01
This research examined the relative sexual attractiveness of individuals showing emotion expressions of happiness, pride, and shame compared with a neutral control. Across two studies using different images and samples ranging broadly in age (total N = 1041), a large gender difference emerged in the sexual attractiveness of happy displays: happiness was the most attractive female emotion expression, and one of the least attractive in males. In contrast, pride showed the reverse pattern; it was the most attractive male expression, and one of the least attractive in women. Shame displays were relatively attractive in both genders, and, among younger adult women viewers, male shame was more attractive than male happiness, and not substantially less than male pride. Effects were largely consistent with evolutionary and socio-cultural-norm accounts. Overall, this research provides the first evidence that distinct emotion expressions have divergent effects on sexual attractiveness, which vary by gender but largely hold across age. (c) 2011 APA, all rights reserved.
Topology of Large-Scale Structure by Galaxy Type: Hydrodynamic Simulations
NASA Astrophysics Data System (ADS)
Gott, J. Richard, III; Cen, Renyue; Ostriker, Jeremiah P.
1996-07-01
The topology of large-scale structure is studied as a function of galaxy type using the genus statistic. In hydrodynamical cosmological cold dark matter simulations, galaxies form on caustic surfaces (Zeldovich pancakes) and then slowly drain onto filaments and clusters. The earliest forming galaxies in the simulations (defined as "ellipticals") are thus seen at the present epoch preferentially in clusters (tending toward a meatball topology), while the latest forming galaxies (defined as "spirals") are seen currently in a spongelike topology. The topology is measured by the genus (number of "doughnut" holes minus number of isolated regions) of the smoothed density-contour surfaces. The measured genus curve for all galaxies as a function of density obeys approximately the theoretical curve expected for random- phase initial conditions, but the early-forming elliptical galaxies show a shift toward a meatball topology relative to the late-forming spirals. Simulations using standard biasing schemes fail to show such an effect. Large observational samples separated by galaxy type could be used to test for this effect.
Statistical detection of systematic election irregularities.
Klimek, Peter; Yegorov, Yuri; Hanel, Rudolf; Thurner, Stefan
2012-10-09
Democratic societies are built around the principle of free and fair elections, and that each citizen's vote should count equally. National elections can be regarded as large-scale social experiments, where people are grouped into usually large numbers of electoral districts and vote according to their preferences. The large number of samples implies statistical consequences for the polling results, which can be used to identify election irregularities. Using a suitable data representation, we find that vote distributions of elections with alleged fraud show a kurtosis substantially exceeding the kurtosis of normal elections, depending on the level of data aggregation. As an example, we show that reported irregularities in recent Russian elections are, indeed, well-explained by systematic ballot stuffing. We develop a parametric model quantifying the extent to which fraudulent mechanisms are present. We formulate a parametric test detecting these statistical properties in election results. Remarkably, this technique produces robust outcomes with respect to the resolution of the data and therefore, allows for cross-country comparisons.
Covalent modification and exfoliation of graphene oxide using ferrocene
NASA Astrophysics Data System (ADS)
Avinash, M. B.; Subrahmanyam, K. S.; Sundarayya, Y.; Govindaraju, T.
2010-09-01
Large scale preparation of single-layer graphene and graphene oxide is of great importance due to their potential applications. We report a simple room temperature method for the exfoliation of graphene oxide using covalent modification of graphene oxide with ferrocene to obtain single-layer graphene oxide sheets. The samples were characterized by FESEM, HRTEM, AFM, EDAX, FT-IR, Raman and Mössbauer spectroscopic studies. HRTEM micrograph of the covalently modified graphene oxide showed increased interlayer spacing of ~2.4 nm due to ferrocene intercalation. The presence of single-layer graphene oxide sheets were confirmed by AFM studies. The covalently modified ferrocene-graphene oxide composite showed interesting magnetic behavior.Large scale preparation of single-layer graphene and graphene oxide is of great importance due to their potential applications. We report a simple room temperature method for the exfoliation of graphene oxide using covalent modification of graphene oxide with ferrocene to obtain single-layer graphene oxide sheets. The samples were characterized by FESEM, HRTEM, AFM, EDAX, FT-IR, Raman and Mössbauer spectroscopic studies. HRTEM micrograph of the covalently modified graphene oxide showed increased interlayer spacing of ~2.4 nm due to ferrocene intercalation. The presence of single-layer graphene oxide sheets were confirmed by AFM studies. The covalently modified ferrocene-graphene oxide composite showed interesting magnetic behavior. Electronic supplementary information (ESI) available: Magnetic data; AFM images; TEM micrographs; and Mössbauer spectroscopic data. See DOI: 10.1039/c0nr00024h
The Disease Burden of Childhood Adversities in Adults: A Population-Based Study
ERIC Educational Resources Information Center
Cuijpers, Pim; Smit, Filip; Unger, Froukje; Stikkelbroek, Yvonne; ten Have, Margreet; de Graaf, Ron
2011-01-01
Objectives: There is much evidence showing that childhood adversities have considerable effects on the mental and physical health of adults. It could be assumed therefore, that the disease burden of childhood adversities is high. It has not yet been examined, however, whether this is true. Method: We used data of a large representative sample (N =…
ERIC Educational Resources Information Center
Hodis, Flaviu A.; Tait, Carolyn; Hodis, Georgeta M.; Hodis, Monica A.; Scornavacca, Eusebio
2016-01-01
This research investigated the interrelations among achievement goals and the underlying reasons for pursuing them. To do so, it utilized the framework of goal complexes, which are regulatory constructs defined at the intersection of aims and reasons. Data from two independent large samples of New Zealand university students showed that across…
ERIC Educational Resources Information Center
Wang, Chuang; Kim, Do-Hong; Bong, Mimi; Ahn, Hyun Seon
2013-01-01
This study provides evidence for the validity of the Questionnaire of English Self-Efficacy in a sample of 167 college students in Korea. Results show that the scale measures largely satisfy the Rasch model for unidimensionality. The rating scale appeared to function effectively. The item hierarchy was consistent with the expected item order. The…
ERIC Educational Resources Information Center
Mashava, Rumbidzai; Chingombe, Agrippa
2013-01-01
Teaching Practice is presumed to be key to professionalization of teachers, although very little research has been done on its effectiveness. This article seeks to show the views of stakeholders on the effectiveness of Teaching Practice in Zimbabwean primary schools. A case study which is largely qualitative was found appropriate. A sample of 84…
A simple and rapid microplate assay for glycoprotein-processing glycosidases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kang, M.S.; Zwolshen, J.H.; Harry, B.S.
1989-08-15
A simple and convenient microplate assay for glycosidases involved in the glycoprotein-processing reactions is described. The assay is based on specific binding of high-mannose-type oligosaccharide substrates to concanavalin A-Sepharose, while monosaccharides liberated by enzymatic hydrolysis do not bind to concanavalin A-Sepharose. By the use of radiolabeled substrates (( 3H)glucose for glucosidases and (3H)mannose for mannosidases), the radioactivity in the liberated monosaccharides can be determined as a measure of the enzymatic activity. This principle was employed earlier for developing assays for glycosidases previously reported. These authors have reported the separation of substrate from the product by concanavalin A-Sepharose column chromatography. Thismore » procedure is handicapped by the fact that it cannot be used for a large number of samples and is time consuming. We have simplified this procedure and adapted it to the use of a microplate (96-well plate). This would help in processing a large number of samples in a short time. In this report we show that the assay is comparable to the column assay previously reported. It is linear with time and enzyme concentration and shows expected kinetics with castanospermine, a known inhibitor of alpha-glucosidase I.« less
New U-series dates at the Caune de l'Arago, France
Falgueres, Christophe; Yokoyama, Y.; Shen, G.; Bischoff, J.L.; Ku, T.-L.; de Lumley, Henry
2004-01-01
In the beginning of the 1980s, the Caune de l'Arago was the focus of an interdisciplinary effort to establish the chronology of the Homo heidelbergensis (Preneandertals) fossils using a variety of techniques on bones and on speleothems. The result was a very large spread of dates particularly on bone samples. Amid the large spread of results, some radiometric data on speleothems showed a convergence in agreement with inferences from faunal studies. We present new U-series results on the stalagmitic formation located at the bottom of Unit IV (at the base of the Upper Stratigraphic Complex). Samples and splits were collaboratively analyzed in the four different laboratories with excellent interlaboratory agreement. Results show the complex sequence of this stalagmitic formation. The most ancient part is systematically at internal isotopic equilibrium (>350 ka) suggesting growth during or before isotopic stage 9, representing a minimum age for the human remains found in Unit III of the Middle Stratigraphical Complex which is stratigraphically under the basis of the studied stalagmitic formation. Overlaying parts of the speleothem date to the beginning of marine isotope stages 7 and 5. ?? 2003 Elsevier Science Ltd. All rights reserved.
Proton irradiation of CVD diamond detectors for high-luminosity experiments at the LHC
NASA Astrophysics Data System (ADS)
Meier, D.; Adam, W.; Bauer, C.; Berdermann, E.; Bergonzo, P.; Bogani, F.; Borchi, E.; Bruzzi, M.; Colledani, C.; Conway, J.; Dabrowski, W.; Delpierre, P.; Deneuville, A.; Dulinski, W.; van Eijk, B.; Fallou, A.; Foulon, F.; Friedl, M.; Jany, C.; Gan, K. K.; Gheeraert, E.; Grigoriev, E.; Hallewell, G.; Hall-Wilton, R.; Han, S.; Hartjes, F.; Hrubec, J.; Husson, D.; Kagan, H.; Kania, D.; Kaplon, J.; Kass, R.; Knöpfle, K. T.; Krammer, M.; Manfredi, P. F.; Marshall, R. D.; Mishina, M.; Le Normand, F.; Pan, L. S.; Palmieri, V. G.; Pernegger, H.; Pernicka, M.; Peitz, A.; Pirollo, S.; Pretzl, K.; Re, V.; Riester, J. L.; Roe, S.; Roff, D.; Rudge, A.; Schnetzer, S.; Sciortino, S.; Speziali, V.; Stelzer, H.; Stone, R.; Tapper, R. J.; Tesarek, R.; Thomson, G. B.; Trawick, M.; Trischuk, W.; Turchetta, R.; Walsh, A. M.; Wedenig, R.; Weilhammer, P.; Ziock, H.; Zoeller, M.; RD42 Collaboration
1999-04-01
CVD diamond shows promising properties for use as a position-sensitive detector for experiments in the highest radiation areas at the Large Hadron Collider. In order to study the radiation hardness of diamond we exposed CVD diamond detector samples to 24 Gev/ c and 500 Mev protons up to a fluence of 5×10 15 p/cm 2. We measured the charge collection distance, the average distance electron-hole pairs move apart in an external electric field, and leakage currents before, during, and after irradiation. The charge collection distance remains unchanged up to 1×10 15 p/cm 2 and decreases by ≈40% at 5×10 15 p/cm 2. Leakage currents of diamond samples were below 1 pA before and after irradiation. The particle-induced currents during irradiation correlate well with the proton flux. In contrast to diamond, a silicon diode, which was irradiated for comparison, shows the known large increase in leakage current. We conclude that CVD diamond detectors are radiation hard to 24 GeV/ c and 500 MeV protons up to at least 1×10 15p/cm 2 without signal loss.
New U-series dates at the Caune de l'Arago, France
Falgueres, Christophe; Yokoyama, Y.; Shen, G.; Bischoff, J.L.; Ku, T.-L.; de Lumley, Henry
2004-01-01
In the beginning of the 1980s, the Caune de l'Arago was the focus of an interdisciplinary effort to establish the chronology of the Homo heidelbergensis (Preneandertals) fossils using a variety of techniques on bones and on speleothems. The result was a very large spread of dates particularly on bone samples. Amid the large spread of results, some radiometric data on speleothems showed a convergence in agreement with inferences from faunal studies. We present new U-series results on the stalagmitic formation located at the bottom of Unit IV (at the base of the Upper Stratigraphic Complex). Samples and splits were collaboratively analyzed in the four different laboratories with excellent interlaboratory agreement. Results show the complex sequence of this stalagmitic formation. The most ancient part is systematically at internal isotopic equilibrium (>350 ka) suggesting growth during or before isotopic stage 9, representing a minimum age for the human remains found in Unit III of the Middle Stratigraphical Complex which is stratigraphically under the basis of the studied stalagmitic formation. Overlaying parts of the speleothem date to the beginning of marine isotope stages 7 and 5. ?? 2003 Elsevier Ltd. All rights reserved.
A New Freezing Method Using Pre-Dehydration by Microwave-Vacuum Drying
NASA Astrophysics Data System (ADS)
Tsuruta, Takaharu; Hamidi, Nurkholis
Partial dehydration by microwave-vacuum drying has been applied to tuna and strawberry in order to reduce cell-damages caused by the formation of large ice-crystals during freezing. The samples were subjected to microwave vacuum drying at pressure of 5 kPa and temperature less than 27°C to remove small amount of water prior to freezing. The tuna were cooled by using the freezing chamber at temperature -50°C or -150°C, while the strawberries were frozen at temperature -30°C or -80°C, respectively. The temperature transients in tuna showed that removing some water before freezing made the freezing time shorter. The observations of ice crystal clearly indicated that rapid cooling and pre-dehydration prior to freezing were effective in minimizing the size of ice crystal. It is also understood that the formation of large ice crystals has a close relation to the cell damages. After thawing, the observation of microstructure was done on the tuna and strawberry halves. The pre-dehydrated samples showed a better structure than the un-dehydrated one. It is concluded that the pre-dehydration by microwave-vacuum drying is one promising method for the cryo-preservation of foods.
Attack Detection in Sensor Network Target Localization Systems With Quantized Data
NASA Astrophysics Data System (ADS)
Zhang, Jiangfan; Wang, Xiaodong; Blum, Rick S.; Kaplan, Lance M.
2018-04-01
We consider a sensor network focused on target localization, where sensors measure the signal strength emitted from the target. Each measurement is quantized to one bit and sent to the fusion center. A general attack is considered at some sensors that attempts to cause the fusion center to produce an inaccurate estimation of the target location with a large mean-square-error. The attack is a combination of man-in-the-middle, hacking, and spoofing attacks that can effectively change both signals going into and coming out of the sensor nodes in a realistic manner. We show that the essential effect of attacks is to alter the estimated distance between the target and each attacked sensor to a different extent, giving rise to a geometric inconsistency among the attacked and unattacked sensors. Hence, with the help of two secure sensors, a class of detectors are proposed to detect the attacked sensors by scrutinizing the existence of the geometric inconsistency. We show that the false alarm and miss probabilities of the proposed detectors decrease exponentially as the number of measurement samples increases, which implies that for sufficiently large number of samples, the proposed detectors can identify the attacked and unattacked sensors with any required accuracy.
Becker, Carol J.; Smith, S. Jerrod; Greer, James R.; Smith, Kevin A.
2010-01-01
The U.S. Geological Survey well profiler was used to describe arsenic-related water quality with well depth and identify zones yielding water with high arsenic concentrations in two production wells in central and western Oklahoma that yield water from the Permian-aged Garber-Wellington and Rush Springs aquifers, respectively. In addition, well-head samples were collected from 12 production wells yielding water with historically large concentrations of arsenic (greater than 10 micrograms per liter) from the Garber-Wellington aquifer, Rush Springs aquifer, and two minor aquifers: the Arbuckle-Timbered Hills aquifer in southern Oklahoma and a Permian-aged undefined aquifer in north-central Oklahoma. Three depth-dependent samples from a production well in the Rush Springs aquifer had similar water-quality characteristics to the well-head sample and did not show any substantial changes with depth. However, slightly larger arsenic concentrations in the two deepest depth-dependent samples indicate the zones yielding noncompliant arsenic concentrations are below the shallowest sampled depth. Five depth-dependent samples from a production well in the Garber-Wellington aquifer showed increases in arsenic concentrations with depth. Well-bore travel-time information and water-quality data from depth-dependent and well-head samples showed that most arsenic contaminated water (about 63 percent) was entering the borehole from perforations adjacent to or below the shroud that overlaid the pump. Arsenic concentrations ranged from 10.4 to 124 micrograms per liter in 11 of the 12 production wells sampled at the well head, exceeding the maximum contaminant level of 10 micrograms per liter for drinking water. pH values of the 12 well-head samples ranged from 6.9 to 9. Seven production wells in the Garber-Wellington aquifer had the largest arsenic concentrations ranging from 18.5 to 124 micrograms per liter. Large arsenic concentrations (10.4-18.5) and near neutral to slightly alkaline pH values (6.9-7.4) were detected in samples from one well in the Garber-Wellington aquifer, three production wells in the Rush Springs aquifer, and one well in an undefined Permian-aged aquifer. All well-head samples were oxic and arsenate was the only species of arsenic in water from 10 of the 12 production wells sampled. Arsenite was measured above the laboratory reporting level in water from a production well in the Garber-Wellington aquifer and was the only arsenic species measured in water from the Arbuckle-Timbered Hills aquifer. Fluoride and uranium were the only trace elements, other than arsenic, that exceeded the maximum contaminant level for drinking water in well-head samples collected for the study. Uranium concentrations in four production wells in the Garber-Wellington aquifer ranged from 30.2 to 99 micrograms per liter exceeding the maximum contaminant level of 30 micrograms per liter for drinking water. Water from these four wells also had the largest arsenic concentrations measured in the study ranging from 30 to 124 micrograms
NASA Technical Reports Server (NTRS)
Johnson, H. R.; Krupp, B. M.
1975-01-01
An opacity sampling (OS) technique for treating the radiative opacity of large numbers of atomic and molecular lines in cool stellar atmospheres is presented. Tests were conducted and results show that the structure of atmospheric models is accurately fixed by the use of 1000 frequency points, and 500 frequency points is often adequate. The effects of atomic and molecular lines are separately studied. A test model computed by using the OS method agrees very well with a model having identical atmospheric parameters computed by the giant line (opacity distribution function) method.
Combined optical coherence tomography and hyper-spectral imaging using a double clad fiber coupler
NASA Astrophysics Data System (ADS)
Guay-Lord, Robin; Lurie, Kristen L.; Attendu, Xavier; Mageau, Lucas; Godbout, Nicolas; Ellerbee Bowden, Audrey K.; Strupler, Mathias; Boudoux, Caroline
2016-03-01
This proceedings shows the combination of Optical Coherence Tomography (OCT) and Hyper-Spectral Imaging (HSI) using a double-clad optical fiber. The single mode core of the fiber is used to transmit OCT signals, while the cladding, with its large collection area, provides an efficient way to capture the reflectance spectrum of the sample. The combination of both methods enables three-dimensional acquisition of sample morphology with OCT, enhanced by the molecular information contained in its hyper-spectral image. We believe that the combination of these techniques could result in endoscopes with enhanced tissue identification capability.
Characterization of Large Structural Genetic Mosaicism in Human Autosomes
Machiela, Mitchell J.; Zhou, Weiyin; Sampson, Joshua N.; Dean, Michael C.; Jacobs, Kevin B.; Black, Amanda; Brinton, Louise A.; Chang, I-Shou; Chen, Chu; Chen, Constance; Chen, Kexin; Cook, Linda S.; Crous Bou, Marta; De Vivo, Immaculata; Doherty, Jennifer; Friedenreich, Christine M.; Gaudet, Mia M.; Haiman, Christopher A.; Hankinson, Susan E.; Hartge, Patricia; Henderson, Brian E.; Hong, Yun-Chul; Hosgood, H. Dean; Hsiung, Chao A.; Hu, Wei; Hunter, David J.; Jessop, Lea; Kim, Hee Nam; Kim, Yeul Hong; Kim, Young Tae; Klein, Robert; Kraft, Peter; Lan, Qing; Lin, Dongxin; Liu, Jianjun; Le Marchand, Loic; Liang, Xiaolin; Lissowska, Jolanta; Lu, Lingeng; Magliocco, Anthony M.; Matsuo, Keitaro; Olson, Sara H.; Orlow, Irene; Park, Jae Yong; Pooler, Loreall; Prescott, Jennifer; Rastogi, Radhai; Risch, Harvey A.; Schumacher, Fredrick; Seow, Adeline; Setiawan, Veronica Wendy; Shen, Hongbing; Sheng, Xin; Shin, Min-Ho; Shu, Xiao-Ou; VanDen Berg, David; Wang, Jiu-Cun; Wentzensen, Nicolas; Wong, Maria Pik; Wu, Chen; Wu, Tangchun; Wu, Yi-Long; Xia, Lucy; Yang, Hannah P.; Yang, Pan-Chyr; Zheng, Wei; Zhou, Baosen; Abnet, Christian C.; Albanes, Demetrius; Aldrich, Melinda C.; Amos, Christopher; Amundadottir, Laufey T.; Berndt, Sonja I.; Blot, William J.; Bock, Cathryn H.; Bracci, Paige M.; Burdett, Laurie; Buring, Julie E.; Butler, Mary A.; Carreón, Tania; Chatterjee, Nilanjan; Chung, Charles C.; Cook, Michael B.; Cullen, Michael; Davis, Faith G.; Ding, Ti; Duell, Eric J.; Epstein, Caroline G.; Fan, Jin-Hu; Figueroa, Jonine D.; Fraumeni, Joseph F.; Freedman, Neal D.; Fuchs, Charles S.; Gao, Yu-Tang; Gapstur, Susan M.; Patiño-Garcia, Ana; Garcia-Closas, Montserrat; Gaziano, J. Michael; Giles, Graham G.; Gillanders, Elizabeth M.; Giovannucci, Edward L.; Goldin, Lynn; Goldstein, Alisa M.; Greene, Mark H.; Hallmans, Goran; Harris, Curtis C.; Henriksson, Roger; Holly, Elizabeth A.; Hoover, Robert N.; Hu, Nan; Hutchinson, Amy; Jenab, Mazda; Johansen, Christoffer; Khaw, Kay-Tee; Koh, Woon-Puay; Kolonel, Laurence N.; Kooperberg, Charles; Krogh, Vittorio; Kurtz, Robert C.; LaCroix, Andrea; Landgren, Annelie; Landi, Maria Teresa; Li, Donghui; Liao, Linda M.; Malats, Nuria; McGlynn, Katherine A.; McNeill, Lorna H.; McWilliams, Robert R.; Melin, Beatrice S.; Mirabello, Lisa; Peplonska, Beata; Peters, Ulrike; Petersen, Gloria M.; Prokunina-Olsson, Ludmila; Purdue, Mark; Qiao, You-Lin; Rabe, Kari G.; Rajaraman, Preetha; Real, Francisco X.; Riboli, Elio; Rodríguez-Santiago, Benjamín; Rothman, Nathaniel; Ruder, Avima M.; Savage, Sharon A.; Schwartz, Ann G.; Schwartz, Kendra L.; Sesso, Howard D.; Severi, Gianluca; Silverman, Debra T.; Spitz, Margaret R.; Stevens, Victoria L.; Stolzenberg-Solomon, Rachael; Stram, Daniel; Tang, Ze-Zhong; Taylor, Philip R.; Teras, Lauren R.; Tobias, Geoffrey S.; Viswanathan, Kala; Wacholder, Sholom; Wang, Zhaoming; Weinstein, Stephanie J.; Wheeler, William; White, Emily; Wiencke, John K.; Wolpin, Brian M.; Wu, Xifeng; Wunder, Jay S.; Yu, Kai; Zanetti, Krista A.; Zeleniuch-Jacquotte, Anne; Ziegler, Regina G.; de Andrade, Mariza; Barnes, Kathleen C.; Beaty, Terri H.; Bierut, Laura J.; Desch, Karl C.; Doheny, Kimberly F.; Feenstra, Bjarke; Ginsburg, David; Heit, John A.; Kang, Jae H.; Laurie, Cecilia A.; Li, Jun Z.; Lowe, William L.; Marazita, Mary L.; Melbye, Mads; Mirel, Daniel B.; Murray, Jeffrey C.; Nelson, Sarah C.; Pasquale, Louis R.; Rice, Kenneth; Wiggs, Janey L.; Wise, Anastasia; Tucker, Margaret; Pérez-Jurado, Luis A.; Laurie, Cathy C.; Caporaso, Neil E.; Yeager, Meredith; Chanock, Stephen J.
2015-01-01
Analyses of genome-wide association study (GWAS) data have revealed that detectable genetic mosaicism involving large (>2 Mb) structural autosomal alterations occurs in a fraction of individuals. We present results for a set of 24,849 genotyped individuals (total GWAS set II [TGSII]) in whom 341 large autosomal abnormalities were observed in 168 (0.68%) individuals. Merging data from the new TGSII set with data from two prior reports (the Gene-Environment Association Studies and the total GWAS set I) generated a large dataset of 127,179 individuals; we then conducted a meta-analysis to investigate the patterns of detectable autosomal mosaicism (n = 1,315 events in 925 [0.73%] individuals). Restricting to events >2 Mb in size, we observed an increase in event frequency as event size decreased. The combined results underscore that the rate of detectable mosaicism increases with age (p value = 5.5 × 10−31) and is higher in men (p value = 0.002) but lower in participants of African ancestry (p value = 0.003). In a subset of 47 individuals from whom serial samples were collected up to 6 years apart, complex changes were noted over time and showed an overall increase in the proportion of mosaic cells as age increased. Our large combined sample allowed for a unique ability to characterize detectable genetic mosaicism involving large structural events and strengthens the emerging evidence of non-random erosion of the genome in the aging population. PMID:25748358
Stability of selected serum hormones and lipids after long-term storage in the Janus Serum Bank.
Gislefoss, Randi E; Grimsrud, Tom K; Mørkrid, Lars
2015-04-01
The potential value of a biobank depends on the quality of the samples, i.e. how well they reflect the biological or biochemical state of the donors at the time of sampling. Documentation of sample quality has become a particularly important issue for researchers and users of biobank studies. The aim of this study was to investigate the long-term stability of selected components: cholesterol, high density cholesterol (HDLC), low density cholesterol (LDLC), apolipoprotein A1 (apo-A1), apolipoprotein B (apo B), follicle stimulating hormone (FSH), luteinizing hormone (LH), prolactin (PRL), thyroid stimulating hormone (TSH) and free thyroxin (FT4). Samples, stored at -25°C, from 520 men aged 40-49 years at blood sampling distributed in equally sized groups (n=130) according to length of storage, 0, 4, 17 and 29 years, respectively, were used in a cross sectional design. The freshly collected serum samples were used as a reference group to calculate storage related changes. The differences between fresh samples and samples stored for 29 years were substantial for apo-A1 (+12%), apo-B (+22.3%), HDLC (-69.2%), LDLC (+31.3%), and PRL (-33.5%), while total cholesterol, FSH, LH, TSH and FT4 did not show any significant difference. The study showed large differences in serum level of the selected components. The lipids and apolipoproteins were all changed except for total cholesterol. Most hormones investigated (FSH, LH, TSH and FT4) proved to be stable after 29 years of storage while PRL showed sign of degradation. The observed differences are probably due to long-term storage effects and/or external factors (i.e. diet and smoking). Copyright © 2014 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
Photometric properties of Mars soils analogs
Pommerol, A.; Thomas, N.; Jost, B.; Beck, P.; Okubo, C.; McEwen, A.S.
2013-01-01
We have measured the bidirectional reflectance of analogs of dry, wet, and frozen Martian soils over a wide range of phase angles in the visible spectral range. All samples were produced from two geologic samples: the standard JSC Mars-1 soil simulant and Hawaiian basaltic sand. In a first step, experiments were conducted with the dry samples to investigate the effects of surface texture. Comparisons with results independently obtained by different teams with similar samples showed a satisfying reproducibility of the photometric measurements as well as a noticeable influence of surface textures resulting from different sample preparation procedures. In a second step, water was introduced to produce wet and frozen samples and their photometry investigated. Optical microscope images of the samples provided information about their microtexture. Liquid water, even in relatively low amount, resulted in the disappearance of the backscattering peak and the appearance of a forward-scattering peak whose intensity increases with the amount of water. Specular reflections only appeared when water was present in an amount large enough to allow water to form a film at the surface of the sample. Icy samples showed a wide variability of photometric properties depending on the physical properties of the water ice. We discuss the implications of these measurements in terms of the expected photometric behavior of the Martian surface, from equatorial to circum-polar regions. In particular, we propose some simple photometric criteria to improve the identification of wet and/or icy soils from multiple observations under different geometries.
An item response theory evaluation of three depression assessment instruments in a clinical sample.
Adler, Mats; Hetta, Jerker; Isacsson, Göran; Brodin, Ulf
2012-06-21
This study investigates whether an analysis, based on Item Response Theory (IRT), can be used for initial evaluations of depression assessment instruments in a limited patient sample from an affective disorder outpatient clinic, with the aim to finding major advantages and deficiencies of the instruments. Three depression assessment instruments, the depression module from the Patient Health Questionnaire (PHQ9), the depression subscale of Affective Self Rating Scale (AS-18-D) and the Montgomery-Åsberg Depression Rating Scale (MADRS) were evaluated in a sample of 61 patients with affective disorder diagnoses, mainly bipolar disorder. A '3- step IRT strategy' was used. In a first step, the Mokken non-parametric analysis showed that PHQ9 and AS-18-D had strong overall scalabilities of 0.510 [C.I. 0.42, 0.61] and 0,513 [C.I. 0.41, 0.63] respectively, while MADRS had a weak scalability of 0.339 [C.I. 0.25, 0.43]. In a second step, a Rasch model analysis indicated large differences concerning the item discriminating capacity and was therefore considered not suitable for the data. In third step, applying a more flexible two parameter model, all three instruments showed large differences in item information and items had a low capacity to reliably measure respondents at low levels of depression severity. We conclude that a stepwise IRT-approach, as performed in this study, is a suitable tool for studying assessment instruments at early stages of development. Such an analysis can give useful information, even in small samples, in order to construct more precise measurements or to evaluate existing assessment instruments. The study suggests that the PHQ9 and AS-18-D can be useful for measurement of depression severity in an outpatient clinic for affective disorder, while the MADRS shows weak measurement properties for this type of patients.
Coarsening in Solid-Liquid Mixtures Studied on the Space Shuttle
NASA Technical Reports Server (NTRS)
Caruso, John J.
1999-01-01
Ostwald ripening, or coarsening, is a process in which large particles in a two-phase mixture grow at the expense of small particles. It is a ubiquitous natural phenomena occurring in the late stages of virtually all phase separation processes. In addition, a large number of commercially important alloys undergo coarsening because they are composed of particles embedded in a matrix. Many of them, such as high-temperature superalloys used for turbine blade materials and low-temperature aluminum alloys, coarsen in the solid state. In addition, many alloys, such as the tungsten-heavy metal systems, coarsen in the solid-liquid state during liquid phase sintering. Numerous theories have been proposed that predict the rate at which the coarsening process occurs and the shape of the particle size distribution. Unfortunately, these theories have never been tested using a system that satisfies all the assumptions of the theory. In an effort to test these theories, NASA studied the coarsening process in a solid-liquid mixture composed of solid tin particles in a liquid lead-tin matrix. On Earth, the solid tin particles float to the surface of the sample, like ice in water. In contrast, in a microgravity environment this does not occur. The microstructures in the ground- and space-processed samples (see the photos) show clearly the effects of gravity on the coarsening process. The STS-83-processed sample (right image) shows nearly spherical uniformly dispersed solid tin particles. In contrast, the identically processed, ground-based sample (left image) shows significant density-driven, nonspherical particles, and because of the higher effective solid volume fraction, a larger particle size after the same coarsening time. The "Coarsening in Solid-Liquid Mixtures" (CSLM) experiment was conducted in the Middeck Glovebox facility (MGBX) flown aboard the shuttle in the Microgravity Science Laboratory (MSL-1/1R) on STS-83/94. The primary objective of CSLM is to measure the temporal evolution of the solid particles during coarsening.
Environmental DNA from Seawater Samples Correlate with Trawl Catches of Subarctic, Deepwater Fishes.
Thomsen, Philip Francis; Møller, Peter Rask; Sigsgaard, Eva Egelyng; Knudsen, Steen Wilhelm; Jørgensen, Ole Ankjær; Willerslev, Eske
2016-01-01
Remote polar and deepwater fish faunas are under pressure from ongoing climate change and increasing fishing effort. However, these fish communities are difficult to monitor for logistic and financial reasons. Currently, monitoring of marine fishes largely relies on invasive techniques such as bottom trawling, and on official reporting of global catches, which can be unreliable. Thus, there is need for alternative and non-invasive techniques for qualitative and quantitative oceanic fish surveys. Here we report environmental DNA (eDNA) metabarcoding of seawater samples from continental slope depths in Southwest Greenland. We collected seawater samples at depths of 188-918 m and compared seawater eDNA to catch data from trawling. We used Illumina sequencing of PCR products to demonstrate that eDNA reads show equivalence to fishing catch data obtained from trawling. Twenty-six families were found with both trawling and eDNA, while three families were found only with eDNA and two families were found only with trawling. Key commercial fish species for Greenland were the most abundant species in both eDNA reads and biomass catch, and interpolation of eDNA abundances between sampling sites showed good correspondence with catch sizes. Environmental DNA sequence reads from the fish assemblages correlated with biomass and abundance data obtained from trawling. Interestingly, the Greenland shark (Somniosus microcephalus) showed high abundance of eDNA reads despite only a single specimen being caught, demonstrating the relevance of the eDNA approach for large species that can probably avoid bottom trawls in most cases. Quantitative detection of marine fish using eDNA remains to be tested further to ascertain whether this technique is able to yield credible results for routine application in fisheries. Nevertheless, our study demonstrates that eDNA reads can be used as a qualitative and quantitative proxy for marine fish assemblages in deepwater oceanic habitats. This relates directly to applied fisheries as well as to monitoring effects of ongoing climate change on marine biodiversity-especially in polar ecosystems.
NASA Astrophysics Data System (ADS)
Rúa, Armando; Fernández, Félix E.; Hines, Melissa A.; Sepúlveda, Nelson
2010-03-01
Vanadium dioxide (VO2) thin films were grown on silicon microcantilevers and companion test substrates by pulsed laser deposition followed by in situ annealing in an oxidizing atmosphere, with annealing times used to control crystallite sizes. Annealing times of 18 min produced VO2 films with average crystallite sizes of ˜10 nm or less, while those annealed for 35 min had crystallites of average size ˜90 nm, comparable to sample thickness. X-ray diffraction and x-ray photoelectron spectroscopy studies of the samples showed that films with crystallite sizes ˜40 nm or greater consisted of substoichiometric VO2 in its monoclinic phase, with preferential orientation with (011) planes parallel to the sample surface, while finer structured samples had a substantially similar composition, but showed no clear evidence of preferential orientation and were probably partially amorphous. Forced vibration experiments were performed with the cantilevers as they were thermally cycled through the VO2 insulator-to-metal transition (IMT). Very large reversible changes in the resonant frequencies of up to 5% (3.6 kHz) as well as hysteretic behavior were observed, which depend strongly on film crystallite size. The average value of Young's modulus for VO2 films with crystallite sizes of ˜90 nm was estimated from the mechanical resonance data at room temperature to be ˜120 GPa, but the large tensile stresses which develop between film and substrate through the IMT impede a similar determination for the VO2 tetragonal phase, since the commonly used relationships for cantilever frequencies derived from elasticity theory are not applicable for strongly curved composite beams. The results presented show that VO2 thin films can be useful in novel microscale and nanoscale electromechanical resonators in which effective stiffness can be tuned thermally or optically. This response can provide additional functionality to VO2—based devices which take advantage of other property changes through the IMT.
2013-01-01
Background Stereotypic behaviours, i.e. repetitive behaviours induced by frustration, repeated attempts to cope and/or brain dysfunction, are intriguing as they occur in a variety of domestic and captive species without any clear adaptive function. Among the different hypotheses, the coping hypothesis predicts that stereotypic behaviours provide a way for animals in unfavourable environmental conditions to adjust. As such, they are expected to have a lower physiological stress level (glucocorticoids) than non-stereotypic animals. Attempts to link stereotypic behaviours with glucocorticoids however have yielded contradictory results. Here we investigated correlates of oral and motor stereotypic behaviours and glucocorticoid levels in two large samples of domestic horses (NStudy1 = 55, NStudy2 = 58), kept in sub-optimal conditions (e.g. confinement, social isolation), and already known to experience poor welfare states. Each horse was observed in its box using focal sampling (study 1) and instantaneous scan sampling (study 2). Plasma samples (collected in study 1) but also non-invasive faecal samples (collected in both studies) were retrieved in order to assess cortisol levels. Results Results showed that 1) plasma cortisol and faecal cortisol metabolites concentrations did not differ between horses displaying stereotypic behaviours and non-stereotypic horses and 2) both oral and motor stereotypic behaviour levels did not predict plasma cortisol or faecal cortisol metabolites concentrations. Conclusions Cortisol measures, collected in two large samples of horses using both plasma sampling as well as faecal sampling (the latter method minimizing bias due to a non-invasive sampling procedure), therefore do not indicate that stereotypic horses cope better, at least in terms of adrenocortical activity. PMID:23289406
Enrichment of Thorium (Th) and Lead (Pb) in the early Galaxy
NASA Astrophysics Data System (ADS)
Aoki, Wako; Honda, Satoshi
2010-03-01
We have been determining abundances of Th, Pb and other neutron-capture elements in metal-deficient cool giant stars to constrain the enrichment of heavy elements by the r- and s-processes. Our current sample covers the metallicity range between [Fe/H] = -2.5 and -1.0. (1) The abundance ratios of Pb/Fe and Pb/Eu of most of our stars are approximately constant, and no increase of these ratios with increasing metallicity is found. This result suggests that the Pb abundances of our sample are determined by the r-process with no or little contribution of the s-process. (2) The Th/Eu abundance ratios of our sample show no significant scatter, and the average is lower by 0.2 dex in the logarithmic scale than the solar-system value. This result indicates that the actinides production by the r-process does not show large dispersion, even though r-process models suggest high sensitivity of the actinides production to the nucleosynthesis environment.
Optical properties of titanium-di-oxide (TiO2) prepared by hydrothermal method
NASA Astrophysics Data System (ADS)
Rahman, Kazi Hasibur; Biswas, Sayari; Kar, Asit Kumar
2018-05-01
Research on titanate and its derived TiO2 nanostructures with large specific surface area have received great attention due to their enhanced efficiency in photocatalysis, DSSC etc. Here, in this communication TiO2 powder has been prepared by hydrothermal method at 180 °C. In this work we have shown the changes in optical properties of the powder with two different sintering temperatures ‒ 500 °C and 800 °C. The as prepared powder was also studied. FESEM images show spherical particles for the as prepared samples which look more like agglomeration after sintering. Band gaps of the prepared samples were calculated from UV-Vis spectroscopy which lies in the range 2.85 eV ‒ 3.13 eV. The photoluminescence (PL) spectra of the prepared samples were recorded at room temperature in the range of 300‒700 nm. It shows two distinct peaks at 412 nm and 425 nm.
Schulz, R; Newsom, J; Mittelmark, M; Burton, L; Hirsch, C; Jackson, S
1997-01-01
We propose that two related sources of variability in studies of caregiving health effects contribute to an inconsistent pattern of findings: the sampling strategy used and the definition of what constitutes caregiving. Samples are often recruited through self-referral and are typically comprised of caregivers experiencing considerable distress. In this study, we examine the health effects of caregiving in large population-based samples of spousal caregivers and controls using a wide array of objective and self-report physical and mental health outcome measures. By applying different definitions of caregiving, we show that the magnitude of health effects attributable to caregiving can vary substantially, with the largest negative health effects observed among caregivers who characterize themselves as being strained. From an epidemiological perspective, our data show that approximately 80% of persons living with a spouse with a disability provide care to their spouse, but only half of care providers report mental or physical strain associated with caregiving.
Elasticity of microscale volumes of viscoelastic soft matter by cavitation rheometry
NASA Astrophysics Data System (ADS)
Pavlovsky, Leonid; Ganesan, Mahesh; Younger, John G.; Solomon, Michael J.
2014-09-01
Measurement of the elastic modulus of soft, viscoelastic liquids with cavitation rheometry is demonstrated for specimens as small as 1 μl by application of elasticity theory and experiments on semi-dilute polymer solutions. Cavitation rheometry is the extraction of the elastic modulus of a material, E, by measuring the pressure necessary to create a cavity within it [J. A. Zimberlin, N. Sanabria-DeLong, G. N. Tew, and A. J. Crosby, Soft Matter 3, 763-767 (2007)]. This paper extends cavitation rheometry in three ways. First, we show that viscoelastic samples can be approximated with the neo-Hookean model provided that the time scale of the cavity formation is measured. Second, we extend the cavitation rheometry method to accommodate cases in which the sample size is no longer large relative to the cavity dimension. Finally, we implement cavitation rheometry to show that the theory accurately measures the elastic modulus of viscoelastic samples with volumes ranging from 4 ml to as low as 1 μl.
The distribution of galaxies within the 'Great Wall'
NASA Technical Reports Server (NTRS)
Ramella, Massimo; Geller, Margaret J.; Huchra, John P.
1992-01-01
The galaxy distribution within the 'Great Wall', the most striking feature in the first three 'slices' of the CfA redshift survey extension is examined. The Great Wall is extracted from the sample and is analyzed by counting galaxies in cells. The 'local' two-point correlation function within the Great Wall is computed and the local correlation length, is estimated 15/h Mpc, about 3 times larger than the correlation length for the entire sample. The redshift distribution of galaxies in the pencil-beam survey by Broadhurst et al. (1990) shows peaks separated about by large 'voids', at least to a redshift of about 0.3. The peaks might represent the intersections of their about 5/h Mpc pencil beams with structures similar to the Great Wall. Under this hypothesis, sampling of the Great Walls shows that l approximately 12/h Mpc is the minimum projected beam size required to detect all the 'walls' at redshifts between the peak of the selection function and the effective depth of the survey.
The mercury isotope composition of Arctic coastal seawater
NASA Astrophysics Data System (ADS)
Štrok, Marko; Baya, Pascale Anabelle; Hintelmann, Holger
2015-11-01
For the first time, Hg isotope composition of seawater in the Canadian Arctic Archipelago is reported. Hg was pre-concentrated from large volumes of seawater sampling using anion exchange resins onboard the research vessel immediately after collection. Elution of Hg was performed in laboratory followed by isotope composition determination by multi-collector inductively coupled plasma mass spectrometry (MC-ICP-MS). For comparison, seawater from two stations was shipped to the laboratory and processed within it. Results showed negative mass-dependent fractionation in the range from -2.85 to -1.10‰ for δ202Hg, as well as slightly positive mass-independent fractionation of odd Hg isotopes. Positive mass-independent fractionation of 200Hg was also observed. Samples that were pre-concentrated in the laboratory showed different Hg isotope signatures and this is most probably due to the abiotic reduction of Hg in the dark by organic matter during storage and shipment after sampling. This emphasizes the need for immediate onboard pre-concentration.