Sample records for initial sample size

  1. Particle size and surface area effects on the thin-pulse shock initiation of Diaminoazoxyfurazan (DAAF)

    NASA Astrophysics Data System (ADS)

    Burritt, Rosemary; Francois, Elizabeth; Windler, Gary; Chavez, David

    2017-06-01

    Diaminoazoxyfurazan (DAAF) has many of the safety characteristics of an insensitive high explosive (IHE): it is extremely insensitive to impact and friction and is comparable to triaminotrinitrobenzene (TATB) in this way. Conversely, it demonstrates many performance characteristics of a conventional high explosive (CHE). DAAF has a small failure diameter of about 1.25 mm and can be sensitive to shock under the right conditions. Large-particle DAAF will not initiate in a typical exploding foil initiator (EFI) configuration, but smaller particle sizes will. Large-particle DAAF, of 40 μm, was crash precipitated and ball milled into six distinct samples and pressed into pellets with a density of 1.60 g/cc (91% TMD). To investigate the effect of particle size and surface area on the direct initiation of DAAF, multiple threshold tests were performed on each sample in different EFI configurations, which varied in flyer thickness and/or bridge size. Comparative tests examined threshold voltage and were correlated with Photon Doppler Velocimetry (PDV) results. The samples with larger particle sizes and surface areas required more energy to initiate, while those with smaller particle sizes required less energy and could be initiated with smaller-diameter flyers.

  2. A sequential bioequivalence design with a potential ethical advantage.

    PubMed

    Fuglsang, Anders

    2014-07-01

    This paper introduces a two-stage approach for evaluation of bioequivalence, where, in contrast to the designs of Diane Potvin and co-workers, two stages are mandatory regardless of the data obtained at stage 1. The approach is derived from Potvin's method C. It is shown that under circumstances with relatively high variability and relatively low initial sample size, this method has an advantage over Potvin's approaches in terms of sample sizes while controlling type I error rates at or below 5% with a minute occasional trade-off in power. Ethically and economically, the method may thus be an attractive alternative to the Potvin designs. It is also shown that when using the method introduced here, average total sample sizes are rather independent of initial sample size. Finally, it is shown that when a futility rule in terms of sample size for stage 2 is incorporated into this method, i.e., when a second stage can be abolished due to sample size considerations, there is often an advantage in terms of power or sample size as compared to the previously published methods.

  3. 76 FR 56141 - Notice of Intent To Request New Information Collection

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-12

    ... level surveys of similar scope and size. The sample for each selected community will be strategically... of 2 hours per sample community. Full Study: The maximum sample size for the full study is 2,812... questionnaires. The initial sample size for this phase of the research is 100 respondents (10 respondents per...

  4. Replication Validity of Initial Association Studies: A Comparison between Psychiatry, Neurology and Four Somatic Diseases.

    PubMed

    Dumas-Mallet, Estelle; Button, Katherine; Boraud, Thomas; Munafo, Marcus; Gonon, François

    2016-01-01

    There are growing concerns about effect size inflation and replication validity of association studies, but few observational investigations have explored the extent of these problems. We used meta-analyses to measure the reliability of initial studies and to explore whether this varies across biomedical domains and study types (cognitive/behavioral, brain imaging, genetic and "others"). We analyzed 663 meta-analyses describing associations between markers or risk factors and 12 pathologies within three biomedical domains (psychiatry, neurology and four somatic diseases). We collected the effect size, sample size, publication year and Impact Factor of initial studies, largest studies (i.e., with the largest sample size) and the corresponding meta-analyses. Initial studies were considered as replicated if they were in nominal agreement with meta-analyses and if their effect size inflation was below 100%. Nominal agreement between initial studies and meta-analyses regarding the presence of a significant effect was not better than chance in psychiatry, whereas it was somewhat better in neurology and somatic diseases. Whereas effect sizes reported by largest studies and meta-analyses were similar, most of those reported by initial studies were inflated. Among the 256 initial studies reporting a significant effect (p<0.05) and paired with significant meta-analyses, 97 effect sizes were inflated by more than 100%. Nominal agreement and effect size inflation varied with the biomedical domain and study type. Indeed, the replication rate of initial studies reporting a significant effect ranged from 6.3% for genetic studies in psychiatry to 86.4% for cognitive/behavioral studies. Comparison between eight subgroups shows that replication rate decreases with sample size and "true" effect size. We observed no evidence of association between replication rate and publication year or Impact Factor. The differences in reliability between biological psychiatry, neurology and somatic diseases suggest that there is room for improvement, at least in some subdomains.

  5. Replication Validity of Initial Association Studies: A Comparison between Psychiatry, Neurology and Four Somatic Diseases

    PubMed Central

    Dumas-Mallet, Estelle; Button, Katherine; Boraud, Thomas; Munafo, Marcus; Gonon, François

    2016-01-01

    Context There are growing concerns about effect size inflation and replication validity of association studies, but few observational investigations have explored the extent of these problems. Objective Using meta-analyses to measure the reliability of initial studies and explore whether this varies across biomedical domains and study types (cognitive/behavioral, brain imaging, genetic and “others”). Methods We analyzed 663 meta-analyses describing associations between markers or risk factors and 12 pathologies within three biomedical domains (psychiatry, neurology and four somatic diseases). We collected the effect size, sample size, publication year and Impact Factor of initial studies, largest studies (i.e., with the largest sample size) and the corresponding meta-analyses. Initial studies were considered as replicated if they were in nominal agreement with meta-analyses and if their effect size inflation was below 100%. Results Nominal agreement between initial studies and meta-analyses regarding the presence of a significant effect was not better than chance in psychiatry, whereas it was somewhat better in neurology and somatic diseases. Whereas effect sizes reported by largest studies and meta-analyses were similar, most of those reported by initial studies were inflated. Among the 256 initial studies reporting a significant effect (p<0.05) and paired with significant meta-analyses, 97 effect sizes were inflated by more than 100%. Nominal agreement and effect size inflation varied with the biomedical domain and study type. Indeed, the replication rate of initial studies reporting a significant effect ranged from 6.3% for genetic studies in psychiatry to 86.4% for cognitive/behavioral studies. Comparison between eight subgroups shows that replication rate decreases with sample size and “true” effect size. We observed no evidence of association between replication rate and publication year or Impact Factor. Conclusion The differences in reliability between biological psychiatry, neurology and somatic diseases suggest that there is room for improvement, at least in some subdomains. PMID:27336301

  6. Phase transformations in a Cu−Cr alloy induced by high pressure torsion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Korneva, Anna, E-mail: a.korniewa@imim.pl; Straumal, Boris; Institut für Nanotechnologie, Karlsruher Institut für Technologie, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen

    2016-04-15

    Phase transformations induced by high pressure torsion (HPT) at room temperature were studied in two samples of the Cu-0.86 at.% Cr alloy that had been pre-annealed at 550 °C and 1000 °C in order to obtain two different initial states for the HPT procedure. Observation of the microstructure of the samples before HPT revealed that the sample annealed at 550 °C contained two types of Cr precipitates in the Cu matrix: large particles (size about 500 nm) and small ones (size about 70 nm). The sample annealed at 1000 °C showed only a small fraction of Cr precipitates (size about 2 μm). The subsequent HPT process resulted in the partial dissolution of Cr precipitates in the first sample, and in dissolution of Cr precipitates with simultaneous decomposition of the supersaturated solid solution in the other. However, the resulting microstructure of the samples after HPT was very similar from the standpoint of grain size, phase composition, texture analysis and hardness measurements. - Highlights: • Cu−Cr alloy with two different initial states was deformed by HPT. • Phase transformations in the deformed materials were studied. • SEM, TEM and X-ray diffraction techniques were used for microstructure analysis. • HPT leads to formation of the same microstructure independent of the initial state.

  7. Maximum type 1 error rate inflation in multiarmed clinical trials with adaptive interim sample size modifications.

    PubMed

    Graf, Alexandra C; Bauer, Peter; Glimm, Ekkehard; Koenig, Franz

    2014-07-01

    Sample size modifications in the interim analyses of an adaptive design can inflate the type 1 error rate if test statistics and critical boundaries are used in the final analysis as if no modification had been made. While this is already true for designs with an overall change of the sample size in a balanced treatment-control comparison, the inflation can be much larger if a modification of allocation ratios is allowed as well. In this paper, we investigate adaptive designs with several treatment arms compared to a single common control group. Regarding modifications, we consider treatment arm selection as well as modifications of overall sample size and allocation ratios. The inflation is quantified for two approaches: a naive procedure that ignores not only all modifications but also the multiplicity issue arising from the many-to-one comparison, and a Dunnett procedure that ignores modifications but adjusts for the initially started multiple treatments. The maximum inflation of the type 1 error rate for such types of design can be calculated by searching for the "worst case" scenarios, that is, sample size adaptation rules in the interim analysis that lead to the largest conditional type 1 error rate at any point of the sample space. To show the most extreme inflation, we initially assume unconstrained second stage sample size modifications, leading to a large inflation of the type 1 error rate. Furthermore, we investigate the inflation when putting constraints on the second stage sample sizes. It turns out that, for example, fixing the sample size of the control group leads to designs that control the type 1 error rate. © 2014 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. Sample size in studies on diagnostic accuracy in ophthalmology: a literature survey.

    PubMed

    Bochmann, Frank; Johnson, Zoe; Azuara-Blanco, Augusto

    2007-07-01

    To assess the sample sizes used in studies on diagnostic accuracy in ophthalmology. Design and sources: a survey of the literature published in 2005. The frequency of reporting sample size calculations and the sample sizes used were extracted from the published literature. A manual search of five leading clinical journals in ophthalmology with the highest impact (Investigative Ophthalmology and Visual Science, Ophthalmology, Archives of Ophthalmology, American Journal of Ophthalmology and British Journal of Ophthalmology) was conducted by two independent investigators. A total of 1698 articles were identified, of which 40 studies were on diagnostic accuracy. One study reported that sample size was calculated before initiating the study. Another study reported consideration of sample size without calculation. The mean (SD) sample size of all diagnostic studies was 172.6 (218.9). The median prevalence of the target condition was 50.5%. Only a few studies considered sample size in their methods. Inadequate sample sizes in diagnostic accuracy studies may result in misleading estimates of test accuracy. An improvement over the current standards on the design and reporting of diagnostic studies is warranted.

  9. Determining chewing efficiency using a solid test food and considering all phases of mastication.

    PubMed

    Liu, Ting; Wang, Xinmiao; Chen, Jianshe; van der Glas, Hilbert W

    2018-07-01

    Following chewing of a solid food, the median particle size, X50, is determined after N chewing cycles by curve-fitting of the particle size distribution. Reduction of X50 with N is traditionally followed from N ≥ 15-20 cycles when using the artificial test food Optosil®, because of initially unreliable values of X50. The aims of the study were (i) to enable testing at small N-values by using initial particles of appropriate size, shape and amount, and (ii) to compare measures of chewing ability, i.e. chewing efficiency (the N needed to halve the initial particle size, N(1/2-X0)) and chewing performance (X50 at a particular N-value, X50,N). Eight subjects with a natural dentition chewed four types of samples of Optosil particles: (1) 8 cubes of 8 mm, a border size relative to the bin sizes (traditional test); (2) 9 half-cubes of 9.6 mm, a mid-size, with similar sample volume; (3) 4 half-cubes of 9.6 mm; and (4) 2 half-cubes of 9.6 mm, with reduced particle number and sample volume. All samples were tested with 4 N-values. Curve-fitting with a 2nd order polynomial function yielded log(X50)-log(N) relationships, after which N(1/2-X0) and X50,N were obtained. Reliable X50 values are obtained for all N-values when using half-cubes with a mid-size relative to the bin sizes. By using 2 or 4 half-cubes, determination of N(1/2-X0) or X50,N needs fewer chewing cycles than the traditional test. Chewing efficiency is preferable over chewing performance because it compares inter-subject chewing ability at the same stage of food comminution and maintains constant intra-subject and inter-subject ratios between and within samples, respectively. Copyright © 2018 Elsevier Ltd. All rights reserved.
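
    The two chewing-ability measures defined above can be read off a fitted log(X50)-log(N) polynomial. A minimal sketch of that step is given below; the cycle counts, X50 values and the 9.6 mm initial size are hypothetical stand-ins, and the curve-fitting of the full particle-size distribution that yields each X50 is not reproduced.

    ```python
    import numpy as np

    # Hypothetical data: chewing cycles N and the median particle size X50 (mm)
    # obtained from each particle-size distribution (half-cubes of 9.6 mm).
    N = np.array([5, 10, 20, 30])
    X50 = np.array([6.8, 5.1, 3.4, 2.6])
    X0 = 9.6  # initial particle size (mm)

    # 2nd-order polynomial fit of log(X50) against log(N):
    # log(X50) = a*log(N)^2 + b*log(N) + c
    a, b, c = np.polyfit(np.log10(N), np.log10(X50), 2)

    # Chewing performance X50,N: predicted median size after a chosen N.
    def x50_at(n):
        ln = np.log10(n)
        return 10 ** (a * ln**2 + b * ln + c)

    # Chewing efficiency N(1/2-X0): cycles needed to halve the initial size,
    # i.e. solve a*L^2 + b*L + (c - log10(X0/2)) = 0 for L = log10(N).
    roots = np.roots([a, b, c - np.log10(X0 / 2)])
    real = roots[np.isreal(roots)].real
    n_half = 10 ** real[real > 0].min()

    print(f"X50 after 15 cycles: {x50_at(15):.2f} mm")
    print(f"N(1/2-X0): {n_half:.1f} cycles")
    ```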

  10. Magnetic hyperthermia in water based ferrofluids: Effects of initial susceptibility and size polydispersity on heating efficiency

    NASA Astrophysics Data System (ADS)

    Lahiri, B. B.; Ranoo, Surojit; Muthukumaran, T.; Philip, John

    2018-04-01

    The effects of initial susceptibility and size polydispersity on magnetic hyperthermia efficiency in two water based ferrofluids containing phosphate and TMAOH coated superparamagnetic Fe3O4 nanoparticles were studied. Experiments were performed at a fixed frequency of 126 kHz on four different concentrations of both samples and under different external field amplitudes. It was observed that for field amplitudes beyond 45.0 kA/m, the maximum temperature rise was in the vicinity of 42 °C (the hyperthermia limit), which indicated the suitability of the water based ferrofluids for hyperthermia applications. The maximum temperature rise and specific absorption rate were found to vary linearly with the square of the applied field amplitude, in accordance with theoretical predictions. It was further observed that for a fixed sample concentration, the specific absorption rate was higher for the phosphate coated samples, which was attributed to the higher initial static susceptibility and lower size polydispersity of the phosphate coated Fe3O4.
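
    The specific absorption rate referred to above is commonly estimated calorimetrically from the initial slope of the heating curve. The sketch below illustrates that generic initial-slope estimate with hypothetical heating data, specific heat and nanoparticle mass fraction; it is not necessarily the exact procedure used in this study.

    ```python
    import numpy as np

    # Hypothetical heating curve of a ferrofluid under a 126 kHz AC field.
    t = np.arange(0, 61, 5.0)                 # time (s)
    T = 25.0 + 0.045 * t - 1.5e-4 * t**2      # temperature (deg C), saturating rise

    c_p = 4186.0   # specific heat of the water-based suspension, J/(kg K)
    phi = 0.005    # mass fraction of Fe3O4 nanoparticles in the fluid

    # Initial-slope (calorimetric) estimate: fit the first ~20 s linearly.
    mask = t <= 20
    slope = np.polyfit(t[mask], T[mask], 1)[0]   # dT/dt in K/s

    # SAR per unit mass of magnetic material (W per g of Fe3O4).
    sar = c_p * slope / phi / 1000.0
    print(f"initial slope: {slope:.4f} K/s, SAR ~ {sar:.1f} W/g")
    ```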

  11. What is an adequate sample size? Operationalising data saturation for theory-based interview studies.

    PubMed

    Francis, Jill J; Johnston, Marie; Robertson, Clare; Glidewell, Liz; Entwistle, Vikki; Eccles, Martin P; Grimshaw, Jeremy M

    2010-12-01

    In interview studies, sample size is often justified by interviewing participants until reaching 'data saturation'. However, there is no agreed method of establishing this. We propose principles for deciding saturation in theory-based interview studies (where conceptual categories are pre-established by existing theory). First, specify a minimum sample size for initial analysis (initial analysis sample). Second, specify how many more interviews will be conducted without new ideas emerging (stopping criterion). We demonstrate these principles in two studies, based on the theory of planned behaviour, designed to identify three belief categories (Behavioural, Normative and Control), using an initial analysis sample of 10 and stopping criterion of 3. Study 1 (retrospective analysis of existing data) identified 84 shared beliefs of 14 general medical practitioners about managing patients with sore throat without prescribing antibiotics. The criterion for saturation was achieved for Normative beliefs but not for other beliefs or studywise saturation. In Study 2 (prospective analysis), 17 relatives of people with Paget's disease of the bone reported 44 shared beliefs about taking genetic testing. Studywise data saturation was achieved at interview 17. We propose specification of these principles for reporting data saturation in theory-based interview studies. The principles may be adaptable for other types of studies.
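
    A minimal sketch of the two reporting principles proposed above (an initial analysis sample followed by a stopping criterion of consecutive interviews yielding no new ideas); the interview coding below is hypothetical and the rule is one plausible reading of the principles, not the authors' software.

    ```python
    def saturation_point(coded_interviews, initial_sample=10, stopping_criterion=3):
        """Return the interview number at which data saturation is declared,
        or None if it is never reached.

        coded_interviews: list of sets, each holding the belief codes
        identified in one interview, in interview order."""
        seen = set()
        run = 0  # consecutive post-initial interviews contributing no new codes
        for i, codes in enumerate(coded_interviews, start=1):
            new = codes - seen
            seen |= codes
            if i <= initial_sample:
                continue  # the initial analysis sample is always analysed in full
            run = 0 if new else run + 1
            if run >= stopping_criterion:
                return i
        return None

    # Hypothetical example: 14 interviews coded against shared belief categories.
    interviews = [{"b1", "b2"}, {"b2", "b3"}, {"b1"}, {"b4"}, {"b2"}, {"b5"},
                  {"b3"}, {"b1", "b6"}, {"b2"}, {"b7"}, {"b3"}, {"b1"}, {"b2"}, {"b5"}]
    print(saturation_point(interviews))  # -> 13 (interviews 11-13 add no new codes)
    ```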

  12. Grain Size and Phase Purity Characterization of U3Si2 Pellet Fuel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoggan, Rita E.; Tolman, Kevin R.; Cappia, Fabiola

    Characterization of U3Si2 fresh fuel pellets is important for quality assurance and validation of the finished product. Grain size measurement methods, phase identification methods using scanning electron microscopes equipped with energy dispersive spectroscopy and x-ray diffraction, and phase quantification methods via image analysis have been developed and implemented on U3Si2 pellet samples. A wide variety of samples have been characterized, including representative pellets from an initial irradiation experiment and samples produced using optimized methods to enhance phase purity from an extended fabrication effort. The average grain size for initial pellets was between 16 and 18 µm. The typical average grain size for pellets from the extended fabrication was between 20 and 30 µm, with some samples exhibiting irregular grain growth. Pellets from the latter half of the extended fabrication had a bimodal grain size distribution consisting of coarsened grains (>80 µm) surrounded by the typical (20-30 µm) grain structure around the surface. Phases identified in initial uranium silicide pellets included U3Si2 as the main phase composing about 80 vol.%, Si-rich phases (USi and U5Si4) composing about 13 vol.%, and UO2 composing about 5 vol.%. Initial batches from the extended U3Si2 pellet fabrication had similar phases and phase quantities. The latter half of the extended fabrication pellet batches did not contain Si-rich phases and had between 1-5% UO2, achieving U3Si2 phase purity between 95 vol.% and 98 vol.%. The amount of UO2 in sintered U3Si2 pellets is correlated to the length of time between U3Si2 powder fabrication and pellet formation. These measurements provide information necessary to optimize fabrication efforts and a baseline for future work on this fuel compound.

  13. Factors Affecting Pathogen Survival in Finished Dairy Compost with Different Particle Sizes Under Greenhouse Conditions.

    PubMed

    Diao, Junshu; Chen, Zhao; Gong, Chao; Jiang, Xiuping

    2015-09-01

    This study investigated the survival of Escherichia coli O157:H7 and Salmonella Typhimurium in finished dairy compost with different particle sizes during storage as affected by moisture content and temperature under greenhouse conditions. The mixture of E. coli O157:H7 and S. Typhimurium strains was inoculated into the finished composts with moisture contents of 20, 30, and 40%, separately. The finished compost samples were then sieved into 3 different particle sizes (>1000, 500-1000, and <500 μm) and stored under greenhouse conditions. For compost samples with moisture contents of 20 and 30%, the average Salmonella reductions in compost samples with particle sizes of >1000, 500-1000, and <500 μm were 2.15, 2.27, and 2.47 log colony-forming units (CFU) g(-1) within 5 days of storage in summer, respectively, as compared with 1.60, 2.03, and 2.26 log CFU g(-1) in late fall, respectively, and 2.61, 3.33, and 3.67 log CFU g(-1) in winter, respectively. The average E. coli O157:H7 reductions in compost samples with particle sizes of >1000, 500-1000, and <500 μm were 1.98, 2.30, and 2.54 log CFU g(-1) within 5 days of storage in summer, respectively, as compared with 1.70, 2.56, and 2.90 log CFU g(-1) in winter, respectively. Our results revealed that both Salmonella and E. coli O157:H7 in compost samples with larger particle size survived better than those with smaller particle sizes, and the initial rapid moisture loss in compost may contribute to the fast inactivation of pathogens in the finished compost. For the same season, the pathogens in the compost samples with the same particle size survived much better at the initial moisture content of 20% compared to 40%.

  14. Studies on the relation between the size and dispersion of metallic silver nanoparticles and morphologies of initial silver(I) coordination polymer precursor

    NASA Astrophysics Data System (ADS)

    Moradi, Zhaleh; Akhbari, Kamran; Phuruangrat, Anukorn; Costantino, Ferdinando

    2017-04-01

    Micro- and nano-structures of [Ag2(μ2-dcpa)2]n (1) [Hdcpa = 2,4-dichlorophenoxyacetic acid], a one-dimensional coordination polymer with corrugated tape chains, were synthesized as the bulk sample (1B), by a sonochemical process (1S) and by a mechanochemical reaction (1M). These three samples have been used as new precursors for fabricating silver nanoparticles via direct calcination at 300 °C and also thermal decomposition in oleic acid (OA) as a surfactant at 180 °C. In the presence of OA, less agglomerated nanostructures were formed. It seems that the size, dispersion, morphology and agglomeration of the initial precursor have a direct influence on the size, dispersion, morphology and agglomeration of the metallic silver. The coordination polymer, in its various micro- and nano-morphologies, was characterized by X-ray powder diffraction (XRD) and scanning electron microscopy (SEM). The thermal stabilities of these samples were also studied and compared with each other.

  15. A risk assessment method for multi-site damage

    NASA Astrophysics Data System (ADS)

    Millwater, Harry Russell, Jr.

    This research focused on developing probabilistic methods suitable for computing small probabilities of failure, e.g., 10⁻⁶, of structures subject to multi-site damage (MSD). MSD is defined as the simultaneous development of fatigue cracks at multiple sites in the same structural element such that the fatigue cracks may coalesce to form one large crack. MSD is modeled as an array of collinear cracks with random initial crack lengths with the centers of the initial cracks spaced uniformly apart. The data used was chosen to be representative of aluminum structures. The structure is considered failed whenever any two adjacent cracks link up. A fatigue computer model is developed that can accurately and efficiently grow a collinear array of arbitrary length cracks from initial size until failure. An algorithm is developed to compute the stress intensity factors of all cracks considering all interaction effects. The probability of failure of two to 100 cracks is studied. Lower bounds on the probability of failure are developed based upon the probability of the largest crack exceeding a critical crack size. The critical crack size is based on the initial crack size that will grow across the ligament when the neighboring crack has zero length. The probability is evaluated using extreme value theory. An upper bound is based on the probability of the maximum sum of initial cracks being greater than a critical crack size. A weakest link sampling approach is developed that can accurately and efficiently compute small probabilities of failure. This methodology is based on predicting the weakest link, i.e., the two cracks to link up first, for a realization of initial crack sizes, and computing the cycles-to-failure using these two cracks. Criteria to determine the weakest link are discussed. Probability results using the weakest link sampling method are compared to Monte Carlo-based benchmark results. The results indicate that very small probabilities can be computed accurately in a few minutes using a Hewlett-Packard workstation.

  16. Survival analysis and classification methods for forest fire size

    PubMed Central

    2018-01-01

    Factors affecting wildland-fire size distribution include weather, fuels, and fire suppression activities. We present a novel application of survival analysis to quantify the effects of these factors on a sample of sizes of lightning-caused fires from Alberta, Canada. Two events were observed for each fire: the size at initial assessment (by the first fire fighters to arrive at the scene) and the size at “being held” (a state when no further increase in size is expected). We developed a statistical classifier to try to predict cases where there will be a growth in fire size (i.e., the size at “being held” exceeds the size at initial assessment). Logistic regression was preferred over two alternative classifiers, with covariates consistent with similar past analyses. We conducted survival analysis on the group of fires exhibiting a size increase. A screening process selected three covariates: an index of fire weather on the day the fire started, the fuel type burning at initial assessment, and a factor for the type and capabilities of the method of initial attack. The Cox proportional hazards model performed better than three accelerated failure time alternatives. Both fire weather and fuel type were highly significant, with effects consistent with known fire behaviour. The effects of initial attack method were not statistically significant, but did suggest a reverse causality that could arise if fire management agencies were to dispatch resources based on an a priori assessment of fire growth potentials. We discuss how a more sophisticated analysis of larger data sets could produce unbiased estimates of fire suppression effect under such circumstances. PMID:29320497

  17. Survival analysis and classification methods for forest fire size.

    PubMed

    Tremblay, Pier-Olivier; Duchesne, Thierry; Cumming, Steven G

    2018-01-01

    Factors affecting wildland-fire size distribution include weather, fuels, and fire suppression activities. We present a novel application of survival analysis to quantify the effects of these factors on a sample of sizes of lightning-caused fires from Alberta, Canada. Two events were observed for each fire: the size at initial assessment (by the first fire fighters to arrive at the scene) and the size at "being held" (a state when no further increase in size is expected). We developed a statistical classifier to try to predict cases where there will be a growth in fire size (i.e., the size at "being held" exceeds the size at initial assessment). Logistic regression was preferred over two alternative classifiers, with covariates consistent with similar past analyses. We conducted survival analysis on the group of fires exhibiting a size increase. A screening process selected three covariates: an index of fire weather on the day the fire started, the fuel type burning at initial assessment, and a factor for the type and capabilities of the method of initial attack. The Cox proportional hazards model performed better than three accelerated failure time alternatives. Both fire weather and fuel type were highly significant, with effects consistent with known fire behaviour. The effects of initial attack method were not statistically significant, but did suggest a reverse causality that could arise if fire management agencies were to dispatch resources based on an a priori assessment of fire growth potentials. We discuss how a more sophisticated analysis of larger data sets could produce unbiased estimates of fire suppression effect under such circumstances.
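
    For readers who want to reproduce the modelling step described above, the sketch below fits a Cox proportional hazards model with the lifelines package. The data frame, covariate names and the use of size at "being held" as the survival "time" are hypothetical stand-ins for the study's variables, not its actual data or code.

    ```python
    import pandas as pd
    from lifelines import CoxPHFitter

    # Hypothetical records for fires that grew after initial assessment.
    # 'size_held' plays the role of the survival "time" (size at being held);
    # 'event' = 1 means growth stopped (uncensored observation).
    df = pd.DataFrame({
        "size_held":    [3.2, 10.5, 0.4, 55.0, 7.8, 120.0, 2.1, 15.3],
        "event":        [1, 1, 1, 1, 1, 0, 1, 1],
        "fwi":          [12.0, 25.5, 5.1, 31.0, 18.2, 29.4, 8.7, 22.0],  # fire weather index
        "fuel_conifer": [1, 0, 1, 1, 0, 1, 0, 1],                        # dummy-coded fuel type
        "attack_air":   [0, 1, 0, 1, 1, 1, 0, 0],                        # dummy-coded initial attack
    })

    # Small ridge penalty only to stabilise this tiny illustrative sample.
    cph = CoxPHFitter(penalizer=0.1)
    cph.fit(df, duration_col="size_held", event_col="event")
    cph.print_summary()  # hazard ratios for fire weather, fuel type and attack method
    ```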

  18. The Mars Orbital Catalog of Hydrated Alteration Signatures (MOCHAS) - Initial release

    NASA Astrophysics Data System (ADS)

    Carter, John; OMEGA and CRISM Teams

    2016-10-01

    Aqueous minerals have been identified from orbit at a number of localities, and their analysis has helped refine the water story of Early Mars. They are also a main science driver when selecting current and upcoming landing sites for roving missions. Available catalogs of mineral detections exhibit a number of drawbacks, such as a limited sample size (a thousand sites at most), inhomogeneous sampling of the surface and of the investigation methods, and a lack of contextual information (e.g. spatial extent, morphological context). The MOCHAS project strives to address such limitations by providing a global, detailed survey of aqueous minerals on Mars based on 10 years of data from the OMEGA and CRISM imaging spectrometers. Contextual data are provided, including deposit sizes, morphology and detailed composition when available. Sampling biases are also addressed. It will be openly distributed in GIS-ready format and will be participative. For example, it will be possible for researchers to submit requests for specific mapping of regions of interest, or to add or refine mineral detections. An initial release is scheduled for Fall 2016 and will feature a two orders of magnitude increase in sample size compared to previous studies.

  19. Mechanisms of Laser-Induced Dissection and Transport of Histologic Specimens

    PubMed Central

    Vogel, Alfred; Lorenz, Kathrin; Horneffer, Verena; Hüttmann, Gereon; von Smolinski, Dorthe; Gebert, Andreas

    2007-01-01

    Rapid contact- and contamination-free procurement of histologic material for proteomic and genomic analysis can be achieved by laser microdissection of the sample of interest followed by laser-induced transport (laser pressure catapulting). The dynamics of laser microdissection and laser pressure catapulting of histologic samples of 80 μm diameter was investigated by means of time-resolved photography. The working mechanism of microdissection was found to be plasma-mediated ablation initiated by linear absorption. Catapulting was driven by plasma formation when tightly focused pulses were used, and by photothermal ablation at the bottom of the sample when defocused pulses producing laser spot diameters larger than 35 μm were used. With focused pulses, driving pressures of several hundred MPa accelerated the specimen to initial velocities of 100–300 m/s before they were rapidly slowed down by air friction. When the laser spot was increased to a size comparable to or larger than the sample diameter, both driving pressure and flight velocity decreased considerably. Based on a characterization of the thermal and optical properties of the histologic specimens and supporting materials used, we calculated the evolution of the heat distribution in the sample. Selected catapulted samples were examined by scanning electron microscopy or analyzed by real-time reverse-transcriptase polymerase chain reaction. We found that catapulting of dissected samples results in little collateral damage when the laser pulses are either tightly focused or when the laser spot size is comparable to the specimen size. By contrast, moderate defocusing with spot sizes up to one-third of the specimen diameter may involve significant heat and ultraviolet exposure. Potential side effects are maximal when samples are catapulted directly from a glass slide without a supporting polymer foil. PMID:17766336

  20. Random Distribution Pattern and Non-adaptivity of Genome Size in a Highly Variable Population of Festuca pallens

    PubMed Central

    Šmarda, Petr; Bureš, Petr; Horová, Lucie

    2007-01-01

    Background and Aims The spatial and statistical distribution of genome sizes and the adaptivity of genome size to some types of habitat, vegetation or microclimatic conditions were investigated in a tetraploid population of Festuca pallens. The population was previously documented to vary highly in genome size and is assumed as a model for the study of the initial stages of genome size differentiation. Methods Using DAPI flow cytometry, samples were measured repeatedly with diploid Festuca pallens as the internal standard. Altogether 172 plants from 57 plots (2.25 m²), distributed in contrasting habitats over the whole locality in South Moravia, Czech Republic, were sampled. The differences in DNA content were confirmed by the double peaks of simultaneously measured samples. Key Results At maximum, a 1.115-fold difference in genome size was observed. The statistical distribution of genome sizes was found to be continuous and best fits the extreme-value (Gumbel) distribution, with rare occurrences of extremely large genomes (positively skewed), similar to the log-normal distribution observed across the Angiosperms as a whole. Even plants from the same plot frequently varied considerably in genome size, and the spatial distribution of genome sizes was generally random and unautocorrelated (P > 0.05). The observed spatial pattern and the overall lack of correlations of genome size with recognized vegetation types or microclimatic conditions indicate the absence of ecological adaptivity of genome size in the studied population. Conclusions These experimental data on intraspecific genome size variability in Festuca pallens argue for the absence of natural selection and the selective non-significance of genome size in the initial stages of genome size differentiation, and corroborate the current hypothetical model of genome size evolution in Angiosperms (Bennetzen et al., 2005, Annals of Botany 95: 127–132). PMID:17565968
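
    The distribution-fitting step reported above can be illustrated with a short script that fits an extreme-value (Gumbel) and a log-normal distribution to genome-size measurements and compares their fit; the 2C-values below are simulated, not the study's data.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    # Hypothetical 2C genome sizes (pg) for 172 tetraploid plants, mildly right-skewed.
    genome_size = stats.gumbel_r.rvs(loc=5.45, scale=0.05, size=172, random_state=rng)

    # Fit a Gumbel (extreme value) and a log-normal distribution.
    gum_params = stats.gumbel_r.fit(genome_size)
    logn_params = stats.lognorm.fit(genome_size)

    # Compare fits via the Kolmogorov-Smirnov statistic (smaller D = closer fit).
    d_gum, p_gum = stats.kstest(genome_size, "gumbel_r", args=gum_params)
    d_logn, p_logn = stats.kstest(genome_size, "lognorm", args=logn_params)
    print(f"Gumbel:    D={d_gum:.3f}, p={p_gum:.2f}")
    print(f"Lognormal: D={d_logn:.3f}, p={p_logn:.2f}")
    ```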

  1. Microwave Heating of Synthetic Skin Samples for Potential Treatment of Gout Using the Metal-Assisted and Microwave-Accelerated Decrystallization Technique

    PubMed Central

    2016-01-01

    Physical stability of synthetic skin samples during their exposure to microwave heating was investigated to demonstrate the use of the metal-assisted and microwave-accelerated decrystallization (MAMAD) technique for potential biomedical applications. In this regard, optical microscopy and temperature measurements were employed for the qualitative and quantitative assessment of damage to synthetic skin samples during 20 s intermittent microwave heating using a monomode microwave source (at 8 GHz, 2–20 W) up to 120 s. The extent of damage to synthetic skin samples, assessed by the change in the surface area of skin samples, was negligible for microwave power of ≤7 W and more extensive damage (>50%) to skin samples occurred when exposed to >7 W at initial temperature range of 20–39 °C. The initial temperature of synthetic skin samples significantly affected the extent of change in temperature of synthetic skin samples during their exposure to microwave heating. The proof of principle use of the MAMAD technique was demonstrated for the decrystallization of a model biological crystal (l-alanine) placed under synthetic skin samples in the presence of gold nanoparticles. Our results showed that the size (initial size ∼850 μm) of l-alanine crystals can be reduced up to 60% in 120 s without damage to synthetic skin samples using the MAMAD technique. Finite-difference time-domain-based simulations of the electric field distribution of an 8 GHz monomode microwave radiation showed that synthetic skin samples are predicted to absorb ∼92.2% of the microwave radiation. PMID:27917407

  2. Frictional behaviour of sandstone: A sample-size dependent triaxial investigation

    NASA Astrophysics Data System (ADS)

    Roshan, Hamid; Masoumi, Hossein; Regenauer-Lieb, Klaus

    2017-01-01

    Frictional behaviour of rocks from the initial stage of loading to final shear displacement along the formed shear plane has been widely investigated in the past. However, the effect of sample size on such frictional behaviour has not attracted much attention. This is mainly related to the limitations of rock testing facilities as well as the complex mechanisms involved in the sample-size dependent frictional behaviour of rocks. In this study, a suite of advanced triaxial experiments was performed on Gosford sandstone samples of different sizes and at different confining pressures. The post-peak response of the rock along the formed shear plane has been captured for the analysis, with particular interest in sample-size dependency. Several important phenomena have been observed from the results of this study: (a) the rate of transition from brittleness to ductility in rock is sample-size dependent, with relatively smaller samples showing a faster transition toward ductility at any confining pressure; (b) the sample size influences the angle of the formed shear band; and (c) the friction coefficient of the formed shear plane is sample-size dependent, with relatively smaller samples exhibiting a lower friction coefficient than larger samples. We interpret our results in terms of a thermodynamics approach in which the frictional properties for finite deformation are viewed as encompassing a multitude of ephemeral slipping surfaces prior to the formation of the through-going fracture. The final fracture itself is seen as a result of the self-organisation of a sufficiently large ensemble of micro-slip surfaces and is therefore consistent with the theory of thermodynamics. This assumption vindicates the use of classical rock mechanics experiments to constrain failure of pressure-sensitive rocks, and the future imaging of these micro-slips opens an exciting path for research in rock failure mechanisms.

  3. Sample size considerations using mathematical models: an example with Chlamydia trachomatis infection and its sequelae pelvic inflammatory disease.

    PubMed

    Herzog, Sereina A; Low, Nicola; Berghold, Andrea

    2015-06-19

    The success of an intervention to prevent the complications of an infection is influenced by the natural history of the infection. Assumptions about the temporal relationship between infection and the development of sequelae can affect the predicted effect size of an intervention and the sample size calculation. This study investigates how a mathematical model can be used to inform sample size calculations for a randomised controlled trial (RCT) using the example of Chlamydia trachomatis infection and pelvic inflammatory disease (PID). We used a compartmental model to imitate the structure of a published RCT. We considered three different processes for the timing of PID development, in relation to the initial C. trachomatis infection: immediate, constant throughout, or at the end of the infectious period. For each process we assumed that, of all women infected, the same fraction would develop PID in the absence of an intervention. We examined two sets of assumptions used to calculate the sample size in a published RCT that investigated the effect of chlamydia screening on PID incidence. We also investigated the influence of the natural history parameters of chlamydia on the required sample size. The assumed event rates and effect sizes used for the sample size calculation implicitly determined the temporal relationship between chlamydia infection and PID in the model. Even small changes in the assumed PID incidence and relative risk (RR) led to considerable differences in the hypothesised mechanism of PID development. The RR and the sample size needed per group also depend on the natural history parameters of chlamydia. Mathematical modelling helps to understand the temporal relationship between an infection and its sequelae and can show how uncertainties about natural history parameters affect sample size calculations when planning a RCT.
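
    The trial-planning question discussed above ultimately feeds into a conventional two-proportion sample-size formula, in which the assumed PID incidence and relative risk drive the answer. The sketch below shows that standard calculation with hypothetical numbers; it is not the authors' compartmental model.

    ```python
    from math import ceil
    from scipy.stats import norm

    def n_per_group(p_control, rr, alpha=0.05, power=0.80):
        """Sample size per arm for comparing two proportions (normal approximation)."""
        p_treat = p_control * rr
        z_a = norm.ppf(1 - alpha / 2)
        z_b = norm.ppf(power)
        var = p_control * (1 - p_control) + p_treat * (1 - p_treat)
        return ceil((z_a + z_b) ** 2 * var / (p_control - p_treat) ** 2)

    # Hypothetical assumptions: PID incidence 3% in controls, screening halves the risk.
    print(n_per_group(0.03, 0.5))   # ~1531 women per arm
    # Small changes in the assumed incidence or RR move the answer substantially:
    print(n_per_group(0.02, 0.5))   # ~2316 per arm
    print(n_per_group(0.03, 0.6))   # ~2550 per arm
    ```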

  4. The relationship between national-level carbon dioxide emissions and population size: an assessment of regional and temporal variation, 1960-2005.

    PubMed

    Jorgenson, Andrew K; Clark, Brett

    2013-01-01

    This study examines the regional and temporal differences in the statistical relationship between national-level carbon dioxide emissions and national-level population size. The authors analyze panel data from 1960 to 2005 for a diverse sample of nations, and employ descriptive statistics and rigorous panel regression modeling techniques. Initial descriptive analyses indicate that all regions experienced overall increases in carbon emissions and population size during the 45-year period of investigation, but with notable differences. For carbon emissions, the sample of countries in Asia experienced the largest percent increase, followed by countries in Latin America, Africa, and lastly the sample of relatively affluent countries in Europe, North America, and Oceania combined. For population size, the sample of countries in Africa experienced the largest percent increase, followed by countries in Latin America, Asia, and the combined sample of countries in Europe, North America, and Oceania. Findings for two-way fixed effects panel regression elasticity models of national-level carbon emissions indicate that the estimated elasticity coefficient for population size is much smaller for nations in Africa than for nations in other regions of the world. Regarding potential temporal changes, from 1960 to 2005 the estimated elasticity coefficient for population size decreased by 25% for the sample of Africa countries, 14% for the sample of Asia countries, 6.5% for the sample of Latin America countries, but remained the same in size for the sample of countries in Europe, North America, and Oceania. Overall, while population size continues to be the primary driver of total national-level anthropogenic carbon dioxide emissions, the findings for this study highlight the need for future research and policies to recognize that the actual impacts of population size on national-level carbon emissions differ across both time and region.

  5. Caution regarding the choice of standard deviations to guide sample size calculations in clinical trials.

    PubMed

    Chen, Henian; Zhang, Nanhua; Lu, Xiaosun; Chen, Sophie

    2013-08-01

    The method used to determine choice of standard deviation (SD) is inadequately reported in clinical trials. Underestimations of the population SD may result in underpowered clinical trials. This study demonstrates how using the wrong method to determine population SD can lead to inaccurate sample sizes and underpowered studies, and offers recommendations to maximize the likelihood of achieving adequate statistical power. We review the practice of reporting sample size and its effect on the power of trials published in major journals. Simulated clinical trials were used to compare the effects of different methods of determining SD on power and sample size calculations. Prior to 1996, sample size calculations were reported in just 1%-42% of clinical trials. This proportion increased from 38% to 54% after the initial Consolidated Standards of Reporting Trials (CONSORT) was published in 1996, and from 64% to 95% after the revised CONSORT was published in 2001. Nevertheless, underpowered clinical trials are still common. Our simulated data showed that all minimal and 25th-percentile SDs fell below 44 (the population SD), regardless of sample size (from 5 to 50). For sample sizes 5 and 50, the minimum sample SDs underestimated the population SD by 90.7% and 29.3%, respectively. If only one sample was available, there was less than 50% chance that the actual power equaled or exceeded the planned power of 80% for detecting a median effect size (Cohen's d = 0.5) when using the sample SD to calculate the sample size. The proportions of studies with actual power of at least 80% were about 95%, 90%, 85%, and 80% when we used the larger SD, 80% upper confidence limit (UCL) of SD, 70% UCL of SD, and 60% UCL of SD to calculate the sample size, respectively. When more than one sample was available, the weighted average SD resulted in about 50% of trials being underpowered; the proportion of trials with power of 80% increased from 90% to 100% when the 75th percentile and the maximum SD from 10 samples were used. Greater sample size is needed to achieve a higher proportion of studies having actual power of 80%. This study only addressed sample size calculation for continuous outcome variables. We recommend using the 60% UCL of SD, maximum SD, 80th-percentile SD, and 75th-percentile SD to calculate sample size when 1 or 2 samples, 3 samples, 4-5 samples, and more than 5 samples of data are available, respectively. Using the sample SD or average SD to calculate sample size should be avoided.
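
    A short sketch of the two calculations behind the recommendations above: the standard two-sample size formula driven by an assumed SD, and one common chi-square construction of an upper confidence limit (UCL) for an SD estimated from pilot data. The pilot sample and target difference are hypothetical, and the authors' exact UCL construction is not reproduced here.

    ```python
    import numpy as np
    from math import ceil
    from scipy.stats import norm, chi2

    def n_per_group(sd, delta, alpha=0.05, power=0.80):
        """Two-sample size per arm (normal approximation) for mean difference delta."""
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        return ceil(2 * (z * sd / delta) ** 2)

    def sd_ucl(sample, conf=0.60):
        """One-sided chi-square upper confidence limit for the population SD."""
        s, df = np.std(sample, ddof=1), len(sample) - 1
        return s * np.sqrt(df / chi2.ppf(1 - conf, df))

    rng = np.random.default_rng(0)
    pilot = rng.normal(loc=100, scale=44, size=20)   # hypothetical pilot sample, true SD = 44
    delta = 22                                       # target difference (Cohen's d = 0.5)

    print(n_per_group(np.std(pilot, ddof=1), delta))   # plain sample SD: risks underpowering
    print(n_per_group(sd_ucl(pilot, 0.60), delta))     # 60% UCL of SD: more conservative
    print(n_per_group(44, delta))                      # true SD, for reference (~63 per arm)
    ```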

  6. Sample Size Estimation for Alzheimer's Disease Trials from Japanese ADNI Serial Magnetic Resonance Imaging.

    PubMed

    Fujishima, Motonobu; Kawaguchi, Atsushi; Maikusa, Norihide; Kuwano, Ryozo; Iwatsubo, Takeshi; Matsuda, Hiroshi

    2017-01-01

    Little is known about the sample sizes required for clinical trials of Alzheimer's disease (AD)-modifying treatments using atrophy measures from serial brain magnetic resonance imaging (MRI) in the Japanese population. The primary objective of the present study was to estimate how large a sample size would be needed for future clinical trials for AD-modifying treatments in Japan using atrophy measures of the brain as a surrogate biomarker. Sample sizes were estimated from the rates of change of the whole brain and hippocampus by the k-means normalized boundary shift integral (KN-BSI) and cognitive measures using the data of 537 Japanese Alzheimer's Neuroimaging Initiative (J-ADNI) participants with a linear mixed-effects model. We also examined the potential use of ApoE status as a trial enrichment strategy. The hippocampal atrophy rate required smaller sample sizes than cognitive measures of AD and mild cognitive impairment (MCI). Inclusion of ApoE status reduced sample sizes for AD and MCI patients in the atrophy measures. These results show the potential use of longitudinal hippocampal atrophy measurement using automated image analysis as a progression biomarker and ApoE status as a trial enrichment strategy in a clinical trial of AD-modifying treatment in Japanese people.
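
    A minimal sketch of the kind of linear mixed-effects fit described above (random intercept and slope per participant for a longitudinal atrophy measure), using statsmodels; the long-format data and column names are simulated stand-ins, not the J-ADNI data or analysis code.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n_subj = 40
    visits = np.array([0.0, 0.5, 1.0, 1.5, 2.0])   # years from baseline

    # Hypothetical longitudinal hippocampal volumes with subject-specific slopes.
    rows = []
    for s in range(n_subj):
        slope = rng.normal(-0.12, 0.04)   # annual atrophy (arbitrary units)
        base = rng.normal(3.0, 0.3)
        for t in visits:
            rows.append({"subject": s, "years": t,
                         "volume": base + slope * t + rng.normal(0, 0.02)})
    df = pd.DataFrame(rows)

    # Random intercept and random slope for time, per subject.
    model = smf.mixedlm("volume ~ years", df, groups=df["subject"], re_formula="~years")
    result = model.fit()
    print(result.summary())   # the fixed effect of 'years' estimates the mean annual atrophy rate
    ```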

  7. Inactivation of Alicyclobacillus acidoterrestris ATCC 49025 spores in apple juice by pulsed light. Influence of initial contamination and required reduction levels.

    PubMed

    Ferrario, Mariana I; Guerrero, Sandra N

    The purpose of this study was to analyze the response of different initial contamination levels of Alicyclobacillus acidoterrestris ATCC 49025 spores in apple juice as affected by pulsed light treatment (PL, batch mode, xenon lamp, 3 pulses/s, 0-71.6 J/cm²). Biphasic and Weibull frequency distribution models were used to characterize the relationship between inoculum size and treatment time and the reductions achieved after PL exposure. Additionally, a second order polynomial model was computed to relate the required PL processing time to inoculum size and requested log reductions. PL treatment caused up to 3.0-3.5 log reductions, depending on the initial inoculum size. Inactivation curves corresponding to PL-treated samples were adequately characterized by both the Weibull and biphasic models (adjusted R² of 94-96%), and revealed that lower initial inoculum sizes were associated with higher inactivation rates. According to the polynomial model, the predicted time for PL treatment increased exponentially with inoculum size. Copyright © 2017 Asociación Argentina de Microbiología. Publicado por Elsevier España, S.L.U. All rights reserved.
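
    A short sketch of fitting the Weibull survival model mentioned above, in the log10 (Mafart) form, to pulsed-light inactivation data with scipy; the treatment times and survival ratios are hypothetical and the biphasic alternative is not shown.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def weibull_log_survival(t, delta, p):
        """Mafart form of the Weibull model: log10(N/N0) = -(t/delta)**p."""
        return -((t / delta) ** p)

    # Hypothetical PL treatment times (s) and measured log10 survival ratios.
    t = np.array([10, 20, 30, 40, 50, 60], dtype=float)
    log_s = np.array([-0.6, -1.3, -1.9, -2.4, -2.8, -3.1])

    (delta, p), _ = curve_fit(weibull_log_survival, t, log_s, p0=[15.0, 1.0])
    pred = weibull_log_survival(t, delta, p)
    ss_res = np.sum((log_s - pred) ** 2)
    ss_tot = np.sum((log_s - log_s.mean()) ** 2)
    print(f"delta = {delta:.1f} s, p = {p:.2f}, R^2 = {1 - ss_res / ss_tot:.3f}")
    ```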

  8. Improving the accuracy of livestock distribution estimates through spatial interpolation.

    PubMed

    Bryssinckx, Ward; Ducheyne, Els; Muhwezi, Bernard; Godfrey, Sunday; Mintiens, Koen; Leirs, Herwig; Hendrickx, Guy

    2012-11-01

    Animal distribution maps serve many purposes such as estimating transmission risk of zoonotic pathogens to both animals and humans. The reliability and usability of such maps is highly dependent on the quality of the input data. However, decisions on how to perform livestock surveys are often based on previous work without considering possible consequences. A better understanding of the impact of using different sample designs and processing steps on the accuracy of livestock distribution estimates was acquired through iterative experiments using detailed survey data. The importance of sample size, sample design and aggregation is demonstrated, and spatial interpolation is presented as a potential way to improve cattle number estimates. As expected, results show that an increasing sample size increased the precision of cattle number estimates, but these improvements were mainly seen when the initial sample size was relatively low (e.g. a median relative error decrease of 0.04% per sampled parish for sample sizes below 500 parishes). For higher sample sizes, the added value of further increasing the number of samples declined rapidly (e.g. a median relative error decrease of 0.01% per sampled parish for sample sizes above 500 parishes). When a two-stage stratified sample design was applied to yield more evenly distributed samples, accuracy levels were higher for low sample densities and stabilised at lower sample sizes compared to one-stage stratified sampling. Aggregating the resulting cattle number estimates yielded significantly more accurate results because of averaging under- and over-estimates (e.g. when aggregating cattle number estimates from subcounty to district level, P <0.009 based on a sample of 2,077 parishes using one-stage stratified samples). During aggregation, area-weighted mean values were assigned to higher administrative unit levels. However, when this step is preceded by a spatial interpolation to fill in missing values in non-sampled areas, accuracy is improved remarkably. This is especially the case for low sample sizes and spatially evenly distributed samples (e.g. P <0.001 for a sample of 170 parishes using one-stage stratified sampling and aggregation on district level). Whether the same observations apply at a lower spatial scale should be further investigated.
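
    One simple way to fill non-sampled areas before aggregation, as suggested above, is inverse-distance weighting. The sketch below implements a generic IDW estimate with hypothetical parish centroids and cattle densities; the interpolation method actually used by the authors may differ.

    ```python
    import numpy as np

    def idw(xy_known, values, xy_missing, power=2.0):
        """Inverse-distance-weighted estimate at unsampled locations."""
        est = np.empty(len(xy_missing))
        for i, p in enumerate(xy_missing):
            d = np.linalg.norm(xy_known - p, axis=1)
            if np.any(d == 0):                 # point coincides with a sampled parish
                est[i] = values[d == 0][0]
                continue
            w = 1.0 / d ** power
            est[i] = np.sum(w * values) / np.sum(w)
        return est

    rng = np.random.default_rng(3)
    sampled_xy = rng.uniform(0, 100, size=(50, 2))            # sampled parish centroids (km)
    cattle_density = rng.gamma(shape=2.0, scale=30, size=50)  # head per km^2
    unsampled_xy = rng.uniform(0, 100, size=(10, 2))          # parishes without survey data

    filled = idw(sampled_xy, cattle_density, unsampled_xy)
    # District totals can then be aggregated from sampled plus interpolated parish values.
    print(np.round(filled, 1))
    ```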

  9. Variation in aluminum, iron, and particle concentrations in oxic groundwater samples collected by use of tangential-flow ultrafiltration with low-flow sampling

    NASA Astrophysics Data System (ADS)

    Szabo, Zoltan; Oden, Jeannette H.; Gibs, Jacob; Rice, Donald E.; Ding, Yuan

    2002-02-01

    Particulates that move with ground water and those that are artificially mobilized during well purging could be incorporated into water samples during collection and could cause trace-element concentrations to vary in unfiltered samples, and possibly in filtered samples (typically 0.45-um (micron) pore size) as well, depending on the particle-size fractions present. Therefore, measured concentrations may not be representative of those in the aquifer. Ground water may contain particles of various sizes and shapes that are broadly classified as colloids, which do not settle from water, and particulates, which do. In order to investigate variations in trace-element concentrations in ground-water samples as a function of particle concentrations and particle-size fractions, the U.S. Geological Survey, in cooperation with the U.S. Air Force, collected samples from five wells completed in the unconfined, oxic Kirkwood-Cohansey aquifer system of the New Jersey Coastal Plain. Samples were collected by purging with a portable pump at low flow (0.2-0.5 liters per minute and minimal drawdown, ideally less than 0.5 foot). Unfiltered samples were collected in the following sequence: (1) within the first few minutes of pumping, (2) after initial turbidity declined and about one to two casing volumes of water had been purged, and (3) after turbidity values had stabilized at less than 1 to 5 Nephelometric Turbidity Units. Filtered samples were split concurrently through (1) a 0.45-um pore size capsule filter, (2) a 0.45-um pore size capsule filter and a 0.0029-um pore size tangential-flow filter in sequence, and (3), in selected cases, a 0.45-um and a 0.05-um pore size capsule filter in sequence. Filtered samples were collected concurrently with the unfiltered sample that was collected when turbidity values stabilized. Quality-assurance samples consisted of sequential duplicates (about 25 percent) and equipment blanks. Concentrations of particles were determined by light scattering.

  10. Determination of the influence of dispersion pattern of pesticide-resistant individuals on the reliability of resistance estimates using different sampling plans.

    PubMed

    Shah, R; Worner, S P; Chapman, R B

    2012-10-01

    Pesticide resistance monitoring includes resistance detection and subsequent documentation/ measurement. Resistance detection would require at least one (≥1) resistant individual(s) to be present in a sample to initiate management strategies. Resistance documentation, on the other hand, would attempt to get an estimate of the entire population (≥90%) of the resistant individuals. A computer simulation model was used to compare the efficiency of simple random and systematic sampling plans to detect resistant individuals and to document their frequencies when the resistant individuals were randomly or patchily distributed. A patchy dispersion pattern of resistant individuals influenced the sampling efficiency of systematic sampling plans while the efficiency of random sampling was independent of such patchiness. When resistant individuals were randomly distributed, sample sizes required to detect at least one resistant individual (resistance detection) with a probability of 0.95 were 300 (1%) and 50 (10% and 20%); whereas, when resistant individuals were patchily distributed, using systematic sampling, sample sizes required for such detection were 6000 (1%), 600 (10%) and 300 (20%). Sample sizes of 900 and 400 would be required to detect ≥90% of resistant individuals (resistance documentation) with a probability of 0.95 when resistant individuals were randomly dispersed and present at a frequency of 10% and 20%, respectively; whereas, when resistant individuals were patchily distributed, using systematic sampling, a sample size of 3000 and 1500, respectively, was necessary. Small sample sizes either underestimated or overestimated the resistance frequency. A simple random sampling plan is, therefore, recommended for insecticide resistance detection and subsequent documentation.
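
    For the random-dispersion case above, the sample size needed to detect at least one resistant individual follows from the binomial probability of missing them all. The sketch below gives that theoretical minimum; it assumes simple random sampling from a well-mixed population, so it matches the order of magnitude of the roughly 300 reported at 1% frequency, while the study's simulation-based figures for the higher frequencies (50) and for patchy dispersion (300-6000) are more conservative.

    ```python
    from math import ceil, log

    def n_to_detect(freq, prob_detect=0.95):
        """Smallest n with P(at least one resistant individual in the sample) >= prob_detect,
        assuming resistant individuals are randomly (binomially) distributed."""
        return ceil(log(1 - prob_detect) / log(1 - freq))

    for freq in (0.01, 0.10, 0.20):
        print(f"resistance frequency {freq:.0%}: n >= {n_to_detect(freq)}")
    # -> 299, 29 and 14 under random dispersion; the study's simulations show that
    #    patchy dispersion pushes the required systematic-sampling sizes far higher
    #    (6000, 600 and 300 for detection at the same frequencies).
    ```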

  11. Exploring Outcomes and Initial Self-Report of Client Motivation in a College Counseling Center

    ERIC Educational Resources Information Center

    Ilagan, Guy; Vinson, Michael L.; Sharp, Julia L.; Ilagan, Jill; Oberman, Aaron

    2015-01-01

    Objective: To explore the association between college counseling center clients' initial self-report of motivation and counseling outcome. Participants: The sample was composed of 331 student clients who utilized a college counseling center from August 2007 to August 2009. The college is a public, mid-size, urban university in the Southeast.…

  12. Investigating the effect of sputtering conditions on the physical properties of aluminum thin film and the resulting alumina template

    NASA Astrophysics Data System (ADS)

    Taheriniya, Shabnam; Parhizgar, Sara Sadat; Sari, Amir Hossein

    2018-06-01

    To study the alumina template pore size distribution as a function of the Al thin film grain size distribution, porous alumina templates were prepared by anodizing sputtered aluminum thin films. To control the grain size, the aluminum samples were sputtered at rates of 0.5, 1 and 2 Å/s with the substrate temperature at 25, 75 or 125 °C. All samples were anodized for 120 s in 1 M sulfuric acid solution kept at 1 °C while a 15 V potential was applied. The standard deviation for samples deposited at room temperature but with different rates is roughly 2 nm in both thin film and porous template form, but rises to approximately 4 nm with increasing substrate temperature. Samples with average grain sizes of 13, 14, 18.5 and 21 nm produce alumina templates with average pore sizes of 8.5, 10, 15 and 16 nm, respectively, which shows that the average grain size limits the average pore diameter in the resulting template. Lateral correlation length and grain boundary effects are other factors that affect the pore formation process and pore size distribution by limiting the initial current density.

  13. Replication and contradiction of highly cited research papers in psychiatry: 10-year follow-up.

    PubMed

    Tajika, Aran; Ogawa, Yusuke; Takeshima, Nozomi; Hayasaka, Yu; Furukawa, Toshi A

    2015-10-01

    Contradictions and initial overestimates are not unusual among highly cited studies. However, this issue has not been researched in psychiatry. Aims: To assess how highly cited studies in psychiatry are replicated by subsequent studies. We selected highly cited studies claiming effective psychiatric treatments in the years 2000 through 2002. For each of these studies we searched for subsequent studies with a better-controlled design, or with a similar design but a larger sample. Among 83 articles recommending effective interventions, 40 had not been subject to any attempt at replication, 16 were contradicted, 11 were found to have substantially smaller effects and only 16 were replicated. The standardised mean differences of the initial studies were overestimated by 132%. Studies with a total sample size of 100 or more tended to produce replicable results. Caution is needed when a study with a small sample size reports a large effect. © The Royal College of Psychiatrists 2015.

  14. Degradation of radiator performance on Mars due to dust

    NASA Technical Reports Server (NTRS)

    Gaier, James R.; Perez-Davis, Marla E.; Rutledge, Sharon K.; Forkapa, Mark

    1992-01-01

    An artificial mineral of the approximate elemental composition of Martian soil was manufactured, crushed, and sorted into four different size ranges. Dust particles from three of these size ranges were applied to arc-textured Nb-1 percent Zr and Cu radiator surfaces to assess their effect on radiator performance. Particles larger than 75 microns did not have sufficient adhesive forces to adhere to the samples at angles greater than about 27 deg. Pre-deposited dust layers were largely removed by clear wind velocities greater than 40 m/s, or by dust-laden wind velocities as low as 25 m/s. Smaller dust grains were more difficult to remove. Abrasion was found to be significant only in high velocity winds (89 m/s or greater). Dust-laden winds were found to be more abrasive than clear wind. Initially dusted samples abraded less than initially clear samples in dust-laden wind. Smaller dust particles of the simulant proved to be more abrasive than larger ones. This probably indicates that the larger particles were in fact agglomerates.

  15. The Relationship between National-Level Carbon Dioxide Emissions and Population Size: An Assessment of Regional and Temporal Variation, 1960–2005

    PubMed Central

    Jorgenson, Andrew K.; Clark, Brett

    2013-01-01

    This study examines the regional and temporal differences in the statistical relationship between national-level carbon dioxide emissions and national-level population size. The authors analyze panel data from 1960 to 2005 for a diverse sample of nations, and employ descriptive statistics and rigorous panel regression modeling techniques. Initial descriptive analyses indicate that all regions experienced overall increases in carbon emissions and population size during the 45-year period of investigation, but with notable differences. For carbon emissions, the sample of countries in Asia experienced the largest percent increase, followed by countries in Latin America, Africa, and lastly the sample of relatively affluent countries in Europe, North America, and Oceania combined. For population size, the sample of countries in Africa experienced the largest percent increase, followed by countries in Latin America, Asia, and the combined sample of countries in Europe, North America, and Oceania. Findings for two-way fixed effects panel regression elasticity models of national-level carbon emissions indicate that the estimated elasticity coefficient for population size is much smaller for nations in Africa than for nations in other regions of the world. Regarding potential temporal changes, from 1960 to 2005 the estimated elasticity coefficient for population size decreased by 25% for the sample of African countries, 14% for the sample of Asian countries, 6.5% for the sample of Latin American countries, but remained the same in size for the sample of countries in Europe, North America, and Oceania. Overall, while population size continues to be the primary driver of total national-level anthropogenic carbon dioxide emissions, the findings of this study highlight the need for future research and policies to recognize that the actual impacts of population size on national-level carbon emissions differ across both time and region. PMID:23437323
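    As a hedged sketch of the kind of two-way fixed effects elasticity model described (country and year effects, log-log specification), the snippet below fits the population elasticity with statsmodels; the panel, its column names and the clustered-error choice are synthetic stand-ins, not the authors' data or exact estimator.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      # synthetic stand-in panel: one row per country-year
      rng = np.random.default_rng(0)
      rows = [(f"c{i}", year, np.exp(rng.normal(8, 1)), np.exp(rng.normal(16, 1)))
              for i in range(20) for year in range(1960, 2006, 5)]
      df = pd.DataFrame(rows, columns=["country", "year", "co2", "population"])
      df["log_co2"] = np.log(df["co2"])
      df["log_pop"] = np.log(df["population"])

      # two-way fixed effects via country and year dummies; the coefficient on
      # log_pop is the elasticity of emissions with respect to population size
      fit = smf.ols("log_co2 ~ log_pop + C(country) + C(year)", data=df).fit(
          cov_type="cluster",
          cov_kwds={"groups": df["country"].astype("category").cat.codes})
      print(fit.params["log_pop"])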

  16. Effect of flaw size and temperature on the matrix cracking behavior of a brittle ceramic matrix composite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anandakumar, U.; Webb, J.E.; Singh, R.N.

    The matrix cracking behavior of a zircon matrix-uniaxial SCS-6 fiber composite was studied as a function of initial flaw size and temperature. The composites were fabricated by a tape casting and hot pressing technique. Surface flaws of controlled size were introduced using a Vickers indenter. The composite samples were tested in three-point flexure at three different temperatures to study the non-steady-state and steady-state matrix cracking behavior. The composite samples exhibited steady-state and non-steady-state matrix cracking behavior at all temperatures. The steady-state matrix cracking stress and steady-state crack size increased with increasing temperature. The results of the study correlated well with the results predicted by the matrix cracking models.

  17. Should particle size analysis data be combined with EPA approved sampling method data in the development of AP-42 emission factors?

    USDA-ARS?s Scientific Manuscript database

    A cotton ginning industry-supported project was initiated in 2008 and completed in 2013 to collect additional data for U.S. Environmental Protection Agency’s (EPA) Compilation of Air Pollution Emission Factors (AP-42) for PM10 and PM2.5. Stack emissions were collected using particle size distributio...

  18. Experimental strategies for imaging bioparticles with femtosecond hard X-ray pulses

    DOE PAGES

    Daurer, Benedikt J.; Okamoto, Kenta; Bielecki, Johan; ...

    2017-04-07

    This study explores the capabilities of the Coherent X-ray Imaging Instrument at the Linac Coherent Light Source to image small biological samples. The weak signal from small samples puts a significant demand on the experiment. Aerosolized Omono River virus particles of ~40 nm in diameter were injected into the submicrometre X-ray focus at a reduced pressure. Diffraction patterns were recorded on two area detectors. The statistical nature of the measurements from many individual particles provided information about the intensity profile of the X-ray beam, phase variations in the wavefront and the size distribution of the injected particles. The results point to a wider than expected size distribution (from ~35 to ~300 nm in diameter). This is likely to be owing to nonvolatile contaminants from larger droplets during aerosolization and droplet evaporation. The results suggest that the concentration of nonvolatile contaminants and the ratio between the volumes of the initial droplet and the sample particles is critical in such studies. The maximum beam intensity in the focus was found to be 1.9 × 10¹² photons per µm² per pulse. The full-width of the focus at half-maximum was estimated to be 500 nm (assuming 20% beamline transmission), and this width is larger than expected. Under these conditions, the diffraction signal from a sample-sized particle remained above the average background to a resolution of 4.25 nm. Finally, the results suggest that reducing the size of the initial droplets during aerosolization is necessary to bring small particles into the scope of detailed structural studies with X-ray lasers.

  19. Re-estimating sample size in cluster randomised trials with active recruitment within clusters.

    PubMed

    van Schie, S; Moerbeek, M

    2014-08-30

    Often only a limited number of clusters can be obtained in cluster randomised trials, although many potential participants can be recruited within each cluster. Thus, active recruitment is feasible within the clusters. To obtain an efficient sample size in a cluster randomised trial, the cluster level and individual level variance should be known before the study starts, but this is often not the case. We suggest using an internal pilot study design to address this problem of unknown variances. A pilot can be useful to re-estimate the variances and re-calculate the sample size during the trial. Using simulated data, it is shown that an initially low or high power can be adjusted using an internal pilot with the type I error rate remaining within an acceptable range. The intracluster correlation coefficient can be re-estimated with more precision, which has a positive effect on the sample size. We conclude that an internal pilot study design may be used if active recruitment is feasible within a limited number of clusters. Copyright © 2014 John Wiley & Sons, Ltd.
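    A minimal sketch of how a re-estimated intracluster correlation coefficient (ICC) feeds back into the required cluster size when the number of clusters per arm is fixed and recruitment within clusters is active. It uses the standard design-effect formula 1 + (m - 1)ρ under a normal approximation, which is an assumption of this sketch rather than the paper's exact internal pilot procedure.

      import math

      def individuals_per_cluster(delta, sigma2, icc, k, alpha=0.05, power=0.80):
          """Individuals to recruit in each of k clusters per arm so that the
          effective sample size matches a simple two-arm comparison."""
          z_a, z_b = 1.959964, 0.841621            # for alpha = 0.05, power = 0.80
          n_srs = 2 * (z_a + z_b) ** 2 * sigma2 / delta ** 2   # per arm, no clustering
          denom = k - n_srs * icc
          if denom <= 0:
              return float("inf")                   # power unattainable with k clusters
          return math.ceil(n_srs * (1 - icc) / denom)

      # design-stage guess of the ICC versus the internal-pilot re-estimate
      print(individuals_per_cluster(delta=0.3, sigma2=1.0, icc=0.02, k=15))  # -> 15
      print(individuals_per_cluster(delta=0.3, sigma2=1.0, icc=0.05, k=15))  # -> 27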

  20. VARIABLE SELECTION IN NONPARAMETRIC ADDITIVE MODELS

    PubMed Central

    Huang, Jian; Horowitz, Joel L.; Wei, Fengrong

    2010-01-01

    We consider a nonparametric additive model of a conditional mean function in which the number of variables and additive components may be larger than the sample size but the number of nonzero additive components is “small” relative to the sample size. The statistical problem is to determine which additive components are nonzero. The additive components are approximated by truncated series expansions with B-spline bases. With this approximation, the problem of component selection becomes that of selecting the groups of coefficients in the expansion. We apply the adaptive group Lasso to select nonzero components, using the group Lasso to obtain an initial estimator and reduce the dimension of the problem. We give conditions under which the group Lasso selects a model whose number of components is comparable with the underlying model, and the adaptive group Lasso selects the nonzero components correctly with probability approaching one as the sample size increases and achieves the optimal rate of convergence. The results of Monte Carlo experiments show that the adaptive group Lasso procedure works well with samples of moderate size. A data example is used to illustrate the application of the proposed method. PMID:21127739
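    An illustrative sketch of the first (non-adaptive) step: expand each covariate in a B-spline basis and apply the group lasso via proximal gradient descent, zeroing out whole additive components. This is not the authors' implementation, and it assumes scikit-learn 1.0 or later for SplineTransformer.

      import numpy as np
      from sklearn.preprocessing import SplineTransformer

      def group_lasso_additive(X, y, lam, n_knots=6, degree=3, n_iter=500):
          """Group-lasso component selection for an additive model (sketch)."""
          n, p = X.shape
          blocks, group_sizes = [], []
          for j in range(p):
              B = SplineTransformer(n_knots=n_knots, degree=degree,
                                    include_bias=False).fit_transform(X[:, [j]])
              B -= B.mean(axis=0)                  # centre each basis column
              blocks.append(B)
              group_sizes.append(B.shape[1])
          Z = np.hstack(blocks)
          yc = y - y.mean()
          beta = np.zeros(Z.shape[1])
          step = 1.0 / np.linalg.eigvalsh(Z.T @ Z / n).max()   # 1 / Lipschitz constant
          for _ in range(n_iter):
              beta = beta - step * (Z.T @ (Z @ beta - yc) / n)
              start = 0
              for g in group_sizes:                 # block soft-thresholding (prox step)
                  b = beta[start:start + g]
                  norm = np.linalg.norm(b)
                  if norm > 0:
                      beta[start:start + g] = max(0.0, 1 - step * lam / norm) * b
                  start += g
          selected, start = [], 0
          for j, g in enumerate(group_sizes):       # component j kept if its block is nonzero
              if np.linalg.norm(beta[start:start + g]) > 1e-8:
                  selected.append(j)
              start += g
          return beta, selected

      rng = np.random.default_rng(0)
      X = rng.uniform(-1, 1, size=(200, 10))
      y = np.sin(np.pi * X[:, 0]) + X[:, 1] ** 2 + rng.normal(scale=0.3, size=200)
      beta, selected = group_lasso_additive(X, y, lam=0.05)
      print(selected)   # ideally the nonzero components 0 and 1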

  1. 40 CFR Appendix A to Subpart E of... - Interim Transmission Electron Microscopy Analytical Methods-Mandatory and Nonmandatory-and...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... pore size less than or equal to 0.45 µm. 6. Place these filters in series with a 5.0 µm backup filter... for not more than 30 seconds and replacing it at the time of sampling before sampling is initiated at.... Ensure that the sampler is turned upright before interrupting the pump flow. 21. Check that all samples...

  2. 40 CFR Appendix A to Subpart E of... - Interim Transmission Electron Microscopy Analytical Methods-Mandatory and Nonmandatory-and...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... pore size less than or equal to 0.45 µm. 6. Place these filters in series with a 5.0 µm backup filter... for not more than 30 seconds and replacing it at the time of sampling before sampling is initiated at.... Ensure that the sampler is turned upright before interrupting the pump flow. 21. Check that all samples...

  3. 40 CFR Appendix A to Subpart E of... - Interim Transmission Electron Microscopy Analytical Methods-Mandatory and Nonmandatory-and...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... pore size less than or equal to 0.45 µm. 6. Place these filters in series with a 5.0 µm backup filter... for not more than 30 seconds and replacing it at the time of sampling before sampling is initiated at.... Ensure that the sampler is turned upright before interrupting the pump flow. 21. Check that all samples...

  4. Demonstration of Multi- and Single-Reader Sample Size Program for Diagnostic Studies software.

    PubMed

    Hillis, Stephen L; Schartz, Kevin M

    2015-02-01

    The recently released software Multi- and Single-Reader Sample Size Program for Diagnostic Studies, written by Kevin Schartz and Stephen Hillis, performs sample size computations for diagnostic reader-performance studies. The program computes the sample size needed to detect a specified difference in a reader performance measure between two modalities, when using the analysis methods initially proposed by Dorfman, Berbaum, and Metz (DBM) and Obuchowski and Rockette (OR), and later unified and improved by Hillis and colleagues. A commonly used reader performance measure is the area under the receiver-operating-characteristic curve. The program can be used with commonly used reader-performance measures, which can be estimated parametrically or nonparametrically. The program has an easy-to-use, step-by-step intuitive interface that walks the user through the entry of the needed information. Features of the software include the following: (1) choice of several study designs; (2) choice of inputs obtained from either OR or DBM analyses; (3) choice of three different inference situations: both readers and cases random, readers fixed and cases random, and readers random and cases fixed; (4) choice of two types of hypotheses: equivalence or noninferiority; (5) choice of two output formats: power for specified case and reader sample sizes, or a listing of case-reader combinations that provide a specified power; (6) choice of single or multi-reader analyses; and (7) functionality in Windows, Mac OS, and Linux.

  5. Fatigue-Induced Damage in Zr-Based Bulk Metallic Glasses

    PubMed Central

    Chuang, Chih-Pin; Yuan, Tao; Dmowski, Wojciech; Wang, Gong-Yao; Freels, Matt; Liaw, Peter K.; Li, Ran; Zhang, Tao

    2013-01-01

    In the present work, we investigate the effect of “fatigue” on the fatigue behavior and atomic structure of Zr-based BMGs. Fatigue experiments on the failed-by-fatigue samples indicate that the remnants generally have similar or longer fatigue life than the as-cast samples. Meanwhile, the pair-distribution-function (PDF) analysis of the as-cast and post-fatigue samples showed very small changes in local atomic structures. These observations suggest that the fatigue life of the 6-mm in-diameter Zr-based BMG is dominated by the number of pre-existing crack-initiation sites in the sample. Once the crack initiates in the specimen, the fatigue-induced damage is accumulated locally on these initiated sites, while the rest of the region deforms elastically. The results suggest that the fatigue failure of BMGs under compression-compression fatigue experiments is a defect-controlled process. The present work indicates the significance of the improved fatigue resistance with decreasing sample size. PMID:23999496

  6. Influence of grain size and texture prior to warm rolling on microstructure, texture and magnetic properties of Fe-6.5 wt% Si steel

    NASA Astrophysics Data System (ADS)

    Xu, H. J.; Xu, Y. B.; Jiao, H. T.; Cheng, S. F.; Misra, R. D. K.; Li, J. P.

    2018-05-01

    Fe-6.5 wt% Si steel hot bands with different initial grain sizes and textures were obtained through different annealing treatments. These bands were then warm rolled and annealed. The evolution of microstructure and texture, particularly the formation of the recrystallization texture, was analyzed. The results indicated that initial grain size and texture had a significant effect on texture evolution and magnetic properties. Large initial grains led to coarse deformed grains with dense and long shear bands after warm rolling. Such long shear bands resulted in a growth advantage for {1 1 3} 〈3 6 1〉 oriented grains during recrystallization. On the other hand, sharp {11 h} 〈1, 2, 1/h〉 (α∗-fiber) texture in the coarse-grained sample led to dominant {1 1 2} 〈1 1 0〉 texture after warm rolling. Such {1 1 2} 〈1 1 0〉 deformed grains provided massive nucleation sites for {1 1 3} 〈3 6 1〉 oriented grains during subsequent recrystallization. These {1 1 3} 〈3 6 1〉 grains were confirmed to exhibit an advantage in grain growth compared to γ-fiber grains. As a result, significant {1 1 3} 〈3 6 1〉 texture was developed and unfavorable γ-fiber texture was inhibited in the final annealed sheet. Both these aspects led to superior magnetic properties in the sample with the largest initial grain size. The magnetic induction B8 was 1.36 T and the high-frequency core loss P10/400 was 17.07 W/kg.

  7. Particle size fractionation as a method for characterizing the nutrient content of municipal green waste used for composting.

    PubMed

    Haynes, R J; Belyaeva, O N; Zhou, Y-F

    2015-01-01

    In order to better characterize mechanically shredded municipal green waste used for composting, five samples from different origins were separated into seven particle size fractions (>20mm, 10-20mm, 5-10mm, 2-5mm, 1-2mm, 0.5-1.0mm and <0.5mm diameter) and analyzed for organic C and nutrient content. With decreasing particle size there was a decrease in organic C content and an increase in macronutrient, micronutrient and ash content. This reflected a concentration of lignified woody material in the larger particle fractions and of green stems and leaves and soil in the smaller particle sizes. The accumulation of nutrients in the smaller sized fractions means the practice of using large particle sizes for green fuel and/or mulch does not greatly affect nutrient cycling via green waste composting. During a 100-day incubation experiment, using different particle size fractions of green waste, there was a marked increase in both cumulative CO2 evolution and mineral N accumulation with decreasing particle size. Results suggested that during composting of bulk green waste (with a high initial C/N ratio such as 50:1), mineral N accumulates because decomposition and net N immobilization in larger particles is slow while net N mineralization proceeds rapidly in the smaller (<1mm dia.) fractions. Initially, mineral N accumulated in green waste as NH4(+)-N, but over time, nitrification proceeded resulting in accumulation of NO3(-)-N. It was concluded that the nutrient content, N mineralization potential and decomposition rate of green waste differs greatly among particle size fractions and that chemical analysis of particle size fractions provides important additional information over that of a bulk sample. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. The effect of initial pressure on growth of FeNPs in amorphous carbon films

    NASA Astrophysics Data System (ADS)

    Mashayekhi, Fatemeh; Shafiekhani, Azizollah; Sebt, S. Ali; Darabi, Elham

    2018-04-01

    Iron nanoparticles in amorphous hydrogenated carbon films (FeNPs@a-C:H) were prepared with RF-sputtering and RF-PECVD methods using acetylene gas and an Fe target. In this paper, the deposition and sputtering processes were carried out under different initial gas pressures. The surface morphology and roughness of the samples were studied by AFM, and TEM images show the exact size of the FeNPs and of the encapsulated FeNPs@a-C:H. The localized surface plasmon resonance (LSPR) peak of the FeNPs was studied using the UV-vis absorption spectrum. The results show that the intensity and position of the LSPR peak increase with increasing initial pressure. Also, the direct energy gap of the samples, obtained from the Tauc law, decreases with increasing initial pressure.

  9. Preparation and Characterization of Niobium Doped Lead-Telluride Glass Ceramics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sathish, M.; Eraiah, B.; Anavekar, R. V.

    2011-07-15

    Niobium-lead-telluride glass ceramics of composition xNb{sub 2}O{sub 5}-(20-x)PbO-80TeO{sub 2} (where x = 0.1 mol% to 0.5 mol%) were prepared using the conventional melt quenching method. The prepared glass samples were initially amorphous in nature; after annealing at 400 °C, all samples were crystallized. This was confirmed by X-ray diffraction and scanning electron microscopy. The particle size of these glass ceramics has been calculated using the Debye-Scherrer formula and is on the order of 15 nm to 60 nm. The scanning electron microscopy (SEM) photographs show the presence of needle-like crystals in these samples.
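    For reference, a small sketch of the Debye-Scherrer estimate mentioned above, D = Kλ/(β cos θ); the peak position, FWHM and Cu Kα wavelength used below are illustrative values, not data from this paper.

      import math

      def scherrer_size_nm(two_theta_deg, fwhm_deg, wavelength_nm=0.15406, K=0.9):
          # crystallite size from XRD peak broadening; beta is the FWHM in radians
          # (instrumental broadening assumed already subtracted), Cu K-alpha assumed
          theta = math.radians(two_theta_deg / 2)
          beta = math.radians(fwhm_deg)
          return K * wavelength_nm / (beta * math.cos(theta))

      print(scherrer_size_nm(two_theta_deg=27.5, fwhm_deg=0.35))   # roughly 23 nm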

  10. Study of Initial Stages of Ball-Milling of Cu Powder Using X-ray Diffraction

    NASA Astrophysics Data System (ADS)

    Gayathri, N.; Mukherjee, Paramita

    2018-04-01

    The initial stage of size refinement of Cu powder is studied using detailed X-ray diffraction (XRD) analysis to understand the mechanism of formation of nanomaterials during the ball-milling process. The study was restricted to samples obtained for milling times up to 240 min to understand the deformation mechanism at the early stages of ball milling. Various model-based approaches for the analysis of the XRD were used to study the evolution of microstructural parameters such as domain size and microstrain along the different crystallographic planes. It was seen that the domain size saturates at a low value along the (311) plane whereas the size along the (220) and (200) planes is still higher. The r.m.s. microstrain showed a non-monotonic change along the different crystallographic directions up to the milling time of 240 min.
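    The abstract's model-based analysis is more elaborate, but a simple Williamson-Hall construction illustrates how domain size and microstrain can be separated from several reflections via β cos θ = Kλ/D + 4ε sin θ; the peak list below is hypothetical and this is offered only as a commonly used alternative, not the method of the paper.

      import numpy as np

      def williamson_hall(two_theta_deg, fwhm_deg, wavelength_nm=0.15406, K=0.9):
          # linear fit of beta*cos(theta) against 4*sin(theta):
          # intercept gives K*lambda/D, slope gives the microstrain
          theta = np.radians(np.asarray(two_theta_deg) / 2)
          beta = np.radians(np.asarray(fwhm_deg))
          slope, intercept = np.polyfit(4 * np.sin(theta), beta * np.cos(theta), 1)
          return K * wavelength_nm / intercept, slope   # (domain size in nm, strain)

      # hypothetical Cu reflections (111), (200), (220), (311) with assumed FWHMs
      print(williamson_hall([43.3, 50.4, 74.1, 89.9], [0.30, 0.35, 0.45, 0.55]))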

  11. Mapping South San Francisco Bay's seabed diversity for use in wetland restoration planning

    USGS Publications Warehouse

    Fregoso, Theresa A.; Jaffe, B.; Rathwell, G.; Collins, W.; Rhynas, K.; Tomlin, V.; Sullivan, S.

    2006-01-01

    Data for an acoustic seabed classification were collected as a part of a California Coastal Conservancy funded bathymetric survey of South Bay in early 2005. A QTC VIEW seabed classification system recorded echoes from a single-beam 50 kHz echosounder. Approximately 450,000 seabed classification records were generated from an area of about 30 sq. miles. Ten distinct acoustic classes were identified through an unsupervised classification system using principal component and cluster analyses. One hundred and sixty-one grab samples and forty-five benthic community composition data samples, collected in the study area shortly before and after the seabed classification survey, further refined the ten classes into groups based on grain size. A preliminary map of surficial grain size of South Bay was developed from the combination of the seabed classification and the grab and benthic samples. The initial seabed classification map, the grain size map, and locations of sediment samples will be displayed along with the methods of acoustic seabed classification.
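    A toy sketch of the unsupervised workflow described (standardize echo descriptors, reduce with principal components, cluster into ten acoustic classes); the feature matrix is a random stand-in, and this is not the QTC VIEW software itself.

      import numpy as np
      from sklearn.preprocessing import StandardScaler
      from sklearn.decomposition import PCA
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(1)
      echo_features = rng.normal(size=(4500, 20))   # stand-in for per-ping echo descriptors

      scores = PCA(n_components=3).fit_transform(
          StandardScaler().fit_transform(echo_features))
      acoustic_class = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(scores)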

  12. Comparative study of initial stages of copper immersion deposition on bulk and porous silicon

    NASA Astrophysics Data System (ADS)

    Bandarenka, Hanna; Prischepa, Sergey L.; Fittipaldi, Rosalba; Vecchione, Antonio; Nenzi, Paolo; Balucani, Marco; Bondarenko, Vitaly

    2013-02-01

    Initial stages of Cu immersion deposition in the presence of hydrofluoric acid on bulk and porous silicon were studied. Cu was found to deposit both on bulk and porous silicon as a layer of nanoparticles which grew according to the Volmer-Weber mechanism. It was revealed that at the initial stages of immersion deposition, Cu nanoparticles consisted of crystals with a maximum size of 10 nm and inherited the orientation of the original silicon substrate. Deposited Cu nanoparticles were found to be partially oxidized to Cu2O while CuO was not detected for all samples. In contrast to porous silicon, the crystal orientation of the original silicon substrate significantly affected the sizes, density, and oxidation level of Cu nanoparticles deposited on bulk silicon.

  13. Structural and dielectric studies of Zr and Co co-substituted Ni0.5Zn0.5Fe2O4 using sol-gel auto combustion method

    NASA Astrophysics Data System (ADS)

    Jalaiah, K.; Vijaya Babu, K.; Rajashekhar Babu, K.; Chandra Mouli, K.

    2018-06-01

    Zr and Co substituted Ni0.5Zn0.5ZrxCuxFe2-2xO4 ferrites, with x varying from 0.0 to 0.4 in steps of 0.08, were synthesized using the sol-gel auto-combustion method. The XRD patterns give evidence for the formation of a single-phase cubic spinel. The lattice constant initially decreased from 8.3995 Å to 8.3941 Å with dopant concentration for x = 0.00-0.08; thereafter the lattice parameter steeply increased up to 8.4129 Å for x = 0.4 with increasing dopant concentration. The estimated crystallite sizes and measured particle sizes are comparable and in the nanometre range. The grain size initially increased from 2.3137 to 3.0430 μm, and later decreased to 2.2952 μm with increasing dopant concentration. The porosity of the prepared samples shows the opposite trend to the grain size. The FT-IR spectra of the prepared samples are consistent with the Fd3m (Oh7) space group. The wavenumber of the tetrahedral site increased from 579 cm⁻¹ to 593 cm⁻¹ with increasing dopant concentration, and the wavenumber of the octahedral site initially decreased from 414 cm⁻¹ to 400 cm⁻¹ for x = 0.00 to x = 0.08 and later increased to 422 cm⁻¹ with increasing dopant concentration. The dielectric constant increased from 8.85 to 34.5127 with increasing dopant concentration. The corresponding loss factor follows a similar trend to the dielectric constant. The AC conductivity increased with increasing dopant concentration from 3.0261 × 10⁻⁷ S/m to 4.4169 × 10⁻⁶ S/m.

  14. Effect of pectin methylesterase on carrot (Daucus carota) juice cloud stability.

    PubMed

    Schultz, Alison K; Anthon, Gordon E; Dungan, Stephanie R; Barrett, Diane M

    2014-02-05

    To determine the effect of residual enzyme activity on carrot juice cloud, 0 to 1 U/g pectin methylesterase (PME) was added to pasteurized carrot juice. Cloud stability and particle diameters were measured to quantify juice cloud stability and clarification for 56 days of storage. All levels of PME addition resulted in clarification; higher amounts had a modest effect in causing more rapid clarification, due to a faster increase in particle size. The cloud initially exhibited a trimodal distribution of particle sizes. For enzyme-containing samples, particles in the smallest-sized mode initially aggregated to merge with the second peak over 5-10 days. This larger population then continued to aggregate more slowly over longer times. This observation of a more rapid destabilization process initially, followed by slower subsequent changes in the cloud, was also manifested in measurements of sedimentation extent and in turbidity tests. Optical microscopy showed that aggregation created elongated, fractal particle structures over time.

  15. Will Outer Tropical Cyclone Size Change due to Anthropogenic Warming?

    NASA Astrophysics Data System (ADS)

    Schenkel, B. A.; Lin, N.; Chavas, D. R.; Vecchi, G. A.; Knutson, T. R.; Oppenheimer, M.

    2017-12-01

    Prior research has shown significant interbasin and intrabasin variability in outer tropical cyclone (TC) size. Moreover, outer TC size has even been shown to vary substantially over the lifetime of the majority of TCs. However, the factors responsible for both setting initial outer TC size and determining its evolution throughout the TC lifetime remain uncertain. Given these gaps in our physical understanding, there remains uncertainty in how outer TC size will change, if at all, due to anthropogenic warming. The present study seeks to quantify whether outer TC size will change significantly in response to anthropogenic warming using data from a high-resolution global climate model and a regional hurricane model. Similar to prior work, the outer TC size metric used in this study is the radius in which the azimuthal-mean surface azimuthal wind equals 8 m/s. The initial results from the high-resolution global climate model data suggest that the distribution of outer TC size shifts significantly towards larger values in each global TC basin during future climates, as revealed by 1) statistically significant increase of the median outer TC size by 5-10% (p<0.05) according to a 1,000-sample bootstrap resampling approach with replacement and 2) statistically significant differences between distributions of outer TC size from current and future climate simulations as shown using two-sample Kolmogorov Smirnov testing (p<<0.01). Additional analysis of the high-resolution global climate model data reveals that outer TC size does not uniformly increase within each basin in future climates, but rather shows substantial locational dependence. Future work will incorporate the regional mesoscale hurricane model data to help focus on identifying the source of the spatial variability in outer TC size increases within each basin during future climates and, more importantly, why outer TC size changes in response to anthropogenic warming.
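    A small illustration of the two statistical checks named above, bootstrap resampling with replacement of the median change and a two-sample Kolmogorov-Smirnov test, applied to synthetic outer-size samples standing in for the current- and future-climate simulations.

      import numpy as np
      from scipy.stats import ks_2samp

      def median_shift_ci(current, future, n_boot=1000, seed=0):
          # bootstrap 95% CI for the percent change in the median outer TC size,
          # plus a KS test comparing the full distributions
          rng = np.random.default_rng(seed)
          shifts = []
          for _ in range(n_boot):
              cur = rng.choice(current, size=current.size, replace=True)
              fut = rng.choice(future, size=future.size, replace=True)
              shifts.append(100 * (np.median(fut) - np.median(cur)) / np.median(cur))
          lo, hi = np.percentile(shifts, [2.5, 97.5])
          return (lo, hi), ks_2samp(current, future)

      current = np.random.default_rng(1).gamma(4.0, 60.0, size=2000)      # km, synthetic
      future = 1.07 * np.random.default_rng(2).gamma(4.0, 60.0, size=2000)
      print(median_shift_ci(current, future))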

  16. Friction Stir Welding of Al Alloy 2219-T8: Part II-Mechanical and Corrosion

    NASA Astrophysics Data System (ADS)

    Kang, Ju; Feng, Zhi-Cao; Li, Ji-Chao; Frankel, G. S.; Wang, Guo-Qing; Wu, Ai-Ping

    2016-09-01

    In Part I of this series, abnormal agglomerations of θ particles with size of about 100 to 1000 µm were observed in friction stir welded AA2219-T8 joints. In this work, the effects of these agglomerated θ particles on the mechanical and corrosion properties of the joints are studied. Tensile testing with in situ SEM imaging was utilized to monitor crack initiation and propagation in base metal and weld nugget zone (WNZ) samples. These tests showed that cracks initiated in the θ particles and at the θ/matrix interfaces, but not in the matrix. The WNZ samples containing abnormal agglomerated θ particles had a similar ultimate tensile stress but 3 pct less elongation than other WNZ samples with only normal θ particles. Measurements using the microcell technique indicated that the agglomerated θ particles acted as a cathode causing the dissolution of adjacent matrix. The abnormal θ particle agglomerations led to more severe localized attack due to the large cathode/anode ratio. Al preferential dissolution occurred in the abnormal θ particle agglomerations, which was different from the corrosion behavior of normal size θ particles.

  17. Drying regimes in homogeneous porous media from macro- to nanoscale

    NASA Astrophysics Data System (ADS)

    Thiery, J.; Rodts, S.; Weitz, D. A.; Coussot, P.

    2017-07-01

    Magnetic resonance imaging visualization down to nanometric liquid films in model porous media with pore sizes from micro- to nanometers enables one to fully characterize the physical mechanisms of drying. For pore size larger than a few tens of nanometers, we identify an initial constant drying rate period, probing homogeneous desaturation, followed by a falling drying rate period. This second period is associated with the development of a gradient in saturation underneath the sample free surface that initiates the inward recession of the contact line. During this latter stage, the drying rate varies in accordance with vapor diffusion through the dry porous region, possibly affected by the Knudsen effect for small pore size. However, we show that for sufficiently small pore size and/or saturation the drying rate is increasingly reduced by the Kelvin effect. Subsequently, we demonstrate that this effect governs the kinetics of evaporation in nanopores as a homogeneous desaturation occurs. Eventually, under our experimental conditions, we show that the saturation unceasingly decreases in a homogeneous manner throughout the wet regions of the medium regardless of pore size or drying regime considered. This finding suggests the existence of continuous liquid flow towards the interface of higher evaporation, down to very low saturation or very small pore size. Paradoxically, even if this net flow is unidirectional and capillary driven, it corresponds to a series of diffused local capillary equilibrations over the full height of the sample, which might explain that a simple Darcy's law model does not predict the effect of scaling of the net flow rate on the pore size observed in our tests.
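    As a rough guide to why the Kelvin effect matters only for the smallest pores, the equilibrium vapour pressure over a curved water meniscus can be evaluated with the Kelvin equation, p/p0 = exp(-2γVm/(rRT)); the property values below are for water near room temperature and the calculation is illustrative, not taken from the paper.

      import math

      def kelvin_ratio(radius_m, temp_k=293.15, gamma=0.0728, v_m=1.8e-5):
          # p/p0 over a concave meniscus of radius r relative to a flat surface
          R = 8.314  # J/(mol K)
          return math.exp(-2 * gamma * v_m / (radius_m * R * temp_k))

      for r_nm in (100, 10, 2):
          print(r_nm, round(kelvin_ratio(r_nm * 1e-9), 3))
      # about 0.99 at 100 nm, 0.90 at 10 nm, 0.58 at 2 nm: negligible for
      # micrometric pores, substantial for nanometric ones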

  18. Structural transformation of crystallized debranched cassava starch during dual hydrothermal treatment in relation to enzyme digestibility.

    PubMed

    Boonna, Sureeporn; Tongta, Sunanta

    2018-07-01

    Structural transformation of crystallized debranched cassava starch prepared by temperature cycling (TC) treatment and then subjected to annealing (ANN), heat-moisture treatment (HMT) and dual hydrothermal treatments of ANN and HMT was investigated. The relative crystallinity, lateral crystal size, melting temperature and resistant starch (RS) content increased for all hydrothermally treated samples, but the slowly digestible starch (SDS) content decreased. The RS content followed the order: HMT → ANN > HMT > ANN → HMT > ANN > TC. The HMT → ANN sample showed a larger lateral crystal size with more homogeneity, whereas the ANN → HMT sample had a smaller lateral crystal size with a higher melting temperature. After cooking at 50% moisture, an increased RS content of the samples was observed, particularly for the ANN → HMT sample. These results suggest that structural changes of crystallized debranched starch during hydrothermal treatments depend on initial crystalline characteristics and treatment sequences, influencing thermal stability, enzyme digestibility, and cooking stability. Copyright © 2018 Elsevier Ltd. All rights reserved.

  19. Experimental strategies for imaging bioparticles with femtosecond hard X-ray pulses

    PubMed Central

    Okamoto, Kenta; Bielecki, Johan; Maia, Filipe R. N. C.; Mühlig, Kerstin; Seibert, M. Marvin; Hantke, Max F.; Benner, W. Henry; Svenda, Martin; Ekeberg, Tomas; Loh, N. Duane; Pietrini, Alberto; Zani, Alessandro; Rath, Asawari D.; Westphal, Daniel; Kirian, Richard A.; Awel, Salah; Wiedorn, Max O.; van der Schot, Gijs; Carlsson, Gunilla H.; Hasse, Dirk; Sellberg, Jonas A.; Barty, Anton; Andreasson, Jakob; Boutet, Sébastien; Williams, Garth; Koglin, Jason; Hajdu, Janos; Larsson, Daniel S. D.

    2017-01-01

    This study explores the capabilities of the Coherent X-ray Imaging Instrument at the Linac Coherent Light Source to image small biological samples. The weak signal from small samples puts a significant demand on the experiment. Aerosolized Omono River virus particles of ∼40 nm in diameter were injected into the submicrometre X-ray focus at a reduced pressure. Diffraction patterns were recorded on two area detectors. The statistical nature of the measurements from many individual particles provided information about the intensity profile of the X-ray beam, phase variations in the wavefront and the size distribution of the injected particles. The results point to a wider than expected size distribution (from ∼35 to ∼300 nm in diameter). This is likely to be owing to nonvolatile contaminants from larger droplets during aerosolization and droplet evaporation. The results suggest that the concentration of nonvolatile contaminants and the ratio between the volumes of the initial droplet and the sample particles is critical in such studies. The maximum beam intensity in the focus was found to be 1.9 × 10¹² photons per µm² per pulse. The full-width of the focus at half-maximum was estimated to be 500 nm (assuming 20% beamline transmission), and this width is larger than expected. Under these conditions, the diffraction signal from a sample-sized particle remained above the average background to a resolution of 4.25 nm. The results suggest that reducing the size of the initial droplets during aerosolization is necessary to bring small particles into the scope of detailed structural studies with X-ray lasers. PMID:28512572

  20. Socioeconomic status, urbanicity and risk behaviors in Mexican youth: an analysis of three cross-sectional surveys

    PubMed Central

    2011-01-01

    Background The relationship between urbanicity and adolescent health is a critical issue for which little empirical evidence has been reported. Although an association has been suggested, a dichotomous rural versus urban comparison may not succeed in identifying differences between adolescent contexts. This study aims to assess the influence of locality size on risk behaviors in a national sample of young Mexicans living in low-income households, while considering the moderating effect of socioeconomic status (SES). Methods This is a secondary analysis of three national surveys of low-income households in Mexico in different settings: rural, semi-urban and urban areas. We analyzed risk behaviors in 15-21-year-olds and their potential relation to urbanicity. The risk behaviors explored were: tobacco and alcohol consumption, sexual initiation and condom use. The adolescents' localities of residence were classified according to the number of inhabitants in each locality. We used a logistic model to identify an association between locality size and risk behaviors, including an interaction term with SES. Results The final sample included 17,974 adolescents from 704 localities in Mexico. Locality size was associated with tobacco and alcohol consumption, showing a similar effect throughout all SES levels: the larger the size of the locality, the lower the risk of consuming tobacco or alcohol compared with rural settings. The effect of locality size on sexual behavior was more complex. The odds of adolescent condom use were higher in larger localities only among adolescents in the lowest SES levels. We found no statistically significant association between locality size and sexual initiation. Conclusions The results suggest that in this sample of adolescents from low-income areas in Mexico, risk behaviors are related to locality size (number of inhabitants). Furthermore, for condom use, this relation is moderated by SES. Such heterogeneity suggests the need for more detailed analyses of both the effects of urbanicity on behavior, and the responses--which are also heterogeneous--required to address this situation. PMID:22129110
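    A hedged sketch of the kind of logistic model with a locality-size by SES interaction described in the Methods; the data frame, variable names and categories below are invented for illustration and do not reproduce the survey data.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(3)
      n = 5000
      df = pd.DataFrame({
          "condom_use": rng.integers(0, 2, n),                              # 0/1 outcome
          "locality_size": rng.choice(["rural", "semi_urban", "urban"], n),
          "ses": rng.choice(["low", "mid", "high"], n),
      })

      # logistic regression with a locality-size x SES interaction term
      fit = smf.logit("condom_use ~ C(locality_size) * C(ses)", data=df).fit(disp=0)
      print(fit.summary().tables[1])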

  1. Bed-material characteristics of the Sacramento–San Joaquin Delta, California, 2010–13

    USGS Publications Warehouse

    Marineau, Mathieu D.; Wright, Scott A.

    2017-02-10

    The characteristics of bed material at selected sites within the Sacramento–San Joaquin Delta, California, during 2010–13 are described in a study conducted by the U.S. Geological Survey in cooperation with the Bureau of Reclamation. During 2010‒13, six complete sets of samples were collected. Samples were initially collected at 30 sites; however, starting in 2012, samples were collected at 7 additional sites. These sites are generally collocated with an active streamgage. At all but one site, a separate bed-material sample was collected at three locations within the channel (left, right, and center). Bed-material samples were collected using either a US BMH–60 or a US BM–54 (for sites with higher stream velocity) cable-suspended, scoop sampler. Samples from each location were oven-dried and sieved. Bed material finer than 2 millimeters was subsampled using a sieving riffler and processed using a Beckman Coulter LS 13–320 laser diffraction particle-size analyzer. To determine the organic content of the bed material, the loss on ignition method was used for one subsample from each location. Particle-size distributions are presented as cumulative percent finer than a given size. Median and 90th-percentile particle size, and the percentage of subsample mass lost using the loss on ignition method for each sample are also presented in this report.
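    The median (D50) and 90th-percentile (D90) sizes reported can be read off a cumulative percent-finer curve by interpolation; the sketch below uses hypothetical sieve data and interpolates in log-size space, which is a common convention rather than anything stated in the report.

      import numpy as np

      def percentile_diameters(sizes_mm, pct_finer, targets=(50, 90)):
          # interpolate characteristic grain sizes from a cumulative percent-finer curve
          log_sizes = np.log10(np.asarray(sizes_mm, float))
          return {f"D{t}": 10 ** np.interp(t, pct_finer, log_sizes) for t in targets}

      # hypothetical sieve / laser-diffraction results for one sample
      sizes_mm  = [0.004, 0.016, 0.063, 0.125, 0.25, 0.5, 1.0, 2.0]
      pct_finer = [5,     18,    42,    61,    78,   90,  97,  100]
      print(percentile_diameters(sizes_mm, pct_finer))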

  2. Angiographic core laboratory reproducibility analyses: implications for planning clinical trials using coronary angiography and left ventriculography end-points.

    PubMed

    Steigen, Terje K; Claudio, Cheryl; Abbott, David; Schulzer, Michael; Burton, Jeff; Tymchak, Wayne; Buller, Christopher E; John Mancini, G B

    2008-06-01

    To assess reproducibility of core laboratory performance and impact on sample size calculations. Little information exists about overall reproducibility of core laboratories in contradistinction to performance of individual technicians. Also, qualitative parameters are being adjudicated increasingly as either primary or secondary end-points. The comparative impact of using diverse indexes on sample sizes has not been previously reported. We compared initial and repeat assessments of five quantitative parameters [e.g., minimum lumen diameter (MLD), ejection fraction (EF), etc.] and six qualitative parameters [e.g., TIMI myocardial perfusion grade (TMPG) or thrombus grade (TTG), etc.], as performed by differing technicians and separated by a year or more. Sample sizes were calculated from these results. TMPG and TTG were also adjudicated by a second core laboratory. MLD and EF were the most reproducible, yielding the smallest sample size calculations, whereas percent diameter stenosis and centerline wall motion require substantially larger trials. Of the qualitative parameters, all except TIMI flow grade gave reproducibility characteristics yielding sample sizes of many 100's of patients. Reproducibility of TMPG and TTG was only moderately good both within and between core laboratories, underscoring an intrinsic difficulty in assessing these. Core laboratories can be shown to provide reproducibility performance that is comparable to performance commonly ascribed to individual technicians. The differences in reproducibility yield huge differences in sample size when comparing quantitative and qualitative parameters. TMPG and TTG are intrinsically difficult to assess and conclusions based on these parameters should arise only from very large trials.
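    To illustrate how reproducibility drives trial size, a generic normal-approximation sample-size formula can be applied with the between-assessment standard deviation as the noise term; the SD and difference values below are hypothetical, and this is not the paper's exact calculation.

      import math

      def per_group_n(sd, delta, alpha=0.05, power=0.80):
          # per-group n to detect a difference `delta` in a continuous end-point
          # whose variability (including re-read variability) is `sd`
          z_a, z_b = 1.959964, 0.841621
          return math.ceil(2 * ((z_a + z_b) * sd / delta) ** 2)

      # e.g. a hypothetical re-read SD of 0.15 mm vs 0.40 mm for a 0.15 mm difference
      print(per_group_n(sd=0.15, delta=0.15), per_group_n(sd=0.40, delta=0.15))  # 16 vs 112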

  3. Development and Validation of the Caring Loneliness Scale.

    PubMed

    Karhe, Liisa; Kaunonen, Marja; Koivisto, Anna-Maija

    2016-12-01

    The Caring Loneliness Scale (CARLOS) includes 5 categories derived from earlier qualitative research. This article assesses the reliability and construct validity of a scale designed to measure patient experiences of loneliness in a professional caring relationship. Statistical analysis with 4 different sample sizes included Cronbach's alpha and exploratory factor analysis with principal axis factoring extraction. The sample size of 250 gave the most useful and comprehensible structure, but all 4 samples yielded underlying content of loneliness experiences. The initial 5 categories were reduced to 4 factors with 24 items and Cronbach's alpha ranging from .77 to .90. The findings support the reliability and validity of CARLOS for the assessment of Finnish breast cancer and heart surgery patients' experiences, but, as with all instruments, further validation is needed.
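    For reference, Cronbach's alpha for a multi-item scale is the ratio-of-variances quantity computed below; the simulated 250 x 24 response matrix only stands in for the CARLOS data.

      import numpy as np

      def cronbach_alpha(items):
          # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
          items = np.asarray(items, float)
          k = items.shape[1]
          item_var = items.var(axis=0, ddof=1).sum()
          total_var = items.sum(axis=1).var(ddof=1)
          return k / (k - 1) * (1 - item_var / total_var)

      # hypothetical 5-point Likert responses driven by one shared latent trait
      rng = np.random.default_rng(4)
      latent = rng.normal(size=(250, 1))
      scores = np.clip(np.rint(3 + latent + rng.normal(scale=0.8, size=(250, 24))), 1, 5)
      print(round(cronbach_alpha(scores), 2))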

  4. The deformation mechanisms and size effects of single-crystal magnesium

    NASA Astrophysics Data System (ADS)

    Byer, Cynthia M.

    In this work, we seek to understand the deformation mechanisms and size effects of single-crystal magnesium at the micrometer scale through both microcompression experiments and finite element simulations. Microcompression experiments are conducted to investigate the impact of initial dislocation density and orientation on size effects. Micropillars are fabricated using a focused ion beam and tested in a nanoindenter using a diamond flat tip as a compression platen. Two different initial dislocation densities are examined for [0001] oriented micropillars. Our results demonstrate that decreasing the initial dislocation density results in an increased size effect in terms of increased strength and stochasticity. Microcompression along the [23¯14] axis results in much lower strengths than for [0001] oriented samples. Post-mortem analysis reveals basal slip in both [0001] and [23¯14] micropillars. The application of a stochastic probability model shows good agreement between theoretical predictions and experimental results for size effects with our values of initial dislocation density and micropillar dimensions. Size effects are then incorporated into a single-crystal plasticity model (modified from Zhang and Joshi [1]) implemented in ABAQUS/STANDARD as a user-material subroutine. The model successfully captures the phenomena typically associated with size effects of increasing stochasticity and strength with decreasing specimen size and also accounts for the changing trends resulting from variations in initial dislocation density that we observe in the experiments. Finally, finite element simulations are performed with the original (traditional, without size effects) crystal plasticity model [1] to investigate the relative activities of the deformation modes of single-crystal magnesium for varying degrees of misalignment in microcompression. The simulations reveal basal activity in all micropillars, even for perfectly aligned compression along the [0001] axis. Pyramidal < c + a > activity dominates until the misalignment increases to 2°, when basal slip takes over as the dominant mode. The stress-strain curves for the case of 0° misalignment agree well with experimental curves, indicating that good alignment was achieved during the experiments. Through this investigation, we gain a better understanding of how to control the size effects, as well as the deformation mechanisms operating at the small scale in magnesium.

  5. The effect of lesion characteristic on remineralization and model sensitivity.

    PubMed

    Schäfer, F; Raven, S J; Parr, T A

    1992-04-01

    A major criterion for assessing the value of any experimental model in scientific research is the degree of correspondence between its results and data from the real-life process it is designed to model. Intra-oral models aimed at predicting the anti-caries efficacy of toothpastes or other topical treatments should therefore be calibrated against treatments proven to be effective in a caries clinical trial. For this to be achieved, it is necessary that a model with high sensitivity be designed, while at the same time retaining relevance to the process to be modeled. This means that the effects of the various experimental conditions and parameters of the model on its performance must be understood. The purpose of this paper was to assess the influence of two specific factors on the performance of an in situ enamel remineralization model, which is based on human enamel slabs attached to partial dentures. The two factors are initial lesion severity and origin of enamel sample. The results indicated that initial lesion size affected whether net remineralization or net demineralization occurred during in situ treatment. Samples with an initial lesion size (delta Z) in the range of 1500 to 2500 tended more toward demineralization than did samples with delta Z greater than 3500. This means that treatment groups must be well-balanced with respect to initial lesion size. Differences in initial demineralization severity between different tooth locations must also be considered so that systematic treatment bias can be avoided. The solution used in the model discussed here is based on a balanced experimental design, which allows this effect to be taken into account in the data analysis.

  6. Estimating sample size for landscape-scale mark-recapture studies of North American migratory tree bats

    USGS Publications Warehouse

    Ellison, Laura E.; Lukacs, Paul M.

    2014-01-01

    Concern for migratory tree-roosting bats in North America has grown because of possible population declines from wind energy development. This concern has driven interest in estimating population-level changes. Mark-recapture methodology is one possible analytical framework for assessing bat population changes, but sample size requirements to produce reliable estimates have not been estimated. To illustrate the sample sizes necessary for a mark-recapture-based monitoring program, we conducted power analyses using a statistical model that allows reencounters of live and dead marked individuals. We ran 1,000 simulations for each of five broad sample size categories in a Burnham joint model, and then compared the proportion of simulations in which 95% confidence intervals overlapped between and among years for a 4-year study. Additionally, we conducted sensitivity analyses of sample size to various capture probabilities and recovery probabilities. More than 50,000 individuals per year would need to be captured and released to accurately determine 10% and 15% declines in annual survival. To detect more dramatic declines of 33% or 50% in survival over four years, sample sizes of 25,000 or 10,000 per year, respectively, would be sufficient. Sensitivity analyses reveal that increasing recovery of dead marked individuals may be more valuable than increasing capture probability of marked individuals. Because of the extraordinary effort that would be required, we advise caution should such a mark-recapture effort be initiated because of the difficulty in attaining reliable estimates. We make recommendations for what techniques show the most promise for mark-recapture studies of bats because some techniques violate the assumptions of mark-recapture methodology when used to mark bats.

  7. The effect of stress on limestone permeability and its effective stress behavior

    NASA Astrophysics Data System (ADS)

    Meng, F.; Baud, P.; Ge, H.; Wong, T. F.

    2017-12-01

    The evolution of permeability and its effective stress behavior is related to inelastic deformation and failure mode. This was investigated in Indiana and Purbeck limestones with porosities of 18% and 13%, respectively. Hydrostatic and triaxial compression tests were conducted at room temperature on water-saturated samples at a pore pressure of 5 MPa and confining pressures up to 90 MPa. Permeability was measured using steady flow at different stages of deformation. For Indiana limestone, under hydrostatic loading pore collapse initiated at a critical pressure P* of approximately 55 MPa with an accelerated reduction of permeability by half. At a confinement of 35 MPa and above, shear-enhanced compaction initiated at critical stress C*, beyond which permeability reduction up to a factor of 3 was observed. At a confinement of 15 MPa and below, dilatancy initiated at critical stress C', beyond which permeability continued to decrease, with a negative correlation between porosity and permeability changes. Purbeck limestone showed a similar evolution of permeability. Microstructural and mercury porosimetry data showed that the pore size distribution in both Indiana and Purbeck limestones is bimodal, with significant proportions of macropores and micropores. The effective stress behavior of a limestone with dual porosity is different from the prediction for a microscopically homogeneous assemblage, in that its effective stress coefficients for permeability and porosity change may attain values significantly >1. Indeed this was confirmed by our measurements (at confining pressures of 7-15 MPa and pore pressures of 1-3 MPa) in samples that had not been deformed inelastically. We also investigated the behavior in samples hydrostatically and triaxially compacted to beyond the critical stresses P* and C*, respectively. Experimental data for these samples consistently showed effective stress coefficients for both permeability and porosity change with values <1. Thus the effective stress behavior in an inelastically compacted sample is fundamentally different, with attributes akin to that of a microscopically homogeneous assemblage. This is likely related to compaction from pervasive collapse of macropores, which would effectively homogenize the initially bimodal pore size distribution.

  8. Orphan therapies: making best use of postmarket data.

    PubMed

    Maro, Judith C; Brown, Jeffrey S; Dal Pan, Gerald J; Li, Lingling

    2014-08-01

    Postmarket surveillance of the comparative safety and efficacy of orphan therapeutics is challenging, particularly when multiple therapeutics are licensed for the same orphan indication. To make best use of product-specific registry data collected to fulfill regulatory requirements, we propose the creation of a distributed electronic health data network among registries. Such a network could support sequential statistical analyses designed to detect early warnings of excess risks. We use a simulated example to explore the circumstances under which a distributed network may prove advantageous. We perform sample size calculations for sequential and non-sequential statistical studies aimed at comparing the incidence of hepatotoxicity following initiation of two newly licensed therapies for homozygous familial hypercholesterolemia. We calculate the sample size savings ratio, or the proportion of sample size saved if one conducted a sequential study as compared to a non-sequential study. Then, using models to describe the adoption and utilization of these therapies, we simulate when these sample sizes are attainable in calendar years. We then calculate the analytic calendar time savings ratio, analogous to the sample size savings ratio. We repeat these analyses for numerous scenarios. Sequential analyses detect effect sizes earlier or at the same time as non-sequential analyses. The most substantial potential savings occur when the market share is more imbalanced (i.e., 90% for therapy A) and the effect size is closest to the null hypothesis. However, due to low exposure prevalence, these savings are difficult to realize within the 30-year time frame of this simulation for scenarios in which the outcome of interest occurs at or more frequently than one event/100 person-years. We illustrate a process to assess whether sequential statistical analyses of registry data performed via distributed networks may prove a worthwhile infrastructure investment for pharmacovigilance.
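    A rough sketch of the non-sequential benchmark and of the savings ratio defined above, using a normal approximation for comparing two Poisson incidence rates; the hepatotoxicity rates and the sequential person-time figure below are placeholder numbers, not results from the paper.

      import math

      def person_years_per_group(rate_a, rate_b, alpha=0.05, power=0.80):
          # person-years of follow-up per product for a single (non-sequential)
          # comparison of two incidence rates, normal approximation to the Poisson counts
          z_a, z_b = 1.959964, 0.841621
          return (z_a + z_b) ** 2 * (rate_a + rate_b) / (rate_a - rate_b) ** 2

      t_nonseq = person_years_per_group(0.01, 0.03)   # hypothetical rates (events/person-year)
      t_seq = 0.8 * t_nonseq                          # placeholder sequential requirement
      savings_ratio = 1 - t_seq / t_nonseq            # the "sample size savings ratio"
      print(round(t_nonseq), round(savings_ratio, 2))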

  9. Estimation of within-stratum variance for sample allocation: Foreign commodity production forecasting

    NASA Technical Reports Server (NTRS)

    Chhikara, R. S.; Perry, C. R., Jr. (Principal Investigator)

    1980-01-01

    The problem of determining the stratum variances required for an optimum sample allocation for remotely sensed crop surveys is investigated, with emphasis on an approach based on the concept of stratum variance as a function of the sampling unit size. A methodology using existing and easily available historical statistics is developed for obtaining initial estimates of stratum variances. The procedure is applied to stratum variance estimation for wheat in the U.S. Great Plains and is evaluated based on the numerical results obtained. It is shown that the proposed technique is viable and performs satisfactorily with the use of a conservative value (smaller than the expected value) for the field size and with the use of crop statistics from the small political division level.

  10. The Response of Simple Polymer Structures Under Dynamic Loading

    NASA Astrophysics Data System (ADS)

    Proud, William; Ellison, Kay; Yapp, Su; Cole, Cloe; Galimberti, Stefano; Institute of Shock Physics Team

    2017-06-01

    The dynamic response of polymeric materials has been widely studied, with the effects of degree of crystallinity, strain rate, temperature and sample size being commonly reported. This study uses a simple PMMA structure, a right cylindrical sample, with structural features such as holes. The features are added and varied in a systematic fashion. Samples were dynamically loaded up to failure using a Split Hopkinson Pressure Bar. The resulting stress-strain curves are presented, showing the change in sample response. The strain to failure is shown to increase initially with the presence of holes, while the failure stress is relatively unaffected. The fracture patterns seen in the failed samples change, with tensile cracks, Hertzian cones and shear effects being dominant for different hole sizes and geometries. The samples were prepared by laser cutting and checked for residual stress before the experiments. The data are used to validate model predictions in which material, structure and damage are included. The Institute of Shock Physics acknowledges the support of Imperial College London and the Atomic Weapons Establishment.

  11. Air Flow and Pressure Drop Measurements Across Porous Oxides

    NASA Technical Reports Server (NTRS)

    Fox, Dennis S.; Cuy, Michael D.; Werner, Roger A.

    2008-01-01

    This report summarizes the results of air flow tests across eight porous, open cell ceramic oxide samples. During ceramic specimen processing, the porosity was formed using the sacrificial template technique, with two different sizes of polystyrene beads used for the template. The samples were initially supplied with thicknesses ranging from 0.14 to 0.20 in. (0.35 to 0.50 cm) and nonuniform backside morphology (some areas dense, some porous). Samples were therefore ground to a thickness of 0.12 to 0.14 in. (0.30 to 0.35 cm) using dry 120 grit SiC paper. Pressure drop versus air flow is reported. Comparisons of samples with thickness variations are made, as are pressure drop estimates. As the density of the ceramic material increases the maximum corrected flow decreases rapidly. Future sample sets should be supplied with samples of similar thickness and having uniform surface morphology. This would allow a more consistent determination of air flow versus processing parameters and the resulting porosity size and distribution.

  12. Effects of growth rate, size, and light availability on tree survival across life stages: a demographic analysis accounting for missing values and small sample sizes.

    PubMed

    Moustakas, Aristides; Evans, Matthew R

    2015-02-28

    Plant survival is a key factor in forest dynamics, and survival probabilities often vary across life stages. Studies specifically aimed at assessing tree survival are unusual, and so data initially designed for other purposes often need to be used; such data are more likely to contain errors than data collected for this specific purpose. We investigate the survival rates of ten tree species in a dataset designed to monitor growth rates. As some individuals were not included in the census at some time points, we use capture-mark-recapture methods both to account for missing individuals and to estimate relocation probabilities. Growth rates, size, and light availability were included as covariates in the model predicting survival rates. The study demonstrates that, for most of the UK hardwood species examined, tree mortality is best described as constant between years, size-dependent at early life stages and size-independent at later life stages. We have demonstrated that even with a twenty-year dataset it is possible to discern variability both between individuals and between species. Our work illustrates the potential utility of the method applied here for calculating plant population dynamics parameters in time-replicated datasets with small sample sizes and missing individuals, without any loss of sample size and with the inclusion of explanatory covariates.

  13. Importance of size and distribution of Ni nanoparticles for the hydrodeoxygenation of microalgae oil.

    PubMed

    Song, Wenji; Zhao, Chen; Lercher, Johannes A

    2013-07-22

    Improved synthetic approaches for preparing small-sized Ni nanoparticles (d=3 nm) supported on HBEA zeolite have been explored and compared with the traditional impregnation method. The formation of surface nickel silicate/aluminate involved in the two precipitation processes is inferred to lead to a stronger interaction between the metal and the support. The lower Brønsted acid concentrations of these two Ni/HBEA catalysts compared with the parent zeolite, caused by the partial exchange of Brønsted acid sites by Ni(2+) cations, do not influence the hydrodeoxygenation rates but alter the product selectivity. Higher initial rates and higher stability have been achieved with these optimized catalysts for the hydrodeoxygenation of stearic acid and microalgae oil. Small metal particles facilitate high initial catalytic activity in the fresh sample, and size uniformity ensures high catalyst stability. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Valid approximation of spatially distributed grain size distributions - A priori information encoded to a feedforward network

    NASA Astrophysics Data System (ADS)

    Berthold, T.; Milbradt, P.; Berkhahn, V.

    2018-04-01

    This paper presents a model for the approximation of multiple, spatially distributed grain size distributions based on a feedforward neural network. Since a classical feedforward network is not guaranteed to produce valid cumulative distribution functions, a priori information is incorporated into the model by applying weight and architecture constraints. The model is derived in two steps. First, a model is presented that is able to produce a valid distribution function for a single sediment sample. Although initially developed for sediment samples, the model is not limited in its application; it can also be used to approximate any other multimodal continuous distribution function. In the second part, the network is extended in order to capture the spatial variation of the sediment samples that have been obtained from 48 locations in the investigation area. Results show that the model provides an adequate approximation of grain size distributions, satisfying the requirements of a cumulative distribution function.
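
    As a rough sketch of the constraint idea described above (an assumption-laden illustration, not the authors' architecture), the snippet below builds a one-input mixture of sigmoids in which non-negative hidden weights and normalized non-negative output weights force the output to be a valid, non-decreasing cumulative distribution function; all parameter values are hypothetical.

        import numpy as np

        # Minimal sketch (assumed, not the paper's implementation): a one-input
        # network whose weight constraints guarantee a valid CDF. Non-negative
        # hidden weights w_k and non-negative mixture weights a_k summing to 1
        # make the output non-decreasing and bounded in [0, 1].

        def constrained_cdf(x, w, b, a):
            w = np.abs(w)                       # enforce w_k >= 0 -> monotone
            a = np.abs(a)
            a = a / a.sum()                     # enforce a_k >= 0, sum(a_k) = 1
            hidden = 1.0 / (1.0 + np.exp(-(np.outer(x, w) + b)))  # sigmoid layer
            return hidden @ a                   # convex combination of sigmoids

        # Hypothetical parameters sketching a bimodal grain-size CDF (phi scale).
        x = np.linspace(-2.0, 6.0, 200)
        F = constrained_cdf(x, w=np.array([3.0, 2.0]), b=np.array([-1.0, -8.0]),
                            a=np.array([0.4, 0.6]))
        assert np.all(np.diff(F) >= 0) and 0.0 <= F.min() and F.max() <= 1.0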

  15. Field substitution of nonresponders can maintain sample size and structure without altering survey estimates-the experience of the Italian behavioral risk factors surveillance system (PASSI).

    PubMed

    Baldissera, Sandro; Ferrante, Gianluigi; Quarchioni, Elisa; Minardi, Valentina; Possenti, Valentina; Carrozzi, Giuliano; Masocco, Maria; Salmaso, Stefania

    2014-04-01

    Field substitution of nonrespondents can be used to maintain the planned sample size and structure in surveys but may introduce additional bias. Sample weighting is suggested as the preferable alternative; however, limited empirical evidence exists comparing the two methods. We wanted to assess the impact of substitution on surveillance results using data from Progressi delle Aziende Sanitarie per la Salute in Italia-Progress by Local Health Units towards a Healthier Italy (PASSI). PASSI is conducted by Local Health Units (LHUs) through telephone interviews of stratified random samples of residents. Nonrespondents are replaced with substitutes randomly preselected in the same LHU stratum. We compared the weighted estimates obtained in the original PASSI sample (used as a reference) and in the substitutes' sample. The differences were evaluated using a Wald test. In 2011, 50,697 units were selected: 37,252 were from the original sample and 13,445 were substitutes; 37,162 persons were interviewed. The initially planned size and demographic composition were restored. No significant differences in the estimates between the original and the substitutes' sample were found. In our experience, field substitution is an acceptable method for dealing with nonresponse, maintaining the characteristics of the original sample without affecting the results. This evidence can support appropriate decisions about planning and implementing a surveillance system. Copyright © 2014 Elsevier Inc. All rights reserved.
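
    A hedged sketch of the kind of comparison reported above: a Wald test on the difference between a weighted estimate from the original sample and from the substitutes' sample. The prevalences and standard errors below are hypothetical placeholders, not PASSI results.

        import math

        # Hedged sketch: Wald z test on the difference between two weighted
        # prevalence estimates (original sample vs. substitutes). The numbers
        # are hypothetical placeholders.

        def wald_z(p1, se1, p2, se2):
            """z statistic for H0: p1 == p2, given independent standard errors."""
            return (p1 - p2) / math.sqrt(se1 ** 2 + se2 ** 2)

        z = wald_z(p1=0.31, se1=0.004, p2=0.30, se2=0.007)  # e.g. a risk factor prevalence
        p_value = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
        print(round(z, 2), round(p_value, 3))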

  16. The Influence of Porosity on Fatigue Crack Initiation in Additively Manufactured Titanium Components.

    PubMed

    Tammas-Williams, S; Withers, P J; Todd, I; Prangnell, P B

    2017-08-04

    Without post-manufacture HIPing, the fatigue life of electron beam melting (EBM) additively manufactured parts is currently dominated by the presence of porosity and exhibits large amounts of scatter. Here we have shown that the size and location of these defects are crucial in determining the fatigue life of EBM Ti-6Al-4V samples. X-ray computed tomography has been used to characterise all the pores in fatigue samples prior to testing and to follow the initiation and growth of fatigue cracks. This shows that the initiation stage comprises a large fraction of life (>70%). In these samples the initiating defect was often some way from being the largest (merely within the top 35% of large defects). Using various ranking strategies including a range of parameters, we found that when the proximity to the surface and the pore aspect ratio were included, the actual initiating defect was within the top 3% of defects ranked most harmful. This lays the basis for considering how the deposition parameters can be optimised to ensure that the distribution of pores is tailored to the distribution of applied stresses in additively manufactured parts, maximising the fatigue life for a given loading cycle.
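
    The ranking idea can be illustrated with a toy score that rewards large pore size, proximity to the free surface and elongated shape; the field names, the weighting rule and the synthetic pore population are assumptions made for illustration, not the authors' model.

        import numpy as np

        # Illustrative ranking sketch (scoring rule and data are assumed):
        # rank pores by a composite score favouring large size, nearness to
        # the free surface, and high aspect ratio.

        rng = np.random.default_rng(0)
        pores = {
            "equiv_diameter_um": rng.lognormal(3.0, 0.5, 500),   # pore size
            "dist_to_surface_um": rng.uniform(0, 1500, 500),     # depth below surface
            "aspect_ratio": rng.uniform(1.0, 4.0, 500),          # elongation
        }

        def rank_pores(p):
            # Normalise each factor to [0, 1]; nearer the surface scores higher.
            size = p["equiv_diameter_um"] / p["equiv_diameter_um"].max()
            proximity = 1.0 - p["dist_to_surface_um"] / p["dist_to_surface_um"].max()
            shape = p["aspect_ratio"] / p["aspect_ratio"].max()
            score = size * (1.0 + proximity) * shape             # assumed weighting
            return np.argsort(score)[::-1]                       # most harmful first

        print(rank_pores(pores)[:10])   # indices of the ten highest-ranked pores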

  17. Analogical reasoning in amazons.

    PubMed

    Obozova, Tanya; Smirnova, Anna; Zorina, Zoya; Wasserman, Edward

    2015-11-01

    Two juvenile orange-winged amazons (Amazona amazonica) were initially trained to match visual stimuli by color, shape, and number of items, but not by size. After learning these three identity matching-to-sample tasks, the parrots transferred discriminative responding to new stimuli from the same categories that had been used in training (other colors, shapes, and numbers of items) as well as to stimuli from a different category (stimuli varying in size). In the critical testing phase, both parrots exhibited reliable relational matching-to-sample (RMTS) behavior, suggesting that they perceived and compared the relationship between objects in the sample stimulus pair to the relationship between objects in the comparison stimulus pairs, even though no physical matches were possible between items in the sample and comparison pairs. The parrots spontaneously exhibited this higher-order relational responding without having ever before been trained on RMTS tasks, therefore joining apes and crows in displaying this abstract cognitive behavior.

  18. Investigation of Microstructure and Mechanical Properties of ECAP-Processed AM Series Magnesium Alloy

    NASA Astrophysics Data System (ADS)

    Gopi, K. R.; Nayaka, H. Shivananda; Sahu, Sandeep

    2016-09-01

    Magnesium alloy Mg-Al-Mn (AM70) was processed by equal channel angular pressing (ECAP) at 275 °C for up to 4 passes in order to produce an ultrafine-grained microstructure and improve its mechanical properties. ECAP-processed samples were characterized for microstructural analysis using optical microscopy, scanning electron microscopy, and transmission electron microscopy. Microstructural analysis showed that, with an increase in the number of ECAP passes, the grains were refined and the average grain size decreased from 45 to 1 µm. Electron backscatter diffraction analysis showed the transition from low angle grain boundaries to high angle grain boundaries in the ECAP 4-pass sample as compared to the as-cast sample. The strength and hardness values showed an increasing trend for the initial 2 passes of ECAP processing and then started decreasing with a further increase in the number of ECAP passes, even though the grain size continued to decrease in all the successive ECAP passes. However, the strength and hardness values still remained quite high when compared to the initial condition. This behavior was found to be correlated with texture modification in the material as a result of ECAP processing.

  19. The effect of anti-phase domain size on the ductility of a rapidly solidified Ni3Al-Cr alloy

    NASA Technical Reports Server (NTRS)

    Carro, G.; Bertero, G. A.; Wittig, J. E.; Flanagan, W. F.

    1989-01-01

    Tensile tests on splat-quenched Ni3Al-Cr alloys showed a sharp decrease in ductility with long-time annealing. The growth of the initially very-fine-size anti-phase domains showed a tenuous correlation with ductility up to a critical size, where ductility was lost. The grain size was relatively unaffected by these annealing treatments, but the grain-boundary curvature decreased, implying less toughness. An important observation was that, for the longest annealing time, a chromium-rich precipitate formed, which the data indicate could be a boride. Miniaturized tensile tests were performed on samples which were all obtained from the same splat-quenched foil, and the various domain sizes were controlled by subsequent annealing treatments.

  20. Composition and Morphology of Major Particle Types from Airborne Measurements during ICE-T and PRADACS Field Studies

    NASA Astrophysics Data System (ADS)

    Venero, I. M.; Mayol-Bracero, O. L.; Anderson, J. R.

    2012-12-01

    As part of the Puerto Rican African Dust and Cloud Study (PRADACS) and the Ice in Clouds Experiment - Tropical (ICE-T), we sampled giant airborne particles to study their elemental composition, morphology, and size distributions. Samples were collected in July 2011 during field measurements performed by NCAR's C-130 aircraft based at St. Croix, U.S. Virgin Islands. The results presented here correspond to the measurements made during research flight #8 (RF8). Aerosol particles with Dp > 1 um were sampled with the Giant Nuclei Impactor and particles with Dp < 1 um were collected with the Wyoming Inlet. Collected particles were later analyzed using an automated scanning electron microscope (SEM) and manual observation by field emission SEM. We identified the chemical composition and morphology of major particle types in filter samples collected at different altitudes (e.g., 300 ft, 1000 ft, and 4500 ft). Results from the flight upwind of Puerto Rico show that particles in the giant nuclei size range are dominated by sea salt. Samples collected at altitudes of 300 ft and 1000 ft showed the highest number of sea salt particles, and the samples collected at higher altitudes (> 4000 ft) showed the highest concentrations of clay material. HYSPLIT back trajectories for all samples showed that the low-altitude samples originated in the free troposphere over the Atlantic Ocean, which may account for the high sea salt content, and that the source of the high-altitude samples was closer to the Saharan-Sahel desert region; these samples were therefore possibly influenced by African dust. Size distribution results for quartz and unreacted sea-salt aerosols collected on the Giant Nuclei Impactor showed that sample RF08 - 12:05 UTM (300 ft) had a larger mean size (2.936 μm) than all the other samples. Additional information was also obtained from the Wyoming Inlet on the C-130 aircraft, which showed that the size distributions of all particles were shifted toward smaller sizes. The different mineral components of the dust have different size distributions, so a fractionation process could occur during transport. Also, the presence of supermicron sea salt at altitude is important for cloud processes.

  1. Predicting Receptive-Expressive Vocabulary Discrepancies in Preschool Children With Autism Spectrum Disorder.

    PubMed

    McDaniel, Jena; Yoder, Paul; Woynaroski, Tiffany; Watson, Linda R

    2018-05-15

    Correlates of receptive-expressive vocabulary size discrepancies may provide insights into why language development in children with autism spectrum disorder (ASD) deviates from typical language development and ultimately improve intervention outcomes. We indexed receptive-expressive vocabulary size discrepancies of 65 initially preverbal children with ASD (20-48 months) to a comparison sample from the MacArthur-Bates Communicative Development Inventories Wordbank (Frank, Braginsky, Yurovsky, & Marchman, 2017) to quantify typicality. We then tested whether attention toward a speaker and oral motor performance predict typicality of the discrepancy 8 months later. Attention toward a speaker correlated positively with receptive-expressive vocabulary size discrepancy typicality. Imitative and nonimitative oral motor performance were not significant predictors of vocabulary size discrepancy typicality. Secondary analyses indicated that midpoint receptive vocabulary size mediated the association between initial attention toward a speaker and end point receptive-expressive vocabulary size discrepancy typicality. Findings support the hypothesis that variation in attention toward a speaker might partially explain receptive-expressive vocabulary size discrepancy magnitude in children with ASD. Results are consistent with an input-processing deficit explanation of language impairment in this clinical population. Future studies should test whether attention toward a speaker is malleable and causally related to receptive-expressive discrepancies in children with ASD.

  2. Environmental Sustainability Change Management in SMEs: Learning from Sustainability Champions

    ERIC Educational Resources Information Center

    Chadee, Doren; Wiesner, Retha; Roxas, Banjo

    2011-01-01

    This study identifies the change management processes involved in undertaking environmental sustainability (ES) initiatives within Small and Medium Size Enterprises (SMEs) and relates these to the main attributes of learning organisations. Using case study techniques, the study draws from the change management experiences of a sample of 12 ES…

  3. Class Extraction and Classification Accuracy in Latent Class Models

    ERIC Educational Resources Information Center

    Wu, Qiong

    2009-01-01

    Despite the increasing popularity of latent class models (LCM) in educational research, methodological studies have not yet accumulated much information on the appropriate application of this modeling technique, especially with regard to requirements on sample size and number of indicators. This dissertation study represented an initial attempt to…

  4. Early lexical characteristics of toddlers with cleft lip and palate.

    PubMed

    Hardin-Jones, Mary; Chapman, Kathy L

    2014-11-01

    Objective: To examine development of early expressive lexicons in toddlers with cleft palate to determine whether they differ from those of noncleft toddlers in terms of size and lexical selectivity. Design: Retrospective. Patients: A total of 37 toddlers with cleft palate and 22 noncleft toddlers. Main Outcome Measures: The groups were compared for size of expressive lexicon reported on the MacArthur Communicative Development Inventory and the percentage of words beginning with obstruents and sonorants produced in a language sample. Differences between groups in the percentage of word initial consonants correct on the language sample were also examined. Results: Although expressive vocabulary was comparable at 13 months of age for both groups, size of the lexicon for the cleft group was significantly smaller than that for the noncleft group at 21 and 27 months of age. Toddlers with cleft palate produced significantly more words beginning with sonorants and fewer words beginning with obstruents in their spontaneous speech samples. They were also less accurate when producing word initial obstruents compared with the noncleft group. Conclusions: Toddlers with cleft palate demonstrate a slower rate of lexical development compared with their noncleft peers. The preference that toddlers with cleft palate demonstrate for words beginning with sonorants could suggest they are selecting words that begin with consonants that are easier for them to produce. An alternative explanation might be that because these children are less accurate in the production of obstruent consonants, listeners may not always identify obstruents when they occur.

  5. Stratum variance estimation for sample allocation in crop surveys. [Great Plains Corridor

    NASA Technical Reports Server (NTRS)

    Perry, C. R., Jr.; Chhikara, R. S. (Principal Investigator)

    1980-01-01

    The problem of determining the stratum variances needed to achieve an optimum sample allocation for crop surveys by remote sensing is investigated by considering an approach based on the concept of stratum variance as a function of the sampling unit size. A methodology using existing and easily available historical crop statistics is developed for obtaining initial estimates of stratum variances. The procedure is applied to estimate stratum variances for wheat in the U.S. Great Plains and is evaluated on the basis of the numerical results thus obtained. When the estimated stratum variances were compared with those obtained using LANDSAT data, the proposed technique was shown to be viable and to perform satisfactorily with the use of a conservative value for the field size and crop statistics from the small political subdivision level.

  6. Effect of particle size of Martian dust on the degradation of photovoltaic cell performance

    NASA Technical Reports Server (NTRS)

    Gaier, James R.; Perez-Davis, Marla E.

    1991-01-01

    Glass coverglass and SiO2 covered and uncovered silicon photovoltaic (PV) cells were subjected to conditions simulating a Mars dust storm, using the Martian Surface Wind Tunnel, to assess the effect of particle size on the performance of PV cells in the Martian environment. The dust used was an artificial mineral of the approximate elemental composition of Martian soil, which was sorted into four different size ranges. Samples were tested both initially clean and initially dusted. The samples were exposed to clear and dust laden winds, wind velocities varying from 23 to 116 m/s, and attack angles from 0 to 90 deg. It was found that transmittance through the coverglass approximates the power produced by a dusty PV cell. Occultation by the dust was found to dominate the performance degradation for wind velocities below 50 m/s, whereas abrasion dominates the degradation at wind velocities above 85 m/s. Occultation is most severe at 0 deg (parallel to the wind), is less pronounced from 22.5 to 67.5 deg, and is somewhat larger at 90 deg (perpendicular to the wind). Abrasion is negligible at 0 deg, and increases to a maximum at 90 deg. Occultation is more of a problem with small particles, whereas large particles (unless they are agglomerates) cause more abrasion.

  7. Lowering sample size in comparative analyses can indicate a correlation where there is none: example from Rensch's rule in primates.

    PubMed

    Lindenfors, P; Tullberg, B S

    2006-07-01

    The fact that characters may co-vary in organism groups because of shared ancestry and not always because of functional correlations was the initial rationale for developing phylogenetic comparative methods. Here we point out a case where similarity due to shared ancestry can produce an undesired effect when conducting an independent contrasts analysis. Under special circumstances, using a low sample size will produce results indicating an evolutionary correlation between characters where an analysis of the same pattern utilizing a larger sample size will show that this correlation does not exist. This is the opposite of the effect normally expected from increasing the sample size; ordinarily, a larger sample increases the chance of finding a correlation. The problem occurs when co-variation between the two continuous characters analysed is clumped in clades, e.g. when some phylogenetically conservative factor affects both characters simultaneously. In such a case, the correlation between the two characters becomes contingent on the number of clades sharing this conservative factor that are included in the analysis, in relation to the number of species contained within these clades. Removing species scattered evenly over the phylogeny will in this case remove the exact variation that diffuses the evolutionary correlation between the two characters: the variation contained within the clades sharing the conservative factor. We exemplify this problem by discussing a parallel in nature where it may be of importance, concerning the presence or absence of Rensch's rule in primates.

  8. Effect of soil texture and chemical properties on laboratory-generated dust emissions from SW North America

    NASA Astrophysics Data System (ADS)

    Mockford, T.; Zobeck, T. M.; Lee, J. A.; Gill, T. E.; Dominguez, M. A.; Peinado, P.

    2012-12-01

    Understanding the controls of mineral dust emissions and their particle size distributions during wind-erosion events is critical, as dust particles have a significant impact on shaping the Earth's climate. It has been suggested that emission rates and particle size distributions are independent of soil chemistry and soil texture. In this study, 45 samples of wind-erodible surface soils from the Southern High Plains and Chihuahuan Desert regions of Texas, New Mexico, Colorado and Chihuahua were analyzed by the Lubbock Dust Generation, Analysis and Sampling System (LDGASS) and a Beckman-Coulter particle multisizer. The LDGASS created dust emissions in a controlled laboratory setting using a rotating arm which allows particle collisions. The emitted dust was transferred to a chamber where particulate matter concentration was recorded using a DataRam and MiniVol filter and dust particle size distribution was recorded using a GRIMM particle analyzer. Particle size analysis was also performed on samples deposited on the Mini-Vol filters using a Beckman-Coulter particle multisizer. Soil textures of source samples ranged from sands and sandy loams to clays and silts. Initial results suggest that total dust emissions increased with increasing soil clay and silt content and decreased with increasing sand content. Particle size distribution analysis showed a similar relationship; soils with high silt content produced the widest range of dust particle sizes and the smallest dust particles. Sand grains seem to produce the largest dust particles. Chemical control of dust emissions by calcium carbonate content will also be discussed.

  9. Characterization of Initial Parameter Information for Lifetime Prediction of Electronic Devices.

    PubMed

    Li, Zhigang; Liu, Boying; Yuan, Mengxiong; Zhang, Feifei; Guo, Jiaqiang

    2016-01-01

    Newly manufactured electronic devices are subject to different levels of potential defects existing among the initial parameter information of the devices. In this study, a characterization of electromagnetic relays that were operated at their optimal performance with appropriate and steady parameter values was performed to estimate the levels of their potential defects and to develop a lifetime prediction model. First, the initial parameter information value and stability were quantified to measure the performance of the electronics. In particular, the values of the initial parameter information were estimated using the probability-weighted average method, whereas the stability of the parameter information was determined by using the difference between the extrema and end points of the fitting curves for the initial parameter information. Second, a lifetime prediction model for small-sized samples was proposed on the basis of both measures. Finally, a model for the relationship of the initial contact resistance and stability over the lifetime of the sampled electromagnetic relays was proposed and verified. A comparison of the actual and predicted lifetimes of the relays revealed a 15.4% relative error, indicating that the lifetime of electronic devices can be predicted based on their initial parameter information.
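
    A minimal sketch, under stated assumptions, of the two measures described above: a probability-weighted average of repeated early readings as the initial parameter value, and a stability measure built from the spread between the extrema and end points of a fitted curve. The weighting scheme, the polynomial fit and the synthetic contact-resistance data are illustrative choices, not the authors' exact formulation.

        import numpy as np

        # Illustrative sketch only; the weights, fit order and data are assumed.

        def weighted_initial_value(values, weights):
            weights = np.asarray(weights, dtype=float)
            return float(np.average(values, weights=weights / weights.sum()))

        def stability(cycles, values, degree=3):
            coeffs = np.polyfit(cycles, values, degree)
            fitted = np.polyval(coeffs, cycles)
            extrema_span = fitted.max() - fitted.min()
            endpoint_span = abs(fitted[-1] - fitted[0])
            return extrema_span - endpoint_span   # smaller = more stable (assumed)

        # Hypothetical early-life contact-resistance readings (milliohms).
        cycles = np.arange(1, 51)
        readings = 50 + 0.02 * cycles + np.random.default_rng(1).normal(0, 0.3, 50)
        print(weighted_initial_value(readings[:10], weights=np.linspace(1, 2, 10)))
        print(stability(cycles, readings))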

  10. Characterization of Initial Parameter Information for Lifetime Prediction of Electronic Devices

    PubMed Central

    Li, Zhigang; Liu, Boying; Yuan, Mengxiong; Zhang, Feifei; Guo, Jiaqiang

    2016-01-01

    Newly manufactured electronic devices are subject to different levels of potential defects existing among the initial parameter information of the devices. In this study, a characterization of electromagnetic relays that were operated at their optimal performance with appropriate and steady parameter values was performed to estimate the levels of their potential defects and to develop a lifetime prediction model. First, the initial parameter information value and stability were quantified to measure the performance of the electronics. In particular, the values of the initial parameter information were estimated using the probability-weighted average method, whereas the stability of the parameter information was determined by using the difference between the extrema and end points of the fitting curves for the initial parameter information. Second, a lifetime prediction model for small-sized samples was proposed on the basis of both measures. Finally, a model for the relationship of the initial contact resistance and stability over the lifetime of the sampled electromagnetic relays was proposed and verified. A comparison of the actual and predicted lifetimes of the relays revealed a 15.4% relative error, indicating that the lifetime of electronic devices can be predicted based on their initial parameter information. PMID:27907188

  11. Alpha spectrometric characterization of process-related particle size distributions from active particle sampling at the Los Alamos National Laboratory uranium foundry

    NASA Astrophysics Data System (ADS)

    Plionis, A. A.; Peterson, D. S.; Tandon, L.; LaMont, S. P.

    2010-03-01

    Uranium particles within the respirable size range pose a significant hazard to the health and safety of workers. Significant differences in the deposition and incorporation patterns of aerosols within the respirable range can be identified and integrated into sophisticated health physics models. Data characterizing the uranium particle size distribution resulting from specific foundry-related processes are needed. Using personal air sampling cascade impactors, particles collected from several foundry processes were sorted by activity median aerodynamic diameter onto various Marple substrates. After an initial gravimetric assessment of each impactor stage, the substrates were analyzed by alpha spectrometry to determine the uranium content of each stage. Alpha spectrometry provides rapid non-destructive isotopic data that can distinguish process uranium from natural sources and the degree of uranium contribution to the total accumulated particle load. In addition, the particle size bins utilized by the impactors provide adequate resolution to determine whether a process particle size distribution is lognormal, bimodal, or trimodal. Data on process uranium particle size values and distributions facilitate the development of more sophisticated and accurate models for internal dosimetry, resulting in an improved understanding of foundry worker health and safety.

  12. Rationale and design of the IMPACT EU-trial: improve management of heart failure with procalcitonin biomarkers in cardiology (BIC)-18.

    PubMed

    Möckel, Martin; Slagman, Anna; Vollert, Jörn Ole; Ebmeyer, Stefan; Wiemer, Jan C; Searle, Julia; Giannitsis, Evangelos; Kellum, John A; Maisel, Alan

    2018-02-01

    To evaluate the effectiveness of procalcitonin (PCT)-guided antibiotic treatment compared to current treatment practice in reducing 90-day all-cause mortality in emergency patients with shortness of breath (SOB) and suspected acute heart failure (AHF). Concomitant AHF and lower respiratory tract (or other bacterial) infection in emergency patients with dyspnea are common and can be difficult to diagnose. Early and adequate initiation of antibiotic therapy (ABX) significantly improves patient outcome, but superfluous prescription of ABX may be harmful. In a multicentre, prospective, randomized, controlled process trial with an open intervention, adult emergency patients with SOB and increased levels of natriuretic peptides will be randomized to either a standard care group or a PCT-guided group with respect to the initiation of antibiotic treatment. In the PCT-guided group, the initiation of antibiotic therapy is based on the results of acute PCT measurements at admission, using a cut-off of 0.2 ng/ml. A two-stage sample-size adaptive design is used; an interim analysis was done after completion of 50% of patients, and the final sample size remained unchanged. The primary endpoint is 90-day all-cause mortality. The current study will provide evidence as to whether the routine use of PCT in patients with suspected AHF improves outcome.

  13. Stability of bulk Ba2YCu3O(7-x) in a variety of environments

    NASA Technical Reports Server (NTRS)

    Gaier, James R.; Hepp, Aloysius F.; Curtis, Henry B.; Schupp, Donald A.; Hambourger, Paul D.; Blue, James W.

    1988-01-01

    Small bars of ceramic Ba2YCu3O(7-x) were fabricated and subjected to environments similar to those that might be encountered during some NASA missions. These conditions include ambient conditions, high humidity, vacuum, and high fluences of electrons and protons. The normal state resistivity or critical current density (J sub c) was monitored during these tests to assess the stability of the material. When normal state resistivity is used as a criterion, the ambient stability of these samples was relatively good, exhibiting only a 2 percent degradation over a 3 month period. The humidity stability was shown to be very poor, and to be a steep function of temperature. Samples stored at 50 C for 40 min increased in normal state resistivity by four orders of magnitude. Kinetic analysis indicates that the degradation reaction is second order in water vapor concentration. It is suspected that humidity degradation also accounts for the ambient instability. The samples were stable to vacuum over a period of at least 3 months. Degradation of J sub c in a 1 MeV electron fluence of 9.7 x 10 to the 14th e(-)/sq cm was determined to be no more than about 2 percent. Degradation of J sub c in an 8.7 x 10 to the 14th p(+)/sq cm fluence of 42 MeV protons was found to be grain size dependent. Samples with smaller grain size and an initial J sub c of about 240 A/sq cm showed no degradation, while the sample with larger grain size and an initial J sub c of about 30 A/sq cm degraded to 37 percent of its original value.

  14. Evaluation of species richness estimators based on quantitative performance measures and sensitivity to patchiness and sample grain size

    NASA Astrophysics Data System (ADS)

    Willie, Jacob; Petre, Charles-Albert; Tagg, Nikki; Lens, Luc

    2012-11-01

    Data from forest herbaceous plants in a site of known species richness in Cameroon were used to test the performance of rarefaction and eight species richness estimators (ACE, ICE, Chao1, Chao2, Jack1, Jack2, Bootstrap and MM). Bias, accuracy, precision and sensitivity to patchiness and sample grain size were the evaluation criteria. An evaluation of the effects of sampling effort and patchiness on diversity estimation is also provided. Stems were identified and counted in linear series of 1-m2 contiguous square plots distributed in six habitat types. Initially, 500 plots were sampled in each habitat type. The sampling process was monitored using rarefaction and a set of richness estimator curves. Curves from the first dataset suggested adequate sampling in riparian forest only. Additional plots ranging from 523 to 2143 were subsequently added in the undersampled habitats until most of the curves stabilized. Jack1 and ICE, the non-parametric richness estimators, performed better, being more accurate and less sensitive to patchiness and sample grain size, and significantly reducing biases that could not be detected by rarefaction and other estimators. This study confirms the usefulness of non-parametric incidence-based estimators, and recommends Jack1 or ICE alongside rarefaction while describing taxon richness and comparing results across areas sampled using similar or different grain sizes. As patchiness varied across habitat types, accurate estimations of diversity did not require the same number of plots. The number of samples needed to fully capture diversity is not necessarily the same across habitats, and can only be known when taxon sampling curves have indicated adequate sampling. Differences in observed species richness between habitats were generally due to differences in patchiness, except between two habitats where they resulted from differences in abundance. We suggest that communities should first be sampled thoroughly using appropriate taxon sampling curves before explaining differences in diversity.
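
    For reference, two of the estimators compared in the study can be written in their standard textbook forms (a generic sketch, not the authors' code): Chao1 works from species abundances, while the first-order jackknife (Jack1) works from incidences across plots. The abundance and incidence data below are hypothetical.

        from collections import Counter

        # Standard textbook forms, illustrated on made-up data.

        def chao1(abundances):
            s_obs = sum(1 for a in abundances if a > 0)
            f1 = sum(1 for a in abundances if a == 1)   # singletons
            f2 = sum(1 for a in abundances if a == 2)   # doubletons
            return s_obs + f1 * (f1 - 1) / (2 * (f2 + 1))  # bias-corrected form

        def jackknife1(incidence_matrix):
            # incidence_matrix: list of plots, each a set of species seen there.
            m = len(incidence_matrix)
            counts = Counter(sp for plot in incidence_matrix for sp in set(plot))
            s_obs = len(counts)
            q1 = sum(1 for c in counts.values() if c == 1)  # uniques (one plot only)
            return s_obs + q1 * (m - 1) / m

        abund = [12, 8, 5, 3, 2, 2, 1, 1, 1]                 # hypothetical counts
        plots = [{"a", "b"}, {"a", "c"}, {"a", "b", "d"}]    # hypothetical incidences
        print(round(chao1(abund), 1), round(jackknife1(plots), 2))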

  15. An analysis of respondent driven sampling with Injection Drug Users (IDU) in Albania and the Russian Federation.

    PubMed

    Stormer, Ame; Tun, Waimar; Guli, Lisa; Harxhi, Arjan; Bodanovskaia, Zinaida; Yakovleva, Anna; Rusakova, Maia; Levina, Olga; Bani, Roland; Rjepaj, Klodian; Bino, Silva

    2006-11-01

    Injection drug users in Tirana, Albania and St. Petersburg, Russia were recruited into a study assessing HIV-related behaviors and HIV serostatus using Respondent Driven Sampling (RDS), a peer-driven recruitment sampling strategy that results in a probability sample. (Salganik M, Heckathorn DD. Sampling and estimation in hidden populations using respondent-driven sampling. Sociol Method. 2004;34:193-239). This paper presents a comparison of RDS implementation, findings on network and recruitment characteristics, and lessons learned. Initiated with 13 to 15 seeds, approximately 200 IDUs were recruited within 8 weeks. Information resulting from RDS indicates that social network patterns from the two studies differ greatly. Female IDUs in Tirana had smaller network sizes than male IDUs, unlike in St. Petersburg where female IDUs had larger network sizes than male IDUs. Recruitment patterns in each country also differed by demographic categories. Recruitment analyses indicate that IDUs form socially distinct groups by sex in Tirana, whereas there was a greater degree of gender mixing patterns in St. Petersburg. RDS proved to be an effective means of surveying these hard-to-reach populations.

  16. Supervised classification in the presence of misclassified training data: a Monte Carlo simulation study in the three group case.

    PubMed

    Bolin, Jocelyn Holden; Finch, W Holmes

    2014-01-01

    Statistical classification of phenomena into observed groups is very common in the social and behavioral sciences. Statistical classification methods, however, are affected by the characteristics of the data under study. Statistical classification can be further complicated by initial misclassification of the observed groups. The purpose of this study is to investigate the impact of initial training data misclassification on several statistical classification and data mining techniques. Misclassification conditions in the three-group case were simulated, and results are presented in terms of overall as well as subgroup classification accuracy. Results show decreased classification accuracy as sample size, group separation and group size ratio decrease and as the misclassification percentage increases, with random forests demonstrating the highest accuracy across conditions.
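
    A simplified Monte Carlo sketch in the spirit of the study (the data generator, noise model and parameter values are assumptions, not the reported simulation design): a fraction of training labels in a three-group problem is flipped and random-forest accuracy on clean test data is averaged over repetitions.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import accuracy_score
        from sklearn.model_selection import train_test_split

        def one_run(n=300, misclass_rate=0.2, class_sep=1.0, seed=0):
            rng = np.random.default_rng(seed)
            X, y = make_classification(n_samples=n, n_classes=3, n_informative=5,
                                       class_sep=class_sep, random_state=seed)
            X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                                      random_state=seed)
            noisy = y_tr.copy()
            flip = rng.random(len(noisy)) < misclass_rate          # labels to corrupt
            noisy[flip] = (noisy[flip] + rng.integers(1, 3, flip.sum())) % 3
            model = RandomForestClassifier(n_estimators=200, random_state=seed)
            model.fit(X_tr, noisy)                                  # train on noisy labels
            return accuracy_score(y_te, model.predict(X_te))        # test on clean labels

        print(np.mean([one_run(seed=s) for s in range(20)]))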

  17. Updating histological data on crown initiation and crown completion ages in southern Africans.

    PubMed

    Reid, Donald J; Guatelli-Steinberg, Debbie

    2017-04-01

    To update histological data on crown initiation and completion ages in southern Africans. To evaluate implications of these data for studies that: (a) rely on these data to time linear enamel hypoplasias (LEHs), or, (b) use these data for comparison to fossil hominins. Initiation ages were calculated on 67 histological sections from southern Africans, with sample sizes ranging from one to 11 per tooth type. Crown completion ages for southern Africans were calculated in two ways. First, actual derived initiation ages were added to crown formation times for each histological section to obtain direct information on the crown completion ages of individuals. Second, average initiation ages from this study were added to average crown formation times of southern Africans from the Reid and coworkers previous studies that were based on larger samples. For earlier-initiating tooth types (all anterior teeth and first molars), there is little difference in ages of initiation and crown completion between this and previous studies. Differences increase as a function of initiation age, such that the greatest differences between this and previous studies for both initiation and crown completion ages are for the second and third molars. This study documents variation in initiation ages, particularly for later-initiating tooth types. It upholds the use of previously published histological aging charts for LEHs on anterior teeth. However, this study finds that ages of crown initiation and completion in second and third molars for this southern African sample are earlier than previously estimated. These earlier ages reduce differences between modern humans and fossil hominins for these developmental events in second and third molars. © 2017 Wiley Periodicals, Inc.

  18. Correlates of Sexual Abuse and Smoking among French Adults

    ERIC Educational Resources Information Center

    King, Gary; Guilbert, Philippe; Ward, D. Gant; Arwidson, Pierre; Noubary, Farzad

    2006-01-01

    Objective: The goal of this study was to examine the association between sexual abuse (SA) and initiation, cessation, and current cigarette smoking among a large representative adult population in France. Method: A random sample size of 12,256 adults (18-75 years of age) was interviewed by telephone concerning demographic variables, health…

  19. Support for the initial attachment, growth and differentiation of MG-63 cells: a comparison between nano-size hydroxyapatite and micro-size hydroxyapatite in composites

    PubMed Central

    Filová, Elena; Suchý, Tomáš; Sucharda, Zbyněk; Šupová, Monika; Žaloudková, Margit; Balík, Karel; Lisá, Věra; Šlouf, Miroslav; Bačáková, Lucie

    2014-01-01

    Hydroxyapatite (HA) is considered to be a bioactive material that favorably influences the adhesion, growth, and osteogenic differentiation of osteoblasts. To optimize the cell response on the hydroxyapatite composite, it is desirable to assess the optimum concentration and also the optimum particle size. The aim of our study was to prepare composite materials made of polydimethylsiloxane, polyamide, and nano-sized (N) or micro-sized (M) HA, with an HA content of 0%, 2%, 5%, 10%, 15%, 20%, 25% (v/v) (referred to as N0–N25 or M0–M25), and to evaluate them in vitro in cultures with human osteoblast-like MG-63 cells. For clinical applications, fast osseointegration of the implant into the bone is essential. We observed the greatest initial cell adhesion on composites M10 and N5. Nano-sized HA supported cell growth, especially during the first 3 days of culture. On composites with micro-size HA (2%–15%), MG-63 cells reached the highest densities on day 7. Samples M20 and M25, however, were toxic for MG-63 cells, although these composites supported the production of osteocalcin in these cells. On N2, a higher concentration of osteopontin was found in MG-63 cells. For biomedical applications, the concentration range of 5%–15% (v/v) nano-size or micro-size HA seems to be optimum. PMID:25125978

  20. Setting health research priorities using the CHNRI method: VI. Quantitative properties of human collective opinion

    PubMed Central

    Yoshida, Sachiyo; Rudan, Igor; Cousens, Simon

    2016-01-01

    Introduction: Crowdsourcing has become an increasingly important tool to address many problems – from government elections in democracies, stock market prices, to modern online tools such as TripAdvisor or Internet Movie Database (IMDB). The CHNRI method (the acronym for the Child Health and Nutrition Research Initiative) for setting health research priorities has crowdsourcing as the major component, which it uses to generate, assess and prioritize between many competing health research ideas. Methods: We conducted a series of analyses using data from a group of 91 scorers to explore the quantitative properties of their collective opinion. We were interested in the stability of their collective opinion as the sample size increases from 15 to 90. From a pool of 91 scorers who took part in a previous CHNRI exercise, we used sampling with replacement to generate multiple random samples of different size. First, for each sample generated, we identified the top 20 ranked research ideas, among 205 that were proposed and scored, and calculated the concordance with the ranking generated by the 91 original scorers. Second, we used rank correlation coefficients to compare the ranks assigned to all 205 proposed research ideas when samples of different size are used. We also analysed the original pool of 91 scorers to look for evidence of scoring variations based on scorers' characteristics. Results: The sample sizes investigated ranged from 15 to 90. The concordance for the top 20 scored research ideas increased with sample sizes up to about 55 experts. At this point, the median level of concordance stabilized at 15/20 top ranked questions (75%), with the interquartile range also generally stable (14–16). There was little further increase in overlap when the sample size increased from 55 to 90. When analysing the ranking of all 205 ideas, the rank correlation coefficient increased as the sample size increased, with a median correlation of 0.95 reached at the sample size of 45 experts (median of the rank correlation coefficient = 0.95; IQR 0.94–0.96). Conclusions: Our analyses suggest that the collective opinion of an expert group on a large number of research ideas, expressed through categorical variables (Yes/No/Not Sure/Don't know), stabilises relatively quickly in terms of identifying the ideas that have most support. In the exercise we found that a high degree of reproducibility of the identified research priorities was achieved with as few as 45–55 experts. PMID:27350874

  1. Setting health research priorities using the CHNRI method: VI. Quantitative properties of human collective opinion.

    PubMed

    Yoshida, Sachiyo; Rudan, Igor; Cousens, Simon

    2016-06-01

    Crowdsourcing has become an increasingly important tool to address many problems - from government elections in democracies, stock market prices, to modern online tools such as TripAdvisor or Internet Movie Database (IMDB). The CHNRI method (the acronym for the Child Health and Nutrition Research Initiative) for setting health research priorities has crowdsourcing as the major component, which it uses to generate, assess and prioritize between many competing health research ideas. We conducted a series of analyses using data from a group of 91 scorers to explore the quantitative properties of their collective opinion. We were interested in the stability of their collective opinion as the sample size increases from 15 to 90. From a pool of 91 scorers who took part in a previous CHNRI exercise, we used sampling with replacement to generate multiple random samples of different size. First, for each sample generated, we identified the top 20 ranked research ideas, among 205 that were proposed and scored, and calculated the concordance with the ranking generated by the 91 original scorers. Second, we used rank correlation coefficients to compare the ranks assigned to all 205 proposed research ideas when samples of different size are used. We also analysed the original pool of 91 scorers to look for evidence of scoring variations based on scorers' characteristics. The sample sizes investigated ranged from 15 to 90. The concordance for the top 20 scored research ideas increased with sample sizes up to about 55 experts. At this point, the median level of concordance stabilized at 15/20 top ranked questions (75%), with the interquartile range also generally stable (14-16). There was little further increase in overlap when the sample size increased from 55 to 90. When analysing the ranking of all 205 ideas, the rank correlation coefficient increased as the sample size increased, with a median correlation of 0.95 reached at the sample size of 45 experts (median of the rank correlation coefficient = 0.95; IQR 0.94-0.96). Our analyses suggest that the collective opinion of an expert group on a large number of research ideas, expressed through categorical variables (Yes/No/Not Sure/Don't know), stabilises relatively quickly in terms of identifying the ideas that have most support. In the exercise we found that a high degree of reproducibility of the identified research priorities was achieved with as few as 45-55 experts.
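
    The resampling scheme described can be sketched as follows, using a synthetic scorer-by-idea matrix in place of the real CHNRI scores (the dimensions match the exercise, but the values are invented): scorers are drawn with replacement, and each subsample's ranking is compared with the full-panel ranking via top-20 overlap and Spearman rank correlation.

        import numpy as np
        from scipy.stats import spearmanr

        # Sketch on synthetic data: 91 scorers, 205 ideas, each idea given a
        # latent "quality" plus scorer noise (invented values, not CHNRI scores).

        rng = np.random.default_rng(42)
        quality = rng.random(205)                          # latent support per idea
        scores = quality + 0.5 * rng.standard_normal((91, 205))
        full_mean = scores.mean(axis=0)
        full_top20 = set(np.argsort(full_mean)[::-1][:20])

        def concordance(sample_size, n_rep=200):
            overlaps, rhos = [], []
            for _ in range(n_rep):
                pick = rng.integers(0, scores.shape[0], sample_size)  # with replacement
                sub_mean = scores[pick].mean(axis=0)
                overlaps.append(len(full_top20 & set(np.argsort(sub_mean)[::-1][:20])))
                rho, _ = spearmanr(full_mean, sub_mean)
                rhos.append(rho)
            return np.median(overlaps), np.median(rhos)

        for n in (15, 45, 55, 90):
            print(n, concordance(n))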

  2. Synthesis and characterization of nanocrystalline Co-Fe-Nb-Ta-B alloy

    NASA Astrophysics Data System (ADS)

    Raanaei, Hossein; Fakhraee, Morteza

    2017-09-01

    In this research work, the structural and magnetic evolution of a Co57Fe13Nb8Ta4B18 alloy during the mechanical alloying process has been investigated using X-ray diffraction, scanning electron microscopy, transmission electron microscopy, energy dispersive X-ray spectroscopy, differential thermal analysis and vibrating sample magnetometry. It is observed that after 120 h of milling, the crystallite size reaches about 7.8 nm. Structural analyses show that solid solution formation from the initial powder mixture occurs at 160 h of milling. The coercivity rises up to 70 h of milling and then tends to decrease toward the final stage of the milling process. Thermal analysis of the sample milled for 160 h reveals two endothermic peaks. Characterization of the 160 h milled sample after annealing at 427 °C shows crystallite growth accompanied by an increase in saturation magnetization.

  3. Variation in aluminum, iron, and particle concentrations in oxic ground-water samples collected by use of tangential-flow ultrafiltration with low-flow sampling

    USGS Publications Warehouse

    Szabo, Z.; Oden, J.H.; Gibs, J.; Rice, D.E.; Ding, Y.; ,

    2001-01-01

    Particulates that move with ground water and those that are artificially mobilized during well purging could be incorporated into water samples during collection and could cause trace-element concentrations to vary in unfiltered samples, and possibly in filtered samples (typically 0.45-um (micron) pore size) as well, depending on the particle-size fractions present. Therefore, measured concentrations may not be representative of those in the aquifer. Ground water may contain particles of various sizes and shapes that are broadly classified as colloids, which do not settle from water, and particulates, which do. In order to investigate variations in trace-element concentrations in ground-water samples as a function of particle concentrations and particle-size fractions, the U.S. Geological Survey, in cooperation with the U.S. Air Force, collected samples from five wells completed in the unconfined, oxic Kirkwood-Cohansey aquifer system of the New Jersey Coastal Plain. Samples were collected by purging with a portable pump at low flow (0.2-0.5 liters per minute and minimal drawdown, ideally less than 0.5 foot). Unfiltered samples were collected in the following sequence: (1) within the first few minutes of pumping, (2) after initial turbidity declined and about one to two casing volumes of water had been purged, and (3) after turbidity values had stabilized at less than 1 to 5 Nephelometric Turbidity Units. Filtered samples were split concurrently through (1) a 0.45-um pore size capsule filter, (2) a 0.45-um pore size capsule filter and a 0.0029-um pore size tangential-flow filter in sequence, and (3), in selected cases, a 0.45-um and a 0.05-um pore size capsule filter in sequence. Filtered samples were collected concurrently with the unfiltered sample that was collected when turbidity values stabilized. Quality-assurance samples consisted of sequential duplicates (about 25 percent) and equipment blanks. Concentrations of particles were determined by light scattering. Variations in concentrations of aluminum and iron (1-74 and 1-199 ug/L (micrograms per liter), respectively), common indicators of the presence of particulate-borne trace elements, were greatest in sample sets from individual wells with the greatest variations in turbidity and particle concentration. Differences in trace-element concentrations in sequentially collected unfiltered samples with variable turbidity were 5 to 10 times as great as those in concurrently collected samples that were passed through various filters. These results indicate that turbidity must be both reduced and stabilized even when low-flow sample-collection techniques are used in order to obtain water samples that do not contain considerable particulate artifacts. Currently (2001) available techniques need to be refined to ensure that the measured trace-element concentrations are representative of those that are mobile in the aquifer water.

  4. Ion mobility analysis of lipoproteins

    DOEpatents

    Benner, W Henry [Danville, CA; Krauss, Ronald M [Berkeley, CA; Blanche, Patricia J [Berkeley, CA

    2007-08-21

    A medical diagnostic method and instrumentation system for analyzing noncovalently bonded agglomerated biological particles is described. The method and system comprises: a method of preparation for the biological particles; an electrospray generator; an alpha particle radiation source; a differential mobility analyzer; a particle counter; and data acquisition and analysis means. The medical device is useful for the assessment of human diseases, such as cardiac disease risk and hyperlipidemia, by rapid quantitative analysis of lipoprotein fraction densities. Initially, purification procedures are described to reduce an initial blood sample to an analytical input to the instrument. The measured sizes from the analytical sample are correlated with densities, resulting in a spectrum of lipoprotein densities. The lipoprotein density distribution can then be used to characterize cardiac and other lipid-related health risks.

  5. Aerosol preparation of intact lipoproteins

    DOEpatents

    Benner, W Henry [Danville, CA; Krauss, Ronald M [Berkeley, CA; Blanche, Patricia J [Berkeley, CA

    2012-01-17

    A medical diagnostic method and instrumentation system for analyzing noncovalently bonded agglomerated biological particles is described. The method and system comprises: a method of preparation for the biological particles; an electrospray generator; an alpha particle radiation source; a differential mobility analyzer; a particle counter; and data acquisition and analysis means. The medical device is useful for the assessment of human diseases, such as cardiac disease risk and hyperlipidemia, by rapid quantitative analysis of lipoprotein fraction densities. Initially, purification procedures are described to reduce an initial blood sample to an analytical input to the instrument. The measured sizes from the analytical sample are correlated with densities, resulting in a spectrum of lipoprotein densities. The lipoprotein density distribution can then be used to characterize cardiac and other lipid-related health risks.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuang, C; Artaxo, P; Martin, S

    Aerosol nucleation and initial growth were investigated during the Green Ocean Amazon (GoAmazon) 2014/15 campaign. Aerosol sampling occurred during the wet and dry seasons of 2014, and took place at the T3 measurement site, downwind of the city of Manaus, Brazil. Characterization of the aerosol size distribution from 10 to 500 nm was accomplished through the deployment of a conventional Scanning Mobility Particle Spectrometer (SMPS) and a fine condensation particle counter (> 10 nm). In order to directly measure aerosol nucleation and initial growth, a Nano SMPS (1.5-20 nm) was also deployed, consisting of a condensation particle counter-based electrical mobility spectrometer that was modified for the detection of sub-3 nm aerosol. Measurements of the aerosol size distribution from 1.5 nm to 10 nm were obtained during the first observational period, and from 3 nm to 15 nm during the second observational period. Routine, stable measurement in this size range was complicated due to persistent water condensation in the Nano SMPS and diffusional transport losses.

  7. On the adsorption properties of magnetic fluids: Impact of bulk structure

    NASA Astrophysics Data System (ADS)

    Kubovcikova, Martina; Gapon, Igor V.; Zavisova, Vlasta; Koneracka, Martina; Petrenko, Viktor I.; Soltwedel, Olaf; Almasy, László; Avdeev, Mikhail V.; Kopcansky, Peter

    2017-04-01

    Adsorption of nanoparticles from magnetic fluids (MFs) on a solid surface (crystalline silicon) was studied by neutron reflectometry (NR) and related to the bulk structural organization of the MFs inferred from small-angle neutron scattering (SANS). The initial aqueous MF with nanomagnetite (co-precipitation reaction) stabilized by sodium oleate, and an MF modified by a biocompatible polymer, poly(ethylene glycol) (PEG), were considered. Regarding the bulk structure, the SANS experiment confirmed that the comparatively small and compact (size 30 nm) nanoparticle aggregates in the initial sample transform into large and developed (size > 130 nm, fractal dimension 2.7) associates in the PEG-modified MF. This reorganization of the aggregates correlates with changes in the neutron reflectivity, which showed that the single adsorption layer of individual nanoparticles observed on the oxidized silicon surface for the initial MF disappears after the PEG modification. It is concluded that all particles in the modified fluid reside in aggregates that are not adsorbed by the silicon.

  8. Procedures for analysis of debris relative to Space Shuttle systems

    NASA Technical Reports Server (NTRS)

    Kim, Hae Soo; Cummings, Virginia J.

    1993-01-01

    Debris samples collected from various Space Shuttle systems have been submitted to the Microchemical Analysis Branch. This investigation was initiated to develop optimal techniques for the analysis of debris. Optical microscopy provides information about the morphology and size of crystallites, particle sizes, amorphous phases, glass phases, and poorly crystallized materials. Scanning electron microscopy with energy dispersive spectrometry is utilized for information on surface morphology and qualitative elemental content of debris. Analytical electron microscopy with wavelength dispersive spectrometry provides information on the quantitative elemental content of debris.

  9. Estimation of Length-Scales in Soils by MRI

    NASA Technical Reports Server (NTRS)

    Daidzic, N. E.; Altobelli, S.; Alexander, J. I. D.

    2004-01-01

    Soil can best be described as an unconsolidated granular medium that forms a porous structure. The present macroscopic theory of water transport in porous media rests upon the continuum hypothesis that the physical properties of porous media can be associated with continuous, twice-differentiable field variables whose spatial domain is a set of centroids of Representative Elementary Volume (REV) elements. MRI is an ideal technique to estimate various length-scales in porous media. A 0.267 T permanent magnet at NASA GRC was used for this study. A 2D or 3D spatially-resolved porosity distribution was obtained from the NMR signal strength from each voxel and the spin-lattice relaxation time. A classical spin-warp imaging sequence with Multiple Spin Echoes (MSE) was used to evaluate proton density in each voxel. The initial resolution of 256 x 256 was subsequently reduced by averaging neighboring voxels, and the porosity convergence was observed. A number of engineered "space candidate" soils such as Isolite(trademark), Zeoponics(trademark), Turface(trademark), and Profile(trademark) were used. Glass beads in the size range between 50 microns and 2 mm were used as well. Initial results with saturated porous samples have shown a good estimate of the average porosity, consistent with the gravimetric porosity measurement results. For Profile(trademark) samples with particle sizes ranging between 0.25 and 1 mm and a characteristic interparticle pore size of 100 microns, the characteristic Darcy scale was estimated to be about delta(sub REV) = 10 mm. Glass-bead porosity shows clear convergence toward a definite REV that stays constant throughout the homogeneous sample. Additional information is included in the original extended abstract.

  10. Wavelength-dependent backscattering measurements for quantitative real-time monitoring of apoptosis in living cells

    NASA Astrophysics Data System (ADS)

    Mulvey, Christine S.; Sherwood, Carly A.; Bigio, Irving J.

    2009-11-01

    Apoptosis, or programmed cell death, is a cellular process exhibiting distinct biochemical and morphological changes. An understanding of the early morphological changes that a cell undergoes during apoptosis can provide the opportunity to monitor apoptosis in tissue, yielding diagnostic and prognostic information. There is avid interest regarding the involvement of apoptosis in cancer. The initial response of a tumor to successful cancer treatment is often massive apoptosis. Current apoptosis detection methods require cell culture disruption. Our aim is to develop a nondisruptive optical method to monitor apoptosis in living cells and tissues. This would allow for real-time evaluation of apoptotic progression of the same cell culture over time without alteration. Elastic scattering spectroscopy (ESS) is used to monitor changes in light-scattering properties of cells in vitro due to apoptotic morphology changes. We develop a simple instrument capable of wavelength-resolved ESS measurements from cell cultures in the backward direction. Using Mie theory, we also develop an algorithm that extracts the size distribution of scatterers in the sample. The instrument and algorithm are validated with microsphere suspensions. For cell studies, Chinese hamster ovary (CHO) cells are cultured to confluence on plates and are rendered apoptotic with staurosporine. Backscattering measurements are performed on pairs of treated and control samples at a sequence of times up to 6 h post-treatment. Initial results indicate that ESS is capable of discriminating between treated and control samples as early as 10 to 15 min post-treatment, much earlier than is sensed by standard assays for apoptosis. Extracted size distributions from treated samples show a decrease in Rayleigh and 150-nm scatterers, relative to control samples, with a corresponding increase in 200-nm particles. Work continues to correlate these size distributions with underlying morphology. To our knowledge, this is the first report of the use of backscattering spectral measurements to quantitatively monitor apoptosis in viable cell cultures in vitro.

  11. Dating Studies of Elephant Tusks Using Accelerator Mass Spectrometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sideras-Haddad, E; Brown, T A

    A new method for determining the year of birth, the year of death, and hence the age at death, of post-bomb and recently deceased elephants has been developed. The technique is based on Accelerator Mass Spectrometry radiocarbon analyses of small-sized samples extracted from along the length of a growth line of an elephant tusk. The measured radiocarbon concentrations in the samples from a tusk can be compared to the ¹⁴C atmospheric bomb-pulse curve to derive the growth years of the initial and final samples from the tusk. Initial data from the application of this method to two tusks will be presented. Potentially, the method may play a significant role in wildlife management practices of African national parks. Additionally, the method may contribute to the underpinnings of efforts to define new international trade regulations, which could, in effect, decrease poaching and the killing of very young animals.

  12. Alternative sample sizes for verification dose experiments and dose audits

    NASA Astrophysics Data System (ADS)

    Taylor, W. A.; Hansen, J. M.

    1999-01-01

    ISO 11137 (1995), "Sterilization of Health Care Products—Requirements for Validation and Routine Control—Radiation Sterilization", provides sampling plans for performing initial verification dose experiments and quarterly dose audits. Alternative sampling plans are presented which provide equivalent protection. These sampling plans can significantly reduce the cost of testing. These alternative sampling plans have been included in a draft ISO Technical Report (type 2). This paper examines the rational behind the proposed alternative sampling plans. The protection provided by the current verification and audit sampling plans is first examined. Then methods for identifying equivalent plans are highlighted. Finally, methods for comparing the cost associated with the different plans are provided. This paper includes additional guidance for selecting between the original and alternative sampling plans not included in the technical report.

  13. The effectiveness of increased apical enlargement in reducing intracanal bacteria.

    PubMed

    Card, Steven J; Sigurdsson, Asgeir; Orstavik, Dag; Trope, Martin

    2002-11-01

    It has been suggested that the apical portion of a root canal is not adequately disinfected by typical instrumentation regimens. The purpose of this study was to determine whether instrumentation to sizes larger than typically used would more effectively remove culturable bacteria from the canal. Forty patients with clinical and radiographic evidence of apical periodontitis were recruited from the endodontic clinic. Mandibular cuspids (n = 2), bicuspids (n = 11), and molars (mesial roots) (n = 27) were selected for the study. Bacterial sampling was performed upon access and after each of two consecutive instrumentations. The first instrumentation utilized 1% NaOCl and 0.04 taper ProFile rotary files. The cuspid and bicuspid canals were instrumented to a #8 size and the molar canals to a #7 size. The second instrumentation utilized LightSpeed files and 1% NaOCl irrigation for further enlargement of the apical third. Typically, molars were instrumented to size 60 and cuspid/bicuspid canals to size 80. Our findings show that 100% of the cuspid/bicuspid canals and 81.5% of the molar canals were rendered bacteria-free after the first instrumentation. The molar results improved to 89% after the second instrumentation. Of the molar mesial canals without a clinically detectable communication (59.3%), 93% were rendered bacteria-free with the first instrumentation. Using a Wilcoxon rank sum test, statistically significant differences (p < 0.0001) were found between the initial sample and the samples taken after the first and second instrumentations. The difference between the samples that followed the two instrumentation regimens was not significant (p = 0.0617). It is concluded that simple root canal systems (without multiple canal communications) may be rendered bacteria-free when preparation of this type is utilized.
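
    As a reminder of the statistic used above, the following Python sketch runs a Wilcoxon rank sum test on made-up colony counts from initial and post-instrumentation samples; the numbers are illustrative, not data from the study.

      import numpy as np
      from scipy.stats import ranksums

      rng = np.random.default_rng(0)
      initial_cfu = rng.lognormal(mean=10.0, sigma=1.5, size=40)   # hypothetical counts at access
      post_first = rng.lognormal(mean=3.0, sigma=2.0, size=40)     # hypothetical counts after first instrumentation
      stat, p_value = ranksums(initial_cfu, post_first)
      print(f"Wilcoxon rank sum statistic = {stat:.2f}, p = {p_value:.2g}")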

  14. The Impact of Accelerating Faster than Exponential Population Growth on Genetic Variation

    PubMed Central

    Reppell, Mark; Boehnke, Michael; Zöllner, Sebastian

    2014-01-01

    Current human sequencing projects observe an abundance of extremely rare genetic variation, suggesting recent acceleration of population growth. To better understand the impact of such accelerating growth on the quantity and nature of genetic variation, we present a new class of models capable of incorporating faster than exponential growth in a coalescent framework. Our work shows that such accelerated growth affects only the population size in the recent past and thus large samples are required to detect the models’ effects on patterns of variation. When we compare models with fixed initial growth rate, models with accelerating growth achieve very large current population sizes and large samples from these populations contain more variation than samples from populations with constant growth. This increase is driven almost entirely by an increase in singleton variation. Moreover, linkage disequilibrium decays faster in populations with accelerating growth. When we instead condition on current population size, models with accelerating growth result in less overall variation and slower linkage disequilibrium decay compared to models with exponential growth. We also find that pairwise linkage disequilibrium of very rare variants contains information about growth rates in the recent past. Finally, we demonstrate that models of accelerating growth may substantially change estimates of present-day effective population sizes and growth times. PMID:24381333
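
    The qualitative point about accelerating growth can be illustrated numerically: if both models are forced to reach the same present-day population size, the super-exponential trajectory stays smaller for most of the past and only balloons very recently. The functional form and parameters in this Python sketch are illustrative assumptions, not the authors' coalescent parameterization.

      import numpy as np

      t = np.linspace(0.0, 100.0, 11)        # generations since growth onset
      N0, r, beta = 1.0e4, 0.05, 1.5         # assumed initial size, rate, acceleration exponent

      N_exponential = N0 * np.exp(r * t)
      # rescale the accelerating model so both trajectories reach the same present-day size
      N_accelerating = N0 * np.exp(r * t**beta / t[-1]**(beta - 1))
      for ti, ne, na in zip(t, N_exponential, N_accelerating):
          print(f"t={ti:5.1f}  exponential={ne:12.0f}  accelerating={na:12.0f}")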

  15. The impact of accelerating faster than exponential population growth on genetic variation.

    PubMed

    Reppell, Mark; Boehnke, Michael; Zöllner, Sebastian

    2014-03-01

    Current human sequencing projects observe an abundance of extremely rare genetic variation, suggesting recent acceleration of population growth. To better understand the impact of such accelerating growth on the quantity and nature of genetic variation, we present a new class of models capable of incorporating faster than exponential growth in a coalescent framework. Our work shows that such accelerated growth affects only the population size in the recent past and thus large samples are required to detect the models' effects on patterns of variation. When we compare models with fixed initial growth rate, models with accelerating growth achieve very large current population sizes and large samples from these populations contain more variation than samples from populations with constant growth. This increase is driven almost entirely by an increase in singleton variation. Moreover, linkage disequilibrium decays faster in populations with accelerating growth. When we instead condition on current population size, models with accelerating growth result in less overall variation and slower linkage disequilibrium decay compared to models with exponential growth. We also find that pairwise linkage disequilibrium of very rare variants contains information about growth rates in the recent past. Finally, we demonstrate that models of accelerating growth may substantially change estimates of present-day effective population sizes and growth times.

  16. Myocardial Infarct Size by CMR in Clinical Cardioprotection Studies: Insights From Randomized Controlled Trials.

    PubMed

    Bulluck, Heerajnarain; Hammond-Haley, Matthew; Weinmann, Shane; Martinez-Macias, Roberto; Hausenloy, Derek J

    2017-03-01

    The aim of this study was to review randomized controlled trials (RCTs) using cardiac magnetic resonance (CMR) to assess myocardial infarct (MI) size in reperfused patients with ST-segment elevation myocardial infarction (STEMI). There is limited guidance on the use of CMR in clinical cardioprotection RCTs in patients with STEMI treated by primary percutaneous coronary intervention. All RCTs in which CMR was used to quantify MI size in patients with STEMI treated with primary percutaneous coronary intervention were identified and reviewed. Sixty-two RCTs (10,570 patients, January 2006 to November 2016) were included. One-third did not report CMR vendor or scanner strength, the contrast agent and dose used, and the MI size quantification technique. Gadopentetate dimeglumine was most commonly used, followed by gadoterate meglumine and gadobutrol at 0.20 mmol/kg each, with late gadolinium enhancement acquired at 10 min; in most RCTs, MI size was quantified manually, followed by the 5 standard deviation threshold; dropout rates were 9% for acute CMR only and 16% for paired acute and follow-up scans. Weighted mean acute and chronic MI sizes (≤12 h, initial TIMI [Thrombolysis in Myocardial Infarction] flow grade 0 to 3) from the control arms were 21 ± 14% and 15 ± 11% of the left ventricle, respectively, and could be used for future sample-size calculations. Pre-selecting patients most likely to benefit from the cardioprotective therapy (≤6 h, initial TIMI flow grade 0 or 1) reduced sample size by one-third. Other suggested recommendations for standardizing CMR in future RCTs included gadobutrol at 0.15 mmol/kg with late gadolinium enhancement at 15 min, manual or 6-SD threshold for MI quantification, performing acute CMR at 3 to 5 days and follow-up CMR at 6 months, and adequate reporting of the acquisition and analysis of CMR. There is significant heterogeneity in RCT design using CMR in patients with STEMI. The authors provide recommendations for standardizing the assessment of MI size using CMR in future clinical cardioprotection RCTs. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
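
    For readers who want to reuse the pooled control-arm figures quoted above, the sketch below shows the kind of sample-size calculation they enable; the 5-percentage-point treatment effect is an assumed target chosen only for illustration, and the calculation relies on the statsmodels power routines.

      from statsmodels.stats.power import TTestIndPower

      control_mean, control_sd = 21.0, 14.0   # acute MI size, % of LV, pooled control arms
      assumed_reduction = 5.0                 # hypothetical absolute reduction in MI size
      effect_size = assumed_reduction / control_sd
      n_per_arm = TTestIndPower().solve_power(effect_size=effect_size, alpha=0.05, power=0.80)
      print(f"Cohen's d = {effect_size:.2f}; about {n_per_arm:.0f} patients per arm before allowing for dropout")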

  17. Height and seasonal growth pattern of jack pine full-sib families

    Treesearch

    Don E. Riemenschneider

    1981-01-01

    Total tree height, seasonal shoot elongation, dates of growth initiation and cessation, and mean daily growth rate were measured and analyzed for a population of jack pine full-sib families derived from inter-provenance crosses. Parental provenance had no effect on these variables although this may have been due to small sample size. Progenies differed significantly...

  18. Accuracy of Bayes and Logistic Regression Subscale Probabilities for Educational and Certification Tests

    ERIC Educational Resources Information Center

    Rudner, Lawrence

    2016-01-01

    In the machine learning literature, it is commonly accepted as fact that as calibration sample sizes increase, Naïve Bayes classifiers initially outperform Logistic Regression classifiers in terms of classification accuracy. Applied to subtests from an on-line final examination and from a highly regarded certification examination, this study shows…
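
    The kind of comparison described above can be reproduced in a few lines: train both classifiers on increasing calibration samples and compare test accuracy. The synthetic data in this sketch merely stand in for the examination response data used in the study.

      from sklearn.datasets import make_classification
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import train_test_split
      from sklearn.naive_bayes import GaussianNB

      X, y = make_classification(n_samples=20000, n_features=20, n_informative=8, random_state=0)
      X_pool, X_test, y_pool, y_test = train_test_split(X, y, test_size=5000, random_state=0)
      for n_train in (50, 200, 1000, 5000):
          Xt, yt = X_pool[:n_train], y_pool[:n_train]
          nb = GaussianNB().fit(Xt, yt).score(X_test, y_test)
          lr = LogisticRegression(max_iter=1000).fit(Xt, yt).score(X_test, y_test)
          print(f"n={n_train:5d}  NaiveBayes={nb:.3f}  LogisticRegression={lr:.3f}")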

  19. Adequacy of laser diffraction for soil particle size analysis

    PubMed Central

    Fisher, Peter; Aumann, Colin; Chia, Kohleth; O'Halloran, Nick; Chandra, Subhash

    2017-01-01

    Sedimentation has been a standard methodology for particle size analysis since the early 1900s. In recent years laser diffraction has begun to replace sedimentation as the preferred technique in some industries, such as marine sediment analysis. However, for the particle size analysis of soils, which have a diverse range of both particle size and shape, laser diffraction still requires evaluation of its reliability. In this study, the sedimentation-based sieve plummet balance method and the laser diffraction method were used to measure the particle size distribution of 22 soil samples representing four contrasting Australian Soil Orders. Initially, a precise wet riffling methodology was developed, capable of obtaining representative samples within the recommended obscuration range for laser diffraction. It was found that repeatable results were obtained even if measurements were made at the extreme ends of the manufacturer's recommended obscuration range. Results from statistical analysis suggested that the use of sample pretreatment to remove soil organic carbon (and possible traces of calcium carbonate) made minor differences to the laser diffraction particle size distributions compared to no pretreatment. These differences were found to be marginally statistically significant in the Podosol topsoil and Vertosol subsoil. There are well known reasons why sedimentation methods may be considered to 'overestimate' plate-like clay particles, while laser diffraction will 'underestimate' the proportion of clay particles. In this study we used Lin's concordance correlation coefficient to determine the equivalence of laser diffraction and sieve plummet balance results. The results suggested that the laser diffraction equivalent thresholds corresponding to the sieve plummet balance cumulative particle sizes of < 2 μm, < 20 μm, and < 200 μm were < 9 μm, < 26 μm, and < 275 μm, respectively. The many advantages of laser diffraction for soil particle size analysis, and the empirical results of this study, suggest that deployment of laser diffraction as a standard test procedure can provide reliable results, provided consistent sample preparation is used. PMID:28472043
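
    Lin's concordance correlation coefficient used above is simple to compute from paired measurements; a minimal Python sketch follows, with invented clay-fraction values standing in for the soil data.

      import numpy as np

      def lins_ccc(x, y):
          # Lin's concordance correlation coefficient for paired measurements x and y.
          x, y = np.asarray(x, float), np.asarray(y, float)
          cov = np.cov(x, y, bias=True)[0, 1]
          return 2.0 * cov / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

      sieve_plummet = [12.0, 25.0, 33.0, 41.0, 18.0, 55.0]      # hypothetical % < 2 um by sedimentation
      laser_diffraction = [8.0, 20.0, 27.0, 35.0, 14.0, 48.0]   # hypothetical % < 2 um by laser diffraction
      print(f"Lin's CCC = {lins_ccc(sieve_plummet, laser_diffraction):.3f}")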

  20. Room-temperature processing of CdSe quantum dots with tunable sizes

    NASA Astrophysics Data System (ADS)

    Joo, So-Yeong; Jeong, Da-Woon; Lee, Chan-Gi; Kim, Bum-Sung; Park, Hyun-Su; Kim, Woo-Byoung

    2017-06-01

    In this work, CdSe quantum dots (QDs) with tunable sizes have been fabricated via photo-induced chemical etching at room temperature, and the related reaction mechanism was investigated. The surface of the QDs was oxidized by the holes generated through photon irradiation of oxygen species, and the resulting oxide layer was dissolved in an aqueous solution of 3-amino-1-propanol (APOL) with an APOL:H2O volume ratio of 5:1. The generated electrons promoted QD surface interactions with amino groups, which ultimately passivated surface defects. The absorption and photoluminescence emission peaks of the produced QDs were clearly blue-shifted by about 26 nm with increasing etching time, and the quantum yield for an 8 h etched sample increased from 20% to 26% compared to the initial sample.

  1. Strain Amount Dependent Grain Size and Orientation Developments during Hot Compression of a Polycrystalline Nickel Based Superalloy

    PubMed Central

    He, Guoai; Tan, Liming; Liu, Feng; Huang, Lan; Huang, Zaiwang; Jiang, Liang

    2017-01-01

    Controlling grain size in polycrystalline nickel-based superalloys is vital for obtaining the required mechanical properties. Typically, a uniform and fine grain size is required throughout the forging process to realize superplastic deformation. Strain amount plays a dominant role in driving the dynamic recrystallization (DRX) process and regulating the grain size of the alloy during hot forging. In this article, a high-throughput double cone specimen was introduced to yield a wide range of strain in a single sample. Continuous variations of effective strain ranging from 0.23 to 1.65 across the whole sample were achieved after reaching a height reduction of 70%. Grain size was measured to decrease from the edge to the center of the specimen with increasing effective strain. Small misorientations tended to develop near the grain boundaries, manifested as piled-up dislocations in micromechanical terms. After the dislocation density reached a critical value, DRX initiated in the higher-deformation regions, leading to refinement of the grain size. During this process, transformations from low angle grain boundaries (LAGBs) to high angle grain boundaries (HAGBs) and from subgrains to DRX grains were found to occur. After completion of DRX, the newly formed grains show similar orientations inside the grain boundaries. PMID:28772514

  2. Particle Size Effects on CL-20 Initiation and Detonation

    NASA Astrophysics Data System (ADS)

    Valancius, Cole; Bainbridge, Joe; Love, Cody; Richardson, Duane

    2017-06-01

    Particle size, or specific surface area, effects on explosives have been of interest to the explosives community for both application and modeling of initiation and detonation. Different particle sizes of CL-20 were used in detonator experiments to determine the effects of particle size on initiation, run-up to steady-state detonation, and steady-state detonation. Historical tests have demonstrated a direct relationship between particle size and initiation. However, historical tests inadvertently employed density gradients, making it difficult to discern the effects of particle size from the effects of density. Density gradients were removed from these tests by using a larger-diameter, shorter charge column, allowing for similar loading across different particle sizes. Without the density gradient, the effects of particle size on initiation and detonation are easier to determine. The results contrast with historical results, showing that particle size does not directly affect the initiation threshold.

  3. Synthesis, surface modification and characterisation of biocompatible magnetic iron oxide nanoparticles for biomedical applications.

    PubMed

    Mahdavi, Mahnaz; Ahmad, Mansor Bin; Haron, Md Jelas; Namvar, Farideh; Nadi, Behzad; Rahman, Mohamad Zaki Ab; Amin, Jamileh

    2013-06-27

    Superparamagnetic iron oxide nanoparticles (MNPs) with appropriate surface chemistry exhibit many interesting properties that can be exploited in a variety of biomedical applications such as magnetic resonance imaging contrast enhancement, tissue repair, hyperthermia, drug delivery and cell separation. These applications require MNPs, such as iron oxide (Fe₃O₄) magnetic nanoparticles (Fe₃O₄ MNPs), with high magnetization values and particle sizes smaller than 100 nm. This paper reports the experimental details for the preparation of monodisperse oleic acid (OA)-coated Fe₃O₄ MNPs by the chemical co-precipitation method, determining the optimum pH, initial temperature and stirring speed needed to obtain MNPs with the small particle size and size distribution required for biomedical applications. The obtained nanoparticles were characterized by Fourier transform infrared spectroscopy (FTIR), transmission electron microscopy (TEM), scanning electron microscopy (SEM), energy dispersive X-ray fluorescence spectrometry (EDXRF), thermogravimetric analysis (TGA), X-ray powder diffraction (XRD), and vibrating sample magnetometry (VSM). The results show that the particle size as well as the magnetization of the MNPs was very much dependent on pH, the initial temperature of the Fe²⁺ and Fe³⁺ solutions, and the stirring speed. Monodisperse Fe₃O₄ MNPs coated with oleic acid, with a size of 7.8 ± 1.9 nm, were successfully prepared at an optimum pH of 11, an initial temperature of 45°C and a stirring rate of 800 rpm. FTIR and XRD data reveal that the oleic acid molecules were adsorbed on the magnetic nanoparticles by chemisorption. TEM analyses show that the oleic acid provided the Fe₃O₄ particles with better dispersibility. The synthesized Fe₃O₄ nanoparticles exhibited superparamagnetic behavior, and the saturation magnetization of the Fe₃O₄ nanoparticles increased with particle size.

  4. Effect of ultrasonic treatment and temperature on nanocrystalline TiO₂

    NASA Astrophysics Data System (ADS)

    Kim, D. H.; Ryu, H. W.; Moon, J. H.; Kim, J.

    Nanocrystalline TiO₂ particles were precipitated from an ethanol solution of titanium isopropoxide (Ti(O-iPr)₄) and H₂O₂ by refluxing at 80 °C for 48 h. The obtained particles were filtered and dried at 100 °C for 12 h. The dried powder itself, a sample heated at 400 °C, and an ultrasonically treated sample were prepared to investigate the effects of post-treatments on the materials characteristics and electrochemical properties of nanocrystalline TiO₂. The X-ray diffraction patterns of all of the samples were fitted well to the anatase phase. The field emission TEM image of the as-prepared sample shows a uniform spherical morphology with a 5 nm particle size, and the sample heated at 400 °C shows a slightly increased particle size of about 10 nm while maintaining the spherical shape. The sample treated ultrasonically for 5 h or more at room temperature shows a high-aspect-ratio particle shape with an average diameter of 5 nm and a length of 20 nm. According to the results of the electrochemical testing, the as-prepared sample, the sample heated at 400 °C for 3 h, and the ultrasonically treated sample show initial capacities of 270, 310 and 340 mAh g⁻¹, respectively.

  5. The influence of the compression interface on the failure behavior and size effect of concrete

    NASA Astrophysics Data System (ADS)

    Kampmann, Raphael

    The failure behavior of concrete materials is not completely understood because conventional test methods fail to assess the material response independent of the sample size and shape. To study the influence of strength- and strain-affecting test conditions, four typical concrete sample types were experimentally evaluated in uniaxial compression and analyzed for strength, deformational behavior, crack initiation/propagation, and fracture patterns under varying boundary conditions. Both low-friction and conventional compression interfaces were assessed. High-speed video technology was used to monitor macrocracking. Inferential data analysis showed reliably lower strength results for reduced surface friction at the compression interfaces, regardless of sample shape. Reciprocal comparisons revealed statistically significant strength differences between most sample shapes. Crack initiation and propagation were found to differ for dissimilar compression interfaces. The principal stress and strain distributions were analyzed, and the strain domain was found to resemble the experimental results, whereas the stress analysis failed to explain failure for reduced end confinement. Neither stresses nor strains indicated strength reductions due to reduced friction, and therefore buckling effects were considered. The high-speed video analysis revealed localized buckling phenomena, regardless of end confinement. Slender elements were the result of low friction, and stocky fragments developed under conventional confinement. The critical buckling load increased accordingly. The research showed that current test methods do not reflect the "true" compressive strength and that concrete failure is strain driven. Ultimate collapse results from buckling preceded by unstable cracking.

  6. Sample substitution can be an acceptable data-collection strategy: the case of the Belgian Health Interview Survey.

    PubMed

    Demarest, Stefaan; Molenberghs, Geert; Van der Heyden, Johan; Gisle, Lydia; Van Oyen, Herman; de Waleffe, Sandrine; Van Hal, Guido

    2017-11-01

    Substitution of non-participating households is used in the Belgian Health Interview Survey (BHIS) as a method to obtain the predefined net sample size. Yet the possible effects of applying substitution on response rates and health estimates remain uncertain. In this article, the process of substitution and its impact on response rates and health estimates are assessed. The response rates (RR), both at household and individual level, according to the sampling criteria were calculated for each stage of the substitution process, together with the individual accrual rate (AR). Unweighted and weighted health estimates were calculated before and after applying substitution. Of the 10,468 members of 4878 initial households, 5904 members (RRind: 56.4%) of 2707 households (RRhh: 55.5%) participated. For the three successive (matched) substitutes, the RR dropped to 45%. The composition of the net sample resembles that of the initial sample. Applying substitution did not produce any important distorting effects on the estimates. Applying substitution leads to an increase in non-participation, but does not affect the estimates.

  7. Size distributions of manure particles released under simulated rainfall.

    PubMed

    Pachepsky, Yakov A; Guber, Andrey K; Shelton, Daniel R; McCarty, Gregory W

    2009-03-01

    Manure and animal waste deposited on cropland and grazing lands serve as a source of microorganisms, some of which may be pathogenic. These microorganisms are released along with particles of dissolved manure during rainfall events. Relatively little if anything is known about the amounts and sizes of manure particles released during rainfall, which subsequently may serve as carriers, habitat, and nutritional sources for microorganisms. The objective of this work was to obtain and present the first experimental data on the sizes of bovine manure particles released to runoff during simulated rainfall and leached through soil during subsequent infiltration. Experiments were conducted using 200 cm long boxes containing turfgrass soil sod; the boxes were designed so that rates of manure dissolution and subsequent infiltration and runoff could be monitored independently. Dairy manure was applied on the upper portion of the boxes. Simulated rainfall (ca. 32.4 mm h⁻¹) was applied for 90 min on boxes with stands of either live or dead grass. Electrical conductivity, turbidity, and particle size distributions obtained from laser diffractometry were determined in manure runoff and soil leachate samples. Turbidity of leachates and manure runoff samples decreased exponentially. Turbidity of manure runoff samples was on average 20% less than turbidity of soil leachate samples. Turbidity of leachate samples from boxes with dead grass was on average 30% less than from boxes with live grass. Particle size distributions in manure runoff and leachate suspensions remained remarkably stable after 15 min of runoff initiation, although the turbidity continued to decrease. Particles had a median diameter of 3.8 μm, and 90% of particles were between 0.6 and 17.8 μm. The particle size distributions were not affected by the grass status. Because manure particles are known to affect the transport and retention of microbial pathogens in soil, more information needs to be collected about the concurrent release of pathogens and manure particles during rainfall events.

  8. Particulate, colloidal, and dissolved-phase associations of plutonium and americium in a water sample from well 1587 at the Rocky Flats Plant, Colorado

    USGS Publications Warehouse

    Harnish, R.A.; McKnight, Diane M.; Ranville, James F.

    1994-01-01

    In November 1991, the initial phase of a study to determine the dominant aqueous phases that control the transport of plutonium (Pu), americium (Am), and uranium (U) in surface and groundwater at the Rocky Flats Plant was undertaken by the U.S. Geological Survey. By use of the techniques of stirred-cell spiral-flow filtration and crossflow ultrafiltration, particles of three size fractions were collected from a 60-liter sample of water from well 1587 at the Rocky Flats Plant. These samples and corresponding filtrate samples were analyzed for Pu and Am. As calculated from the analysis of filtrates, 65 percent of Pu 239 and 240 activity in the sample was associated with particulate and largest colloidal size fractions. Particulate (22 percent) and colloidal (43 percent) fractions were determined to have significant activities in relation to whole-water Pu activity. Am and Pu 238 activities were too low to be analyzed. Examination and analyses of the particulate and colloidal phases indicated the presence of mineral species (iron oxyhydroxides and clay minerals) and natural organic matter that can facilitate the transport of actinides in ground water. High concentrations of the transition metals copper and zinc in the smallest colloid fractions strongly indicate a potential for organic complexation of metals, and potentially of actinides, in this size fraction.

  9. Determining the risk of cardiovascular disease using ion mobility of lipoproteins

    DOEpatents

    Benner, W. Henry; Krauss, Ronald M.; Blanche, Patricia J.

    2010-05-11

    A medical diagnostic method and instrumentation system for analyzing noncovalently bonded agglomerated biological particles is described. The method and system comprises: a method of preparation for the biological particles; an electrospray generator; an alpha particle radiation source; a differential mobility analyzer; a particle counter; and data acquisition and analysis means. The medical device is useful for the assessment of human diseases, such as cardiac disease risk and hyperlipidemia, by rapid quantitative analysis of lipoprotein fraction densities. Initially, purification procedures are described to reduce an initial blood sample to an analytical input to the instrument. The measured sizes from the analytical sample are correlated with densities, resulting in a spectrum of lipoprotein densities. The lipoprotein density distribution can then be used to characterize cardiac and other lipid-related health risks.

  10. An Analysis of Respondent Driven Sampling with Injection Drug Users (IDU) in Albania and the Russian Federation

    PubMed Central

    Stormer, Ame; Tun, Waimar; Harxhi, Arjan; Bodanovskaia, Zinaida; Yakovleva, Anna; Rusakova, Maia; Levina, Olga; Bani, Roland; Rjepaj, Klodian; Bino, Silva

    2006-01-01

    Injection drug users in Tirana, Albania and St. Petersburg, Russia were recruited into a study assessing HIV-related behaviors and HIV serostatus using Respondent Driven Sampling (RDS), a peer-driven recruitment sampling strategy that results in a probability sample. (Salganik M, Heckathorn DD. Sampling and estimation in hidden populations using respondent-driven sampling. Sociol Method. 2004;34:193–239). This paper presents a comparison of RDS implementation, findings on network and recruitment characteristics, and lessons learned. Initiated with 13 to 15 seeds, approximately 200 IDUs were recruited within 8 weeks. Information resulting from RDS indicates that social network patterns from the two studies differ greatly. Female IDUs in Tirana had smaller network sizes than male IDUs, unlike in St. Petersburg where female IDUs had larger network sizes than male IDUs. Recruitment patterns in each country also differed by demographic categories. Recruitment analyses indicate that IDUs form socially distinct groups by sex in Tirana, whereas there was a greater degree of gender mixing patterns in St. Petersburg. RDS proved to be an effective means of surveying these hard-to-reach populations. PMID:17075727

  11. Probability Sampling Method for a Hidden Population Using Respondent-Driven Sampling: Simulation for Cancer Survivors.

    PubMed

    Jung, Minsoo

    2015-01-01

    When there is no sampling frame within a certain group, or the group is concerned that making its population public would bring social stigma, we say the population is hidden. Such populations are difficult to approach with standard survey methodology because the response rate is low and members are not quite honest in their responses when probability sampling is used. The only alternative known to address the problems caused by previous methods such as snowball sampling is respondent-driven sampling (RDS), which was developed by Heckathorn and his colleagues. RDS is based on a Markov chain and uses the social network information of the respondent. This characteristic allows for probability sampling when we survey a hidden population. We verified through computer simulation whether RDS can be used on a hidden population of cancer survivors. According to the simulation results of this thesis, the dependence of the chain-referral sample on its starting points tends to diminish as the sample gets bigger, and the sample composition stabilizes as the waves progress. Therefore, the final sample can be effectively independent of the initial seeds if a certain sample size is secured, even if the initial seeds were selected through convenience sampling. Thus, RDS can be considered an alternative that can improve upon both key informant sampling and ethnographic surveys, and it should be applied to a variety of cases domestically as well.
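
    The behavior described above, the sample composition drifting away from the seeds as waves accumulate, can be illustrated with a toy chain-referral simulation on a synthetic network; the network model, the 20% trait prevalence, and the three-coupon rule are all assumptions made for this sketch, not features of the published simulation.

      import random
      import networkx as nx

      random.seed(1)
      G = nx.barabasi_albert_graph(5000, 4)              # stand-in social network
      trait = {n: (n % 5 == 0) for n in G.nodes}         # arbitrary 20% "trait" to track

      wave = random.sample(list(G.nodes), 10)            # convenience-sampled seeds
      recruited = set(wave)
      for w in range(6):
          share = sum(trait[n] for n in wave) / len(wave)
          print(f"wave {w}: size={len(wave):4d}  trait share={share:.2f}")
          nxt = []
          for n in wave:
              peers = [p for p in G.neighbors(n) if p not in recruited]
              for p in random.sample(peers, min(3, len(peers))):   # three coupons per respondent
                  recruited.add(p)
                  nxt.append(p)
          if not nxt:
              break
          wave = nxt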

  12. Shallow conduit processes of the 1991 Hekla eruption, Iceland

    NASA Astrophysics Data System (ADS)

    Gudnason, J.; Thordarson, T.; Houghton, B. F.

    2013-12-01

    On January 17, 1991 at 17:00 hrs, the 17th eruption of Hekla since 1104 AD began. Lasting for almost two months, it produced 0.02 km3 of icelandite tephra and ~0.15 km3 of icelandite lava. This eruption was the third of four eruptions since 1980 with a recurrence period of approximately 10 years, as opposed to a recurrence interval of c. 55 years for the eruptions in the period 1104 AD to 1947 AD [1]. The last four Hekla eruptions are typified by a 0.5-2 hour-long initial phase of subplinian intensity and discharge ranging from 2900-6700 m3/s [2]. In all four events the initial phase was followed by a sustained and relatively low-discharge (<20 m3/s) effusive phase, which in the case of Hekla 1991 lasted until 11 March 1991 [1]. The initial phase of the 1991 event lasted for ~50 minutes and sustained an eruption plume that rose to 11.5 km in about 10 minutes [1]. The plume was dispersed to the NNE at velocities of 60-70 km/hr, producing a well-sorted tephra fall covering >20,000 km2. Here we examine the first phase of the Hekla 1991 eruption with focus on vesiculation and fragmentation processes in the shallow conduit and on ash production. Samples of the tephra fall were collected on snow immediately after the initial phase at multiple sites, providing representative spatial coverage within the 0.1 mm isopach [3]. This set was augmented by samples collected in 2012 to provide tighter coverage of the near-vent region. Grain size of all samples has been measured down to 1 micron. Density measurements have been conducted on 4 near-vent pumice samples (100 clasts each) and the pumice vesicle size distribution has been determined in a selected subset of clasts. The reconstructed whole-deposit grain size distribution exhibits a unimodal, log-normal distribution peaking at -3 phi, typical of dry, magmatic fragmentation. Pumice densities range from 520-880 kg/m3 and exhibit a tight unimodal and log-normal distribution, indicating a mean vesicularity of 77% to 79% for the magma erupted during the initial phase. Along with preliminary results for bubble number density and vesicle size distribution, this implies a single late-stage homogeneous bubble nucleation and very uniform conditions of magma fragmentation during this short-lived initial phase of the Hekla 1991 eruption. 1. Gudmundsson, A., et al., The 1991 eruption of Hekla, Iceland. Bulletin of Volcanology, 1992. 54(3): p. 238-246. 2. Höskuldsson, Á., Óskarsson, N., Pedersen, R., Grönvold, K., Vogfjörd, K. & Ólafsdóttir, R. 2007. The millennium eruption of Hekla in February 2000. Bull Volcanol, 70:169-182. 3. Larsen, G., E.G. Vilmundardóttir, and B. Thorkelsson, Heklugosid 1991: Gjóskufall og gjóskulagid frá fyrsta degi gossins. Náttúrufrædingurinn, 1992. 61(3-4): p. 159-176.
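
    For readers unused to the phi grain-size scale quoted above, diameter and phi are related by d = 2^(-phi) mm, so a mode at -3 phi corresponds to 8 mm clasts; a one-line check in Python:

      for phi in (-3, 0, 4):
          print(f"phi = {phi:+d}  ->  d = {2.0 ** (-phi):g} mm")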

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Fangyan; Zhang, Song; Chung Wong, Pak

    Effectively visualizing large graphs and capturing the statistical properties are two challenging tasks. To aid in these two tasks, many sampling approaches for graph simplification have been proposed, falling into three categories: node sampling, edge sampling, and traversal-based sampling. It is still unknown which approach is the best. We evaluate commonly used graph sampling methods through a combined visual and statistical comparison of graphs sampled at various rates. We conduct our evaluation on three graph models: random graphs, small-world graphs, and scale-free graphs. Initial results indicate that the effectiveness of a sampling method is dependent on the graph model, the size of the graph, and the desired statistical property. This benchmark study can be used as a guideline in choosing the appropriate method for a particular graph sampling task, and the results presented can be incorporated into graph visualization and analysis tools.
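
    The three sampling families named above are easy to sketch with networkx: the snippet below applies node, edge, and random-walk (traversal-based) sampling to one small-world test graph and compares a single statistic. The graph model, sampling rate, and statistic are arbitrary illustrative choices, not the benchmark's settings.

      import random
      import networkx as nx

      random.seed(0)
      G = nx.connected_watts_strogatz_graph(2000, 10, 0.1)   # small-world test graph
      rate = 0.2
      k = int(rate * G.number_of_nodes())

      node_sample = G.subgraph(random.sample(list(G.nodes), k))                                   # node sampling
      edge_sample = nx.Graph(random.sample(list(G.edges), int(rate * G.number_of_edges())))       # edge sampling
      current, visited = random.choice(list(G.nodes)), set()                                      # traversal: random walk
      while len(visited) < k:
          visited.add(current)
          current = random.choice(list(G.neighbors(current)))
      walk_sample = G.subgraph(visited)

      for name, H in [("full", G), ("node", node_sample), ("edge", edge_sample), ("walk", walk_sample)]:
          print(f"{name:5s} nodes={H.number_of_nodes():5d}  clustering={nx.average_clustering(H):.3f}")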

  14. Economic Analysis of Job-Related Attributes in Undergraduate Students' Initial Job Selection

    ERIC Educational Resources Information Center

    Jin, Yanhong H.; Mjelde, James W.; Litzenberg, Kerry K.

    2014-01-01

    Economic tradeoffs students place on location, salary, distances to natural resource amenities, size of the city where the job is located, and commuting times for their first college graduate job are estimated using a mixed logit model for a sample of Texas A&M University students. The Midwest is the least preferred area having a mean salary…

  15. Impact of rail pressure and biodiesel fueling on the particulate morphology and soot nanostructures from a common-rail turbocharged direct injection diesel engine

    DOE PAGES

    Ye, Peng; Vander Wal, Randy; Boehman, Andre L.; ...

    2014-12-26

    The effect of rail pressure and biodiesel fueling on the morphology of exhaust particulate agglomerates and the nanostructure of primary particles (soot) was investigated with a common-rail turbocharged direct injection diesel engine. The engine was operated at steady state on a dynamometer running at moderate speed with both low (30%) and medium–high (60%) fixed loads, and exhaust particulate was sampled for analysis. Ultra-low sulfur diesel and its 20% v/v blends with soybean methyl ester biodiesel were used. Fuel injection occurred in a single event around top dead center at three different injection pressures. Exhaust particulate samples were characterized with TEM imaging, scanning mobility particle sizing, thermogravimetric analysis, Raman spectroscopy, and XRD analysis. Particulate morphology and oxidative reactivity were found to vary significantly with rail pressure and with biodiesel blend level. Higher biodiesel content led to increases in the primary particle size and oxidative reactivity but did not affect nanoscale disorder in the as-received samples. For particulates generated with higher injection pressures, the initial oxidative reactivity increased, but there was no detectable correlation with primary particle size or nanoscale disorder.

  16. Impact of rail pressure and biodiesel fueling on the particulate morphology and soot nanostructures from a common-rail turbocharged direct injection diesel engine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ye, Peng; Vander Wal, Randy; Boehman, Andre L.

    The effect of rail pressure and biodiesel fueling on the morphology of exhaust particulate agglomerates and the nanostructure of primary particles (soot) was investigated with a common-rail turbocharged direct injection diesel engine. The engine was operated at steady state on a dynamometer running at moderate speed with both low (30%) and medium–high (60%) fixed loads, and exhaust particulate was sampled for analysis. Ultra-low sulfur diesel and its 20% v/v blends with soybean methyl ester biodiesel were used. Fuel injection occurred in a single event around top dead center at three different injection pressures. Exhaust particulate samples were characterized with TEM imaging, scanning mobility particle sizing, thermogravimetric analysis, Raman spectroscopy, and XRD analysis. Particulate morphology and oxidative reactivity were found to vary significantly with rail pressure and with biodiesel blend level. Higher biodiesel content led to increases in the primary particle size and oxidative reactivity but did not affect nanoscale disorder in the as-received samples. For particulates generated with higher injection pressures, the initial oxidative reactivity increased, but there was no detectable correlation with primary particle size or nanoscale disorder.

  17. Laser scattering method applied to determine the concentration of alfa 1-antitrypsin

    NASA Astrophysics Data System (ADS)

    Riquelme, Bibiana D.; Foresto, Patricia; Valverde, Juana R.; Rasia, Rodolfo J.

    2000-04-01

    An optical method has been developed to determine unknown α1-antitrypsin concentrations in human serum samples. The method exploits the light-scattering properties of the initially formed enzyme-inhibitor complexes and uses the curves of aggregation kinetics. It is independent of molecular hydrodynamics. Theoretical approaches showed that the scattering properties of the transient complexes obey the Rayleigh-Debye conditions. Experiments were performed on the trypsin/α1-antitrypsin system. Measurements were performed on newborn, adult and pregnant sera containing α1-antitrypsin, in the trypsin-excess region. The solution was excited by a He-Ne laser beam, so the particles formed during the reaction act as scattering centers for the incident light. The intensity of the light scattered at 90 degrees from the incident beam depends on the nature of those scattering centers. The rate of increase in scattered intensity depends on the variation in size and shape of the scatterers, being independent of their original size. Peak values of the first derivative correlate linearly with the concentration of α1-antitrypsin originally present in the sample. Results are displayed 5 minutes after the initiation of the experimental process. Such speed is of great importance in immuno-biochemical determinations.
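
    The data-reduction step described above, locating the peak of the first derivative of the scattered-intensity time course, is straightforward; in this Python sketch the saturating-exponential kinetics are a stand-in for real turbidimetric curves, with arbitrary relative concentrations.

      import numpy as np

      t = np.linspace(0.0, 300.0, 601)                             # seconds
      for conc in (0.5, 1.0, 2.0):                                 # arbitrary relative inhibitor levels
          intensity = conc * (1.0 - np.exp(-t * conc / 60.0))      # faster, larger rise at higher concentration
          peak_rate = np.gradient(intensity, t).max()
          print(f"relative concentration {conc:.1f}: peak dI/dt = {peak_rate:.4f}")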

  18. New applications of X-ray tomography in pyrolysis of biomass: Biochar imaging

    DOE PAGES

    Jones, Keith; Ramakrishnan, Girish; Uchimiya, Minori; ...

    2015-01-30

    We report on the first use of non-destructive micrometer-scale synchrotron-computed microtomography (CMT) for biochar material characterization as a function of pyrolysis temperature. This approach demonstrated an increase in the micron-sized macropore fraction of the Cotton Hull (CH) sample, resulting in up to 29% sample porosity. We also found that initial porosity development occurred at low pyrolysis temperatures (below 350°C), consistent with the chemical composition of CH. This technique can be highly complementary to traditional BET measurements, considering that Barrett–Joyner–Halenda (BJH) analysis of pore size distribution cannot detect these macropores. Such information can be of substantial relevance to environmental applications, given that water retention by biochars added to soils is controlled by macropore characteristics, among other factors. In addition, complementing our data with SEM, EDX, and XRF characterization techniques allowed us to develop a better understanding of the evolution of biochar properties during production, such as the presence of metals and the initial morphological features of biochar before pyrolysis. These results have significant implications for using biochar as a soil additive and for clarifying the mechanisms of biofuel production by pyrolysis.

  19. New applications of X-ray tomography in pyrolysis of biomass: Biochar imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, Keith; Ramakrishnan, Girish; Uchimiya, Minori

    We report on the first use of non-destructive micrometer-scale synchrotron-computed microtomography (CMT) for biochar material characterization as a function of pyrolysis temperature. This approach demonstrated an increase in the micron-sized macropore fraction of the Cotton Hull (CH) sample, resulting in up to 29% sample porosity. We also found that initial porosity development occurred at low pyrolysis temperatures (below 350°C), consistent with the chemical composition of CH. This technique can be highly complementary to traditional BET measurements, considering that Barrett–Joyner–Halenda (BJH) analysis of pore size distribution cannot detect these macropores. Such information can be of substantial relevance to environmental applications, given that water retention by biochars added to soils is controlled by macropore characteristics, among other factors. In addition, complementing our data with SEM, EDX, and XRF characterization techniques allowed us to develop a better understanding of the evolution of biochar properties during production, such as the presence of metals and the initial morphological features of biochar before pyrolysis. These results have significant implications for using biochar as a soil additive and for clarifying the mechanisms of biofuel production by pyrolysis.

  20. New Measurements of the Particle Size Distribution of Apollo 11 Lunar Soil 10084

    NASA Technical Reports Server (NTRS)

    McKay, D.S.; Cooper, B.L.; Riofrio, L.M.

    2009-01-01

    We have initiated a major new program to determine the grain size distribution of nearly all lunar soils collected in the Apollo program. Following the return of Apollo soil and core samples, a number of investigators, including our own group, performed grain size distribution studies and published the results [1-11]. Nearly all of these studies were done by sieving the samples, usually with a working fluid such as Freon™ or water. We have measured the particle size distribution of lunar soil 10084,2005 in water, using a Microtrac™ laser diffraction instrument. Details of our own sieving technique and protocol (also used in [11]) are given in [4]. While sieving usually produces accurate and reproducible results, it has disadvantages. It is very labor intensive and requires hours to days to perform properly. Even using automated sieve shaking devices, four or five days may be needed to sieve each sample, although multiple sieve stacks increase productivity. Second, sieving is subject to loss of grains through handling and weighing operations, and these losses are concentrated in the finest grain sizes. Loss from handling becomes a more acute problem when smaller amounts of material are used. While we were able to quantitatively sieve into 6 or 8 size fractions using starting soil masses as low as 50 mg, attrition and handling problems limit the practicality of sieving smaller amounts. Third, sieving below 10 or 20 microns is not practical because of the problems of grain loss and of smaller grains sticking to coarser grains. Sieving is completely impractical below about 5-10 microns. Consequently, sieving gives no information on the size distribution below approximately 10 microns, which includes the important submicrometer and nanoparticle size ranges. Finally, sieving creates a limited number of size bins and may therefore miss fine structure of the distribution which would be revealed by other methods that produce many smaller size bins.

  1. Boron Nanoparticles with High Hydrogen Loading: Mechanism for B-H Binding, Size Reduction, and Potential for Improved Combustibility and Specific Impulse

    DTIC Science & Technology

    2014-05-01

    ...particles in the sample. Mass spectrometry was, therefore, used to look for the signature of boranes in the milling jar headspace gas, and also in gases... headspace gas collected from the jar after milling in H2. For this experiment, argon was added to the initial gas mixture at a 12:1 H2:Ar ratio, in... Mass spectrometry analysis. After milling selected samples, headspace gas...

  2. Pb-Pb systematics of lunar rocks: differentiation, magmatic and impact history of the Moon

    NASA Astrophysics Data System (ADS)

    Nemchin, A.; Martin, W.; Norman, M. D.; Snape, J.; Bellucci, J. J.; Grange, M.

    2016-12-01

    Two independent decay chains in the U-Pb system allow the determination of both ages and initial isotope compositions by analyzing only Pb in the samples. A typical Pb analysis represents a mixture of radiogenic Pb produced from in situ U decay, initial Pb, and laboratory contamination. Utilizing the ability of ion probes to analyse 10-30 micrometer-sized spots in the samples, while avoiding fractures and other imperfections that commonly host contamination, permits extraction of pure lunar Pb compositions from the three-component mixtures. This results in both accurate and precise ages of the rocks and their initial compositions. Lunar Mare and KREEP basalts postdating the major lunar bombardment are likely to represent such three-component mixtures and are therefore appropriate for this approach, also giving an opportunity to investigate Pb evolution in their sources. A source evolution model constrained using available data indicates a major differentiation event on the Moon at 4376±18 Ma and a very radiogenic lunar mantle at that time. This age is likely to reflect the mean time of KREEP formation during the last stage of Magma Ocean differentiation. Rocks older than about 3.9 Ga are more complex than basalts and may include an extra Pb component if modified by impacts. An example of this is presented by Pb-Pb data obtained for the anorthosite sample 62236, where the age of the rock is determined as 4367±29 Ma from analyses of CPx lamellae inside the large Opx grains; however, large plagioclase crystals do not contain Pb in quantities sufficient for ion probe analysis, precluding determination of the initial Pb composition of the sample. Most of the Pb is found in the brecciated parts of the anorthosite between the large grains. The composition of this Pb is similar to the initial Pb of the 3909±17 Ma Apollo 16 breccia 66095, suggesting that it was injected into the anorthosite during a 3.9 Ga impact. Similar ca. 3.9 Ga ages were determined for 1-2 millimeter size feldspathic clasts from several Apollo 14 breccias, where they are likely to date Pb homogenization during the Imbrium impact. Combined with U-Pb data obtained previously using U-bearing minerals such as zircon and phosphates, the new Pb-Pb data sets open an opportunity for a detailed chronological and isotopic investigation of lunar differentiation, magmatic evolution and impact history.
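
    As a reminder of the arithmetic that makes Pb-only dating possible, the radiogenic 207Pb*/206Pb* ratio is a monotonic function of time through the two uranium decay constants and can be inverted numerically; the measured ratio in this Python sketch is an illustrative value, not a datum from the abstract.

      import numpy as np
      from scipy.optimize import brentq

      LAMBDA_238 = 1.55125e-10   # 1/yr
      LAMBDA_235 = 9.8485e-10    # 1/yr
      U238_U235 = 137.88         # present-day 238U/235U

      def pb207_pb206(t_years):
          # radiogenic 207Pb*/206Pb* accumulated over t_years
          return (np.expm1(LAMBDA_235 * t_years) / np.expm1(LAMBDA_238 * t_years)) / U238_U235

      observed_ratio = 0.548     # hypothetical radiogenic 207Pb*/206Pb*
      age = brentq(lambda t: pb207_pb206(t) - observed_ratio, 1.0e9, 4.6e9)
      print(f"Pb-Pb model age ~ {age / 1e9:.2f} Ga")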

  3. Leaching behavior of U, Mn, Sr, and Pb from different particle-size fractions of uranium mill tailings.

    PubMed

    Liu, Bo; Peng, Tongjiang; Sun, Hongjuan

    2017-06-01

    Pollution by the release of heavy metals from tailings constitutes a potential threat to the environment. To characterize the processes governing the release of Mn, Sr, Pb, and U from uranium mill tailings, a dynamic leaching test was applied to tailings samples of different particle-size fractions. Inductively coupled plasma atomic emission spectroscopy (ICP-AES) and inductively coupled plasma mass spectrometry (ICP-MS) were performed to determine the content of Mn, Sr, Pb, and U in the leachates. The release of the mobile Mn, Sr, Pb, and U fractions was slow, being faster in the initial stage and then attaining a near steady-state condition. The experimental results demonstrate that the release of Mn, Sr, Pb, and U from uranium mill tailings of different size fractions is controlled by a variety of mechanisms. Surface wash-off is the release mechanism for Mn. The main release mechanism for Sr and Pb is dissolution in the initial leaching stage. For U, a mixed process of wash-off and diffusion is the controlling mechanism.
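
    One common way to separate the mechanisms invoked above is to fit the cumulative leached fraction to a diffusion-type square-root-of-time law and to a first-order dissolution law and compare the residuals; the leaching data in this Python sketch are invented for illustration.

      import numpy as np
      from scipy.optimize import curve_fit

      t = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 64.0])                # days
      released = np.array([0.05, 0.08, 0.11, 0.15, 0.21, 0.27, 0.33])     # cumulative fraction (hypothetical)

      diffusion = lambda t, k: k * np.sqrt(t)
      dissolution = lambda t, f, k: f * (1.0 - np.exp(-k * t))
      for name, model, p0 in (("diffusion", diffusion, (0.05,)), ("dissolution", dissolution, (0.4, 0.05))):
          params, _ = curve_fit(model, t, released, p0=p0)
          rss = float(np.sum((released - model(t, *params)) ** 2))
          print(f"{name:11s} fit: params={np.round(params, 3)}, residual SS={rss:.4f}")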

  4. Cast aluminium single crystals cross the threshold from bulk to size-dependent stochastic plasticity

    NASA Astrophysics Data System (ADS)

    Krebs, J.; Rao, S. I.; Verheyden, S.; Miko, C.; Goodall, R.; Curtin, W. A.; Mortensen, A.

    2017-07-01

    Metals are known to exhibit mechanical behaviour at the nanoscale different to bulk samples. This transition typically initiates at the micrometre scale, yet existing techniques to produce micrometre-sized samples often introduce artefacts that can influence deformation mechanisms. Here, we demonstrate the casting of micrometre-scale aluminium single-crystal wires by infiltration of a salt mould. Samples have millimetre lengths, smooth surfaces, a range of crystallographic orientations, and a diameter D as small as 6 μm. The wires deform in bursts, at a stress that increases with decreasing D. Bursts greater than 200 nm account for roughly 50% of wire deformation and have exponentially distributed intensities. Dislocation dynamics simulations show that single-arm sources that produce large displacement bursts halted by stochastic cross-slip and lock formation explain microcast wire behaviour. This microcasting technique may be extended to several other metals or alloys and offers the possibility of exploring mechanical behaviour spanning the micrometre scale.

  5. The Tissint Martian meteorite as evidence for the largest impact excavation.

    PubMed

    Baziotis, Ioannis P; Liu, Yang; DeCarli, Paul S; Melosh, H Jay; McSween, Harry Y; Bodnar, Robert J; Taylor, Lawrence A

    2013-01-01

    High-pressure minerals in meteorites provide clues for the impact processes that excavated, launched and delivered these samples to Earth. Most Martian meteorites are suggested to have been excavated from 3 to 7 km diameter impact craters. Here we show that the Tissint meteorite, a 2011 meteorite fall, contains virtually all the high-pressure phases (seven minerals and two mineral glasses) that have been reported in isolated occurrences in other Martian meteorites. Particularly, one ringwoodite (75 × 140 μm²) represents the largest grain observed in all Martian samples. Collectively, the ubiquitous high-pressure minerals of unusually large sizes in Tissint indicate that shock metamorphism was widely dispersed in this sample (~25 GPa and ~2,000 °C). Using the size and growth kinetics of the ringwoodite grains, we infer an initial impact crater with ~90 km diameter, with a factor of 2 uncertainty. These energetic conditions imply alteration of any possible low-T minerals in Tissint.

  6. Marine sediment sample preparation for analysis for low concentrations of fine detrital gold

    USGS Publications Warehouse

    Clifton, H. Edward; Hubert, Arthur; Phillips, R. Lawrence

    1967-01-01

    Analyses by atomic absorption for detrital gold in more than 2,000 beach, offshore, marine-terrace, and alluvial sands from southern Oregon have shown that the values determined from raw or unconcentrated sediment containing small amounts of gold are neither reproducible nor representative of the initial sample. This difficulty results from a 'particle sparsity effect', whereby the analysis for gold in a given sample depends more upon the occurrence of random flakes of gold in the analyzed portion than upon the actual gold content of the sample. The particle sparsity effect can largely be eliminated by preparing a gold concentrate prior to analysis. A combination of sieve, gravimetric, and magnetic separation produces a satisfactory concentrate that yields accurate and reproducible analyses. In concentrates of nearly every marine and beach sand studied, the gold occurs in the nonmagnetic fraction smaller than 0.124 mm and with a specific gravity greater than 3.3. The grain size of gold in stream sediments is somewhat more variable. Analysis of concentrates provides a means of greatly increasing the sensitivity of the analytical technique in relation to the initial sample. Gold rarely exceeds 1 part per million in even the richest black sand analyzed; to establish the distribution of gold (and platinum) in marine sediments and its relationship to source and environmental factors, one commonly needs to know their content to the part per billion range. Analysis of a concentrate and recalculation to the value in the initial sample permits this degree of sensitivity.
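
    To make the recalculation step concrete, the sketch below back-computes the gold content of the original sediment from an analysis of the concentrate. The masses and assay value are invented for illustration, and the simple mass-balance form is an assumption, not a formula quoted from the report.

      def gold_in_initial_sample_ppb(concentrate_assay_ppm, concentrate_mass_g, initial_sample_mass_g):
          """Recalculate a concentrate assay back to the whole (initial) sample.

          Assumes essentially all detrital gold reports to the concentrate, so the
          gold mass in the concentrate equals the gold mass in the original sample.
          """
          gold_mass_ug = concentrate_assay_ppm * concentrate_mass_g        # ppm = ug/g
          return gold_mass_ug / initial_sample_mass_g * 1000               # ppb = ng/g

      # Hypothetical numbers: a 2 g concentrate assaying 0.5 ppm Au,
      # prepared from a 500 g sand sample.
      print(gold_in_initial_sample_ppb(0.5, 2.0, 500.0))   # -> 2.0 ppb in the initial sample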

  7. Seven ways to increase power without increasing N.

    PubMed

    Hansen, W B; Collins, L M

    1994-01-01

    Many readers of this monograph may wonder why a chapter on statistical power was included. After all, by now the issue of statistical power is in many respects mundane. Everyone knows that statistical power is a central research consideration, and certainly most National Institute on Drug Abuse grantees or prospective grantees understand the importance of including a power analysis in research proposals. However, there is ample evidence that, in practice, prevention researchers are not paying sufficient attention to statistical power. If they were, the findings observed by Hansen (1992) in a recent review of the prevention literature would not have emerged. Hansen (1992) examined statistical power based on 46 cohorts followed longitudinally, using nonparametric assumptions given the subjects' age at posttest and the numbers of subjects. Results of this analysis indicated that, in order for a study to attain 80-percent power for detecting differences between treatment and control groups, the difference between groups at posttest would need to be at least 8 percent (in the best studies) and as much as 16 percent (in the weakest studies). In order for a study to attain 80-percent power for detecting group differences in pre-post change, 22 of the 46 cohorts would have needed relative pre-post reductions of greater than 100 percent. Thirty-three of the 46 cohorts had less than 50-percent power to detect a 50-percent relative reduction in substance use. These results are consistent with other review findings (e.g., Lipsey 1990) that have shown a similar lack of power in a broad range of research topics. Thus, it seems that, although researchers are aware of the importance of statistical power (particularly of the necessity for calculating it when proposing research), they somehow are failing to end up with adequate power in their completed studies. This chapter argues that the failure of many prevention studies to maintain adequate statistical power is due to an overemphasis on sample size (N) as the only, or even the best, way to increase statistical power. It is easy to see how this overemphasis has come about. Sample size is easy to manipulate, has the advantage of being related to power in a straight-forward way, and usually is under the direct control of the researcher, except for limitations imposed by finances or subject availability. Another option for increasing power is to increase the alpha used for hypothesis-testing but, as very few researchers seriously consider significance levels much larger than the traditional .05, this strategy seldom is used. Of course, sample size is important, and the authors of this chapter are not recommending that researchers cease choosing sample sizes carefully. Rather, they argue that researchers should not confine themselves to increasing N to enhance power. It is important to take additional measures to maintain and improve power over and above making sure the initial sample size is sufficient. The authors recommend two general strategies. One strategy involves attempting to maintain the effective initial sample size so that power is not lost needlessly. The other strategy is to take measures to maximize the third factor that determines statistical power: effect size.
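
    As a back-of-the-envelope illustration of the chapter's argument, the sketch below computes approximate power for a two-group comparison from the standardized effect size and group size using a normal approximation; it is a generic calculation, not the nonparametric procedure Hansen (1992) used, and the effect sizes are placeholders.

      from scipy.stats import norm

      def approx_power(effect_size, n_per_group, alpha=0.05):
          """Approximate power of a two-sample z-test for a standardized effect size."""
          z_crit = norm.ppf(1 - alpha / 2)
          noncentrality = effect_size * (n_per_group / 2) ** 0.5
          return norm.cdf(noncentrality - z_crit)

      # Increasing the effect size (e.g., by reducing measurement error or attrition)
      # raises power at a fixed N, which is the chapter's central point.
      for d in (0.2, 0.3, 0.4):
          print(d, round(approx_power(d, n_per_group=100), 2))   # ~0.29, 0.56, 0.81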

  8. The effect of processing parameters and solid concentration on the mechanical and microstructural properties of freeze-casted macroporous hydroxyapatite scaffolds.

    PubMed

    Farhangdoust, S; Zamanian, A; Yasaei, M; Khorami, M

    2013-01-01

    The design and fabrication of macroporous hydroxyapatite scaffolds, which could overcome current bone tissue engineering limitations, have been considered in recent years. In the current study, controlled unidirectional freeze-casting at different cooling rates was investigated. In the first step, different slurries with initial hydroxyapatite concentrations of 7-37.5 vol.% were prepared. In the next step, different cooling rates from 2 to 14 °C/min were applied to synthesize the porous scaffolds. Additionally, a sintering temperature of 1350 °C was chosen as the optimum temperature. Finally, the phase composition (by XRD), microstructure (by SEM), mechanical characteristics, and porosity of the sintered samples were assessed. The porosity of the sintered samples was in the range of 45-87% and the compressive strengths varied from 0.4 MPa to 60 MPa. The mechanical strength of the scaffolds increased as a function of initial concentration, cooling rate, and sintering temperature. With regard to mechanical strength and pore size, the samples prepared with an initial concentration of 15 vol.% and a cooling rate of 5 °C/min showed the best results. Copyright © 2012 Elsevier B.V. All rights reserved.

  9. Characterization of Apollo Regolith by X-Ray and Electron Microbeam Techniques: An Analog for Future Sample Return Missions

    NASA Technical Reports Server (NTRS)

    Zeigler, Ryan A.

    2015-01-01

    The Apollo missions collected 382 kg of rock and regolith from the Moon; approximately 1/3 of the sample mass collected was regolith. Lunar regolith consists of well-mixed rocks, minerals, and glasses less than 1 centimeter in size. The majority of most surface regolith samples were sieved into less than 1, 1-2, 2-4, and 4-10-millimeter size fractions; a portion of most samples was reserved unsieved. The initial characterization and classification of most Apollo regolith particles was done primarily by binocular microscopy. Optical classification of regolith is difficult because (1) the finest fraction of the regolith coats and obscures the textures of the larger particles, and (2) not all lithologies or minerals are uniquely identifiable optically. In recent years, we have begun to use more modern x-ray beam techniques [1-3], coupled with high-resolution 3D optical imaging techniques [4], to characterize Apollo and meteorite samples as part of the curation process. These techniques, particularly in concert with SEM imaging of less than 1-millimeter regolith grain mounts, allow for the rapid characterization of the components within a regolith.

  10. Numerical sedimentation particle-size analysis using the Discrete Element Method

    NASA Astrophysics Data System (ADS)

    Bravo, R.; Pérez-Aparicio, J. L.; Gómez-Hernández, J. J.

    2015-12-01

    Sedimentation tests are widely used to determine the particle-size distribution of a granular sample. In this work, the Discrete Element Method interacts with the simulation of flow using the well-known one-way-coupling method, a computationally affordable approach for the time-consuming numerical simulation of the hydrometer, buoyancy and pipette sedimentation tests. These tests are used in the laboratory to determine the particle-size distribution of fine-grained aggregates. Five samples with different particle-size distributions are modeled by about six million rigid spheres projected onto two dimensions, with diameters ranging from 2.5 × 10⁻⁶ m to 70 × 10⁻⁶ m, forming a water suspension in a sedimentation cylinder. DEM simulates the particles' movement considering laminar-flow interactions of buoyancy, drag and lubrication forces. The simulation provides the temporal/spatial distributions of densities and concentrations of the suspension. The numerical simulations cannot replace the laboratory tests, since they need the final granulometry as initial data; but, as the results show, these simulations can identify the strong and weak points of each method, eventually recommend useful variations, and draw conclusions on their validity, aspects very difficult to achieve in the laboratory.
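
    The hydrometer and pipette tests simulated here conventionally convert settling time to an equivalent particle diameter through Stokes' law; as standard background (not a formula taken from this abstract), the terminal settling velocity of a small sphere is

      v_s = \frac{(\rho_s - \rho_f)\, g\, d^2}{18\, \mu}

    where d is the particle diameter, \rho_s and \rho_f are the solid and fluid densities, g is gravity and \mu is the dynamic viscosity, so measured concentration-versus-time profiles can be read as cumulative size distributions.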

  11. Multiscale modeling of porous ceramics using movable cellular automaton method

    NASA Astrophysics Data System (ADS)

    Smolin, Alexey Yu.; Smolin, Igor Yu.; Smolina, Irina Yu.

    2017-10-01

    The paper presents a multiscale model for porous ceramics based on the movable cellular automaton method, which is a particle method in the novel computational mechanics of solids. The initial scale of the proposed approach corresponds to the characteristic size of the smallest pores in the ceramics. At this scale, we model uniaxial compression of several representative samples with an explicit account of pores of the same size but with unique positions in space. As a result, we get the average values of Young's modulus and strength, as well as the parameters of the Weibull distribution of these properties at the current scale level. These data allow us to describe the material behavior at the next scale level, where only the larger pores are considered explicitly, while the influence of small pores is included via the effective properties determined earlier. If the pore size distribution function of the material has N maxima, we need to perform computations for N-1 levels in order to get the properties step by step from the lowest scale up to the macroscale. The proposed approach was applied to modeling zirconia ceramics with a bimodal pore size distribution. The obtained results show correct behavior of the model sample at the macroscale.
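
    A minimal sketch of the hand-off between scale levels is shown below: Weibull parameters fitted at the lower scale are used to draw per-automaton effective properties for the next level. The parameter values and function names are placeholders for illustration, not quantities or code from the paper.

      import numpy as np

      rng = np.random.default_rng(0)

      # Weibull parameters (shape m, scale s) fitted at the lower scale; values are placeholders.
      m_E, s_E = 8.0, 150.0      # Young's modulus [GPa]
      m_S, s_S = 6.0, 0.4        # strength [GPa]

      def assign_effective_properties(n_automata):
          """Draw per-automaton effective properties for the next scale level.

          numpy's weibull() samples a unit-scale Weibull with shape m;
          multiplying by the scale parameter recovers the fitted distribution.
          """
          modulus = s_E * rng.weibull(m_E, size=n_automata)
          strength = s_S * rng.weibull(m_S, size=n_automata)
          return modulus, strength

      modulus, strength = assign_effective_properties(10_000)
      print(modulus.mean(), strength.mean())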

  12. Challenges in reproducibility of genetic association studies: lessons learned from the obesity field.

    PubMed

    Li, A; Meyre, D

    2013-04-01

    A robust replication of initial genetic association findings has proved to be difficult in human complex diseases and more specifically in the obesity field. An obvious cause of non-replication in genetic association studies is the initial report of a false positive result, which can be explained by a non-heritable phenotype, insufficient sample size, improper correction for multiple testing, population stratification, technical biases, insufficient quality control or inappropriate statistical analyses. Replication may, however, be challenging even when the original study describes a true positive association. The reasons include underpowered replication samples, gene × gene, gene × environment interactions, genetic and phenotypic heterogeneity and subjective interpretation of data. In this review, we address classic pitfalls in genetic association studies and provide guidelines for proper discovery and replication genetic association studies with a specific focus on obesity.

  13. Method of assessing a lipid-related health risk based on ion mobility analysis of lipoproteins

    DOEpatents

    Benner, W. Henry; Krauss, Ronald M.; Blanche, Patricia J.

    2010-12-14

    A medical diagnostic method and instrumentation system for analyzing noncovalently bonded agglomerated biological particles is described. The method and system comprises: a method of preparation for the biological particles; an electrospray generator; an alpha particle radiation source; a differential mobility analyzer; a particle counter; and data acquisition and analysis means. The medical device is useful for the assessment of human diseases, such as cardiac disease risk and hyperlipidemia, by rapid quantitative analysis of lipoprotein fraction densities. Initially, purification procedures are described to reduce an initial blood sample to an analytical input to the instrument. The measured sizes from the analytical sample are correlated with densities, resulting in a spectrum of lipoprotein densities. The lipoprotein density distribution can then be used to characterize cardiac and other lipid-related health risks.

  14. Analysis of sexual behavior in adolescents.

    PubMed

    Teva, Inmaculada; Bermudez, M Paz; Ramiro, Maria T; Ramiro-Sanchez, Tamara

    2013-10-01

    The aim of this study was to describe some characteristics of vaginal, anal and oral sexual behavior in Spanish adolescents. It was a cross-sectional descriptive population study conducted using a probabilistic sample survey. The sample was composed of 4,612 male and female adolescents, of whom 1,686 reported having penetrative sexual experience. Sample size was established with a 97% confidence level and a 3% estimation error. Data collection took place in secondary education schools. Mean age of vaginal sex initiation was 15 years. Compared to females, males reported an earlier age of anal and oral sex initiation and a larger number of vaginal and anal sexual partners. Males also reported a higher frequency of penetrative sexual relations under the influence of alcohol or other drugs. A higher percentage of females than males reported not using a condom in their first anal sexual experience. This study provides a current overview of the sexual behavior of adolescents that can be useful for the design of future programs aimed at preventing HIV and sexually transmitted infections (STIs).
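
    The sample-size statement above (97% confidence level, 3% estimation error) matches the standard formula for estimating a proportion under simple random sampling; the sketch below is illustrative only and assumes the conservative p = 0.5, whereas the study's larger final sample presumably also reflects stratification, design effects and nonresponse not specified in the abstract.

      from scipy.stats import norm

      confidence = 0.97
      margin = 0.03
      p = 0.5                                   # conservative (worst-case) proportion

      z = norm.ppf(1 - (1 - confidence) / 2)    # ~2.17 for 97% confidence
      n = z**2 * p * (1 - p) / margin**2
      print(round(n))                           # ~1308 before any design-effect adjustment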

  15. Implications of Atmospheric Test Fallout Data for Nuclear Winter.

    NASA Astrophysics Data System (ADS)

    Baker, George Harold, III

    1987-09-01

    Atmospheric test fallout data have been used to determine admissible dust particle size distributions for nuclear winter studies. The research was originally motivated by extreme differences noted in the magnitude and longevity of dust effects predicted by particle size distributions routinely used in fallout predictions versus those used for nuclear winter studies. Three different sets of historical data have been analyzed: (1) stratospheric burden of Strontium-90 and Tungsten-185, 1954-1967 (92 contributing events); (2) continental U.S. Strontium-90 fallout through 1958 (75 contributing events); (3) local fallout from selected Nevada tests (16 events). The contribution of dust to possible long-term climate effects following a nuclear exchange depends strongly on the particle size distribution. The distribution affects both the atmospheric residence time and the optical depth. One-dimensional models of stratospheric/tropospheric fallout removal were developed and used to identify optimum particle distributions. Results indicate that particle distributions which properly predict bulk stratospheric activity transfer tend to be somewhat smaller than the number size distributions used in initial nuclear winter studies. In addition, both Sr-90 and W-185 fallout behavior is better predicted by the lognormal distribution function than by the prevalent power-law hybrid function. It is shown that the power-law behavior of particle samples may well be an aberration of gravitational cloud stratification. Results support the possible existence of two independent particle size distributions in clouds generated by surface or near-surface bursts. One distribution governs late-time stratospheric fallout, the other governs early-time fallout. A bimodal lognormal distribution is proposed to describe the cloud particle population. The distribution predicts higher initial sunlight attenuation and lower late-time attenuation than the power-law hybrid function used in initial nuclear winter studies.

  16. Association of Cryptosporidium with bovine faecal particles and implications for risk reduction by settling within water supply reservoirs.

    PubMed

    Brookes, Justin D; Davies, Cheryl M; Hipsey, Matthew R; Antenucci, Jason P

    2006-03-01

    Artificial cow pats were seeded with Cryptosporidium oocysts and subjected to a simulated rainfall event. The runoff from the faecal pat was collected and different particle size fractions were collected within settling columns by exploiting the size-dependent settling velocities. Particle size and Cryptosporidium concentration distribution at 10 cm below the surface were measured at regular intervals over 24 h. Initially, a large proportion of the total volume of particles belonged to the larger size classes (> 17 μm). However, throughout the course of the experiment, there was a sequential loss of the larger size classes from the sampling depth and a predominance of smaller particles (< 17 μm). The Cryptosporidium concentration at 10 cm depth did not change throughout the experiment. In the second experiment, samples were taken from different depths within the settling column. Initially, 26% of particles were in the size range 124-492 μm. However, as these large particles settled there was an enrichment at 30 cm after one hour (36.5-49.3%). There was a concomitant enrichment of smaller particles near the surface after 1 h and 24 h. For Pat 1 there was no difference in Cryptosporidium concentration with depth after 1 h and 24 h. In Pat 2 there was a difference in concentration between the surface and 30 cm after 24 h. However, this could be explained by the settling velocity of a single oocyst. The results suggested that oocysts are not associated with large particles, but exist in faecal runoff as single oocysts and hence have a low (0.1 m d⁻¹) settling velocity. The implications of this low settling velocity for Cryptosporidium risk reduction within water supply reservoirs were investigated through the application of a three-dimensional model of oocyst fate and transport to a moderately sized reservoir (26 GL). The model indicated that the role of settling in oocyst concentration reduction within the water column is between one and three orders of magnitude less than that caused by advection and dilution, depending on the strength of hydrodynamic forcing.

  17. Compiling and editing agricultural strata boundaries with remotely sensed imagery and map attribute data using graphics workstations

    NASA Technical Reports Server (NTRS)

    Cheng, Thomas D.; Angelici, Gary L.; Slye, Robert E.; Ma, Matt

    1991-01-01

    The USDA presently uses labor-intensive photographic interpretation procedures to delineate large geographical areas into manageable size sampling units for the estimation of domestic crop and livestock production. Computer software to automate the boundary delineation procedure, called the computer-assisted stratification and sampling (CASS) system, was developed using a Hewlett Packard color-graphics workstation. The CASS procedures display Thematic Mapper (TM) satellite digital imagery on a graphics display workstation as the backdrop for the onscreen delineation of sampling units. USGS Digital Line Graph (DLG) data for roads and waterways are displayed over the TM imagery to aid in identifying potential sample unit boundaries. Initial analysis conducted with three Missouri counties indicated that CASS was six times faster than the manual techniques in delineating sampling units.

  18. Condition of live fire-scarred ponderosa pine eleven years after removing partial cross-sections

    Treesearch

    Emily K. Heyerdahl; Steven J. McKay

    2008-01-01

    Our objective is to report mortality rates for ponderosa pine trees in Oregon ten to eleven years after removing a fire-scarred partial cross-section from them, and five years after an initial survey of post-sampling mortality. We surveyed 138 live trees from which we removed fire-scarred partial cross-sections in 1994/95 and 387 similarly sized, unsampled neighbor...

  19. Characteristics and mechanism of laser-induced surface damage initiated by metal contaminants

    NASA Astrophysics Data System (ADS)

    Shi, Shuang; Sun, Mingying; Shi, Shuaixu; Li, Zhaoyan; Zhang, Ya-nan; Liu, Zhigang

    2015-08-01

    In high-power laser facilities, contaminants on optics surfaces reduce the damage resistance of optical elements and thus decrease their lifetime. Laser damage induced by typical metal particles, such as stainless steel 304, is studied through damage test experiments. Optics samples with metal particles of different sizes on their surfaces are prepared artificially using a file and sieve. Damage testing is carried out in air using a 1-on-1 mode. Results show that the damage morphology and mechanism caused by particulate contamination on the incident and exit surfaces are quite different. Contaminants on the incident surface absorb laser energy and generate a high-temperature plasma during laser irradiation which can ablate the optical surface. Metal particles melt and the molten nano-particles then redeposit around the initial particles. The central region of the damaged area bears the same outline as the initial particle because of the shielding effect. However, particles on the exit surface absorb a large amount of energy, generate plasma and eject many smaller particles, only a few of which redeposit at the particle coverage area on the exit surface. Most of the laser energy is deposited at the interface between the metal particle and the sample surface, and thus the damage size on the exit surface is larger than that on the incident surface. The areas covered by the metal particle are strongly damaged, and the damage sites are more severe than those on the incident surface. The damage phenomenon also depends on the coating and substrate materials.

  20. Multi-passes warm rolling of AZ31 magnesium alloy, effect on evaluation of texture, microstructure, grain size and hardness

    NASA Astrophysics Data System (ADS)

    Kamran, J.; Hasan, B. A.; Tariq, N. H.; Izhar, S.; Sarwar, M.

    2014-06-01

    In this study, the effect of multi-pass warm rolling of AZ31 magnesium alloy on the texture, microstructure, grain size variation and hardness of an as-cast sample (A) and two rolled samples (B & C), taken from different locations of the as-cast ingot, was investigated. The purpose was to enhance the formability of AZ31 alloy in order to aid manufacturability. It was observed that multi-pass warm rolling (250°C to 350°C) of samples B & C, with initial thicknesses of 7.76 mm and 7.73 mm, was successfully achieved up to 85% reduction without any edge or surface cracks in ten steps with a total of 26 passes. Steps 1 to 4 consisted of 5, 2, 11 and 3 passes respectively; the remaining steps 5 to 10 were single-pass rolls. In each discrete step a fixed roll gap was used such that the true strain per step increased very slowly, from 0.0067 in the first step to 0.7118 in the 26th step. Both samples B & C showed very similar behavior after the 26th pass and were successfully rolled up to 85% thickness reduction. However, during the 10th step (27th pass), with a true strain value of 0.772, sample B experienced very severe surface as well as edge cracks. Sample C was therefore not rolled for the 10th step and was retained after 26 passes. Both samples were studied in terms of their basal texture, microstructure, grain size and hardness. Sample C showed an equiaxed grain structure after 85% total reduction. The equiaxed grain structure of sample C may be due to the effective involvement of dynamic recrystallization (DRX), which led to the formation of these grains with relatively low misorientations with respect to the parent as-cast grains. Sample B, on the other hand, showed a microstructure in which all the grains were elongated along the rolling direction (RD) after 90% total reduction, and DRX could not effectively play its role due to the heavy strain and lack of plastic deformation systems. The microstructure of the as-cast sample showed a near-random texture (mrd 4.3), with an average grain size of 44 μm and a micro-hardness of 52 Hv. The grain sizes of samples B and C were 14 μm and 27 μm respectively, and the mrd intensity of the basal texture was 5.34 and 5.46 respectively. The hardness of samples B and C came out to be 91 and 66 Hv respectively due to the reduction in grain size, following the well-known Hall-Petch relationship.
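
    For reference, the Hall-Petch relationship invoked above is conventionally written (standard literature form; the abstract gives no coefficients) as

      \sigma_y = \sigma_0 + k_y\, d^{-1/2}  (or, for hardness, H \approx H_0 + k_H\, d^{-1/2})

    where d is the average grain size and \sigma_0, k_y, H_0, k_H are material constants; the finer 14 μm grains of sample B are therefore expected to yield the higher hardness, consistent with the reported 91 Hv versus 66 Hv.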

  1. Influence of initial heating during final high temperature annealing on the offset of primary and secondary recrystallization in Cu-bearing grain oriented electrical steels

    NASA Astrophysics Data System (ADS)

    Rodriguez-Calvillo, P.; Leunis, E.; Van De Putte, T.; Jacobs, S.; Zacek, O.; Saikaly, W.

    2018-04-01

    The industrial production route of Grain Oriented Electrical Steels (GOES) is complex and fine-tuned for each grade. Its metallurgical process requires in all cases the abnormal grain growth (AGG) of the Goss orientation during the final high temperature annealing (HTA). The exact mechanism of AGG is not yet fully understood, but it is controlled by the different inhibition systems, namely MnS, AlN and CuxS, their size and distribution, and the initial primary recrystallized grain size. Therefore, among other parameters, the initial heating stage during the HTA is crucial for the proper development of primary and secondary recrystallized microstructures. Cold rolled 0.3 mm Cu-bearing Grain Oriented Electrical Steel was submitted to interrupted annealing experiments in a lab tubular furnace. Two different annealing cycles were applied: (1) constant heating at 30°C/h up to 1000°C; (2) a two-step cycle with initial heating at 100°C/h up to 600°C, followed by 18 h soaking at 600°C and then heating at 30°C/h up to 1050°C. The materials are analyzed in terms of their magnetic properties, grain size, texture and precipitates. The characteristic magnetic properties are analyzed for the different extraction temperatures and cycles. As the annealing progressed, the coercivity values (Hc 1.7T [A/m]) decreased, showing two abrupt drops, which can be associated with the onset of primary and secondary recrystallization. The primary recrystallized grain sizes and recrystallized fractions are fitted to a model using a non-isothermal approach. This analysis shows that, although the resulting grain sizes were similar, the kinetics of the two-step annealing were faster due to the lower recovery. The onset of secondary recrystallization was also shifted to higher temperatures in the case of the continuous heating cycle, which might result in different final grain sizes and final magnetic properties. In both samples, nearly all the observed precipitates are Al-Si-Mn nitrides, ranging from pure AlN to Si4Mn-nitride.

  2. Effect of MeV electron irradiation on the free volume of polyimide

    NASA Astrophysics Data System (ADS)

    Alegaonkar, P. S.; Bhoraskar, V. N.

    2004-08-01

    The free volume of the microvoids in polyimide samples irradiated with 6 MeV electrons was measured by the positron annihilation technique. The free volume initially decreased from the virgin value of ~13.70 to ~10.98 Å³ and then increased to ~18.11 Å³ with increasing electron fluence over the range of 5 × 10¹⁴ - 5 × 10¹⁵ e/cm². The evolution of gaseous species from the polyimide during electron irradiation was confirmed by the residual gas analysis technique. Polyimide samples irradiated with 6 MeV electrons in AgNO3 solution were studied with the Rutherford backscattering technique. The diffusion of silver in these polyimide samples was observed for fluences > 2 × 10¹⁵ e/cm², at which microvoids of size ≥ 3 Å are produced. Silver atoms did not diffuse in the polyimide samples which were first irradiated with electrons and then immersed in AgNO3 solution. These results indicate that during electron irradiation, microvoids with size ≥ 3 Å were retained in the surface region, through which silver atoms of size ~2.88 Å could diffuse into the polyimide. The average depth of diffusion of silver atoms in the polyimide was ~2.5 μm.

  3. Physicochemical Characterization of Capstone Depleted Uranium Aerosols I: Uranium Concentration in Aerosols as a Function of Time and Particle Size

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parkhurst, MaryAnn; Cheng, Yung-Sung; Kenoyer, Judson L.

    2009-03-01

    During the Capstone Depleted Uranium (DU) Aerosol Study, aerosols containing depleted uranium were produced inside unventilated armored vehicles (i.e., Abrams tanks and Bradley Fighting Vehicles) by perforation with large-caliber DU penetrators. These aerosols were collected and characterized, and the data were subsequently used to assess human health risks to personnel exposed to DU aerosols. The DU content of each aerosol sample was first quantified by radioanalytical methods, and selected samples, primarily those from the cyclone separator grit chambers, were analyzed radiochemically. Deposition occurred inside the vehicles as particles settled on interior surfaces. Settling rates of uranium from the aerosols were evaluated using filter cassette samples that collected aerosol as total mass over eight sequential time intervals. A moving filter was used to collect aerosol samples over time, particularly within the first minute after the shot. The results demonstrate that the peak uranium concentration in the aerosol occurred in the first 10 s, and the concentration decreased in the Abrams tank shots to about 50% within 1 min and to less than 2% 30 min after perforation. In the Bradley vehicle, the initial (and maximum) uranium concentration was lower than those observed in the Abrams tank and decreased more slowly. Uranium mass concentrations in the aerosols as a function of particle size were evaluated using samples collected in the cyclone samplers, which collected aerosol continuously for 2 h post perforation. The percentages of uranium mass in the cyclone separator stages from the Abrams tank tests ranged from 38% to 72% and, in most cases, varied with particle size, typically with less uranium associated with the smaller particle sizes. Results with the Bradley vehicle ranged from 18% to 29% and were not specifically correlated with particle size.

  4. Physicochemical characterization of Capstone depleted uranium aerosols I: uranium concentration in aerosols as a function of time and particle size.

    PubMed

    Parkhurst, Mary Ann; Cheng, Yung Sung; Kenoyer, Judson L; Traub, Richard J

    2009-03-01

    During the Capstone Depleted Uranium (DU) Aerosol Study, aerosols containing DU were produced inside unventilated armored vehicles (i.e., Abrams tanks and Bradley Fighting Vehicles) by perforation with large-caliber DU penetrators. These aerosols were collected and characterized, and the data were subsequently used to assess human health risks to personnel exposed to DU aerosols. The DU content of each aerosol sample was first quantified by radioanalytical methods, and selected samples, primarily those from the cyclone separator grit chambers, were analyzed radiochemically. Deposition occurred inside the vehicles as particles settled on interior surfaces. Settling rates of uranium from the aerosols were evaluated using filter cassette samples that collected aerosol as total mass over eight sequential time intervals. A moving filter was used to collect aerosol samples over time, particularly within the first minute after a shot. The results demonstrate that the peak uranium concentration in the aerosol occurred in the first 10 s after perforation, and the concentration decreased in the Abrams tank shots to about 50% within 1 min and to less than 2% after 30 min. The initial and maximum uranium concentrations were lower in the Bradley vehicle than those observed in the Abrams tank, and the concentration levels decreased more slowly. Uranium mass concentrations in the aerosols as a function of particle size were evaluated using samples collected in a cyclone sampler, which collected aerosol continuously for 2 h after perforation. The percentages of uranium mass in the cyclone separator stages ranged from 38 to 72% for the Abrams tank with conventional armor. In most cases, it varied with particle size, typically with less uranium associated with the smaller particle sizes. Neither the Abrams tank with DU armor nor the Bradley vehicle results were specifically correlated with particle size and can best be represented by their average uranium mass concentrations of 65 and 24%, respectively.

  5. Personalized prediction of chronic wound healing: an exponential mixed effects model using stereophotogrammetric measurement.

    PubMed

    Xu, Yifan; Sun, Jiayang; Carter, Rebecca R; Bogie, Kath M

    2014-05-01

    Stereophotogrammetric digital imaging enables rapid and accurate detailed 3D wound monitoring. This rich data source was used to develop a statistically validated model to provide personalized predictive healing information for chronic wounds. 147 valid wound images were obtained from a sample of 13 category III/IV pressure ulcers from 10 individuals with spinal cord injury. Statistical comparison of several models indicated the best fit for the clinical data was a personalized mixed-effects exponential model (pMEE), with initial wound size and time as predictors and observed wound size as the response variable. Random effects capture personalized differences. Other models are only valid when wound size constantly decreases. This is often not achieved for clinical wounds. Our model accommodates this reality. Two criteria to determine effective healing time outcomes are proposed: r-fold wound size reduction time, t(r-fold), is defined as the time when wound size reduces to 1/r of initial size. t(δ) is defined as the time when the rate of the wound healing/size change reduces to a predetermined threshold δ < 0. Healing rate differs from patient to patient. Model development and validation indicates that accurate monitoring of wound geometry can adaptively predict healing progression and that larger wounds heal more rapidly. Accuracy of the prediction curve in the current model improves with each additional evaluation. Routine assessment of wounds using detailed stereophotogrammetric imaging can provide personalized predictions of wound healing time. Application of a valid model will help the clinical team to determine wound management care pathways. Published by Elsevier Ltd.
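
    The abstract does not give the exact parameterization of the pMEE model, but a personalized mixed-effects exponential decay of the kind described can be sketched, for wound i at time t, as

      w_i(t) = w_i(0)\, \exp\!\big(-(\beta + b_i)\, t\big) + \varepsilon_i(t), \qquad b_i \sim N(0, \sigma_b^2)

    Under this assumed form the r-fold reduction time is t_{r-fold,i} = \ln r / (\beta + b_i), and t_\delta follows from setting the fitted healing rate equal to the threshold \delta; both depend on the subject-specific random effect b_i, which is what makes the prediction personalized.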

  6. Time course of degradation of cardiac troponin I in patients with acute ST-elevation myocardial infarction: the ASSENT-2 troponin substudy.

    PubMed

    Madsen, Lene H; Christensen, Geir; Lund, Terje; Serebruany, Victor L; Granger, Chris B; Hoen, Ingvild; Grieg, Zanina; Alexander, John H; Jaffe, Allan S; Van Eyk, Jennifer E; Atar, Dan

    2006-11-10

    Although measurement of troponin is widely used for diagnosing acute myocardial infarction (AMI), its diagnostic potential may be increased by a more complete characterization of its molecular appearance and degradation in the blood. The aim of this study was to define the time course of cardiac troponin I (cTnI) degradation in patients with acute ST-elevation myocardial infarction (STEMI). In the ASSENT-2 substudy, 26 males hospitalized with STEMI were randomized to 2 different thrombolytic drugs within 6 hours after onset of symptoms. Blood samples were obtained just before initiation of thrombolysis and at 30-minute intervals (7 samples per patient). Western blot analysis was performed using anti-cTnI antibodies and compared with serum concentrations of cTnI. All patients exceeded the cTnI cutoff for AMI during the sampling period; at initiation of therapy, 23 had elevated cTnI values. All patients demonstrated 2 bands on immunoblot: intact cTnI and a single degradation product as early as 90 minutes after onset of symptoms. On subsequent samples, 15 of 26 patients showed multiple degradation products with up to 7 degradation bands. The appearance of fragments was correlated with higher levels of cTnI (P<0.001) and time to initiation of treatment (P=0.058). This study defines for the first time the initial time course of cTnI degradation in STEMI. Intact cTnI and a single degradation product were detectable on immunoblot as early as 90 minutes after onset of symptoms, with further degradation after 165 minutes. Infarct size and time to initiation of treatment were the major determinants of degradation.

  7. Sample Design, Sample Augmentation, and Estimation for Wave 2 of the NSHAP

    PubMed Central

    English, Ned; Pedlow, Steven; Kwok, Peter K.

    2014-01-01

    Objectives. The sample for the second wave (2010) of the National Social Life, Health, and Aging Project (NSHAP) was designed to increase the scientific value of the Wave 1 (2005) data set by revisiting sample members 5 years after their initial interviews and augmenting this sample where possible. Method. There were 2 important innovations. First, the scope of the study was expanded by collecting data from coresident spouses or romantic partners. Second, to maximize the representativeness of the Wave 2 data, nonrespondents from Wave 1 were again approached for interview in the Wave 2 sample. Results. The overall unconditional response rate for the Wave 2 panel was 74%; the conditional response rate of Wave 1 respondents was 89%; the conditional response rate of partners was 84%; and the conversion rate for Wave 1 nonrespondents was 26%. Discussion. The inclusion of coresident partners enhanced the study by allowing the examination of how intimate, household relationships are related to health trajectories and by augmenting the NSHAP sample size for this and future waves. The uncommon strategy of returning to Wave 1 nonrespondents reduced potential bias by ensuring that, to the extent possible, the whole of the original sample forms the basis for the field effort. NSHAP Wave 2 achieved its field objectives of consolidating the panel, recruiting their resident spouses or romantic partners, and converting a significant proportion of Wave 1 nonrespondents. PMID:25360016

  8. Effect of cryopreservation methods and precryopreservation storage on bottlenose dolphin (Tursiops truncatus) spermatozoa.

    PubMed

    Robeck, T R; O'Brien, J K

    2004-05-01

    Research was conducted to develop an effective method for cryopreserving bottlenose dolphin (Tursiops truncatus) semen processed immediately after collection or after 24-h liquid storage. In each of two experiments, four ejaculates were collected from three males. In experiment 1, three cryopreservation methods (CM1, CM2, and CM3), two straw sizes (0.25 and 0.5 ml), and three thawing rates (slow, medium, and fast) were evaluated. Evaluations were conducted at collection, prefreeze, and 0-, 3-, and 6-h postthaw. A sperm motility index (SMI; total motility [TM] x % progressive motility [PPM] x kinetic rating [KR, scale of 0-5]) was calculated and expressed as a percentage of the initial ejaculate value. For all ejaculates, initial TM and PPM were greater than 85%, and KR was five. At 0-h postthaw, differences in SMI among cryopreservation methods and thaw rates were observed (P < 0.05), but no effect of straw size was observed. In experiment 2, ejaculates were divided into four aliquots for dilution (1:1) and storage at 4 degrees C with a skim milk-glucose or a N-tris(hydroxymethyl)methyl-2-aminoethane sulfonic acid (TES)-TRIS egg yolk solution, and at 21 degrees C with a HEPES-Tyrode balanced salt solution (containing bovine albumin and HEPES) (TALP) medium or no dilution. After 24 h, samples were frozen and thawed (CM3, 0.5-ml straws, fast thawing rate) at 20 x 10⁶ spermatozoa ml⁻¹ (low concentration) or at 100 x 10⁶ spermatozoa ml⁻¹ (standard concentration). The SMI at 0-h postthaw was higher for samples stored at 4 degrees C than for samples stored at 21 degrees C (P < 0.001), and at 6-h postthaw, the SMI was higher for samples frozen at the standard concentration than for samples frozen at the low concentration (P < 0.05). For both experiments, acrosome integrity was similar across treatments. In summary, a semen cryopreservation protocol applied to fresh or liquid-stored semen maintained high levels of the initial ejaculate sperm characteristics.
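
    A minimal sketch of the index defined above is given below; the input values are invented placeholders, and expressing the post-thaw index as a percentage of the collection-time index is how the description here is being read, not code from the study.

      def smi(total_motility, progressive_motility, kinetic_rating):
          """Sperm motility index: TM (%) x PPM (%) x KR (0-5 scale)."""
          return total_motility * progressive_motility * kinetic_rating

      # Express a post-thaw SMI as a percentage of the initial (collection) SMI.
      initial = smi(90, 88, 5)          # placeholder collection values
      post_thaw = smi(60, 45, 3)        # placeholder 0-h post-thaw values
      print(100 * post_thaw / initial)  # ~20% of the initial-ejaculate index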

  9. A comparison of defect size and film quality obtained from Film digitized image and digital image radiographs

    NASA Astrophysics Data System (ADS)

    Kamlangkeng, Poramate; Asa, Prateepasen; Mai, Noipitak

    2014-06-01

    Digital radiographic testing is an accepted, relatively new nondestructive examination technique, but its performance and limitations compared with the older film technique are still not widely known. This paper presents a study comparing the accuracy of defect size measurement and the image quality obtained from film and digital radiograph techniques, using test specimens and a sample defect of known size. Initially, one specimen was built with three types of internal defect: longitudinal cracking, lack of fusion, and porosity. The known-size sample defect was machined to various geometrical sizes so that the measured defect size could be compared with the real size in both film and digital images. Image quality was compared by considering the smallest detectable wire and the three defect images. This research used an Image Quality Indicator (IQI) of wire type 10/16 FE EN BS EN-462-1-1994. The radiographic films were produced by X-ray and gamma ray using Kodak AA400 film of size 3.5x8 inches, while the digital images were produced with a Fuji image plate type ST-VI with 100 micrometer resolution. During the tests, a GE model MF3 X-ray unit was used. The applied energy was varied from 120 to 220 kV and the current from 1.2 to 3.0 mA. The intensity of the Iridium-192 gamma ray source was in the range of 24-25 Curie. Under the mentioned conditions, the results showed that the deviation of the defect size measurement from the real size is lower for the digital image radiographs than for the digitized film, whereas the image quality of the digitized film radiographs is higher in comparison.

  10. Annual variation in polychlorinated biphenyl (PCB) exposure in tree swallow (Tachycineta bicolor) eggs and nestlings at Great Lakes Restoration Initiative (GLRI) study sites

    USGS Publications Warehouse

    Custer, Christine M.; Custer, Thomas W.; Dummer, Paul; Goldberg, Diana R.; Franson, J. Christian

    2018-01-01

    Tree swallow (Tachycineta bicolor) eggs and nestlings were collected from 16 sites across the Great Lakes to quantify normal annual variation in total polychlorinated biphenyl (PCB) exposure and to validate the sample size choice in earlier work. A sample size of five eggs or five nestlings per site was adequate to quantify exposure to PCBs in tree swallows given the current exposure levels and variation. There was no difference in PCB exposure in two randomly selected sets of five eggs collected in the same year, but analyzed in different years. Additionally, there was only modest annual variation in exposure, with between 69% (nestlings) and 73% (eggs) of sites having no differences between years. There was a tendency, both statistically and qualitatively, for there to be less exposure in the second year compared to the first year.

  11. Plasma nanotexturing of silicon surfaces for photovoltaics applications: influence of initial surface finish on the evolution of topographical and optical properties

    PubMed Central

    FISCHER, GUILLAUME; DRAHI, ETIENNE; FOLDYNA, MARTIN; GERMER, THOMAS A.; JOHNSON, ERIK V.

    2018-01-01

    Using a plasma to generate a surface texture with feature sizes on the order of tens to hundreds of nanometers (“nanotexturing”) is a promising technique being considered to improve efficiency in thin, high-efficiency crystalline silicon solar cells. This study investigates the evolution of the optical properties of silicon samples with various initial surface finishes (from mirror polish to various states of micron-scale roughness) during a plasma nanotexturing process. It is shown that during said process, the appearance and growth of nanocone-like structures are essentially independent of the initial surface finish, as quantified by the auto-correlation function of the surface morphology. During the first stage of the process (2 min to 15 min etching), the reflectance and light-trapping abilities of the nanotextured surfaces are strongly influenced by the initial surface roughness; however, the differences tend to diminish as the nanostructures become larger. For the longest etching times (15 min or more), the effective reflectance is less than 5 % and a strong anisotropic scattering behavior is also observed for all samples, leading to very elevated levels of light-trapping. PMID:29220984

  12. Understanding the size and character of fouling-causing substances from effluent organic matter (EfOM) in low-pressure membrane filtration.

    PubMed

    Laabs, Claudia N; Amy, Gary L; Jekel, Martin

    2006-07-15

    Stirred cell tests with microfiltration (MF) and ultrafiltration (UF) membranes show high flux decline for WWTP effluents. For the MF membrane, for example, the flux declines within 15 min to 70-80% of the initial flux (J0 is in the range of 1000 L/m²h to 1500 L/m²h). This time corresponds to the filtration of a cumulative volume of 110 L/m². Feed and permeate samples of the stirred cell tests are analyzed by size-exclusion chromatography (SEC) with on-line organic carbon and UVA254 detection. The resulting chromatograms exhibit a clear difference between the feed and permeate samples in the so-called polysaccharide (PS) peak. The substances eluting in the PS peak (organic colloids, polysaccharides, and proteins) are retained completely by UF membranes and partly by MF membranes, and are responsible for the observed fouling. By sequential filtration experiments, the sizes of these macromolecules are determined to be in the range of 10 to 100 nm.

  13. Zinc Nucleation and Growth in Microgravity

    NASA Technical Reports Server (NTRS)

    Michael, B. Patrick; Nuth, J. A., III; Lilleleht, L. U.; Vondrak, Richard R. (Technical Monitor)

    2000-01-01

    We report our experiences with zinc nucleation in a microgravity environment aboard NASA's Reduced Gravity Research Facility. Zinc vapor is produced by a heater in a vacuum chamber containing argon gas. Nucleation is induced by cooling and its onset is easily detected visually by the appearance of a cloud of solid, at least partially crystalline zinc particles. Size distribution of these particles is monitored in situ by photon correlation spectroscopy. Samples of particles are also extracted for later analysis by SEM. The initially rapid increase in particle size is followed by a slower period of growth. We apply Scaled Nucleation Theory to our data and find that the derived critical temperature of zinc, the critical cluster size at nucleation, and the surface tension values are all in reasonably good agreement with their accepted literature values.

  14. Probe measurements and numerical model predictions of evolving size distributions in premixed flames

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De Filippo, A.; Sgro, L.A.; Lanzuolo, G.

    2009-09-15

    Particle size distributions (PSDs), measured with a dilution probe and a Differential Mobility Analyzer (DMA), and numerical predictions of these PSDs, based on a model that includes only coagulation or alternatively inception and coagulation, are compared to investigate particle growth processes and possible sampling artifacts in the post-flame region of a C/O = 0.65 premixed laminar ethylene-air flame. Inputs to the numerical model are the PSD measured early in the flame (the initial condition for the aerosol population) and the temperature profile measured along the flame's axial centerline. The measured PSDs are initially unimodal, with a modal mobility diameter of 2.2 nm, and become bimodal later in the post-flame region. The smaller mode is best predicted with a size-dependent coagulation model, which allows some fraction of the smallest particles to escape collisions without resulting in coalescence or coagulation through the size-dependent coagulation efficiency (γ_SD). Instead, when γ = 1 and the coagulation rate is equal to the collision rate for all particles regardless of their size, the coagulation model significantly under-predicts the number concentration of both modes and over-predicts the size of the largest particles in the distribution compared to the measured size distributions at various heights above the burner. The coagulation (γ_SD) model alone is unable to reproduce well the larger particle mode (mode II). Combining persistent nucleation with size-dependent coagulation brings the predicted PSDs to within experimental error of the measurements, which seems to suggest that surface growth processes are relatively insignificant in these flames. Shifting measured PSDs a few mm closer to the burner surface, generally adopted to correct for probe perturbations, does not produce a better matching between the experimental and the numerical results. (author)
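
    To illustrate the role of the coagulation efficiency discussed above, the sketch below uses the simplest monodisperse Smoluchowski-type balance, dN/dt = -0.5 γ β N², to show how γ < 1 slows the decay of number concentration; the kernel value, concentrations and γ are placeholders and the sectional, size-resolved model of the paper is not reproduced here.

      import numpy as np

      def number_decay(n0, beta, gamma, t):
          """Analytic solution of dN/dt = -0.5 * gamma * beta * N**2
          (monodisperse coagulation with collision efficiency gamma)."""
          return n0 / (1.0 + 0.5 * gamma * beta * n0 * t)

      n0 = 1e17                     # initial number concentration [m^-3], placeholder
      beta = 1e-15                  # collision kernel [m^3 s^-1], placeholder
      t = np.linspace(0, 0.02, 5)   # residence time [s]

      print(number_decay(n0, beta, 1.0, t))   # gamma = 1: every collision coagulates
      print(number_decay(n0, beta, 0.1, t))   # gamma < 1: small particles often bounce apart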

  15. On the Impact Origin of Phobos and Deimos. I. Thermodynamic and Physical Aspects

    NASA Astrophysics Data System (ADS)

    Hyodo, Ryuki; Genda, Hidenori; Charnoz, Sébastien; Rosenblatt, Pascal

    2017-08-01

    Phobos and Deimos are the two small moons of Mars. Recent works have shown that they can accrete within an impact-generated disk. However, the detailed structure and initial thermodynamic properties of the disk are poorly understood. In this paper, we perform high-resolution SPH simulations of the Martian moon-forming giant impact that can also form the Borealis basin. This giant impact heats up the disk material (to around ~2000 K in temperature) with an entropy increase of ~1500 J K⁻¹ kg⁻¹. Thus, the disk material should be mostly molten, though a tiny fraction of disk material (< 5%) would even experience vaporization. Typically, a piece of molten disk material is estimated to be meter sized owing to the fragmentation regulated by its shear velocity and surface tension during the impact process. The disk materials initially have highly eccentric orbits (e ~ 0.6-0.9), and successive collisions between meter-sized fragments at high impact velocity (~1-5 km s⁻¹) can grind them down to ~100 μm sized particles. On the other hand, a tiny amount of vaporized disk material condenses into ~0.1 μm sized grains. Thus, the building blocks of the Martian moons are expected to be a mixture of these different sized particles, from meter-sized down to ~100 μm sized particles and ~0.1 μm sized grains. Our simulations also suggest that the building blocks of Phobos and Deimos contain both impactor and Martian materials (at least 35%), most of which come from the Martian mantle (50-150 km in depth; at least 50%). Our results will give useful information for planning a future sample return mission to Martian moons, such as JAXA's MMX (Martian Moons eXploration) mission.

  16. Characterization-curing-property studies of HBRF 55A resin formulations

    NASA Technical Reports Server (NTRS)

    Pearce, E. M.; Mijovic, J.

    1985-01-01

    Characterization-curing-property investigations on HBRF 55A resin formulations are reported. The initial studies on as-received cured samples cut from a full-size FWC are reviewed. Inadequacies of as-received and aged samples are pointed out and additional electron microscopic evidence is offered. Characterization of the as-received ingredients of the HBRF 55A formulation is described. Specifically, Epon 826, Epon 828, EpiRez 5022, RD-2 and various amines, including Tonox and Tonox 60.40, were characterized. The cure kinetics of various formulations are investigated. Changes in physical/thermal properties (viscosity, specific heat, thermal conductivity and density) during cure are described.

  17. Ground truth crop proportion summaries for US segments, 1976-1979

    NASA Technical Reports Server (NTRS)

    Horvath, R. (Principal Investigator); Rice, D.; Wessling, T.

    1981-01-01

    The original ground truth data were collected, digitized, and registered to LANDSAT data for use in the LACIE and AgRISTARS projects. The numerous ground truth categories were consolidated into fewer classes of crops or crop conditions, and occurrences of these classes were counted for each segment. Tables are presented in which the individual entries are the percentage of total segment area assigned to a given class. The ground truth summaries were prepared from a 20% sample of the scene. An analysis indicates that this sample size provides sufficient accuracy for use of the data in initial segment screening.

  18. Stoichiometry of Cd(S,Se) nanocrystals by anomalous small-angle x-ray scattering

    NASA Astrophysics Data System (ADS)

    Ramos, Aline; Lyon, Olivier; Levelut, Claire

    1995-12-01

    In Cd(S,Se)-doped glasses the optical properties are strongly dependent on the size of the nanocrystals, but can also be largely modified by changes in the crystal stoichiometry; however, information on both stoichiometry and size is difficult to obtain in crystals smaller than 10 nm. The intensity scattered at small angles is classically used to get information about nanoparticle sizes. Moreover, the variation of the amplitude of this intensity with the energy of the x ray - "the anomalous effect" - near the selenium edge is related to stoichiometry. Anomalous small-angle x-ray scattering has been used as a tentative method to get information about stoichiometry in nanocrystals with sizes lower than 10 nm. Experiments have been performed on samples treated for 2 days at temperatures in the range 540-650 °C. The samples treated at temperatures above 580 °C contain crystals with sizes larger than 4 nm. For all these samples the anomalous effect has nearly the same amplitude, and we found the stoichiometry x=0.4 for the CdSxSe1-x nanocrystals. This agrees with previous results obtained by scanning electron microscopy and Raman spectroscopy. The results are also confirmed by measurements of the position of the optical absorption edge and by wide-angle x-ray scattering experiments. For the sample treated at 560 °C, the nanocrystal size is 3 nm and the stoichiometry x=0.6 is deduced from the anomalous effect. For samples treated at lower temperatures the anomalous effect is not observable, indicating an even lower selenium content in the nanocrystals (x≳0.7). We observed differences in the Se content of nanocrystals for different heat treatments of the same initial glass. These results may be very helpful to interpret the change in the optical properties when the temperature of the treatments decreases in the range 560-590 °C. In this temperature range, compositional effects seem to be of the same order of magnitude as the effects of quantum confinement.

  19. Operationalizing hippocampal volume as an enrichment biomarker for amnestic mild cognitive impairment trials: effect of algorithm, test-retest variability, and cut point on trial cost, duration, and sample size.

    PubMed

    Yu, Peng; Sun, Jia; Wolz, Robin; Stephenson, Diane; Brewer, James; Fox, Nick C; Cole, Patricia E; Jack, Clifford R; Hill, Derek L G; Schwarz, Adam J

    2014-04-01

    The objective of this study was to evaluate the effect of computational algorithm, measurement variability, and cut point on hippocampal volume (HCV)-based patient selection for clinical trials in mild cognitive impairment (MCI). We used normal control and amnestic MCI subjects from the Alzheimer's Disease Neuroimaging Initiative 1 (ADNI-1) as normative reference and screening cohorts. We evaluated the enrichment performance of 4 widely used hippocampal segmentation algorithms (FreeSurfer, Hippocampus Multi-Atlas Propagation and Segmentation (HMAPS), Learning Embeddings Atlas Propagation (LEAP), and NeuroQuant) in terms of 2-year changes in Mini-Mental State Examination (MMSE), Alzheimer's Disease Assessment Scale-Cognitive Subscale (ADAS-Cog), and Clinical Dementia Rating Sum of Boxes (CDR-SB). We modeled the implications for sample size, screen-fail rates, and trial cost and duration. HCV-based patient selection yielded reduced sample sizes (by ∼40%-60%) and lower trial costs (by ∼30%-40%) across a wide range of cut points. These results provide a guide to the choice of HCV cut point for amnestic MCI clinical trials, allowing an informed tradeoff between statistical and practical considerations. Copyright © 2014 Elsevier Inc. All rights reserved.
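
    The kind of tradeoff modeled here can be sketched with a standard two-arm sample-size formula plus a screening-cost term. The sketch below is not the study's actual model; the effect sizes, screen-fail rates, and cost figures are illustrative assumptions only.

    ```python
    # Minimal sketch (not the study's model) of the cut-point tradeoff: a stricter
    # HCV cut increases the expected treatment effect / reduces outcome noise but
    # also raises the screen-fail rate. All numbers are illustrative assumptions.
    from scipy.stats import norm

    def n_per_arm(effect, sd, alpha=0.05, power=0.8):
        """Two-sample z-approximation for the required subjects per arm."""
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        return 2 * (z * sd / effect) ** 2

    def trial_numbers(effect, sd, screen_fail_rate,
                      cost_screen=500.0, cost_enrolled=25_000.0):
        n_enrolled = 2 * n_per_arm(effect, sd)            # both arms
        n_screened = n_enrolled / (1 - screen_fail_rate)  # extra subjects screened
        cost = n_screened * cost_screen + n_enrolled * cost_enrolled
        return n_enrolled, n_screened, cost

    # Hypothetical unenriched vs. HCV-enriched scenarios.
    for label, effect, sd, sfr in [("all aMCI", 1.0, 6.0, 0.10),
                                   ("HCV-enriched", 1.5, 6.0, 0.45)]:
        n_enr, n_scr, cost = trial_numbers(effect, sd, sfr)
        print(f"{label:13s} enrolled={n_enr:6.0f} screened={n_scr:6.0f} "
              f"cost=${cost / 1e6:5.1f}M")
    ```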

  20. Pore Formation and Mobility Investigation (PFMI): Description and Initial Analysis of Experiments Conducted aboard the International Space Station

    NASA Technical Reports Server (NTRS)

    Grugel, R. N.; Anilkumar, A. V.; Lee, C. P.

    2003-01-01

    Flow visualization experiments during the controlled directional melt back and re-solidification of succinonitrile (SCN) and SCN-water mixtures were conducted using the Pore Formation and Mobility Investigation (PFMI) apparatus in the glovebox facility (GBX) aboard the International Space Station. The study samples were initially 'cast' on earth under 450 millibar of nitrogen into 1 cm ID glass sample tubes approximately 30 cm in length, containing 6 in situ thermocouples. During the Space experiments, the processing parameters and flow visualization settings are remotely monitored and manipulated from the ground Telescience Center (TSC). The ground solidified sample is first subjected to a unidirectional melt back, generally at 10 microns per second, with a constant temperature gradient ahead of the melting interface. Bubbles of different sizes are seen to initiate at the melt interface and, upon release from the melting solid, translate at different speeds in the temperature field ahead of them before coming to rest. Over a period of time these bubbles dissolve into the melt. The gas-laden liquid is then directionally solidified in a controlled manner, generally starting at a rate of 1 micron/sec. Observation and preliminary analysis of bubble formation and mobility in pure SCN samples during melt back and the subsequent structure resulting during gas generation upon re-solidification are presented and discussed.

  1. Pore Formation and Mobility Investigation (PFMI): Description and Initial Analysis of Experiments Conducted aboard the International Space Station

    NASA Technical Reports Server (NTRS)

    Grugel, R. N.; Anilkumar, A. V.; Lee, C. P.

    2002-01-01

    Flow visualization experiments during the controlled directional melt back and re-solidification of succinonitrile (SCN) and SCN-water mixtures were conducted using the Pore Formation and Mobility Investigation (PFMI) apparatus in the glovebox facility (GBX) aboard the International Space Station. The study samples were initially "cast" on earth under 450 millibar of nitrogen into 1 cm ID glass sample tubes approximately 30 cm in length, containing 6 in situ thermocouples. During the Space experiments, the processing parameters and flow visualization settings are remotely monitored and manipulated from the ground Telescience Center (TSC). The ground solidified sample is first subjected to a unidirectional melt back, generally at 10 microns per second, with a constant temperature gradient ahead of the melting interface. Bubbles of different sizes are seen to initiate at the melt interface and, upon release from the melting solid, translate at different speeds in the temperature field ahead of them before coming to rest. Over a period of time these bubbles dissolve into the melt. The gas-laden liquid is then directionally solidified in a controlled manner, generally starting at a rate of 1 micron/sec. Observation and preliminary analysis of bubble formation and mobility in pure SCN samples during melt back and the subsequent structure resulting during gas generation upon re-solidification are presented and discussed.

  2. Development of Automated Objective Meteorological Techniques.

    DTIC Science & Technology

    1980-11-30

    differences are due largely to the nature and spatial distribution of the atmospheric data chosen as input for the model. The data for initial values and...technique. This report focuses on results of theoretical investigations and data analyses performed by SASC during the period May, 1979 to June, 1980...the sampling period, at a given point in space, the various size particles composing the particle distribution exhibit different velocities from each

  3. Using delimiting surveys to characterize the spatiotemporal dynamics facilitates the management of an invasive non-native insect

    Treesearch

    Patrick C. Tobin; Laura M. Blackburn; Rebecca H. Gray; Christopher T. Lettau; Andrew M. Liebhold; Kenneth F. Raffa

    2013-01-01

    The ability to ascertain abundance and spatial extent of a nascent population of a non-native species can inform management decisions. Following initial detection, delimiting surveys, which involve the use of a finer network of samples around the focal point of a newly detected colony, are often used to quantify colony size, spatial extent, and the location of the...

  4. Aggregate size and structure determination of nanomaterials in physiological media: importance of dynamic evolution

    NASA Astrophysics Data System (ADS)

    Afrooz, A. R. M. Nabiul; Hussain, Saber M.; Saleh, Navid B.

    2014-12-01

    Most in vitro nanotoxicological assays are performed after 24 h of exposure. However, in determining the size and shape effects of nanoparticles in toxicity assays, initial characterization data are generally used to describe the experimental outcome; the dynamic size and structure of aggregates are typically ignored in these studies. This brief communication reports the dynamic evolution of the aggregation characteristics of gold nanoparticles. The study finds that a gradual increase in the aggregate size of gold nanospheres (AuNS) occurs over the first 6 h; beyond this period, the aggregation process deviates from gradual to more abrupt behavior as large networks are formed. The results also show that the aggregated clusters possess unique structural conformations depending on the nominal diameter of the nanoparticles. The differences in fractal dimensions of the AuNS samples likely arise from geometric differences, which cause larger packing propensities for the smaller particles. Both observations can have a profound influence on dosimetry for in vitro nanotoxicity analyses.

  5. Cryochemical modification, activity, and toxicity of dioxidine

    NASA Astrophysics Data System (ADS)

    Vernaya, O. I.; Shabatin, V. P.; Shabatina, T. I.; Khvatov, D. I.; Semenov, A. M.; Yudina, T. P.; Danilov, V. S.

    2017-02-01

    Dioxidine nanoparticles are prepared via cryochemical modification of the pharmacopoeial dioxidine substance. The cryomodified form of dioxidine is characterized by 1H NMR spectroscopy, X-ray diffraction analysis, thermal analysis (TG and DSC), low-temperature argon adsorption, and transmission electron microscopy. It is shown that the cryomodified samples consist of dioxidine nanocrystals 50-300 nm in size, with a crystal structure differing from that of the initial pharmacopoeial substance. The prepared cryomodified dioxidine nanoparticles inhibit the growth of E. coli 52, S. aureus 144, M. cyaneum 98, and B. cereus 9 better than the initial pharmacopoeial substance, and have comparable chronic toxicity.

  6. Investigating the Use of Ultrasound for Evaluating Aging Wiring Insulation

    NASA Technical Reports Server (NTRS)

    Madaras, Eric I.; Anastasi, Robert F.

    2001-01-01

    This paper reviews our initial efforts to investigate the use of ultrasound to evaluate wire insulation. Our initial model was a solid conductor with heat shrink tubing applied. In this model, various wave modes were identified. Subsequently, several aviation classes of wires (MIL-W-81381, MIL-W-22759/34, and MIL-W-22759/87) were measured. The wires represented polyimide and ethylene-tetrafluoroethylene insulations, and combinations of polyimide and fluoropolymer plastics. Wire gages of 12, 16, and 20 AWG sizes were measured. Finally, samples of these wires were subjected to high temperatures for short periods of time to cause the insulation to degrade. Subsequent measurements indicated easily detectable changes.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cole, K.C.; Noel, D.; Hechler, J.J.

    Data are presented on physicochemical tests carried out on room-temperature aged samples of a commercially available carbon-epoxy composite prepreg system. The analytical methods used included Fourier transform IR (FTIR) spectroscopy, reverse-phase liquid chromatography (RPLC), high-speed RPLC, high-performance size-exclusion chromatography, differential scanning calorimetry and thermogravimetry, and pyrolysis/gas chromatography. All data indicated significant changes in these samples due to aging, with the most sensitive indices being those of FTIR and RPLC procedures. Results indicate that the number of unreacted epoxy groups decreased steadily at a rate of 0.34 percent per day, based on the initial amount, and the number of free amine-hardener molecules decreased at a rate of 1.05 percent per day. The amount of initial epoxy-amine reaction product increased significantly over the first 30 days, but then declined, due to further reactions of these to give higher-molecular-weight products. 23 refs.

  8. Further studies on the problems of geomagnetic field intensity determination from archaeological baked clay materials

    NASA Astrophysics Data System (ADS)

    Kostadinova-Avramova, M.; Kovacheva, M.

    2015-10-01

    Archaeological baked clay remains provide valuable information about the geomagnetic field in the historical past, but determination of the geomagnetic field characteristics, especially intensity, is often a difficult task. This study was undertaken to elucidate the reasons for unsuccessful intensity determination experiments at two different Bulgarian archaeological sites (Nessebar, Early Byzantine period, and Malenovo, Early Iron Age). With this aim, artificial clay samples were formed in the laboratory and investigated. The clays used to prepare the artificial samples differed in their initial state: the Nessebar clay had been baked in antiquity, whereas the Malenovo clay was raw, taken from the clay deposit near the site. The artificial samples were repeatedly heated, eight times, in a known magnetic field to 700 °C. X-ray diffraction analyses and rock-magnetic experiments were performed to characterize the mineralogical content and magnetic properties of the initial and laboratory-heated clays. Two different protocols were applied for the intensity determination: the Coe version of the Thellier and Thellier method and the multispecimen parallel differential pTRM protocol. Various combinations of laboratory field strengths and relative orientations between the laboratory field and the carried thermoremanence were used in the Coe experiment. The results indicate that the failure of this experiment is probably related to unfavourable grain sizes of the prevailing magnetic carriers combined with the chosen experimental conditions. The multispecimen parallel differential pTRM protocol in its original form gives excellent results for the artificial samples but failed for the real samples (samples from previously studied kilns at the Nessebar and Malenovo sites). The strong dependence of this method on the homogeneity of the subsamples evidently hinders its implementation in its original form for archaeomaterials, which are often heterogeneous owing to variable heating conditions in different parts of the archaeological structures. The study draws attention to the importance of multiple heating for stabilizing the grain size distribution in baked clay materials and to the need to elucidate this question.

  9. Effects of laser power density and initial grain size in laser shock punching of pure copper foil

    NASA Astrophysics Data System (ADS)

    Zheng, Chao; Zhang, Xiu; Zhang, Yiliang; Ji, Zhong; Luan, Yiguo; Song, Libin

    2018-06-01

    The effects of laser power density and initial grain size on the forming quality of holes in the laser shock punching process were investigated in the present study. Three different initial grain sizes and three levels of laser power density were used, and laser shock punching experiments on T2 copper foil were conducted. Based on the experimental results, the shape accuracy, fracture surface morphology, and microstructures of the punched holes were examined. The results reveal that the initial grain size has a noticeable effect on the forming quality of holes punched by laser shock. The shape accuracy of the punched holes degrades with increasing grain size. As the laser power density is increased, the shape accuracy improves, except when the ratio of foil thickness to initial grain size is approximately equal to 1. In contrast to the fracture surface morphology under quasistatic loading conditions, the fracture surface after laser shock can be divided into three zones: rollover, shearing, and burr. The distribution of these three zones is strongly related to the initial grain size. When the laser power density is increased, the shearing depth does not increase and even diminishes in some cases. There is no obvious change in the microstructures as the laser power density increases. However, when the initial grain size is close to the foil thickness, single-crystal shear deformation may occur, suggesting that the ratio of foil thickness to initial grain size has an important impact on the deformation behavior of metal foil in the laser shock punching process.

  10. Fatigue crack initiation and microcrack propagation in X7091 type aluminum P/M alloys

    NASA Astrophysics Data System (ADS)

    Hirose, S.; Fine, M. E.

    1983-06-01

    Fatigue crack initiation in extruded X7091-type RSP P/M aluminum alloys occurs at grain boundaries at both low and high stresses. By a process of elimination, this grain boundary embrittlement was attributed to Al2O3 particles formed mainly during atomization and segregated to some grain boundaries. It is not due to the small grain size, to Co2Al9, to η precipitates at grain boundaries, nor to a precipitate-free zone. Thermomechanical processing after extrusion of X7091 with 0.8 pct Co was done by Alcoa to produce large recrystallized grains. This resulted in initiation of fatigue cracks at slip bands, and the resistance to initiation of fatigue cracks at low stresses was much greater. Microcrack growth is, however, much faster in the thermomechanically treated samples, as well as in ingot alloys, than in extruded and aged X7091.

  11. Simulation of Powder Layer Deposition in Additive Manufacturing Processes Using the Discrete Element Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herbold, E. B.; Walton, O.; Homel, M. A.

    2015-10-26

    This document serves as a final report on a small effort in which several improvements were added to the LLNL code GEODYN-L to develop Discrete Element Method (DEM) algorithms coupled to Lagrangian Finite Element (FE) solvers to investigate powder-bed formation problems for additive manufacturing. The results from these simulations will be assessed for inclusion as the initial conditions for Direct Metal Laser Sintering (DMLS) simulations performed with ALE3D. The algorithms were written and run on parallel computing platforms at LLNL. The total funding level was 3-4 weeks of an FTE split among two staff scientists and one post-doc. The DEM simulations emulated, as much as was feasible, the physical process of depositing a new layer of powder over a bed of existing powder. The DEM simulations utilized truncated size distributions spanning realistic size ranges, with a size distribution profile consistent with a realistic sample set. A minimum simulation sample size on the order of 40 particles square by 10 particles deep was utilized in these scoping studies in order to evaluate the potential effects of size segregation varying with distance displaced in front of a screed blade. A reasonable method for evaluating the problem was developed and validated. Several simulations were performed to show the viability of the approach. Future investigations will focus on running various simulations investigating powder particle sizing and screen geometries.

  12. Quantitative endoscopy: initial accuracy measurements.

    PubMed

    Truitt, T O; Adelman, R A; Kelly, D H; Willging, J P

    2000-02-01

    The geometric optics of an endoscope can be used to determine the absolute size of an object in an endoscopic field without knowing the actual distance from the object. This study explores the accuracy of a technique that estimates absolute object size from endoscopic images. Quantitative endoscopy involves calibrating a rigid endoscope to produce size estimates from 2 images taken with a known traveled distance between the images. The heights of 12 samples, ranging in size from 0.78 to 11.80 mm, were estimated with this calibrated endoscope. Backup distances of 5 mm and 10 mm were used for comparison. The mean percent error for all estimated measurements when compared with the actual object sizes was 1.12%. The mean errors for 5-mm and 10-mm backup distances were 0.76% and 1.65%, respectively. The mean errors for objects <2 mm and ≥2 mm were 0.94% and 1.18%, respectively. Quantitative endoscopy estimates endoscopic image size to within 5% of the actual object size. This method remains promising for quantitatively evaluating object size from endoscopic images. It does not require knowledge of the absolute distance of the endoscope from the object; rather, it requires only the distance traveled by the endoscope between images.
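
    The two-image principle can be written down directly from a pinhole-style imaging model. The sketch below is a minimal illustration, assuming a per-scope calibration constant k obtained in the calibration step mentioned above; the function name and all numbers are hypothetical.

    ```python
    # Minimal sketch of the two-image size estimate, assuming a pinhole-style
    # model h = k * H / d (h: image height in pixels, H: true height in mm,
    # d: distance in mm, k: per-scope calibration constant). With a backup
    # distance D between the two images, H = D * h1 * h2 / (k * (h1 - h2)).
    def object_height_mm(h1_px, h2_px, backup_mm, k):
        """h1_px: image height at the nearer position; h2_px: after backing up."""
        if h1_px <= h2_px:
            raise ValueError("the image should shrink after backing the scope up")
        return backup_mm * h1_px * h2_px / (k * (h1_px - h2_px))

    # Example: with k = 1200, image heights of 60 px and 40 px across a 10 mm
    # backup correspond to distances of 20 mm and 30 mm and a true height of 1.0 mm.
    print(object_height_mm(60.0, 40.0, 10.0, 1200.0))
    ```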

  13. Influence of Cu-Cr substitution on structural, morphological, electrical and magnetic properties of magnesium ferrite

    NASA Astrophysics Data System (ADS)

    Yonatan Mulushoa, S.; Murali, N.; Tulu Wegayehu, M.; Margarette, S. J.; Samatha, K.

    2018-03-01

    Cu-Cr substituted magnesium ferrite materials (Mg1-xCuxCrxFe2-xO4 with x = 0.0-0.7) have been synthesized by the solid-state reaction method. XRD analysis revealed that the prepared samples are single-phase cubic spinels with a face-centered cubic structure. A significant decrease of ∼41.15 nm in particle size is noted in response to the increase in Cu-Cr substitution level. The room-temperature resistivity increases gradually from 0.553 × 10^5 Ω cm (x = 0.0) to 0.105 × 10^8 Ω cm (x = 0.7). The temperature-dependent DC electrical resistivity of all the samples exhibits semiconductor-like behavior. Cu-Cr doped materials can therefore be suitable for limiting eddy-current losses. VSM results show that both pure and doped magnesium ferrite particles exhibit soft ferrimagnetic behavior at room temperature. The saturation magnetization of the samples decreases from 34.5214 emu/g (x = 0.0) to 18.98 emu/g (x = 0.7). Saturation magnetization, remanence, and coercivity decrease with doping, which may be due to the increase in grain size.

  14. Fully Flexible Docking of Medium Sized Ligand Libraries with RosettaLigand

    PubMed Central

    DeLuca, Samuel; Khar, Karen; Meiler, Jens

    2015-01-01

    RosettaLigand has been successfully used to predict binding poses in protein-small molecule complexes. However, the RosettaLigand docking protocol is comparatively slow in identifying an initial starting pose for the small molecule (ligand) making it unfeasible for use in virtual High Throughput Screening (vHTS). To overcome this limitation, we developed a new sampling approach for placing the ligand in the protein binding site during the initial ‘low-resolution’ docking step. It combines the translational and rotational adjustments to the ligand pose in a single transformation step. The new algorithm is both more accurate and more time-efficient. The docking success rate is improved by 10–15% in a benchmark set of 43 protein/ligand complexes, reducing the number of models that typically need to be generated from 1000 to 150. The average time to generate a model is reduced from 50 seconds to 10 seconds. As a result we observe an effective 30-fold speed increase, making RosettaLigand appropriate for docking medium sized ligand libraries. We demonstrate that this improved initial placement of the ligand is critical for successful prediction of an accurate binding position in the ‘high-resolution’ full atom refinement step. PMID:26207742
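
    The "single transformation step" idea, perturbing rotation and translation together rather than in separate moves, can be illustrated as below. This is not RosettaLigand code; it is a generic sketch using SciPy, with step sizes chosen as arbitrary assumptions.

    ```python
    # Generic sketch (not RosettaLigand code) of a low-resolution move that
    # perturbs the ligand's rotation and translation in a single step; the step
    # sizes are arbitrary assumptions.
    import numpy as np
    from scipy.spatial.transform import Rotation

    rng = np.random.default_rng(0)

    def combined_move(coords, max_trans=0.5, max_angle_deg=30.0):
        """Apply one random rigid-body perturbation about the ligand centroid."""
        centroid = coords.mean(axis=0)
        axis = rng.normal(size=3)
        axis /= np.linalg.norm(axis)
        angle = np.deg2rad(rng.uniform(-max_angle_deg, max_angle_deg))
        R = Rotation.from_rotvec(angle * axis).as_matrix()
        t = rng.uniform(-max_trans, max_trans, size=3)
        return (coords - centroid) @ R.T + centroid + t

    ligand = rng.normal(size=(25, 3))   # stand-in ligand coordinates (angstroms)
    candidate_pose = combined_move(ligand)
    print(candidate_pose.shape)
    ```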

  15. Is Political Activism on Social Media an initiator of Psychological Stress?

    PubMed

    Hisam, Aliya; Safoor, Iqra; Khurshid, Nawal; Aslam, Aakash; Zaid, Farhan; Muzaffar, Ayesha

    2017-01-01

    To determine the association of psychological stress with political activism on social networking sites (SNS) in adults, and to find the association of psychological stress and political activism with age, gender, and occupational status. A descriptive cross-sectional study of 8 months (Aug 2014 to March 2015) was conducted on young adults aged 20-40 years from different universities in Rawalpindi, Pakistan. Closed-ended standardized questionnaires (the Cohen Perceived Stress Scale-10) were distributed via non-probability convenience sampling to a total sample of 237. The sample size was calculated using the WHO sample size calculator, and the data were analyzed in STATA version 12. The mean age of participants was 21.06±1.425 years. Of the 237 participants, 150 (63.3%) were males and 87 (36.7%) females. Regarding occupation, 123 (51.9%) were military cadets, 8 (3.4%) consultants, 47 (19.8%) medical officers, 3 (1.3%) postgraduate students, and 56 (23.6%) MBBS students. A significant association of occupation was established with both political activism and psychological stress (p=0.4 and p=0.002, respectively). Of the 237 individuals, 91 (38.4%) were stressed and 146 (61.6%) were not. Within the whole sample, 23 (9.7%) were political activists on SNS. Of these 23 politically active individuals, 15 (65.2%) were stressed and 8 (34.7%) were not. A significant association between stress and political activism was established (p=0.005). Political activism via social networking sites plays a significant role in adults' mental health, in terms of stress, across different occupations.
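
    For context, a minimal sketch of the standard single-proportion formula that calculators such as the WHO tool implement is given below; the assumed prevalence and margin of error are illustrative, not values reported by the study.

    ```python
    # Sketch of the usual single-proportion sample-size formula,
    # n = z^2 * p * (1 - p) / d^2; the prevalence p and margin d below are
    # illustrative assumptions, not values reported by the study.
    from math import ceil
    from scipy.stats import norm

    def sample_size(p=0.5, margin=0.065, confidence=0.95):
        z = norm.ppf(1 - (1 - confidence) / 2)
        return ceil(z ** 2 * p * (1 - p) / margin ** 2)

    print(sample_size())   # about 228 under these assumed inputs
    ```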

  16. Multiscale Simulation of Porous Ceramics Based on Movable Cellular Automaton Method

    NASA Astrophysics Data System (ADS)

    Smolin, A.; Smolin, I.; Eremina, G.; Smolina, I.

    2017-10-01

    The paper presents a model for simulating the mechanical behaviour of multiscale porous ceramics based on the movable cellular automaton method, a novel particle method in the computational mechanics of solids. The initial scale of the proposed approach corresponds to the characteristic size of the smallest pores in the ceramics. At this scale, we model uniaxial compression of several representative samples with an explicit account of pores of the same size but with random, unique positions in space. As a result, we obtain the average values of Young's modulus and strength, as well as the parameters of the Weibull distribution of these properties at the current scale level. These data allow us to describe the material behaviour at the next scale level, where only the larger pores are considered explicitly, while the influence of the small pores is included via the effective properties determined at the previous scale level. If the pore size distribution function of the material has N maxima, we need to perform computations for N - 1 levels in order to get the properties from the lowest scale up to the macroscale step by step. The proposed approach was applied to modelling zirconia ceramics with a bimodal pore size distribution. The obtained results show correct behaviour of the model sample at the macroscale.
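
    The scale-transfer step described above, turning an ensemble of representative-sample results into effective properties plus Weibull scatter parameters for the next level, can be sketched as follows. The stiffness and strength values are synthetic stand-ins, and the two-parameter Weibull fit is an assumption made for illustration.

    ```python
    # Sketch of the scale-transfer step: summarize many representative-sample
    # results at one scale as an effective stiffness plus two-parameter Weibull
    # statistics for strength, to be reused at the next, coarser scale.
    # The synthetic values below are stand-ins, not results from the paper.
    import numpy as np
    from scipy.stats import weibull_min

    rng = np.random.default_rng(1)
    youngs_moduli = rng.normal(180.0, 8.0, size=50)          # GPa, one per sample
    strengths = weibull_min.rvs(c=9.0, scale=420.0,          # MPa, one per sample
                                size=50, random_state=2)

    E_eff = youngs_moduli.mean()
    shape, loc, scale = weibull_min.fit(strengths, floc=0)   # location fixed at 0
    print(f"effective E = {E_eff:.1f} GPa, Weibull modulus m = {shape:.1f}, "
          f"characteristic strength = {scale:.0f} MPa")
    ```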

  17. Effect of initial grain size on inhomogeneous plastic deformation and twinning behavior in high manganese austenitic steel with a polycrystalline microstructure

    NASA Astrophysics Data System (ADS)

    Ueji, R.; Tsuchida, N.; Harada, K.; Takaki, K.; Fujii, H.

    2015-08-01

    The grain size effect on deformation twinning in a high manganese austenitic steel, a so-called TWIP (twinning-induced plasticity) steel, was studied in order to understand how to control deformation twinning. The 31wt%Mn-3%Al-3%Si steel was cold rolled and annealed at various temperatures to obtain fully recrystallized structures with different mean grain sizes. The annealed sheets were examined by room-temperature tensile tests at a strain rate of 10^-4/s. The coarse-grained sample (grain size: 49.6 μm) showed many deformation twins, and the deformation twinning was preferentially found in grains whose tensile axis was nearly parallel to [111]. On the other hand, the sample with finer grains (1.8 μm) had few grains with twinning even after the tensile deformation. Electron backscatter diffraction (EBSD) measurements clarified the relationship between the anisotropy of deformation twinning and that of inhomogeneous plastic deformation. Based on the EBSD analysis, the mechanism of the suppression of deformation twinning by grain refinement is discussed in terms of the competition between the slip system governed by a grain boundary and that activated by the macroscopic load.

  18. Concentration and purification of HIV-1 virions by microfluidic separation of superparamagnetic nanoparticles

    PubMed Central

    Chen, Grace Dongqing; Alberts, Catharina Johanna

    2009-01-01

    The low concentration and complex sample matrix of many clinical and environmental viral samples presents a significant challenge in the development of low cost, point-of-care viral assays. To address this problem, we investigated the use of a microfluidic passive magnetic separator combined with on-chip mixer to both purify and concentrate whole particle HIV-1 virions. Virus-containing plasma samples are first mixed to allow specific binding of the viral particles with antibody-conjugated superparamagnetic nanoparticles, and several passive mixer geometries were assessed for their mixing efficiencies. The virus-nanoparticle complexes are then separated from the plasma in a novel magnetic separation chamber, where packed micron-sized ferromagnetic particles serve as high magnetic gradient concentrators for an externally applied magnetic field. Thereafter, a viral lysis buffer was flowed through the chip and the released HIV proteins were assayed off-chip. Viral protein extraction efficiencies of 62% and 45% were achieved at 10 uL/min and 30 uL/min throughputs, respectively. More importantly, an 80-fold concentration was observed for an initial sample volume of 1 mL, and a 44-fold concentration for an initial sample volume of 0.5 mL. The system is broadly applicable to microscale sample preparation of any viral sample and can be used for nucleic acid extraction as well as 40- to 80-fold enrichment of target viruses. PMID:19954210

  19. Bending fatigue study of nickel-titanium Gates Glidden drills.

    PubMed

    Luebke, Neill H; Brantley, William A; Alapati, Satish B; Mitchell, John C; Lausten, Leonard L; Daehn, Glenn S

    2005-07-01

    ProFile nickel-titanium Gates Glidden drills were tested in bending fatigue to simulate clinical conditions. Ten samples each in sizes #1 through #6 were placed in a device that deflected the drill head 4 mm from the axis. The drill head was placed inside a ball bearing fixture, which allowed it to run free at 4000 rpm, and the total number of revolutions was recorded until failure. Fracture surfaces were examined with a scanning electron microscope to determine the initiation site and nature of the failure process. Mean +/- SD for the number of revolutions to failure for the drill sizes were: #1: 1826.3 +/- 542.5; #2: 5395.7 +/- 2581.5; #3: 694.4 +/- 516.8; #4: 261.0 +/- 138.0; #5: 49.6 +/- 14.9; #6: 195.9 +/- 78.5. All drills failed in a ductile mode, and fracture initiation sites appeared to be coincident with machining grooves or other flaws, suggesting the need for improved manufacturing procedures.

  20. Enhancing Psychosis-Spectrum Nosology Through an International Data Sharing Initiative.

    PubMed

    Docherty, Anna R; Fonseca-Pedrero, Eduardo; Debbané, Martin; Chan, Raymond C K; Linscott, Richard J; Jonas, Katherine G; Cicero, David C; Green, Melissa J; Simms, Leonard J; Mason, Oliver; Watson, David; Ettinger, Ulrich; Waszczuk, Monika; Rapp, Alexander; Grant, Phillip; Kotov, Roman; DeYoung, Colin G; Ruggero, Camilo J; Eaton, Nicolas R; Krueger, Robert F; Patrick, Christopher; Hopwood, Christopher; O'Neill, F Anthony; Zald, David H; Conway, Christopher C; Adkins, Daniel E; Waldman, Irwin D; van Os, Jim; Sullivan, Patrick F; Anderson, John S; Shabalin, Andrey A; Sponheim, Scott R; Taylor, Stephan F; Grazioplene, Rachel G; Bacanu, Silviu A; Bigdeli, Tim B; Haenschel, Corinna; Malaspina, Dolores; Gooding, Diane C; Nicodemus, Kristin; Schultze-Lutter, Frauke; Barrantes-Vidal, Neus; Mohr, Christine; Carpenter, William T; Cohen, Alex S

    2018-05-16

    The latent structure of schizotypy and psychosis-spectrum symptoms remains poorly understood. Furthermore, molecular genetic substrates are poorly defined, largely due to the substantial resources required to collect rich phenotypic data across diverse populations. Sample sizes of phenotypic studies are often insufficient for advanced structural equation modeling approaches. In the last 50 years, efforts in both psychiatry and psychological science have moved toward (1) a dimensional model of psychopathology (eg, the current Hierarchical Taxonomy of Psychopathology [HiTOP] initiative), (2) an integration of methods and measures across traits and units of analysis (eg, the RDoC initiative), and (3) powerful, impactful study designs maximizing sample size to detect subtle genomic variation relating to complex traits (the Psychiatric Genomics Consortium [PGC]). These movements are important to the future study of the psychosis spectrum, and to resolving heterogeneity with respect to instrument and population. The International Consortium of Schizotypy Research is composed of over 40 laboratories in 12 countries, and to date, members have compiled a body of schizotypy- and psychosis-related phenotype data from more than 30000 individuals. It has become apparent that compiling data into a protected, relational database and crowdsourcing analytic and data science expertise will result in significant enhancement of current research on the structure and biological substrates of the psychosis spectrum. The authors present a data-sharing infrastructure similar to that of the PGC, and a resource-sharing infrastructure similar to that of HiTOP. This report details the rationale and benefits of the phenotypic data collective and presents an open invitation for participation.

  1. Sensitivity of Beam Parameters to a Station C Solenoid Scan on Axis II

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schulze, Martin E.

    Magnet scans are a standard technique for determining beam parameters in accelerators. Beam parameters are inferred from spot size measurements using a model of the beam optics. The sensitivity of the measured beam spot size to the beam parameters is investigated for typical DARHT Axis II beam energies and currents. In a typical S4 solenoid scan, the downstream transport is tuned to achieve a round beam at Station C with an envelope radius of about 1.5 cm with a very small divergence with S4 off. The typical beam energy and current are 16.0 MeV and 1.625 kA. Figures 1-3 show the sensitivity of the beam size at Station C to the emittance, initial radius and initial angle respectively. To better understand the relative sensitivity of the beam size to the emittance, initial radius and initial angle, linear regressions were performed for each parameter as a function of the S4 setting. The results are shown in Figure 4. The measured slope was scaled to have a maximum value of 1 in order to present the relative sensitivities in a single plot. Figure 4 clearly shows the beam size at the minimum of the S4 scan is most sensitive to emittance and relatively insensitive to initial radius and angle as expected. The beam emittance is also very sensitive to the beam size of the converging beam and becomes insensitive to the beam size of the diverging beam. Measurements of the beam size of the diverging beam provide the greatest sensitivity to the initial beam radius and to a lesser extent the initial beam angle. The converging beam size is initially very sensitive to the emittance and initial angle at low S4 currents. As the S4 current is increased the sensitivity to the emittance remains strong while the sensitivity to the initial angle diminishes.
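
    The scaled-slope comparison can be reproduced in a few lines: perturb each beam parameter about its nominal value, regress the predicted spot size on that parameter at every S4 setting, and normalize the slopes to a maximum of 1. The envelope function below is a toy placeholder, not the DARHT Axis II optics model.

    ```python
    # Sketch of the sensitivity analysis: regress the predicted spot size on each
    # beam parameter at every S4 setting and rescale the slopes to a maximum of 1.
    # 'spot_size' is a toy placeholder, not the DARHT Axis II envelope model.
    import numpy as np

    def spot_size(s4, emittance, r0, rp0):
        # Placeholder stand-in for the beam-envelope calculation.
        return np.hypot(r0 + rp0 * s4, emittance * (1.0 + 0.05 * s4))

    s4_settings = np.linspace(0.0, 10.0, 21)
    nominal = dict(emittance=1.0, r0=1.5, rp0=-0.02)

    slopes = {}
    for name in nominal:
        values = nominal[name] * (1.0 + np.linspace(-0.1, 0.1, 5))   # +/- 10 %
        slopes[name] = np.array([
            np.polyfit(values,
                       [spot_size(s4, **{**nominal, name: v}) for v in values],
                       1)[0]
            for s4 in s4_settings])

    for name, s in slopes.items():
        print(name, np.round(np.abs(s) / np.abs(s).max(), 2))  # relative sensitivity
    ```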

  2. Chi-Squared Test of Fit and Sample Size-A Comparison between a Random Sample Approach and a Chi-Square Value Adjustment Method.

    PubMed

    Bergh, Daniel

    2015-01-01

    Chi-square statistics are commonly used for tests of fit of measurement models. Chi-square is also sensitive to sample size, which is why several approaches to handling large samples in test-of-fit analysis have been developed. One strategy to handle the sample size problem may be to adjust the sample size in the analysis of fit. An alternative is to adopt a random sample approach. The purpose of this study was to analyze and to compare these two strategies using simulated data. Given an original sample size of 21,000, for reductions of sample size down to the order of 5,000 the adjusted sample size function works as well as the random sample approach. In contrast, when applying adjustments to sample sizes of lower order, the adjustment function is less effective at approximating the chi-square value for an actual random sample of the relevant size. Hence, the fit is exaggerated and the misfit underestimated when using the adjusted sample size function. Although there are big differences in chi-square values between the two approaches at lower sample sizes, the inferences based on the p-values may be the same.
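
    A minimal sketch of the two strategies compared above is given below, assuming the common proportional rescaling chi2_adj = chi2 * (n_target - 1) / (n - 1); a plain goodness-of-fit statistic stands in for the measurement-model fit statistic analyzed in the article.

    ```python
    # Sketch of the two strategies: (a) rescale the full-sample statistic to a
    # target n via chi2_adj = chi2 * (n_target - 1) / (n - 1), a common
    # proportional adjustment, and (b) recompute the statistic on a random
    # subsample of size n_target. A plain goodness-of-fit test stands in for
    # the measurement-model fit statistic analyzed in the article.
    import numpy as np
    from scipy.stats import chisquare

    rng = np.random.default_rng(42)
    n_full, n_target = 21_000, 5_000
    data = rng.choice(4, size=n_full, p=[0.26, 0.25, 0.25, 0.24])   # mild misfit

    def chi2_of(sample):
        observed = np.bincount(sample, minlength=4)
        return chisquare(observed, f_exp=np.full(4, len(sample) / 4)).statistic

    chi2_full = chi2_of(data)
    chi2_adjusted = chi2_full * (n_target - 1) / (n_full - 1)            # (a)
    chi2_subsample = chi2_of(rng.choice(data, n_target, replace=False))  # (b)
    print(chi2_full, chi2_adjusted, chi2_subsample)
    ```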

  3. Fully automatic characterization and data collection from crystals of biological macromolecules.

    PubMed

    Svensson, Olof; Malbet-Monaco, Stéphanie; Popov, Alexander; Nurizzo, Didier; Bowler, Matthew W

    2015-08-01

    Considerable effort is dedicated to evaluating macromolecular crystals at synchrotron sources, even for well established and robust systems. Much of this work is repetitive, and the time spent could be better invested in the interpretation of the results. In order to decrease the need for manual intervention in the most repetitive steps of structural biology projects, initial screening and data collection, a fully automatic system has been developed to mount, locate, centre to the optimal diffraction volume, characterize and, if possible, collect data from multiple cryocooled crystals. Using the capabilities of pixel-array detectors, the system is as fast as a human operator, taking an average of 6 min per sample depending on the sample size and the level of characterization required. Using a fast X-ray-based routine, samples are located and centred systematically at the position of highest diffraction signal and important parameters for sample characterization, such as flux, beam size and crystal volume, are automatically taken into account, ensuring the calculation of optimal data-collection strategies. The system is now in operation at the new ESRF beamline MASSIF-1 and has been used by both industrial and academic users for many different sample types, including crystals of less than 20 µm in the smallest dimension. To date, over 8000 samples have been evaluated on MASSIF-1 without any human intervention.

  4. Application of magnetic techniques to lateral hydrocarbon migration - Lower Tertiary reservoir systems, UK North Sea

    NASA Astrophysics Data System (ADS)

    Badejo, S. A.; Muxworthy, A. R.; Fraser, A.

    2017-12-01

    Pyrolysis experiments show that magnetic minerals can be produced inorganically during oil formation in the 'oil-kitchen'. Here we try to identify a magnetic proxy that can be used to trace hydrocarbon migration pathways by determining the morphology, abundance, mineralogy and size of the magnetic minerals present in reservoirs. We address this by examining the Tay formation in the Western Central Graben in the North Sea. The Tertiary sandstones are undeformed and laterally continuous in the form of an east-west trending channel, facilitating long distance updip migration of oil and gas to the west. We have collected 179 samples from 20 oil-stained wells and 15 samples from three dry wells from the British Geological Survey Core Repository. Samples were selected based on geological observations (water-wet sandstone, oil-stained sandstone, siltstones and shale). The magnetic properties of the samples were determined using room-temperature measurements on a Vibrating Sample Magnetometer (VSM), low-temperature (0-300K) measurements on a Magnetic Property Measurement System (MPMS) and high-temperature (300-973K) measurements on a Kappabridge susceptibility meter. We identified magnetite, pyrrhotite, pyrite and siderite in the samples. An increasing presence of ferrimagnetic iron sulphides is noticed along the known hydrocarbon migration pathway. Our initial results suggest mineralogy coupled with changes in grain size are possible proxies for hydrocarbon migration.

  5. Smoking initiation among young adults in the United States and Canada, 1998-2010: a systematic review.

    PubMed

    Freedman, Kit S; Nelson, Nanette M; Feldman, Laura L

    2012-01-01

    Young adults have the highest smoking rate of any age group in the United States and Canada, and recent data indicate that they often initiate smoking as young adults. The objective of this study was to systematically review peer-reviewed articles on cigarette smoking initiation and effective prevention efforts among young adults. We searched 5 databases for research articles published in English between 1998 and 2010 on smoking initiation among young adults (aged 18-25) living in the United States or Canada. We extracted the following data from each study selected: the measure of initiation used, age range of initiation, age range of study population, data source, target population, sampling method, and sample size. We summarized the primary findings of each study according to 3 research questions and categories of data (eg, sociodemographic) that emerged during the data extraction process. Of 1,072 identified studies, we found 27 articles that met our search criteria, but several included a larger age range of initiation (eg, 18-30, 18-36) than we initially intended to include. Disparities in young adult smoking initiation existed according to sex, race, and educational attainment. The use of alcohol and illegal drugs was associated with smoking initiation. The risk of smoking initiation among young adults increased under the following circumstances: exposure to smoking, boredom or stress while serving in the military, attending tobacco-sponsored social events while in college, and exposure to social norms and perceptions that encourage smoking. Effective prevention efforts include exposure to counter-marketing, denormalization campaigns, taxation, and the presence of smoke-free policies. Much remains to be learned about young adult smoking initiation, particularly among young adults in the straight-to-work population. Dissimilar measures of smoking initiation limit our knowledge about smoking initiation among young adults. We recommend developing a standardized measure of initiation that indicates progression to regular established smoking.

  6. Suitability of river delta sediment as proppant, Missouri and Niobrara Rivers, Nebraska and South Dakota, 2015

    USGS Publications Warehouse

    Zelt, Ronald B.; Hobza, Christopher M.; Burton, Bethany L.; Schaepe, Nathaniel J.; Piatak, Nadine

    2017-11-16

    Sediment management is a challenge faced by reservoir managers who have several potential options, including dredging, for mitigation of storage capacity lost to sedimentation. As sediment is removed from reservoir storage, potential use of the sediment for socioeconomic or ecological benefit could potentially defray some costs of its removal. Rivers that transport a sandy sediment load will deposit the sand load along a reservoir-headwaters reach where the current of the river slackens progressively as its bed approaches and then descends below the reservoir water level. Given a rare combination of factors, a reservoir deposit of alluvial sand has potential to be suitable for use as proppant for hydraulic fracturing in unconventional oil and gas development. In 2015, the U.S. Geological Survey began a program of researching potential sources of proppant sand from reservoirs, with an initial focus on the Missouri River subbasins that receive sand loads from the Nebraska Sand Hills. This report documents the methods and results of assessments of the suitability of river delta sediment as proppant for a pilot study area in the delta headwaters of Lewis and Clark Lake, Nebraska and South Dakota. Results from surface-geophysical surveys of electrical resistivity guided borings to collect 3.7-meter-long cores at 25 sites on delta sandbars using the direct-push method to recover duplicate, 3.8-centimeter-diameter cores in April 2015. In addition, the U.S. Geological Survey collected samples of upstream sand sources in the lower Niobrara River valley. At the laboratory, samples were dried, weighed, washed, dried, and weighed again. Exploratory analysis of natural sand for determining its suitability as a proppant involved application of a modified subset of the standard protocols known as American Petroleum Institute (API) Recommended Practice (RP) 19C. The RP19C methods were not intended for exploration-stage evaluation of raw materials. Results for the washed samples are not directly applicable to evaluations of suitability for use as fracture sand because, except for particle-size distribution, the API-recommended practices for assessing proppant properties (sphericity, roundness, bulk density, and crush resistance) require testing of specific proppant size classes. An optical imaging particle-size analyzer was used to make measurements of particle-size distribution and particle shape. Measured samples were sieved to separate the dominant-size fraction, and the separated subsample was further tested for roundness, sphericity, bulk density, and crush resistance. For the bulk washed samples collected from the Missouri River delta, the geometric mean size averaged 0.27 millimeters (mm), 80 percent of the samples were predominantly sand in the API 40/70 size class, and 17 percent were predominantly sand in the API 70/140 size class. Distributions of geometric mean size among the four sandbar complexes were similar, but samples collected from sandbar complex B were slightly coarser sand than those from the other three complexes. The average geometric mean sizes among the four sandbar complexes ranged only from 0.26 to 0.30 mm. For 22 main-stem sampling locations along the lower Niobrara River, geometric mean size averaged 0.26 mm, an average of 61 percent was sand in the API 40/70 size class, and 28 percent was sand in the API 70/140 size class.
    Average composition for lower Niobrara River samples was 48 percent medium sand, 37 percent fine sand, and about 7 percent each very fine sand and coarse sand fractions. On average, samples were moderately well sorted. Particle shape and strength were assessed for the dominant-size class of each sample. For proppant strength, crush resistance was tested at a predetermined level of stress (34.5 megapascals [MPa], or 5,000 pounds-force per square inch). To meet the API minimum requirement for proppant, after the crush test not more than 10 percent of the tested sample should be finer than the precrush dominant-size class. For particle shape, all samples surpassed the recommended minimum criteria for sphericity and roundness, with most samples being well-rounded. For proppant strength, of 57 crush-resistance tested Missouri River delta samples of 40/70-sized sand, 23 (40 percent) were interpreted as meeting the minimum criterion at 34.5 MPa, or 5,000 pounds-force per square inch. Of 12 tested samples of 70/140-sized sand, 9 (75 percent) of the Missouri River delta samples had less than 10 percent fines by volume following crush testing, achieving the minimum criterion at 34.5 MPa. Crush resistance for delta samples was strongest at sandbar complex A, where 67 percent of tested samples met the 10-percent fines criterion at the 34.5-MPa threshold. This frequency was higher than was indicated by samples from sandbar complexes B, C, and D that had rates of 50, 46, and 42 percent, respectively. The group of sandbar complex A samples also contained the largest percentages of samples dominated by the API 70/140 size class, which overall had a higher percentage of samples meeting the minimum criterion compared to samples dominated by coarser size classes; however, samples from sandbar complex A that had the API 40/70 size class tested also had a higher rate for meeting the minimum criterion (57 percent) than did samples from sandbar complexes B, C, and D (50, 43, and 40 percent, respectively). For samples collected along the lower Niobrara River, of the 25 tested samples of 40/70-sized sand, 9 samples passed the API minimum criterion at 34.5 MPa, but only 3 samples passed the more-stringent criterion of 8 percent postcrush fines. All four tested samples of 70/140 sand passed the minimum criterion at 34.5 MPa, with postcrush fines percentage of at most 4.1 percent. For two reaches of the lower Niobrara River, where hydraulic sorting was energized artificially by the hydraulic head drop at and immediately downstream from Spencer Dam, suitability of channel deposits for potential use as fracture sand was confirmed by test results. All reach A washed samples were well-rounded and had sphericity scores above 0.65, and samples for 80 percent of sampled locations met the crush-resistance criterion at the 34.5-MPa stress level. A conservative lower-bound estimate of sand volume in the reach A deposits was about 86,000 cubic meters. All reach B samples were well-rounded but sphericity averaged 0.63, a little less than the average for upstream reaches A and SP. All four samples tested passed the crush-resistance test at 34.5 MPa.
    Of three reach B sandbars, two had no more than 3 percent fines after the crush test, surpassing more stringent criteria for crush resistance that accept a maximum of 6 percent fines following the crush test for the API 70/140 size class. Relative to the crush-resistance test results for the API 40/70 size fraction of two samples of mine output from Loup River settling-basin dredge spoils near Genoa, Nebr., four of five reach A sample locations compared favorably. The four samples had increases in fines composition of 1.6–5.9 percentage points, whereas fines in the two mine-output samples increased by an average 6.8 percentage points.
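
    The screening criterion applied throughout these results can be condensed into a small helper; the fines limits mirror the criteria quoted in the text (10 percent minimum, 8 or 6 percent for the stricter variants), while the sample names and values below are invented.

    ```python
    # Sketch of the crush-resistance screening criterion described above: a sample
    # passes if the volume percent finer than the precrush dominant size class,
    # measured after crushing at 34.5 MPa, stays at or below the fines limit
    # (10 percent minimum criterion; 8 or 6 percent for the stricter variants).
    # The sample names and fines values are invented.
    def passes_crush_test(postcrush_fines_pct, fines_limit_pct=10.0):
        return postcrush_fines_pct <= fines_limit_pct

    samples = {"reach A, site 3": 5.9, "reach B, bar 2": 3.0, "delta C-7": 12.4}
    for name, fines in samples.items():
        print(name, "pass" if passes_crush_test(fines) else "fail")
    ```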

  7. The efficacy of respondent-driven sampling for the health assessment of minority populations.

    PubMed

    Badowski, Grazyna; Somera, Lilnabeth P; Simsiman, Brayan; Lee, Hye-Ryeon; Cassel, Kevin; Yamanaka, Alisha; Ren, JunHao

    2017-10-01

    Respondent-driven sampling (RDS) is a relatively new network sampling technique typically employed for hard-to-reach populations. As in snowball sampling, initial respondents or "seeds" recruit additional respondents from their network of friends. Under certain assumptions, the method promises to produce a sample independent of the biases that may have been introduced by the non-random choice of "seeds." We conducted a survey on health communication in Guam's general population using the RDS method, the first survey that has utilized this methodology in Guam. It was conducted in hopes of identifying a cost-efficient non-probability sampling strategy that could generate reasonable population estimates for both minority and general populations. RDS data were collected in Guam in 2013 (n=511) and population estimates were compared with 2012 BRFSS data (n=2031) and the 2010 census data. The estimates were calculated using the unweighted RDS sample and the weighted sample using RDS inference methods and compared with known population characteristics. The sample size was reached in 23 days, providing evidence that the RDS method is a viable, cost-effective data collection method, which can provide reasonable population estimates. However, the results also suggest that the RDS inference methods used to reduce bias, based on self-reported estimates of network sizes, may not always work. Caution is needed when interpreting RDS study findings. For a more diverse sample, data collection should not be conducted in just one location. Fewer questions about network estimates should be asked, and more careful consideration should be given to the kind of incentives offered to participants. Copyright © 2017. Published by Elsevier Ltd.
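
    The degree-weighted inference the authors caution about is, in its most common form (the RDS-II or Volz-Heckathorn estimator), an inverse-network-size weighting; the sketch below uses made-up respondent data rather than the Guam survey's.

    ```python
    # Sketch of the RDS-II (Volz-Heckathorn) estimator: respondents are weighted
    # by the inverse of their self-reported network size so that well-connected,
    # more easily recruited people count less. The data below are made up.
    import numpy as np

    degrees = np.array([12, 3, 45, 8, 20, 5, 60, 10])   # self-reported network sizes
    has_trait = np.array([1, 1, 0, 1, 0, 1, 0, 0])      # e.g., a health indicator

    weights = 1.0 / degrees
    rds_ii = np.sum(weights * has_trait) / np.sum(weights)
    print(f"naive {has_trait.mean():.2f} vs RDS-II {rds_ii:.2f}")
    ```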

  8. On the Kaolinite Floc Size at the Steady State of Flocculation in a Turbulent Flow

    PubMed Central

    Zhu, Zhongfan; Wang, Hongrui; Yu, Jingshan; Dou, Jie

    2016-01-01

    The flocculation of cohesive fine-grained sediment plays an important role in the transport characteristics of pollutants and nutrients absorbed on the surface of sediment in estuarine and coastal waters through the complex processes of sediment transport, deposition, resuspension and consolidation. Many laboratory experiments have been carried out to investigate the influence of different flow shear conditions on the floc size at the steady state of flocculation in the shear flow. Most of these experiments reported that the floc size decreases with increasing shear stresses and used a power law to express this dependence. In this study, we performed a Couette-flow experiment to measure the size of the kaolinite floc through sampling observation and an image analysis system at the steady state of flocculation under six flow shear conditions. The results show that the negative correlation of the floc size on the flow shear occurs only at high shear conditions, whereas at low shear conditions, the floc size increases with increasing turbulent shear stresses regardless of electrolyte conditions. Increasing electrolyte conditions and the initial particle concentration could lead to a larger steady-state floc size. PMID:26901652
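
    The power-law dependence referred to above, d = a * G^(-b) on the decreasing (high-shear) branch, is usually recovered by a straight-line fit in log-log space; the data points below are invented for illustration.

    ```python
    # Sketch of fitting the commonly reported power law d = a * G**(-b) between
    # steady-state floc size d and shear rate G by linear regression in log-log
    # space; the data points are invented for illustration.
    import numpy as np

    G = np.array([10.0, 20.0, 40.0, 80.0, 160.0])       # shear rate, 1/s
    d = np.array([310.0, 240.0, 180.0, 140.0, 105.0])   # floc size, micrometers

    slope, intercept = np.polyfit(np.log(G), np.log(d), 1)
    a, b = np.exp(intercept), -slope
    print(f"d ~ {a:.0f} * G^(-{b:.2f}) micrometers")
    ```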

  9. On the Kaolinite Floc Size at the Steady State of Flocculation in a Turbulent Flow.

    PubMed

    Zhu, Zhongfan; Wang, Hongrui; Yu, Jingshan; Dou, Jie

    2016-01-01

    The flocculation of cohesive fine-grained sediment plays an important role in the transport characteristics of pollutants and nutrients absorbed on the surface of sediment in estuarine and coastal waters through the complex processes of sediment transport, deposition, resuspension and consolidation. Many laboratory experiments have been carried out to investigate the influence of different flow shear conditions on the floc size at the steady state of flocculation in the shear flow. Most of these experiments reported that the floc size decreases with increasing shear stresses and used a power law to express this dependence. In this study, we performed a Couette-flow experiment to measure the size of the kaolinite floc through sampling observation and an image analysis system at the steady state of flocculation under six flow shear conditions. The results show that the negative correlation of the floc size on the flow shear occurs only at high shear conditions, whereas at low shear conditions, the floc size increases with increasing turbulent shear stresses regardless of electrolyte conditions. Increasing electrolyte conditions and the initial particle concentration could lead to a larger steady-state floc size.

  10. Appraising the Corporate Sustainability Reports - Text Mining and Multi-Discriminatory Analysis

    NASA Astrophysics Data System (ADS)

    Modapothala, J. R.; Issac, B.; Jayamani, E.

    The voluntary disclosure of sustainability reports by companies attracts wider stakeholder groups. Diversity in these reports poses a challenge to users of the information and to regulators. This study appraises corporate sustainability reports against the GRI (Global Reporting Initiative) guidelines, the most widely accepted and used, across all industrial sectors. Text mining is adopted to carry out the initial analysis on a large sample of 2650 reports. Statistical analyses were performed for further investigation. The results indicate that the disclosures made by companies differ across industrial sectors. Multivariate Discriminant Analysis (MDA) shows that the environmental variable is a significantly greater contributing factor toward explaining the sustainability reports.
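
    A text-mining-plus-MDA pipeline of this kind can be approximated as below, assuming a scikit-learn toolchain (the paper does not name its software): vectorize each report, then fit a linear discriminant model over industrial sectors. The report snippets and sector labels are placeholders.

    ```python
    # Rough sketch of a text-mining-plus-MDA pipeline, assuming scikit-learn
    # (the paper does not name its toolchain): TF-IDF features per report,
    # then a linear discriminant model over industrial sectors.
    # Report snippets and sector labels are placeholders.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    reports = [
        "emissions water energy biodiversity spill remediation",
        "energy emissions waste water recycling tailings",
        "labour safety training community health suppliers",
        "safety labour wages community training audits",
        "governance compliance anti-corruption audit disclosure",
        "audit governance ethics compliance board disclosure",
    ]
    sectors = ["mining", "mining", "manufacturing", "manufacturing",
               "finance", "finance"]

    X = TfidfVectorizer(stop_words="english").fit_transform(reports).toarray()
    mda = LinearDiscriminantAnalysis().fit(X, sectors)
    print(mda.predict(X))
    ```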

  11. The structure of Turkish trait-descriptive adjectives.

    PubMed

    Somer, O; Goldberg, L R

    1999-03-01

    This description of the Turkish lexical project reports some initial findings on the structure of Turkish personality-related variables. In addition, it provides evidence on the effects of target evaluative homogeneity vs. heterogeneity (e.g., samples of well-liked target individuals vs. samples of both liked and disliked targets) on the resulting factor structures, and thus it provides a first test of the conclusions reached by D. Peabody and L. R. Goldberg (1989) using English trait terms. In 2 separate studies, and in 2 types of data sets, clear versions of the Big Five factor structure were found. And both studies replicated and extended the findings of Peabody and Goldberg; virtually orthogonal factors of relatively equal size were found in the homogeneous samples, and a more highly correlated set of factors with relatively large Agreeableness and Conscientiousness dimensions was found in the heterogeneous samples.

  12. Nano-Pore Size Analysis by SAXS Method of Cementitious Mortars Undergoing Delayed Ettringite Formation

    NASA Astrophysics Data System (ADS)

    Shekar, Yamini

    This research investigates the nano-scale pore structure of cementitious mortars undergoing delayed ettringite formation (DEF) using small-angle x-ray scattering (SAXS). DEF is known to cause expansion and cracking at later ages (around 4000 days) in concrete that has been heat cured at temperatures of 70°C or above. Though DEF normally occurs in heat-cured concrete, mass-cured concrete can also experience DEF. Large crystalline pressures result in smaller pore sizes. The objectives of this research are: (1) to investigate why some samples expand early rather than late, (2) to evaluate the effects of curing conditions and pore size distributions at high temperatures, and (3) to assess the evolution of the pore size distributions over time. The most important outcome of the research is that the pore sizes obtained from SAXS were used in the development of a 3-stage model. From the data obtained, the pore sizes increase in stage 1 as initial ettringite formation fills up the smallest pores. Once the critical pore size threshold (around 20 nm) is reached, stage 2 begins, in which the pore sizes tend to decrease owing to cracking. Finally, in stage 3, the cracking continues, thereby increasing the pore size.

  13. Localized Ignition And Subsequent Flame Spread Over Solid Fuels In Microgravity

    NASA Technical Reports Server (NTRS)

    Kashiwagi, T.; Nakamura, Y.; Prasad, K.; Baum, H.; Olson, S.; Fujita, O.; Nishizawa, K.; Ito, K.

    2003-01-01

    Localized ignition is initiated by an external radiant source at the middle of a thin solid sheet under an external slow flow, simulating fire initiation in a spacecraft with a slow ventilation flow. Ignition behavior, the subsequent transition to simultaneous upstream and downstream flame spread, and flame growth behavior are studied theoretically and experimentally. There are two transition stages in this study: the first is the transition from the onset of ignition to an initial anchored flame close to the sample surface, near the ignited area. The second is the flame growth stage from the anchored flame to a steady fire spread state (i.e., no change in flame size or in heat release rate) or a quasi-steady state, if either exists. Observations of experimental spot ignition characteristics and of the second transition over a thermally thin paper were made to determine the effects of external flow velocity. Both transitions have been studied theoretically to determine the effects of confinement by a relatively small test chamber, of the ignition configuration (ignition across the sample width vs spot ignition), and of the external flow velocity on the two transitions over a thermally thin paper. This study is currently being extended to two new areas: one is to include a thermoplastic sample such as poly(methyl methacrylate) (PMMA), and the other is to determine the effects of sample thickness on the transitions. Recent results of these new studies on the first transition are briefly reported.

  14. Characterization of the porosity of human dental enamel and shear bond strength in vitro after variable etch times: initial findings using the BET method.

    PubMed

    Nguyen, Trang T; Miller, Arthur; Orellana, Maria F

    2011-07-01

    (1) To quantitatively characterize human enamel porosity and surface area in vitro before and after etching for variable etching times; and (2) to evaluate shear bond strength after variable etching times. Specifically, our goal was to identify any correlation between enamel porosity and shear bond strength. Pore surface area, pore volume, and pore size of enamel from extracted human teeth were analyzed by Brunauer-Emmett-Teller (BET) gas adsorption before and after etching for 15, 30, and 60 seconds with 37% phosphoric acid. Orthodontic brackets were bonded with Transbond to the samples with variable etch times and were subsequently applied to a single-plane lap shear testing system. Pore volume and surface area increased after etching for 15 and 30 seconds. At 60 seconds, this increase was less pronounced. In contrast, pore size appears to decrease after etching. No correlation was found between variable etching times and shear strength. Samples etched for 15, 30, and 60 seconds all demonstrated clinically viable shear strength values. The BET adsorption method could be a valuable tool in enhancing our understanding of enamel characteristics. Our findings indicate that distinct quantitative changes in enamel pore architecture are evident after etching. Further testing with a larger sample size would be needed for more definitive conclusions.

  15. The Long-Term Oxygen Treatment Trial for Chronic Obstructive Pulmonary Disease: Rationale, Design, and Lessons Learned.

    PubMed

    Yusen, Roger D; Criner, Gerard J; Sternberg, Alice L; Au, David H; Fuhlbrigge, Anne L; Albert, Richard K; Casaburi, Richard; Stoller, James K; Harrington, Kathleen F; Cooper, J Allen D; Diaz, Philip; Gay, Steven; Kanner, Richard; MacIntyre, Neil; Martinez, Fernando J; Piantadosi, Steven; Sciurba, Frank; Shade, David; Stibolt, Thomas; Tonascia, James; Wise, Robert; Bailey, William C

    2018-01-01

    The Long-Term Oxygen Treatment Trial demonstrated that long-term supplemental oxygen did not reduce time to hospital admission or death for patients who have stable chronic obstructive pulmonary disease and resting and/or exercise-induced moderate oxyhemoglobin desaturation, nor did it provide benefit for any other outcome measured in the trial. Nine months after initiation of patient screening, after randomization of 34 patients to treatment, a trial design amendment broadened the eligible population, expanded the primary outcome, and reduced the goal sample size. Within a few years, the protocol underwent minor modifications, and a second trial design amendment lowered the required sample size because of lower than expected treatment group crossover rates. After 5.5 years of recruitment, the trial met its amended sample size goal, and 1 year later, it achieved its follow-up goal. The process of publishing the trial results brought renewed scrutiny of the study design and the amendments. This article expands on the previously published design and methods information, provides the rationale for the amendments, and gives insight into the investigators' decisions about trial conduct. The story of the Long-Term Oxygen Treatment Trial may assist investigators in future trials, especially those that seek to assess the efficacy and safety of long-term oxygen therapy. Clinical trial registered with clinicaltrials.gov (NCT00692198).

  16. Effect of Initial Microstructure on Impact Toughness of 1200 MPa-Class High Strength Steel with Ultrafine Elongated Grain Structure

    NASA Astrophysics Data System (ADS)

    Jafari, Meysam; Garrison, Warren M.; Tsuzaki, Kaneaki

    2014-02-01

    A medium-carbon low-alloy steel was prepared with initial structures of either martensite or bainite. For both initial structures, warm caliber-rolling was conducted at 773 K (500 °C) to obtain ultrafine elongated grain (UFEG) structures with strong <110>//rolling direction (RD) fiber deformation textures. The UFEG structures consisted of spheroidal cementite particles distributed uniformly in a ferrite matrix with transverse grain sizes of about 331 and 311 nm in the samples with initial martensite and bainite structures, respectively. For both initial structures, the UFEG materials had similar tensile properties, upper shelf energy (145 J), and ductile-to-brittle transition temperatures of about 98 K (-175 °C). Obtaining the martensitic structure requires more rapid cooling than is needed to obtain the bainitic structure, and this more rapid cooling promotes cracking. Because the UFEG structures obtained from the initial martensitic and bainitic structures have almost identical properties, while the bainitic route avoids the rapid cooling that promotes cracking, the use of a bainitic structure for obtaining UFEG structures should be examined further.

  17. HIV prevalence among men who have sex with men in Brazil: results of the 2nd national survey using respondent-driven sampling.

    PubMed

    Kerr, Ligia; Kendall, Carl; Guimarães, Mark Drew Crosland; Salani Mota, Rosa; Veras, Maria Amélia; Dourado, Inês; Maria de Brito, Ana; Merchan-Hamann, Edgar; Pontes, Alexandre Kerr; Leal, Andréa Fachel; Knauth, Daniela; Castro, Ana Rita Coimbra Motta; Macena, Raimunda Hermelinda Maia; Lima, Luana Nepomuceno Costa; Oliveira, Lisangela Cristina; Cavalcantee, Maria do Socorro; Benzaken, Adele Schwartz; Pereira, Gerson; Pimenta, Cristina; Pascom, Ana Roberta Pati; Bermudez, Ximena Pamela Diaz; Moreira, Regina Célia; Brígido, Luis Fernando Macedo; Camillo, Ana Cláudia; McFarland, Willi; Johnston, Lisa G

    2018-05-01

    This paper reports human immunodeficiency virus (HIV) prevalence in the 2nd National Biological and Behavioral Surveillance Survey (BBSS) among men who have sex with men (MSM) in 12 cities in Brazil using respondent-driven sampling (RDS). Following formative research, RDS was applied in 12 cities in the 5 macroregions of Brazil between June and December 2016 to recruit MSM for BBSS. The target sample size was 350 per city. Five to 6 seeds were initially selected to initiate recruitment, and coupons and interviews were managed online. On-site rapid testing was used for HIV screening and confirmed by a 2nd test. Participants were weighted using Gile's estimator. Data from all 12 cities were merged and analyzed with Stata 14.0 complex survey data analysis tools, in which each city was treated as its own stratum. Missing data for those who did not test were imputed as HIV+ if respondents reported testing positive before and were taking antiretroviral therapy. A total of 4176 men were recruited in the 12 cities. The average time to completion was 10.2 weeks. The longest chain length varied from 8 to 21 waves. The sample size was achieved in all but 2 cities. A total of 3958 of the 4176 respondents agreed to test for HIV (90.2%). For results without imputation, 17.5% (95%CI: 14.7-20.7) of our sample was HIV positive. With imputation, 18.4% (95%CI: 15.4-21.7) were seropositive. HIV prevalence increased beyond expectations, from 12.1% (95%CI: 10.0-14.5) in the 2009 survey to 18.4% (95%CI: 15.4-21.7) in 2016. This increase accompanies Brazil's focus on the treatment-as-prevention strategy and a decrease in support for community-based organizations and community prevention programs.

  18. HIV prevalence among men who have sex with men in Brazil: results of the 2nd national survey using respondent-driven sampling

    PubMed Central

    Kerr, Ligia; Kendall, Carl; Guimarães, Mark Drew Crosland; Salani Mota, Rosa; Veras, Maria Amélia; Dourado, Inês; Maria de Brito, Ana; Merchan-Hamann, Edgar; Pontes, Alexandre Kerr; Leal, Andréa Fachel; Knauth, Daniela; Castro, Ana Rita Coimbra Motta; Macena, Raimunda Hermelinda Maia; Lima, Luana Nepomuceno Costa; Oliveira, Lisangela Cristina; Cavalcantee, Maria do Socorro; Benzaken, Adele Schwartz; Pereira, Gerson; Pimenta, Cristina; Pascom, Ana Roberta Pati; Bermudez, Ximena Pamela Diaz; Moreira, Regina Célia; Brígido, Luis Fernando Macedo; Camillo, Ana Cláudia; McFarland, Willi; Johnston, Lisa G.

    2018-01-01

    Abstract This paper reports human immunodeficiency virus (HIV) prevalence in the 2nd National Biological and Behavioral Surveillance Survey (BBSS) among men who have sex with men (MSM) in 12 cities in Brazil using respondent-driven sampling (RDS). Following formative research, RDS was applied in 12 cities in the 5 macroregions of Brazil between June and December 2016 to recruit MSM for BBSS. The target sample size was 350 per city. Five to 6 seeds were initially selected to initiate recruitment, and coupons and interviews were managed online. On-site rapid testing was used for HIV screening and confirmed by a 2nd test. Participants were weighted using Gile's estimator. Data from all 12 cities were merged and analyzed with Stata 14.0 complex survey data analysis tools, in which each city was treated as its own stratum. Missing data for those who did not test were imputed as HIV+ if respondents reported testing positive before and were taking antiretroviral therapy. A total of 4176 men were recruited in the 12 cities. The average time to completion was 10.2 weeks. The longest chain length varied from 8 to 21 waves. The sample size was achieved in all but 2 cities. A total of 3958 of the 4176 respondents agreed to test for HIV (90.2%). For results without imputation, 17.5% (95%CI: 14.7–20.7) of our sample was HIV positive. With imputation, 18.4% (95%CI: 15.4–21.7) were seropositive. HIV prevalence increased beyond expectations, from 12.1% (95%CI: 10.0–14.5) in the 2009 survey to 18.4% (95%CI: 15.4–21.7) in 2016. This increase accompanies Brazil's focus on the treatment-as-prevention strategy and a decrease in support for community-based organizations and community prevention programs. PMID:29794604

  19. Performance of biomorphic Silicon Carbide as particulate filter in diesel boilers.

    PubMed

    Orihuela, M Pilar; Gómez-Martín, Aurora; Becerra, José A; Chacartegui, Ricardo; Ramírez-Rico, Joaquín

    2017-12-01

    Biomorphic Silicon Carbide (bioSiC) is a novel porous ceramic material with excellent mechanical and thermal properties. Previous studies have demonstrated that it may be a good candidate for use as a particulate filter medium for exhaust gases at medium or high temperature. In order to determine the filtration efficiency of biomorphic Silicon Carbide and its adequacy as a substrate for diesel particulate filters, different bioSiC samples were tested in the flue gases of a diesel boiler. For this purpose, an experimental facility to extract a fraction of the boiler exhaust flow and filter it under controlled conditions was designed and built. Several filter samples with different microstructures, obtained from different precursors, were tested in this bench. The experimental campaign focused on measuring the number and size of particles before and after placing the samples. Results show that the initial efficiency of filters made from natural precursors is strongly determined by the cutting direction and the associated microstructure. In biomorphic Silicon Carbide derived from radially cut wood, the initial efficiency of the filter is higher than 95%. Nevertheless, when the cut of the wood is axial, the efficiency depends on the pore size and the permeability, in some cases reaching values in the range of 70-90%. In this case, the presence of macropores in some of the samples reduces their efficiency as particle traps. In continuous operation, the accumulation of particles within the porous medium leads to the formation of a soot cake, which improves the efficiency except when extra-large pores exist. For all the samples, after a few operation cycles, the capture efficiency was higher than 95%. These experimental results show the potential for developing filters for diesel boilers based on biomorphic Silicon Carbide. Copyright © 2017 Elsevier Ltd. All rights reserved.
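
    The capture efficiencies quoted above follow directly from particle counts measured upstream and downstream of the filter sample. A minimal sketch with hypothetical counts (not the paper's data):

        def capture_efficiency(upstream_count: float, downstream_count: float) -> float:
            """Fractional particle capture efficiency of a filter sample."""
            return 1.0 - downstream_count / upstream_count

        # Hypothetical particle number concentrations (#/cm^3) before and after a bioSiC sample.
        print(f"{capture_efficiency(1.2e6, 4.8e4):.1%}")  # -> 96.0%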

  20. The Power of Low Back Pain Trials: A Systematic Review of Power, Sample Size, and Reporting of Sample Size Calculations Over Time, in Trials Published Between 1980 and 2012.

    PubMed

    Froud, Robert; Rajendran, Dévan; Patel, Shilpa; Bright, Philip; Bjørkli, Tom; Eldridge, Sandra; Buchbinder, Rachelle; Underwood, Martin

    2017-06-01

    A systematic review of nonspecific low back pain trials published between 1980 and 2012. To explore what proportion of trials have been powered to detect different bands of effect size; whether there is evidence that sample size in low back pain trials has been increasing; what proportion of trial reports include a sample size calculation; and whether the likelihood of reporting sample size calculations has increased. Clinical trials should have a sample size sufficient to detect a minimally important difference for a given power and type I error rate. An underpowered trial is one in which the probability of type II error is too high. Meta-analyses do not mitigate underpowered trials. Reviewers independently abstracted data on sample size at the point of analysis, whether a sample size calculation was reported, and year of publication. Descriptive analyses were used to explore the ability to detect effect sizes, and regression analyses to explore the relationship between sample size, or reporting of sample size calculations, and time. We included 383 trials. One-third were powered to detect a standardized mean difference of less than 0.5, and 5% were powered to detect less than 0.3. The average sample size was 153 people, which increased only slightly (∼4 people/yr) from 1980 to 2000, and declined slightly (∼4.5 people/yr) from 2005 to 2011 (P < 0.00005). Sample size calculations were reported in 41% of trials. The odds of reporting a sample size calculation (compared to not reporting one) increased until 2005 and then declined (Equation is included in full-text article.). Sample sizes in back pain trials and the reporting of sample size calculations may need to be increased. It may be justifiable to power a trial to detect only large effects in the case of novel interventions. Level of Evidence: 3.
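
    The link between sample size and the smallest detectable standardized mean difference can be checked with a standard two-sample power calculation; a sketch using statsmodels, with the review's 0.3 and 0.5 SMD bands, 80% power, and a two-sided alpha of 0.05 (these settings mirror common practice, not necessarily the review's exact assumptions):

        from statsmodels.stats.power import TTestIndPower

        analysis = TTestIndPower()
        for smd in (0.3, 0.5):
            # Per-group n needed to detect the given SMD with 80% power at two-sided alpha = 0.05.
            n_per_group = analysis.solve_power(effect_size=smd, power=0.80, alpha=0.05)
            print(f"SMD = {smd}: about {n_per_group:.0f} participants per group")

    Under these assumptions the solver returns roughly 175 and 64 participants per group, so a trial of about 153 people in total (around 76 per group) is powered only for SMDs somewhere near 0.45-0.5.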

  1. Microstructural changes in steel 10Kh9V2MFBR during creep for 40000 hours at 600°C

    NASA Astrophysics Data System (ADS)

    Fedoseeva, A. E.; Kozlov, P. A.; Dudko, V. A.; Skorobogatykh, V. N.; Shchenkova, I. A.; Kaibyshev, R. O.

    2015-10-01

    In this work, we have investigated microstructural changes in steel 10Kh9V2MFBR (an analog of P92 steel) after long-term creep tests at a temperature of 600°C under an initial stress of 137 MPa. The time to rupture was found to be more than 40000 h. It has been established that, in the zone of the grips and in the neck region of the sample, the size of the particles of the M23C6 carbides increases from 85 nm to 152 nm and 182 nm, respectively. In addition, large particles of the Laves phase with an average size of 295 nm precipitate. The particles of these phases are located along high-angle boundaries. During prolonged aging and creep, the transformation of the M(C,N) particles enriched in V into the Z phase occurs. The average size of the particles of the Z phase after prolonged aging was 48 nm; after creep, it reached 97 nm. The size of the M(C,N) particles enriched in Nb increases from 26 nm after tempering to 55 nm after prolonged aging and creep. It has been established that, in spite of an increase in the transverse size of the laths of tempered martensite from 0.4 to 0.9 µm in the neck of the sample, the misorientation of the lath boundaries does not increase. No recrystallization processes were found to develop in the steel during creep.

  2. Exploiting Size-Dependent Drag and Magnetic Forces for Size-Specific Separation of Magnetic Nanoparticles

    PubMed Central

    Rogers, Hunter B.; Anani, Tareq; Choi, Young Suk; Beyers, Ronald J.; David, Allan E.

    2015-01-01

    Realizing the full potential of magnetic nanoparticles (MNPs) in nanomedicine requires the optimization of their physical and chemical properties. Elucidation of the effects of these properties on clinical diagnostic or therapeutic properties, however, requires the synthesis or purification of homogenous samples, which has proved to be difficult. While initial simulations indicated that size-selective separation could be achieved by flowing magnetic nanoparticles through a magnetic field, subsequent in vitro experiments were unable to reproduce the predicted results. Magnetic field-flow fractionation, however, was found to be an effective method for the separation of polydisperse suspensions of iron oxide nanoparticles with diameters greater than 20 nm. While similar methods have been used to separate magnetic nanoparticles before, no previous work has been done with magnetic nanoparticles between 20 and 200 nm. Both transmission electron microscopy (TEM) and dynamic light scattering (DLS) analysis were used to confirm the size of the MNPs. Further development of this work could lead to MNPs with the narrow size distributions necessary for their in vitro and in vivo optimization. PMID:26307980

  3. The fracture strength of cryomilled 99.7 Al nanopowders consolidated by high frequency induction sintering

    NASA Astrophysics Data System (ADS)

    El-Danaf, Ehab A.; Baig, Muneer; Almajid, Abdulhakim A.; Soliman, Mahmoud S.

    2014-08-01

    Mechanical attrition of metallic powders induces severe plastic deformation and consequently reduces the average grain size. Powders of 99.7 Al (45 μm particle size), cryomilled for 7 h to a crystallite size of ~20 nm, were consolidated by high-frequency induction sintering under a constant pressure of 50 MPa at two temperatures, 500 and 550 °C, for two sintering dwell times of 1 and 3 minutes at a constant heating rate of 400 °C/min. Bright-field TEM images and the X-ray line-broadening technique were used to measure the crystallite size of the cryomilled powders. Simple compression at an initial strain rate of 10^-4 s^-1 was conducted at room temperature, 373 and 473 K, and the yield strength was documented and correlated with the sintering parameters. The as-received 99.7 Al powders, consolidated using one of the sintering conditions, were used as a reference material for comparing the mechanical properties. Hardness, density and crystallite size were measured for the consolidated sample that gave the highest yield and fracture strengths.

  4. Effect of electromagnetic interaction during fusion welding of AISI 2205 duplex stainless steel on the corrosion resistance

    NASA Astrophysics Data System (ADS)

    García-Rentería, M. A.; López-Morelos, V. H.; González-Sánchez, J.; García-Hernández, R.; Dzib-Pérez, L.; Curiel-López, F. F.

    2017-02-01

    The effect of electromagnetic interaction of low intensity (EMILI) applied during fusion welding of AISI 2205 duplex stainless steel on the resistance to localised corrosion in natural seawater was investigated. The heat affected zone (HAZ) of samples welded under EMILI showed a higher temperature for pitting initiation and lower dissolution under anodic polarisation in chloride containing solutions than samples welded without EMILI. The EMILI assisted welding process developed in the present work enhanced the resistance to localised corrosion due to a modification on the microstructural evolution in the HAZ and the fusion zone during the thermal cycle involved in fusion welding. The application of EMILI reduced the size of the HAZ, limited coarsening of the ferrite grains and promoted regeneration of austenite in this zone, inducing a homogeneous passive condition of the surface. EMILI can be applied during fusion welding of structural or functional components of diverse size manufactured with duplex stainless steel designed to withstand aggressive environments such as natural seawater or marine atmospheres.

  5. Kinetic studies on the reduction of iron ore nuggets by devolatilization of lean-grade coal

    NASA Astrophysics Data System (ADS)

    Biswas, Chanchal; Gupta, Prithviraj; De, Arnab; Chaudhuri, Mahua Ghosh; Dey, Rajib

    2016-12-01

    An isothermal kinetic study of a novel technique for reducing agglomerated iron ore by volatiles released during pyrolysis of lean-grade non-coking coal was carried out at temperatures from 1050 to 1200°C for 10-120 min. The reduced samples were characterized by scanning electron microscopy, energy-dispersive X-ray spectroscopy, and chemical analysis. A good degree of metallization and reduction was achieved. Gas diffusion through the solid was identified as the reaction-rate-controlling resistance; however, during the initial period, particularly at lower temperatures, the resistance to interfacial chemical reaction was also significant, though not dominant. The apparent rate constant was observed to increase marginally with decreasing size of the particles constituting the nuggets. The apparent activation energy of reduction was estimated to be in the range from 49.640 to 51.220 kJ/mol and was not observed to be affected by the particle size. The sulfur and carbon contents in the reduced samples were also determined.

  6. Sexual Orientation Identity Disparities in Awareness and Initiation of the Human Papillomavirus Vaccine Among U.S. Women and Girls: A National Survey.

    PubMed

    Agénor, Madina; Peitzmeier, Sarah; Gordon, Allegra R; Haneuse, Sebastien; Potter, Jennifer E; Austin, S Bryn

    2015-07-21

    Lesbians and bisexual women are at risk for human papillomavirus (HPV) infection from female and male sexual partners. To examine the association between sexual orientation identity and HPV vaccination among U.S. women and girls. Cross-sectional, using 2006-2010 National Survey of Family Growth data. U.S. civilian noninstitutionalized population. The 2006-2010 National Survey of Family Growth used stratified cluster sampling to establish a national probability sample of 12,279 U.S. women and girls aged 15 to 44 years. Analyses were restricted to 3253 women and girls aged 15 to 25 years who were asked about HPV vaccination. Multivariable logistic regression was used to obtain prevalence estimates of HPV vaccine awareness and initiation adjusted for sociodemographic and health care factors for each sexual orientation identity group. Among U.S. women and girls aged 15 to 25 years, 84.4% reported having heard of the HPV vaccine; of these, 28.5% had initiated HPV vaccination. The adjusted prevalence of vaccine awareness was similar among heterosexual, bisexual, and lesbian respondents. After adjustment for covariates, 8.5% (P = 0.007) of lesbians and 33.2% (P = 0.33) of bisexual women and girls who had heard of the vaccine had initiated vaccination compared with 28.4% of their heterosexual counterparts. Self-reported, cross-sectional data, and findings may not be generalizable to periods after 2006 to 2010 or all U.S. lesbians aged 15 to 25 years (because of the small sample size for this group). Adolescent and young adult lesbians may be less likely to initiate HPV vaccination than their heterosexual counterparts. Programs should facilitate access to HPV vaccination services among young lesbians. National Cancer Institute.

  7. Agile convolutional neural network for pulmonary nodule classification using CT images.

    PubMed

    Zhao, Xinzhuo; Liu, Liyao; Qi, Shouliang; Teng, Yueyang; Li, Jianhua; Qian, Wei

    2018-04-01

    To distinguish benign from malignant pulmonary nodules using CT images is critical for their precise diagnosis and treatment. A new Agile convolutional neural network (CNN) framework is proposed to conquer the challenges of a small-scale medical image database and the small size of the nodules, and it improves the performance of pulmonary nodule classification using CT images. A hybrid CNN of LeNet and AlexNet is constructed through combining the layer settings of LeNet and the parameter settings of AlexNet. A dataset with 743 CT image nodule samples is built up based on the 1018 CT scans of LIDC to train and evaluate the Agile CNN model. Through adjusting the parameters of the kernel size, learning rate, and other factors, the effect of these parameters on the performance of the CNN model is investigated, and an optimized setting of the CNN is obtained finally. After finely optimizing the settings of the CNN, the estimation accuracy and the area under the curve can reach 0.822 and 0.877, respectively. The accuracy of the CNN is significantly dependent on the kernel size, learning rate, training batch size, dropout, and weight initializations. The best performance is achieved when the kernel size is set to [Formula: see text], the learning rate is 0.005, the batch size is 32, and dropout and Gaussian initialization are used. This competitive performance demonstrates that our proposed CNN framework and the optimization strategy of the CNN parameters are suitable for pulmonary nodule classification characterized by small medical datasets and small targets. The classification model might help diagnose and treat pulmonary nodules effectively.
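
    As a rough illustration of the reported training configuration (learning rate 0.005, batch size 32, dropout, Gaussian weight initialization), the sketch below sets those values in PyTorch. The layer structure is a generic small CNN, not the authors' actual LeNet/AlexNet hybrid, and the kernel size and input patch size are assumed placeholders, since the kernel size is elided in the abstract:

        import torch
        import torch.nn as nn

        class SmallNoduleCNN(nn.Module):
            """Generic small CNN; layer sizes are illustrative assumptions."""
            def __init__(self, kernel_size: int = 3, num_classes: int = 2):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 16, kernel_size, padding=kernel_size // 2),
                    nn.ReLU(),
                    nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, kernel_size, padding=kernel_size // 2),
                    nn.ReLU(),
                    nn.MaxPool2d(2),
                )
                self.classifier = nn.Sequential(
                    nn.Flatten(),
                    nn.Dropout(p=0.5),                      # dropout, as in the abstract
                    nn.Linear(32 * 16 * 16, num_classes),   # assumes 64x64 input patches
                )
                # Gaussian weight initialization, as reported in the abstract.
                for m in self.modules():
                    if isinstance(m, (nn.Conv2d, nn.Linear)):
                        nn.init.normal_(m.weight, mean=0.0, std=0.01)
                        nn.init.zeros_(m.bias)

            def forward(self, x):
                return self.classifier(self.features(x))

        model = SmallNoduleCNN()
        optimizer = torch.optim.SGD(model.parameters(), lr=0.005)  # learning rate 0.005
        # Training would then iterate over DataLoader batches of size 32.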

  8. Sediment laboratory quality-assurance project: studies of methods and materials

    USGS Publications Warehouse

    Gordon, J.D.; Newland, C.A.; Gray, J.R.

    2001-01-01

    In August 1996 the U.S. Geological Survey initiated the Sediment Laboratory Quality-Assurance project. The Sediment Laboratory Quality-Assurance project is part of the National Sediment Laboratory Quality-Assurance program. This paper addresses the findings of the sand/fine separation analysis completed for the single-blind reference sediment-sample project and differences in reported results between two different analytical procedures. From the results it is evident that an incomplete separation of fine- and sand-size material commonly occurs, resulting in the classification of some of the fine-size material as sand-size material. Electron microscopy analysis supported the hypothesis that the negative bias for fine-size material and the positive bias for sand-size material are largely due to aggregation of some of the fine-size material into sand-size particles and adherence of fine-size material to the sand-size grains. Electron microscopy analysis showed that preserved river water, which was low in dissolved solids and specific conductance and had a neutral pH, showed less aggregation and adhesion than preserved river water that was higher in dissolved solids and specific conductance with a basic pH. Bacteria were also found growing in the matrix, which may enhance fine-size material aggregation through their adhesive properties. Differences between sediment-analysis methods were also investigated as part of this study. Suspended-sediment concentration results obtained from one participating laboratory that used a total-suspended solids (TSS) method had greater variability and larger negative biases than results obtained when this laboratory used a suspended-sediment concentration method. When TSS methods were used to analyze the reference samples, the median suspended-sediment concentration percent difference was -18.04 percent. When the laboratory used a suspended-sediment concentration method, the median suspended-sediment concentration percent difference was -2.74 percent. The percent difference was calculated as follows: Percent difference = ((reported mass - known mass)/known mass) X 100.
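
    The percent-difference statistic quoted at the end of the abstract is straightforward to compute; a minimal sketch (the masses below are hypothetical and merely chosen to echo the reported -18.04 percent figure):

        def percent_difference(reported_mass: float, known_mass: float) -> float:
            """Percent difference = ((reported mass - known mass) / known mass) x 100."""
            return (reported_mass - known_mass) / known_mass * 100.0

        # Hypothetical reported vs. known suspended-sediment masses (mg).
        print(f"{percent_difference(81.96, 100.0):.2f} percent")  # about -18.04 percent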

  9. Study on Sumbawa gold ore liberation using rod mill: effect of rod-number and rotational speed on particle size distribution

    NASA Astrophysics Data System (ADS)

    Prasetya, A.; Mawadati, A.; Putri, A. M. R.; Petrus, H. T. B. M.

    2018-01-01

    Comminution is one of the crucial steps in gold ore processing, used to liberate the valuable minerals from the gangue minerals. This research was done to find the particle size distribution of gold ore after treatment by comminution in a rod mill with various rod numbers and rotational speeds, in order to identify an optimum milling condition. For the initial step, Sumbawa gold ore was crushed and then sieved to pass the 2.5 mesh and be retained on the 5 mesh (this condition was chosen to mimic real application in artisanal gold mining). After inserting the prepared sample into the rod mill, the effect of rod number and rotational speed was observed by varying the rod number (7 and 10) and the rotational speed (60, 85, and 110 rpm). To estimate the particle size distribution for each condition, comminution kinetics were studied by taking samples at 15, 30, 60, and 120 minutes for size-distribution analysis. The change in the particle size distribution of the top and bottom products over time was then fitted with the Rosin-Rammler distribution equation. The results show that the homogeneity of the particle size and the particle size distribution are affected by rod number and rotational speed. The particle size distribution becomes more homogeneous with increasing milling time, regardless of rod number and rotational speed. The mean particle size does not change significantly after 60 minutes of milling. The experimental results showed that the optimum condition was achieved at a rotational speed of 85 rpm using 7 rods.
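
    One common form of the Rosin-Rammler equation gives the mass fraction retained above size d as R(d) = exp[-(d/d0)^n], with d0 the characteristic size and n the uniformity index; the study may use a different parameterization. A minimal fitting sketch with invented sieve data:

        import numpy as np
        from scipy.optimize import curve_fit

        def rosin_rammler(d, d0, n):
            """Mass fraction retained on a sieve of opening d (same units as d0)."""
            return np.exp(-(d / d0) ** n)

        # Hypothetical sieve openings (mm) and retained mass fractions after milling.
        d = np.array([0.075, 0.15, 0.30, 0.60, 1.18, 2.36])
        retained = np.array([0.92, 0.80, 0.58, 0.33, 0.12, 0.02])

        (d0, n), _ = curve_fit(rosin_rammler, d, retained, p0=(0.5, 1.0))
        print(f"characteristic size d0 = {d0:.2f} mm, uniformity index n = {n:.2f}")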

  10. GIS-NaP1 zeolite microspheres as potential water adsorption material: Influence of initial silica concentration on adsorptive and physical/topological properties

    PubMed Central

    Sharma, Pankaj; Song, Ju-Sub; Han, Moon Hee; Cho, Churl-Hee

    2016-01-01

    GIS-NaP1 zeolite samples were synthesized using seven different Si/Al ratios (5–11) of the hydrothermal reaction mixtures having the chemical composition Al2O3:xSiO2:14Na2O:840H2O to study the impact of the Si/Al molar ratio on the water vapour adsorption potential, phase purity, morphology and crystal size of the as-synthesized GIS-NaP1 zeolite crystals. The X-ray diffraction (XRD) observations reveal that the Si/Al ratio does not affect the phase purity of the GIS-NaP1 zeolite samples, as high-purity GIS-NaP1 zeolite crystals were obtained at all Si/Al ratios. In contrast, the Si/Al ratio has a remarkable effect on the morphology, crystal size and porosity of the GIS-NaP1 zeolite microspheres. Transmission electron microscopy (TEM) evaluations of individual GIS-NaP1 zeolite microspheres demonstrate characteristic changes in the packing/arrangement, shape and size of the primary nanocrystallites. Textural characterisation using water vapour adsorption/desorption and nitrogen adsorption/desorption data of the as-synthesized GIS-NaP1 zeolite indicates the existence of mixed pores, i.e., microporous as well as mesoporous character. A high water storage capacity of 1727.5 cm3 g−1 (138.9 wt.%) has been found for the as-synthesized GIS-NaP1 zeolite microsphere samples during the water vapour adsorption studies. Further, the total water adsorption capacity values for the P6 (1299.4 mg g−1) and P7 (1388.8 mg g−1) samples reveal that these two particular samples can adsorb even more water than their own weight. PMID:26964638

  11. GIS-NaP1 zeolite microspheres as potential water adsorption material: Influence of initial silica concentration on adsorptive and physical/topological properties.

    PubMed

    Sharma, Pankaj; Song, Ju-Sub; Han, Moon Hee; Cho, Churl-Hee

    2016-03-11

    GIS-NaP1 zeolite samples were synthesized using seven different Si/Al ratios (5-11) of the hydrothermal reaction mixtures having the chemical composition Al2O3:xSiO2:14Na2O:840H2O to study the impact of the Si/Al molar ratio on the water vapour adsorption potential, phase purity, morphology and crystal size of the as-synthesized GIS-NaP1 zeolite crystals. The X-ray diffraction (XRD) observations reveal that the Si/Al ratio does not affect the phase purity of the GIS-NaP1 zeolite samples, as high-purity GIS-NaP1 zeolite crystals were obtained at all Si/Al ratios. In contrast, the Si/Al ratio has a remarkable effect on the morphology, crystal size and porosity of the GIS-NaP1 zeolite microspheres. Transmission electron microscopy (TEM) evaluations of individual GIS-NaP1 zeolite microspheres demonstrate characteristic changes in the packing/arrangement, shape and size of the primary nanocrystallites. Textural characterisation using water vapour adsorption/desorption and nitrogen adsorption/desorption data of the as-synthesized GIS-NaP1 zeolite indicates the existence of mixed pores, i.e., microporous as well as mesoporous character. A high water storage capacity of 1727.5 cm(3) g(-1) (138.9 wt.%) has been found for the as-synthesized GIS-NaP1 zeolite microsphere samples during the water vapour adsorption studies. Further, the total water adsorption capacity values for the P6 (1299.4 mg g(-1)) and P7 (1388.8 mg g(-1)) samples reveal that these two particular samples can adsorb even more water than their own weight.
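
    The pairing of 1727.5 cm(3) g(-1) with 138.9 wt.% in both records above is simply a conversion of the adsorbed vapour volume to an adsorbed water mass per gram of zeolite; a sketch of the arithmetic, assuming the quoted volume is at STP and using the ideal-gas molar volume:

        MOLAR_VOLUME_STP = 22414.0   # cm^3 of ideal gas per mole at STP (assumed basis)
        MOLAR_MASS_WATER = 18.015    # g/mol

        def stp_volume_to_wt_percent(v_cm3_per_g: float) -> float:
            """Convert adsorbed vapour volume (cm^3 STP per g adsorbent) to wt.%."""
            grams_water_per_gram = v_cm3_per_g / MOLAR_VOLUME_STP * MOLAR_MASS_WATER
            return grams_water_per_gram * 100.0

        print(f"{stp_volume_to_wt_percent(1727.5):.1f} wt.%")  # about 138.9 wt.%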

  12. Airframe integrity based on Bayesian approach

    NASA Astrophysics Data System (ADS)

    Hurtado Cahuao, Jose Luis

    Aircraft aging has become an immense challenge in terms of ensuring the safety of the fleet while controlling life cycle costs. One of the major concerns in aircraft structures is the development of fatigue cracks in the fastener holes. A probabilistic-based method has been proposed to manage this problem. In this research, Bayes' theorem is used to assess airframe integrity by updating generic data with airframe inspection data as such data are compiled. This research discusses the methodology developed for assessment of the loss of airframe integrity due to fatigue cracking in the fastener holes of an aging platform. The methodology requires a probability density function (pdf) of crack size at the end of the safe life, after which a crack growth regime begins. As the Bayesian analysis requires a prior pdf of the initial crack size, such a pdf is assumed and verified to be lognormally distributed. The prior distribution of crack size as cracks grow is modeled through a combined Inverse Power Law (IPL) model and lognormal relationships. The first set of inspections is used as the evidence for updating the crack size distribution at the various stages of aircraft life. Moreover, the materials used in the structural parts of the aircraft have variations in their properties due to calibration errors and machine alignment. A Matlab routine (PCGROW) was developed to calculate the crack distribution growth through three different crack growth models. As the first step, the material properties and the initial crack size are sampled. A standard Monte Carlo simulation is employed for this sampling process. At the corresponding aircraft age, the crack observed during the inspections is used to update the crack size distribution and proceed in time. After the updating, it is possible to estimate the probability of structural failure as a function of flight hours for a given aircraft in the future. The results show very accurate and useful values related to the reliability and integrity of airframes in aging aircraft. The inspection data shown in this dissertation are not the actual data from known aircraft and are only used to demonstrate the methodologies.
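
    The sampling step described above (a lognormal initial crack size plus scattered material properties, propagated by a crack-growth law) can be sketched as a standard Monte Carlo calculation. The Paris-type growth law and every numerical value below are placeholders for illustration, not the dissertation's PCGROW models or data:

        import numpy as np

        rng = np.random.default_rng(0)
        n = 100_000

        # Assumed lognormal prior on initial crack size a0 (inches) and scatter in the
        # Paris coefficient C (stand-in for material/calibration variability).
        a0 = rng.lognormal(mean=np.log(0.005), sigma=0.5, size=n)
        C = rng.lognormal(mean=np.log(5e-9), sigma=0.3, size=n)
        delta_sigma, a_crit = 25.0, 0.25   # assumed stress range (ksi) and critical size (in)

        # Closed-form cycles to grow from a0 to a_crit for Paris exponent m = 3:
        #   N = 2 * (a0**-0.5 - a_crit**-0.5) / (C * delta_sigma**3 * pi**1.5)
        n_fail = 2.0 * (a0**-0.5 - a_crit**-0.5) / (C * delta_sigma**3 * np.pi**1.5)

        service_cycles = 60_000
        print(f"P(failure before {service_cycles} cycles) ~ {np.mean(n_fail < service_cycles):.3f}")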

  13. Publication Bias in Psychology: A Diagnosis Based on the Correlation between Effect Size and Sample Size

    PubMed Central

    Kühberger, Anton; Fritz, Astrid; Scherndl, Thomas

    2014-01-01

    Background The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent from sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. Methods We investigate whether effect size is independent from sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, and calculated the correlation between effect size and sample size, and investigated the distribution of p values. Results We found a negative correlation of r = −.45 [95% CI: −.53; −.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. Conclusion The negative correlation between effect size and sample size, and the biased distribution of p values, indicate pervasive publication bias in the entire field of psychology. PMID:25192357

  14. Publication bias in psychology: a diagnosis based on the correlation between effect size and sample size.

    PubMed

    Kühberger, Anton; Fritz, Astrid; Scherndl, Thomas

    2014-01-01

    The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent from sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. We investigate whether effect size is independent from sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, and calculated the correlation between effect size and sample size, and investigated the distribution of p values. We found a negative correlation of r = -.45 [95% CI: -.53; -.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. The negative correlation between effect size and sample size, and the biased distribution of p values, indicate pervasive publication bias in the entire field of psychology.
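
    The diagnostic underlying both records above is a plain correlation between per-study effect sizes and sample sizes. A sketch of how such a check can be run; the arrays are stand-ins for the coded study data (the actual analysis covered 1,000 articles and may have transformed the variables differently):

        import numpy as np
        from scipy.stats import pearsonr

        # Stand-in data: per-study effect sizes (|r|) and sample sizes.
        effect_sizes = np.array([0.45, 0.30, 0.22, 0.15, 0.38, 0.12, 0.25, 0.09])
        sample_sizes = np.array([24, 40, 85, 210, 31, 350, 60, 500])

        r, p = pearsonr(effect_sizes, sample_sizes)
        print(f"r = {r:.2f}, p = {p:.3f}")  # a clearly negative r hints at publication bias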

  15. Optimum sample size allocation to minimize cost or maximize power for the two-sample trimmed mean test.

    PubMed

    Guo, Jiin-Huarng; Luh, Wei-Ming

    2009-05-01

    When planning a study, sample size determination is one of the most important tasks facing the researcher. The size will depend on the purpose of the study, the cost limitations, and the nature of the data. By specifying the standard deviation ratio and/or the sample size ratio, the present study considers the problem of heterogeneous variances and non-normality for Yuen's two-group test and develops sample size formulas to minimize the total cost or maximize the power of the test. For a given power, the sample size allocation ratio can be manipulated so that the proposed formulas can minimize the total cost, the total sample size, or the sum of total sample size and total cost. On the other hand, for a given total cost, the optimum sample size allocation ratio can maximize the statistical power of the test. After the sample size is determined, the present simulation applies Yuen's test to the sample generated, and then the procedure is validated in terms of Type I errors and power. Simulation results show that the proposed formulas can control Type I errors and achieve the desired power under the various conditions specified. Finally, the implications for determining sample sizes in experimental studies and future research are discussed.
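
    As a rough normal-theory illustration of the allocation trade-off the paper formalizes for Yuen's test: for an ordinary two-group mean comparison with unequal variances, total sample size at fixed power is minimized when n2/n1 = sigma2/sigma1, and a cost-weighted version scales that ratio by sqrt(cost1/cost2). The sketch below implements only this classical approximation, not the paper's trimmed-mean (Winsorized-variance) formulas:

        import math
        from scipy.stats import norm

        def optimal_allocation(sd1, sd2, cost1=1.0, cost2=1.0,
                               delta=1.0, alpha=0.05, power=0.80):
            """Normal-approximation allocation for a two-group mean comparison.

            Returns (n1, n2) minimizing total cost for detecting a mean difference delta.
            Classical unequal-variance result, not Yuen's trimmed-mean formula.
            """
            z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
            ratio = (sd2 / sd1) * math.sqrt(cost1 / cost2)      # n2 = ratio * n1
            n1 = (sd1**2 + sd2**2 / ratio) * (z / delta)**2
            return math.ceil(n1), math.ceil(ratio * n1)

        # Group 2 is more variable but also more expensive to sample (hypothetical values).
        print(optimal_allocation(sd1=1.0, sd2=2.0, cost1=1.0, cost2=4.0, delta=0.8))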

  16. Optimization of crystallization conditions for biological macromolecules.

    PubMed

    McPherson, Alexander; Cudney, Bob

    2014-11-01

    For the successful X-ray structure determination of macromolecules, it is first necessary to identify, usually by matrix screening, conditions that yield some sort of crystals. Initial crystals are frequently microcrystals or clusters, and often have unfavorable morphologies or yield poor diffraction intensities. It is therefore generally necessary to improve upon these initial conditions in order to obtain better crystals of sufficient quality for X-ray data collection. Even when the initial samples are suitable, often marginally, refinement of conditions is recommended in order to obtain the highest quality crystals that can be grown. The quality of an X-ray structure determination is directly correlated with the size and the perfection of the crystalline samples; thus, refinement of conditions should always be a primary component of crystal growth. The improvement process is referred to as optimization, and it entails sequential, incremental changes in the chemical parameters that influence crystallization, such as pH, ionic strength and precipitant concentration, as well as physical parameters such as temperature, sample volume and overall methodology. It also includes the application of some unique procedures and approaches, and the addition of novel components such as detergents, ligands or other small molecules that may enhance nucleation or crystal development. Here, an attempt is made to provide guidance on how optimization might best be applied to crystal-growth problems, and what parameters and factors might most profitably be explored to accelerate and achieve success.

  17. Optimization of crystallization conditions for biological macromolecules

    PubMed Central

    McPherson, Alexander; Cudney, Bob

    2014-01-01

    For the successful X-ray structure determination of macromolecules, it is first necessary to identify, usually by matrix screening, conditions that yield some sort of crystals. Initial crystals are frequently microcrystals or clusters, and often have unfavorable morphologies or yield poor diffraction intensities. It is therefore generally necessary to improve upon these initial conditions in order to obtain better crystals of sufficient quality for X-ray data collection. Even when the initial samples are suitable, often marginally, refinement of conditions is recommended in order to obtain the highest quality crystals that can be grown. The quality of an X-ray structure determination is directly correlated with the size and the perfection of the crystalline samples; thus, refinement of conditions should always be a primary component of crystal growth. The improvement process is referred to as optimization, and it entails sequential, incremental changes in the chemical parameters that influence crystallization, such as pH, ionic strength and precipitant concentration, as well as physical parameters such as temperature, sample volume and overall methodology. It also includes the application of some unique procedures and approaches, and the addition of novel components such as detergents, ligands or other small molecules that may enhance nucleation or crystal development. Here, an attempt is made to provide guidance on how optimization might best be applied to crystal-growth problems, and what parameters and factors might most profitably be explored to accelerate and achieve success. PMID:25372810

  18. Defining acute aortic syndrome after trauma: Are Abbreviated Injury Scale codes a useful surrogate descriptor?

    PubMed

    Leach, R; McNally, Donal; Bashir, Mohamad; Sastry, Priya; Cuerden, Richard; Richens, David; Field, Mark

    2012-10-01

    The severity and location of injuries resulting from vehicular collisions are normally recorded in Abbreviated Injury Scale (AIS) code; we propose a system to link AIS code to a description of acute aortic syndrome (AAS), thus allowing the hypothesis that aortic injury is progressive with collision kinematics to be tested. Standard AIS codes were matched with a clinical description of AAS. A total of 199 collisions that resulted in aortic injury were extracted from a national automotive collision database and the outcomes mapped onto AAS descriptions. The severity of aortic injury (AIS severity score) and stage of AAS progression were compared with collision kinematics and occupant demographics. Post hoc power analyses were used to estimate maximum effect size. The general demographic distribution of the sample represented that of the UK population in regard to sex and age. No significant relationship was observed between estimated test speed, collision direction, occupant location or seat belt use and clinical progression of aortic injury (once initiated). Power analysis confirmed that a suitable sample size was used to observe a medium effect in most of the cases. Similarly, no association was observed between injury severity and collision kinematics. There is sufficient information on AIS severity and location codes to map onto the clinical AAS spectrum. It was not possible, with this data set, to consider the influence of collision kinematics on aortic injury initiation. However, it was demonstrated that after initiation, further progression along the AAS pathway was not influenced by collision kinematics. This might be because the injury is not progressive, because the vehicle kinematics studied do not fully represent the kinematics of the occupants, or because an unknown factor, such as stage of cardiac cycle, dominates. Epidemiologic/prognostic study, level IV.

  19. An internal pilot study for a randomized trial aimed at evaluating the effectiveness of iron interventions in children with non-anemic iron deficiency: the OptEC trial.

    PubMed

    Abdullah, Kawsari; Thorpe, Kevin E; Mamak, Eva; Maguire, Jonathon L; Birken, Catherine S; Fehlings, Darcy; Hanley, Anthony J; Macarthur, Colin; Zlotkin, Stanley H; Parkin, Patricia C

    2015-07-14

    The OptEC trial aims to evaluate the effectiveness of oral iron in young children with non-anemic iron deficiency (NAID). The initial sample size calculated for the OptEC trial ranged from 112-198 subjects. Given the uncertainty regarding the parameters used to calculate the sample size, an internal pilot study was conducted. The objectives of this internal pilot study were to obtain reliable estimates of the parameters (standard deviation and design factor) needed to recalculate the sample size, and to assess the adherence rate and reasons for non-adherence in children enrolled in the pilot study. The first 30 subjects enrolled into the OptEC trial constituted the internal pilot study. The primary outcome of the OptEC trial is the Early Learning Composite (ELC). For estimation of the SD of the ELC, descriptive statistics of the 4-month follow-up ELC scores were assessed within each intervention group. The observed SD within each group was then pooled to obtain an estimated SD (S2) of the ELC. The correlation (ρ) between the ELC measured at baseline and at follow-up was assessed. Recalculation of the sample size was performed using the analysis of covariance (ANCOVA) method, which uses the design factor (1 − ρ²). Adherence rate was calculated using a parent-reported rate of missed doses of the study intervention. The new estimate of the SD of the ELC was found to be 17.40 (S2). The design factor was (1 − ρ²) = 0.21. Using a significance level of 5%, power of 80%, S2 = 17.40 and an effect estimate (Δ) ranging from 6-8 points, the new sample size based on the ANCOVA method ranged from 32-56 subjects (16-28 per group). Adherence ranged between 14% and 100%, with 44% of the children having an adherence rate ≥ 86%. Information generated from our internal pilot study was used to update the design of the full and definitive trial, including recalculation of the sample size, determination of the adequacy of adherence, and application of strategies to improve adherence. ClinicalTrials.gov Identifier: NCT01481766 (date of registration: November 22, 2011).
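
    The recalculated numbers follow from the standard normal-approximation ANCOVA formula, n per group ≈ 2·SD²·(1 − ρ²)·(z(1−α/2) + z(power))²/Δ². A minimal sketch; plugging in the pilot's SD of 17.40 and design factor of 0.21 reproduces the reported 16-28 per group, although whether the trial applied an additional t-based correction is not stated in the abstract:

        import math
        from scipy.stats import norm

        def ancova_n_per_group(sd, design_factor, delta, alpha=0.05, power=0.80):
            """Per-group n ~ 2*SD^2*(1 - rho^2)*(z_{1-a/2} + z_{power})^2 / delta^2."""
            z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
            return math.ceil(2 * sd**2 * design_factor * z**2 / delta**2)

        for delta in (8, 6):  # effect estimates (ELC points) considered in the pilot
            n = ancova_n_per_group(sd=17.40, design_factor=0.21, delta=delta)
            print(f"delta = {delta}: {n} per group, {2 * n} total")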

  20. Temporal analysis of genetic structure to assess population dynamics of reintroduced swift foxes.

    PubMed

    Cullingham, Catherine I; Moehrenschlager, Axel

    2013-12-01

    Reintroductions are increasingly used to reestablish species, but a paucity of long-term postrelease monitoring has limited understanding of whether and when viable populations subsequently persist. We conducted temporal genetic analyses of reintroduced populations of swift foxes (Vulpes velox) in Canada (Alberta and Saskatchewan) and the United States (Montana). We used samples collected 4 years apart, 17 years from the initiation of the reintroduction, and 3 years after the conclusion of releases. To assess program success, we genotyped 304 hair samples, subsampled from the known range in 2000 and 2001, and 2005 and 2006, at 7 microsatellite loci. We compared diversity, effective population size, and genetic connectivity over time in each population. Diversity remained stable over time and there was evidence of increasing effective population size. We determined population structure in both periods after correcting for differences in sample sizes. The geographic distribution of these populations roughly corresponded with the original release locations, which suggests the release sites had residual effects on the population structure. However, given that both reintroduction sites had similar source populations, habitat fragmentation, due to cropland, may be associated with the population structure we found. Although our results indicate growing, stable populations, future connectivity analyses are warranted to ensure both populations are not subject to negative small-population effects. Our results demonstrate the importance of multiple sampling years to fully capture population dynamics of reintroduced populations. © 2013 Society for Conservation Biology.

  1. Lot quality assurance sampling (LQAS) for monitoring a leprosy elimination program.

    PubMed

    Gupte, M D; Narasimhamurthy, B

    1999-06-01

    In a statistical sense, prevalences of leprosy in different geographical areas can be called very low or rare. Conventional survey methods to monitor leprosy control programs, therefore, need large sample sizes, are expensive, and are time-consuming. Further, with the lowering of prevalence to the near-desired target level, 1 case per 10,000 population at national or subnational levels, the program administrator's concern will be shifted to smaller areas, e.g., districts, for assessment and, if needed, for necessary interventions. In this paper, Lot Quality Assurance Sampling (LQAS), a quality control tool in industry, is proposed to identify districts/regions having a prevalence of leprosy at or above a certain target level, e.g., 1 in 10,000. This technique can also be considered for identifying districts/regions at or below the target level of 1 per 10,000, i.e., areas where the elimination level is attained. For simulating various situations and strategies, a hypothetical computerized population of 10 million persons was created. This population mimics the actual population in terms of the empirical information on rural/urban distributions and the distribution of households by size for the state of Tamil Nadu, India. Various levels with respect to leprosy prevalence are created using this population. The distribution of the number of cases in the population was expected to follow the Poisson process, and this was also confirmed by examination. Sample sizes and corresponding critical values were computed using Poisson approximation. Initially, villages/towns are selected from the population and from each selected village/town households are selected using systematic sampling. Households instead of individuals are used as sampling units. This sampling procedure was simulated 1000 times in the computer from the base population. The results in four different prevalence situations meet the required limits of Type I error of 5% and 90% Power. It is concluded that after validation under field conditions, this method can be considered for a rapid assessment of the leprosy situation.
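
    The Poisson-based design search described above can be sketched as follows. The elimination threshold of 1 per 10,000 and the 5%/90% error limits come from the abstract; the alternative prevalence of 2 per 10,000, the person-level (rather than household-level) sampling, and the step size of the search are illustrative assumptions:

        from scipy.stats import poisson

        def lqas_design(p0=1e-4, p1=2e-4, alpha=0.05, power=0.90, max_n=400_000):
            """Smallest (n, d) such that a lot is flagged when observed cases > d,
            with P(flag | p0) <= alpha and P(flag | p1) >= power (Poisson approximation)."""
            for n in range(10_000, max_n, 10_000):
                for d in range(0, 50):
                    type_i = 1 - poisson.cdf(d, n * p0)   # P(flag | prevalence p0)
                    detect = 1 - poisson.cdf(d, n * p1)   # P(flag | prevalence p1)
                    if type_i <= alpha and detect >= power:
                        return n, d
            return None

        print(lqas_design())  # returns a sample size and decision value for the assumed levels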

  2. Nonstoichiometry and phase stability of Al and Cr substituted Mg ferrite nanoparticles synthesized by citrate method

    NASA Astrophysics Data System (ADS)

    Ateia, Ebtesam. E.; Mohamed, Amira. T.

    2017-03-01

    The spinel ferrites Mg0.7Cr0.3Fe2O4 and Mg0.7Al0.3Fe2O4 were prepared by the citrate technique. All samples were characterized by X-ray diffraction (XRD), Field Emission Scanning Electron Microscopy (FESEM), High Resolution Transmission Electron Microscopy (HRTEM), Energy Dispersive X-ray Spectroscopy (EDAX) and Atomic Force Microscopy (AFM). XRD confirmed the formation of the cubic spinel structure in the investigated samples. The average crystallite sizes were found to be between 24.7 and 27.5 nm for Al3+ and Mg2+ respectively. The substitution of Cr3+/Al3+ in place of the Mg2+ ion initiates a crystalline anisotropy due to the large size mismatch between Cr3+/Al3+ and Mg2+, which creates strain inside the crystal volume. According to the VSM results, adding Al3+ or Cr3+ ions at the expense of Mg2+ increases the saturation magnetization. The narrow hysteresis loop of the samples indicates that the amount of dissipated energy is small, which is desirable for soft magnetic applications. The magnetic dynamics of the samples were studied by measuring the magnetic susceptibility versus temperature at different magnetic fields. The band gap energy, calculated from near-infrared (NIR) and visible (VIS) reflectance spectra using the Kubelka-Munk function, decreases with increasing particle size. Furthermore, the band gaps are quite narrow (1.5-1.7 eV); hence the investigated samples could act as visible-light-driven photocatalysts. To sum up, the addition of trivalent Al3+ and Cr3+ ions enhanced the optical, magnetic and structural properties of the samples. The Mg0.7Cr0.3Fe2O4 sample is a better candidate for optical applications and is also a promising candidate for technological applications.

  3. Competitive Deep-Belief Networks for Underwater Acoustic Target Recognition

    PubMed Central

    Shen, Sheng; Yao, Xiaohui; Sheng, Meiping; Wang, Chen

    2018-01-01

    Underwater acoustic target recognition based on ship-radiated noise is a small-sample-size recognition problem. A competitive deep-belief network is proposed to learn features with more discriminative information from labeled and unlabeled samples. The proposed model consists of four stages: (1) a standard restricted Boltzmann machine is pretrained using a large number of unlabeled data to initialize its parameters; (2) the hidden units are grouped according to categories, which provides an initial clustering model for competitive learning; (3) competitive training and back-propagation algorithms are used to update the parameters to accomplish the clustering task; (4) by applying layer-wise training and supervised fine-tuning, a deep neural network is built to obtain features. Experimental results show that the proposed method can achieve a classification accuracy of 90.89%, which is 8.95% higher than the accuracy obtained by the compared methods. In addition, the highest accuracy of our method is obtained with fewer features than the other methods. PMID:29570642

  4. Model of unplanned smoking initiation of children and adolescents: an integrated stage model of smoking behavior.

    PubMed

    Kremers, S P J; Mudde, A N; De Vries, H

    2004-05-01

    Two lines of psychological research have attempted to spell out the stages of adolescent smoking initiation. The first has focused on behavioral stages of smoking initiation, while the second line emphasized motivational stages. A large international sample of European adolescents (N = 10,170, mean age = 13.3 years) was followed longitudinally. Self-reported motivational and behavioral stages of smoking initiation were integrated, leading to the development of the Model of Unplanned Smoking Initiation of Children and Adolescents (MUSICA). The MUSICA postulates that youngsters experiment with smoking while they are in an unmotivated state as regards their plans for smoking regularly in the future. More than 95% of the total population resided in one of the seven stages distinguished by MUSICA. The probability of starting to smoke regularly during the 12 months follow-up period increased with advanced stage assignment at baseline. Unique social cognitive predictors of stage progression from the various stages were identified, but effect sizes of predictors of transitions were small. The integration of motivational and behavioral dimensions improves our understanding of the process of smoking initiation. In contrast to current theories of smoking initiation, adolescent uptake of smoking behavior was found to be an unplanned action.

  5. Effect size calculation in meta-analyses of psychotherapy outcome research.

    PubMed

    Hoyt, William T; Del Re, A C

    2018-05-01

    Meta-analysis of psychotherapy intervention research normally examines differences between treatment groups and some form of comparison group (e.g., wait list control; alternative treatment group). The effect of treatment is normally quantified as a standardized mean difference (SMD). We describe procedures for computing unbiased estimates of the population SMD from sample data (e.g., group Ms and SDs), and provide guidance about a number of complications that may arise related to effect size computation. These complications include (a) incomplete data in research reports; (b) use of baseline data in computing SMDs and estimating the population standard deviation (σ); (c) combining effect size data from studies using different research designs; and (d) appropriate techniques for analysis of data from studies providing multiple estimates of the effect of interest (i.e., dependent effect sizes). Clinical or Methodological Significance of this article: Meta-analysis is a set of techniques for producing valid summaries of existing research. The initial computational step for meta-analyses of research on intervention outcomes involves computing an effect size quantifying the change attributable to the intervention. We discuss common issues in the computation of effect sizes and provide recommended procedures to address them.
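    The computations described above follow standard meta-analytic formulas; a minimal sketch, assuming the usual Hedges small-sample correction for the SMD, is given below (the example group statistics are made up).

```python
# Minimal sketch: bias-corrected standardized mean difference (Hedges' g)
# from group means, SDs, and sizes; standard formulas, not the article's code.
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    df = n1 + n2 - 2
    s_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / df)
    d = (m1 - m2) / s_pooled                 # Cohen's d (sample SMD)
    j = 1 - 3 / (4 * df - 1)                 # small-sample correction factor
    g = j * d
    var_g = j**2 * ((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return g, var_g

g, var_g = hedges_g(m1=24.0, sd1=8.0, n1=40, m2=19.0, sd2=9.0, n2=42)
print(round(g, 3), round(var_g, 4))
```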

  6. The role of reduced oxygen in the developmental physiology of growth and metamorphosis initiation in Drosophila

    USDA-ARS?s Scientific Manuscript database

    Rearing oxygen level is known to affect final body size in a variety of insects, but the physiological mechanisms by which oxygen affects size are incompletely understood. In Manduca and Drosophila, the larval size at which metamorphosis is initiated largely determines adult size, and metamorphosis ...

  7. High pressure inertial focusing for separating and concentrating bacteria at high throughput

    NASA Astrophysics Data System (ADS)

    Cruz, J.; Hooshmand Zadeh, S.; Graells, T.; Andersson, M.; Malmström, J.; Wu, Z. G.; Hjort, K.

    2017-08-01

    Inertial focusing is a promising microfluidic technology for concentration and separation of particles by size. However, the required pressure increases strongly as particle size decreases. Theory and experimental results for larger particles were used to scale down the phenomenon and find the conditions that focus 1 µm particles. High pressure experiments in robust glass chips were used to demonstrate the alignment. We show that the technique works for 1 µm spherical polystyrene particles and for Escherichia coli, without harming the bacteria at 50 µl min-1. The potential to focus bacteria, simplicity of use and high throughput make this technology interesting for healthcare applications, where concentration and purification of a sample may be required as an initial step.

  8. Large grain instruction and phonological awareness skill influence rime sensitivity, processing speed, and early decoding skill in adult L2 learners

    PubMed Central

    Brennan, Christine; Booth, James R.

    2016-01-01

    Linguistic knowledge, cognitive ability, and instruction influence how adults acquire a second orthography yet it remains unclear how different forms of instruction influence grain size sensitivity and subsequent decoding skill and speed. Thirty-seven monolingual, literate English-speaking adults were trained on a novel artificial orthography given initial instruction that directed attention to either large or small grain size units (i.e., words or letters). We examined how initial instruction influenced processing speed (i.e., reaction time (RT)) and sensitivity to different orthographic grain sizes (i.e., rimes and letters). Directing attention to large grain size units during initial instruction resulted in higher accuracy for rimes, whereas directing attention to smaller grain size units resulted in slower RTs across all measures. Additionally, phonological awareness skill modulated early learning effects, compensating for the limitations of the initial instruction provided. Collectively, these findings suggest that when adults are learning to read a second orthography, consideration should be given to how initial instruction directs attention to different grain sizes and inherent phonological awareness ability. PMID:27829705

  9. Experimental Investigation of Shock Initiation in Mixtures of Manganese and Sulfur

    NASA Astrophysics Data System (ADS)

    Jetté, F. X.; Goroshin, S.; Higgins, A. J.

    2009-12-01

    Equimolar mixtures of manganese powder and sulfur at different starting densities were tested in two different types of steel recovery capsules in order to study the shock initiation phenomenon in Self-Propagating High-Temperature Synthesis (SHS) mixtures. Two different sizes of Mn particles were used for these experiments, <10 μm and -325 mesh (<44 μm). This mixture was selected due to the large exothermic heat release of the manganese-sulfur reaction (214 kJ/mol), which causes the reaction to be self-sustaining once initiated. The test samples were placed in planar recovery capsules and a strong shock was delivered via the detonation of a charge of amine-sensitized nitromethane. Various shock strengths were achieved by placing different thicknesses of PMMA attenuator discs between the explosive charge and the capsule. The results confirmed that shock-induced reactions can be produced in highly non-porous mixtures. It was also found that shock interactions with the side walls of the recovery capsule can play a significant role in the initiation.

  10. Temperature dependent surface and spectral modifications of nano V2O5 films

    NASA Astrophysics Data System (ADS)

    Manthrammel, M. Aslam; Fatehmulla, A.; Al-Dhafiri, A. M.; Alshammari, A. S.; Khan, Aslam

    2017-03-01

    Nanocrystalline V2O5 films have been deposited on glass substrates at 300°C substrate temperature using thermal evaporation technique and were subjected to thermal annealing at different temperatures 350, 400, and 550°C. X-ray diffraction (XRD) spectra exhibit sharper and broader characteristic peaks respectively indicating the rearrangement of nanocrystallite phases with annealing temperatures. Other phases of vanadium oxides started emerging with the rise in annealing temperature and the sample converted completely to VO2 (B) phase at 550°C annealing. FESEM images showed an increase in crystallite size with 350 and 400°C annealing temperatures followed by a decrease in crystallite size for the sample annealed at 550°C. Transmission spectra showed an initial redshift of the fundamental band edge with 350 and 400°C while a blue shift for the sample annealed at 550°C, which was in agreement with XRD and SEM results. The films exhibited smart window properties as well as nanorod growth at specific annealing temperatures. Apart from showing the PL and defect related peaks, PL studies also supported the observations made in the transmission spectra.

  11. Design and Analysis of an Isokinetic Sampling Probe for Submicron Particle Measurements at High Altitude

    NASA Technical Reports Server (NTRS)

    Heath, Christopher M.

    2012-01-01

    An isokinetic dilution probe has been designed with the aid of computational fluid dynamics to sample sub-micron particles emitted from aviation combustion sources. The intended operational range includes standard day atmospheric conditions up to 40,000-ft. With dry nitrogen as the diluent, the probe is intended to minimize losses from particle microphysics and transport while rapidly quenching chemical kinetics. Initial results indicate that the Mach number ratio of the aerosol sample and dilution streams in the mixing region is an important factor for successful operation. Flow rate through the probe tip was found to be highly sensitive to the static pressure at the probe exit. Particle losses through the system were estimated to be on the order of 50% with minimal change in the overall particle size distribution apparent. Following design refinement, experimental testing and validation will be conducted in the Particle Aerosol Laboratory, a research facility located at the NASA Glenn Research Center to study the evolution of aviation emissions at lower stratospheric conditions. Particle size distributions and number densities from various combustion sources will be used to better understand particle-phase microphysics, plume chemistry, evolution to cirrus, and environmental impacts of aviation.

  12. The co-evolution of cultures, social network communities, and agent locations in an extension of Axelrod’s model of cultural dissemination

    NASA Astrophysics Data System (ADS)

    Pfau, Jens; Kirley, Michael; Kashima, Yoshihisa

    2013-01-01

    We introduce a variant of the Axelrod model of cultural dissemination in which agents change their physical locations, social links, and cultures. Numerical simulations are used to investigate the evolution of social network communities and the cultural diversity within and between these communities. An analysis of the simulation results shows that an initial peak in the cultural diversity within network communities is evident before agents segregate into a final configuration of culturally homogeneous communities. Larger long-range interaction probabilities facilitate the initial emergence of culturally diverse network communities, which leads to a more pronounced initial peak in cultural diversity within communities. At equilibrium, the number of communities, and hence cultures, increases when the initial cultural diversity increases. However, the number of communities decreases when the lattice size or population density increases. A phase transition between two regimes of initial cultural diversity is evident. For initial diversities below a critical value, a single network community and culture emerges that dominates the population. For initial diversities above the critical value, multiple culturally homogeneous communities emerge. The critical value of initial diversity at which this transition occurs increases with increasing lattice size and population density and generally with increasing absolute population size. We conclude that larger initial diversities promote cultural heterogenization, while larger lattice sizes, population densities, and in fact absolute population sizes promote homogenization.
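    For readers unfamiliar with the underlying dynamics, a minimal sketch of the classic Axelrod cultural-dissemination update on a periodic lattice is shown below; it omits the agent mobility and social-network rewiring that distinguish the variant studied in this paper, and all parameter values are illustrative.

```python
# Minimal sketch of the classic Axelrod cultural-dissemination dynamics on a
# periodic LxL lattice with F features and q traits per feature. The paper's
# extension (agent mobility and network rewiring) is NOT included here.
import random

def axelrod(L=20, F=5, q=10, steps=200_000, seed=1):
    random.seed(seed)
    culture = {(i, j): [random.randrange(q) for _ in range(F)]
               for i in range(L) for j in range(L)}

    def neighbors(i, j):
        return [((i + 1) % L, j), ((i - 1) % L, j),
                (i, (j + 1) % L), (i, (j - 1) % L)]

    for _ in range(steps):
        a = (random.randrange(L), random.randrange(L))
        b = random.choice(neighbors(*a))
        shared = sum(culture[a][f] == culture[b][f] for f in range(F))
        # interact with probability equal to the cultural overlap
        if 0 < shared < F and random.random() < shared / F:
            f = random.choice([f for f in range(F)
                               if culture[a][f] != culture[b][f]])
            culture[a][f] = culture[b][f]

    return len({tuple(c) for c in culture.values()})  # distinct cultures left

print(axelrod())  # illustrative run; not iterated to full convergence
```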

  13. Smoke Hazards Resulting from the Burning of Shipboard Paints. Part 3.

    DTIC Science & Technology

    1987-09-18

    pyrolysis begins about 2 min after the sample is first exposed to the radiant heat flux. A peak mass loss rate of about 0.4 mg/cm2-s occurs after...the higher local surface temperatures associated with the flaming combustion of pyrolysis gases issuing from these cracks. Smoke particle size...combustion in room temperature ventilation air, the mean particle diameters vary between 0.7 and 1.1 μm during the initial stages of pyrolysis and

  14. Clinical efficacy of the wearable cardioverter-defibrillator in acutely terminating episodes of ventricular fibrillation.

    PubMed

    Auricchio, A; Klein, H; Geller, C J; Reek, S; Heilman, M S; Szymkiewicz, S J

    1998-05-15

    The findings of our initial study demonstrate for the first time the ability to terminate induced VT/VF reliably (100% of all episodes) by a single, monophasic 230-J shock delivered by the Wearable Cardioverter-Defibrillator (WCD). Although limited by sample size, our data suggest the WCD could be used as a feasible bridge to definitive implantation of an implantable cardioverter-defibrillator in patients in whom risk stratification for sudden death is not completed.

  15. Synthesis and electrochemical properties of layered Li[Ni0.333Co0.333Mn0.293Al0.04]O2-zFz cathode materials prepared by the sol-gel method

    NASA Astrophysics Data System (ADS)

    Liao, Li; Wang, Xianyou; Luo, Xufang; Wang, Ximing; Gamboa, Sergio; Sebastian, P. J.

    The cathode-active materials, layered Li[Ni0.333Co0.333Mn0.293Al0.04]O2-zFz (0 ≤ z ≤ 0.1), were synthesized from a sol-gel precursor at 900 °C in air. The influence of Al-F co-substitution on the structural and electrochemical properties of the as-prepared samples was characterized by X-ray diffraction (XRD), scanning electron microscopy (SEM) and electrochemical experiments. The results showed that Li[Ni0.333Co0.333Mn0.293Al0.04]O2-zFz has a typical hexagonal structure with a single phase, and the particle sizes of the samples tended to increase with increasing fluorine content. It was found that Li[Ni0.333Co0.333Mn0.293Al0.04]O1.95F0.05 showed improved cathodic behavior and discharge capacity retention compared to the undoped samples in the voltage range of 3.0-4.3 V. The electrodes prepared from Li[Ni0.333Co0.333Mn0.293Al0.04]O1.95F0.05 delivered an initial discharge capacity of 158 mAh g-1 with an initial coulombic efficiency of 91.3%, and the capacity retention at the 20th cycle was 94.9%. Though the F-doped samples had lower initial capacities, they showed better cycle performance than the F-free samples. Therefore, this is a promising material for lithium-ion batteries.

  16. High diversity, low disparity and small body size in plesiosaurs (Reptilia, Sauropterygia) from the Triassic-Jurassic boundary.

    PubMed

    Benson, Roger B J; Evans, Mark; Druckenmiller, Patrick S

    2012-01-01

    Invasion of the open ocean by tetrapods represents a major evolutionary transition that occurred independently in cetaceans, mosasauroids, chelonioids (sea turtles), ichthyosaurs and plesiosaurs. Plesiosaurian reptiles invaded pelagic ocean environments immediately following the Late Triassic extinctions. This diversification is recorded by three intensively-sampled European fossil faunas, spanning 20 million years (Ma). These provide an unparalleled opportunity to document changes in key macroevolutionary parameters associated with secondary adaptation to pelagic life in tetrapods. A comprehensive assessment focuses on the oldest fauna, from the Blue Lias Formation of Street, and nearby localities, in Somerset, UK (Earliest Jurassic: 200 Ma), identifying three new species representing two small-bodied rhomaleosaurids (Stratesaurus taylori gen. et sp. nov.; Avalonnectes arturi gen. et sp. nov.) and the most basal plesiosauroid, Eoplesiosaurus antiquior gen. et sp. nov. The initial radiation of plesiosaurs was characterised by high, but short-lived, diversity of an archaic clade, Rhomaleosauridae. Representatives of this initial radiation were replaced by derived, neoplesiosaurian plesiosaurs at small-medium body sizes during a more gradual accumulation of morphological disparity. This gradualistic modality suggests that adaptive radiations within tetrapod subclades are not always characterised by the initially high levels of disparity observed in the Paleozoic origins of major metazoan body plans, or in the origin of tetrapods. High rhomaleosaurid diversity immediately following the Triassic-Jurassic boundary supports the gradual model of Late Triassic extinctions, mostly predating the boundary itself. Increase in both maximum and minimum body length early in plesiosaurian history suggests a driven evolutionary trend. However, maximum-likelihood models suggest only passive expansion into higher body size categories.

  17. The NIMH Research Domain Criteria Initiative: Background, Issues, and Pragmatics.

    PubMed

    Kozak, Michael J; Cuthbert, Bruce N

    2016-03-01

    This article describes the National Institute of Mental Health's Research Domain Criteria (RDoC) initiative. The description includes background, rationale, goals, and the way the initiative has been developed and organized. The central RDoC concepts are summarized and the current matrix of constructs that have been vetted by workshops of extramural scientists is depicted. A number of theoretical and methodological issues that can arise in connection with the nature of RDoC constructs are highlighted: subjectivism and heterophenomenology, desynchrony and theoretical neutrality among units of analysis, theoretical reductionism, endophenotypes, biomarkers, neural circuits, construct "grain size," and analytic challenges. The importance of linking RDoC constructs to psychiatric clinical problems is discussed. Some pragmatics of incorporating RDoC concepts into applications for NIMH research funding are considered, including sampling design. Published 2016. This article is a U.S. Government work and is in the public domain in the USA.

  18. The Effect of Porosity on Fatigue of Die Cast AM60

    NASA Astrophysics Data System (ADS)

    Yang, Zhuofei; Kang, Jidong; Wilkinson, David S.

    2016-07-01

    AM60 high-pressure die castings are known to contain significant porosity which can affect fatigue life. We have studied this using samples drawn from prototype AM60 shock towers by conducting strain-controlled fatigue tests accompanied by X-ray computed tomography analysis. The results show that the machined surface is the preferential location for fatigue crack development, with pores close to these surfaces serving as initiation sites. Fatigue life shows a strong inverse correlation with the size of the fatigue-crack-initiating pore. Pore shape and pore orientation also influence the response. A supplemental study on surface roughness shows that porosity is the dominant factor in fatigue. Tomography enables the link between porosity and fatigue crack initiation to be clearly identified. These data are complemented by SEM observations of the fracture surfaces which are generally flat and full of randomly oriented serration patterns but without long-range fatigue striations.

  19. The viability of photovoltaics on the Martian surface

    NASA Technical Reports Server (NTRS)

    Gaier, James R.; Perez-Davis, Marla E.

    1994-01-01

    The viability of photovoltaics (PV) on the Martian surface may be determined by their ability to withstand significant degradation in the Martian environment. Probably the greatest threat is posed by fine dust particles which are continually blown about the surface of the planet. In an effort to determine the extent of the threat, and to investigate some abatement strategies, a series of experiments were conducted in the Martian Surface Wind Tunnel (MARSWIT) at NASA Ames Research Center. The effects of dust composition, particle size, wind velocity, angle of attack, and protective coatings on the transmittance of light through PV coverglass were determined. Both initially clear and initially dusted samples were subjected both to clear winds and simulated dust storms in the MARSWIT. It was found that wind velocity, particle size, and angle of attack are important parameters affecting occlusion of PV surfaces, while dust composition and protective coatings were not. Neither induced turbulence nor direct current biasing up to 200 volts were effective abatement techniques. Abrasion diffused the light impinging on the PV cells, but did not reduce total coverglass transmittance by more than a few percent.

  20. Pore formation during dehydration of a polycrystalline gypsum sample observed and quantified in a time-series synchrotron X-ray micro-tomography experiment

    NASA Astrophysics Data System (ADS)

    Fusseis, F.; Schrank, C.; Liu, J.; Karrech, A.; Llana-Fúnez, S.; Xiao, X.; Regenauer-Lieb, K.

    2012-03-01

    We conducted an in-situ X-ray micro-computed tomography heating experiment at the Advanced Photon Source (USA) to dehydrate an unconfined 2.3 mm diameter cylinder of Volterra Gypsum. We used a purpose-built X-ray transparent furnace to heat the sample to 388 K for a total of 310 min to acquire a three-dimensional time-series tomography dataset comprising nine time steps. The voxel size of 2.2 μm³ proved sufficient to pinpoint reaction initiation and the organization of drainage architecture in space and time. We observed that dehydration commences across a narrow front, which propagates from the margins to the centre of the sample in more than four hours. The advance of this front can be fitted with a square-root function, implying that the initiation of the reaction in the sample can be described as a diffusion process. Novel parallelized computer codes allow quantifying the geometry of the porosity and the drainage architecture from the very large tomographic datasets (2048³ voxels) in unprecedented detail. We determined position, volume, shape and orientation of each resolvable pore and tracked these properties over the duration of the experiment. We found that the pore-size distribution follows a power law. Pores tend to be anisotropic but rarely crack-shaped and have a preferred orientation, likely controlled by a pre-existing fabric in the sample. With on-going dehydration, pores coalesce into a single interconnected pore cluster that is connected to the surface of the sample cylinder and provides an effective drainage pathway. Our observations can be summarized in a model in which gypsum is stabilized by thermal expansion stresses and locally increased pore fluid pressures until the dehydration front approaches to within about 100 μm. Then, the internal stresses are released and dehydration happens efficiently, resulting in new pore space. Pressure release, the production of pores and the advance of the front are coupled in a feedback loop.

  1. Optimal flexible sample size design with robust power.

    PubMed

    Zhang, Lanju; Cui, Lu; Yang, Bo

    2016-08-30

    It is well recognized that sample size determination is challenging because of uncertainty about the treatment effect size. Several remedies are available in the literature. Group sequential designs start with a sample size based on a conservative (smaller) effect size and allow early stopping at interim looks. Sample size re-estimation designs start with a sample size based on an optimistic (larger) effect size and allow a sample size increase if the observed effect size is smaller than planned. Different opinions favoring one type over the other exist. We propose an optimal approach using an appropriate optimality criterion to select the best design among all the candidate designs. Our results show that (1) for the same type of designs, for example, group sequential designs, there is room for significant improvement through our optimization approach; (2) optimal promising zone designs appear to have no advantages over optimal group sequential designs; and (3) optimal designs with sample size re-estimation deliver the best adaptive performance. We conclude that to deal with the challenge of sample size determination due to effect size uncertainty, an optimal approach can help to select the best design that provides the most robust power across the effect size range of interest. Copyright © 2016 John Wiley & Sons, Ltd.

  2. [Effect sizes, statistical power and sample sizes in "the Japanese Journal of Psychology"].

    PubMed

    Suzukawa, Yumi; Toyoda, Hideki

    2012-04-01

    This study analyzed the statistical power of research studies published in the "Japanese Journal of Psychology" in 2008 and 2009. Sample effect sizes and sample statistical powers were calculated for each statistical test and analyzed with respect to the analytical methods and the fields of the studies. The results show that in fields like perception, cognition, or learning, effect sizes were relatively large although sample sizes were small. At the same time, because of the small sample sizes, some meaningful effects could not be detected. In the other fields, because of the large sample sizes, meaningless effects could be detected. This implies that researchers who could not get large enough effect sizes would use larger samples to obtain significant results.

  3. Sample Size Estimation: The Easy Way

    ERIC Educational Resources Information Center

    Weller, Susan C.

    2015-01-01

    This article presents a simple approach to making quick sample size estimates for basic hypothesis tests. Although there are many sources available for estimating sample sizes, methods are not often integrated across statistical tests, levels of measurement of variables, or effect sizes. A few parameters are required to estimate sample sizes and…
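    As an illustration of the kind of quick estimate the article refers to, the sketch below uses the common normal-approximation formula for a two-sample comparison of means, n per group ≈ 2((z_{1-α/2} + z_{1-β})/d)²; this is a generic shortcut and not necessarily the article's specific procedure.

```python
# Minimal sketch: quick per-group sample size for a two-sample comparison of
# means via the normal approximation (a generic shortcut, not necessarily the
# method described in the article above).
from math import ceil
from scipy.stats import norm

def n_per_group(effect_size, alpha=0.05, power=0.80):
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return ceil(2 * (z / effect_size) ** 2)

print(n_per_group(0.5))  # medium standardized effect (d = 0.5): about 63 per group
```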

  4. Naltrexone and Cognitive Behavioral Therapy for the Treatment of Alcohol Dependence

    PubMed Central

    Baros, AM; Latham, PK; Anton, RF

    2008-01-01

    Background Sex differences in regard to pharmacotherapy for alcoholism are a topic of concern following publications suggesting that naltrexone, one of the longest-approved treatments for alcoholism, is not as effective in women as in men. This study combined two randomized placebo-controlled clinical trials that used similar methodologies and personnel, amalgamating the data to evaluate sex effects in a reasonably sized sample. Methods 211 alcoholics (57 female; 154 male) were randomized to the naltrexone/CBT or placebo/CBT arm of the two clinical trials analyzed. Baseline variables were examined for differences between sex and treatment groups via analysis of variance (ANOVA) for continuous variables or the chi-square test for categorical variables. All initial outcome analysis was conducted under an intent-to-treat analysis plan. Effect sizes for naltrexone over placebo were determined by Cohen's d (d). Results The effect size of naltrexone over placebo for the following outcome variables was similar in men and women (%days abstinent (PDA) d=0.36, %heavy drinking days (PHDD) d=0.36 and total standard drinks (TSD) d=0.36). Only in men were the differences significant, owing to the larger sample size (PDA p=0.03; PHDD p=0.03; TSD p=0.04). There were a few variables (GGT change from baseline to week 12: men d=0.36, p=0.05; women d=0.20, p=0.45 and drinks per drinking day: men d=0.36, p=0.05; women d=0.28, p=0.34) where the naltrexone effect size for men was greater than for women. In women, naltrexone tended to increase continuous abstinent days before a first drink (women d=0.46, p=0.09; men d=0.00, p=0.44). Conclusions The effect size of naltrexone over placebo appeared similar in women and men in our hands, suggesting that the findings of sex differences in naltrexone response might have to do with sample size and/or endpoint drinking variables rather than any inherent pharmacological or biological differences in response. PMID:18336635

  5. The Relationship between Sample Sizes and Effect Sizes in Systematic Reviews in Education

    ERIC Educational Resources Information Center

    Slavin, Robert; Smith, Dewi

    2009-01-01

    Research in fields other than education has found that studies with small sample sizes tend to have larger effect sizes than those with large samples. This article examines the relationship between sample size and effect size in education. It analyzes data from 185 studies of elementary and secondary mathematics programs that met the standards of…

  6. Effect of the size of the apical enlargement with rotary instruments, single-cone filling, post space preparation with drills, fiber post removal, and root canal filling removal on apical crack initiation and propagation.

    PubMed

    Çapar, İsmail Davut; Uysal, Banu; Ok, Evren; Arslan, Hakan

    2015-02-01

    The purpose of this study was to investigate the incidence of apical crack initiation and propagation in root dentin after several endodontic procedures. Sixty intact mandibular premolars were sectioned perpendicular to the long axis at 1 mm from the apex, and the apical surface was polished. Thirty teeth were left unprepared and served as a control, and the remaining 30 teeth were instrumented with ProTaper Universal instruments (Dentsply Maillefer, Ballaigues, Switzerland) up to size F5. The root canals were filled with the single-cone technique. Gutta-percha was removed with drills of the Rebilda post system (VOCO, Cuxhaven, Germany). Glass fiber-reinforced composite fiber posts were cemented using a dual-cure resin cement. The fiber posts were removed with a drill of the post system. Retreatment was completed after the removal of the gutta-percha. Crack initiation and propagation in the apical surfaces of the samples were examined with a stereomicroscope after each procedure. The absence/presence of cracks was recorded. Logistic regression was performed to analyze statistically the incidence of crack initiation and propagation with each procedure. The initiation of the first crack and crack propagation was associated with F2 and F4 instruments, respectively. The logistic regression analysis revealed that instrumentation and F2 instrument significantly affected apical crack initiation (P < .001). Post space preparation had a significant effect on crack propagation (P = .0004). The other procedures had no significant effects on crack initiation and propagation (P > .05). Rotary nickel-titanium instrumentation had a significant effect on apical crack initiation, and post space preparation with drills had a significant impact on crack propagation. Copyright © 2015 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.

  7. Biaxial deformation in high purity aluminum

    DOE PAGES

    Livescu, V.; Bingert, J. F.; Liu, C.; ...

    2015-09-25

    The convergence of multiple characterization tools has been applied to investigate the relationship between microstructure and damage evolution in high purity aluminum. The extremely coarse grain size of the disc-shaped sample provided a quasi-two dimensional structure from which the location of surface-measured features could be inferred. In particular, the role of pre-existing defects in damage growth was accessible due to the presence of casting porosity in the aluminum. Micro-tomography, electron backscatter diffraction, and digital image correlation were applied to interrogate the sample in three dimensions. Recently, a micro-bulge testing apparatus was used to deform the pre-characterized disc of aluminum in biaxial tension, and related analysis techniques were applied to map local strain fields. Subsequent post-mortem characterization of the failed sample was performed to correlate structure to damaged regions. We determined that strain localization and associated damage were most strongly correlated with grain boundary intersections and plastic anisotropy gradients between grains. Pre-existing voids played less of an apparent role than was perhaps initially expected. Finally, these combined techniques provide insight into the mechanisms of damage initiation, propagation, and failure, along with a test bed for predictive damage models incorporating anisotropic microstructural effects.

  8. Influence of porosity on artificial deterioration of marble and limestone by heating

    NASA Astrophysics Data System (ADS)

    Sassoni, Enrico; Franzoni, Elisa

    2014-06-01

    Testing of stone consolidants to be used on-site, as well as research on new consolidating products, requires suitable stone samples, with deteriorated but still uniform and controllable characteristics. Therefore, a new methodology to artificially deteriorate stone samples by heating, exploiting the anisotropic thermal deformation of calcite crystals, has recently been proposed. In this study, the effects of heating on a variety of lithotypes were evaluated and the influence of porosity in determining the actual heating effectiveness was specifically investigated. One marble and four limestones, having comparable calcite amounts but very different porosity, were heated at 400 °C for 1 hour. A systematic comparison between porosity, pore size distribution, water absorption, sorptivity and ultrasonic pulse velocity of unheated and heated samples was performed. The results of the study show that the initial stone porosity plays a very important role, as the modifications in microstructural, physical and mechanical properties are far less pronounced as porosity increases. Heating was thus confirmed as a very promising artificial deterioration method, whose effectiveness in producing alterations that suitably resemble those actually experienced in the field depends on the initial porosity of the stone to be treated.

  9. Optimizing cyanobacteria growth conditions in a sealed environment to enable chemical inhibition tests with volatile chemicals.

    PubMed

    Johnson, Tylor J; Zahler, Jacob D; Baldwin, Emily L; Zhou, Ruanbao; Gibbons, William R

    2016-07-01

    Cyanobacteria are currently being engineered to photosynthetically produce next-generation biofuels and high-value chemicals. Many of these chemicals are highly toxic to cyanobacteria, thus strains with increased tolerance need to be developed. The volatility of these chemicals may necessitate that experiments be conducted in a sealed environment to maintain chemical concentrations. Therefore, carbon sources such as NaHCO3 must be used for supporting cyanobacterial growth instead of CO2 sparging. The primary goal of this study was to determine the optimal initial concentration of NaHCO3 for use in growth trials, as well as if daily supplementation of NaHCO3 would allow for increased growth. The secondary goal was to determine the most accurate method to assess growth of Anabaena sp. PCC 7120 in a sealed environment with low biomass titers and small sample volumes. An initial concentration of 0.5g/L NaHCO3 was found to be optimal for cyanobacteria growth, and fed-batch additions of NaHCO3 marginally improved growth. A separate study determined that a sealed test tube environment is necessary to maintain stable titers of volatile chemicals in solution. This study also showed that a SYTO® 9 fluorescence-based assay for cell viability was superior for monitoring filamentous cyanobacterial growth compared to absorbance, chlorophyll α (chl a) content, and biomass content due to its accuracy, small sampling size (100μL), and high throughput capabilities. Therefore, in future chemical inhibition trials, it is recommended that 0.5g/L NaHCO3 is used as the carbon source, and that culture viability is monitored via the SYTO® 9 fluorescence-based assay that requires minimum sample size. Copyright © 2016 Elsevier B.V. All rights reserved.

  10. Phylogenetic effective sample size.

    PubMed

    Bartoszek, Krzysztof

    2016-10-21

    In this paper I address the question: how large is a phylogenetic sample? I propose a definition of a phylogenetic effective sample size for Brownian motion and Ornstein-Uhlenbeck processes: the regression effective sample size. I discuss how mutual information can be used to define an effective sample size in the non-normal process case and compare these two definitions to an already present concept of effective sample size (the mean effective sample size). Through a simulation study I find that the AICc is robust if one corrects for the number of species or effective number of species. Lastly, I discuss how the concept of the phylogenetic effective sample size can be useful for biodiversity quantification, identification of interesting clades and deciding on the importance of phylogenetic correlations. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Stochastic Sampling in the IMF of Galactic Open Clusters

    NASA Astrophysics Data System (ADS)

    Kay, Christina; Hancock, M.; Canalizo, G.; Smith, B. J.; Giroux, M. L.

    2010-01-01

    We sought observational evidence of the effects of stochastic sampling of the initial mass function by investigating the integrated colors of a sample of Galactic open clusters. In particular we looked for scatter in the integrated (V-K) color as previous research resulted in little scatter in the (U-B) and (B-V) colors. Combining data from WEBDA and 2MASS we determined three different colors for 287 open clusters. Of these clusters, 39 have minimum uncertainties in age and formed a standard set. A plot of the (V-K) color versus age showed much more scatter than the (U-B) versus age. We also divided the sample into two groups based on a lowest luminosity limit which is a function of age and V magnitude. We expected the group of clusters fainter than this limit to show more scatter than the brighter group. Assuming the published ages, we compared the reddening corrected observed colors to those predicted by Starburst99. The presence of stochastic sampling should increase scatter in the distribution of the differences between observed and model colors of the fainter group relative to the brighter group. However, we found that K-S tests cannot rule out that the distribution of color difference for the brighter and fainter sets come from the same parent distribution. This indistinguishability may result from uncertainties in the parameters used to define the groups. This result constrains the size of the effects of stochastic sampling of the initial mass function.

  12. The endothelial sample size analysis in corneal specular microscopy clinical examinations.

    PubMed

    Abib, Fernando C; Holzchuh, Ricardo; Schaefer, Artur; Schaefer, Tania; Godois, Ronialci

    2012-05-01

    To evaluate endothelial cell sample size and statistical error in corneal specular microscopy (CSM) examinations. One hundred twenty examinations were conducted with four types of corneal specular microscopes: 30 each with the Bio-Optics, CSO, Konan, and Topcon instruments. All endothelial image data were analyzed by the respective instrument software and also by the Cells Analyzer software, using a method developed in our lab. A reliability degree (RD) of 95% and a relative error (RE) of 0.05 were used as cut-off values to analyze the counted endothelial cells on each image (the sample). The mean sample size was the number of cells evaluated on the images obtained with each device. Only examinations with RE < 0.05 were considered statistically correct and suitable for comparisons with future examinations. The Cells Analyzer software was used to calculate the RE and customized sample size for all examinations. Bio-Optics: sample size, 97 ± 22 cells; RE, 6.52 ± 0.86; only 10% of the examinations had sufficient endothelial cell quantity (RE < 0.05); customized sample size, 162 ± 34 cells. CSO: sample size, 110 ± 20 cells; RE, 5.98 ± 0.98; only 16.6% of the examinations had sufficient endothelial cell quantity (RE < 0.05); customized sample size, 157 ± 45 cells. Konan: sample size, 80 ± 27 cells; RE, 10.6 ± 3.67; none of the examinations had sufficient endothelial cell quantity (RE > 0.05); customized sample size, 336 ± 131 cells. Topcon: sample size, 87 ± 17 cells; RE, 10.1 ± 2.52; none of the examinations had sufficient endothelial cell quantity (RE > 0.05); customized sample size, 382 ± 159 cells. A very high number of CSM examinations had sampling errors according to the Cells Analyzer software. The endothelial sample in each examination needs to include more cells to be reliable and reproducible. The Cells Analyzer tutorial routine will be useful for CSM examination reliability and reproducibility.
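    The abstract does not specify the Cells Analyzer algorithm, but a generic way to turn a 95% reliability degree and a 0.05 relative error into a required cell count is sketched below; the coefficient of variation used is an assumed placeholder, not a value from the study.

```python
# Minimal sketch (generic, NOT the Cells Analyzer algorithm, which the abstract
# does not specify): number of cells needed so that the mean is estimated
# within a target relative error at a given reliability, assuming a known
# coefficient of variation for the measured quantity.
from math import ceil
from scipy.stats import norm

def cells_needed(cv=0.30, rel_error=0.05, reliability=0.95):
    z = norm.ppf(1 - (1 - reliability) / 2)
    return ceil((z * cv / rel_error) ** 2)

print(cells_needed())  # ~139 cells for CV = 0.30 (the CV value is an assumption)
```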

  13. A semisupervised support vector regression method to estimate biophysical parameters from remotely sensed images

    NASA Astrophysics Data System (ADS)

    Castelletti, Davide; Demir, Begüm; Bruzzone, Lorenzo

    2014-10-01

    This paper presents a novel semisupervised learning (SSL) technique defined in the context of ɛ-insensitive support vector regression (SVR) to estimate biophysical parameters from remotely sensed images. The proposed SSL method aims to mitigate the problems of small-sized biased training sets without collecting any additional samples with reference measures. This is achieved on the basis of two consecutive steps. The first step injects additional prior information into the learning phase of the SVR in order to adapt the importance of each training sample according to the distribution of the unlabeled samples. To this end, a weight is initially associated with each training sample based on a novel strategy that defines higher weights for the samples located in the high-density regions of the feature space while giving reduced weights to those that fall into the low-density regions of the feature space. Then, in order to exploit different weights for training samples in the learning phase of the SVR, we introduce a weighted SVR (WSVR) algorithm. The second step jointly exploits labeled and informative unlabeled samples to further improve the definition of the WSVR learning function. To this end, the most informative unlabeled samples, those expected to have accurate target values, are first selected according to a novel strategy that relies on the distribution of the unlabeled samples in the feature space and on the WSVR function estimated at the first step. Then, we introduce a restructured WSVR algorithm that jointly uses labeled and unlabeled samples in the learning phase of the WSVR algorithm and tunes their importance by different values of regularization parameters. Experimental results obtained for the estimation of single-tree stem volume show the effectiveness of the proposed SSL method.
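    The first step (density-dependent weighting of training samples) can be sketched with standard tools, since scikit-learn's SVR accepts per-sample weights at fit time. The density-to-weight mapping below is an assumption for illustration, not the authors' exact strategy, and the restructured semisupervised second step is omitted.

```python
# Minimal sketch of the first (weighting) step only: weight each labeled
# sample by the density of unlabeled samples around it, then fit a weighted
# SVR. The density-to-weight mapping is an assumption, not the authors' rule.
import numpy as np
from sklearn.neighbors import KernelDensity
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X_train = rng.normal(size=(50, 4))                   # small labeled set
y_train = X_train[:, 0] + 0.1 * rng.normal(size=50)  # synthetic target values
X_unlab = rng.normal(size=(1000, 4))                 # plentiful unlabeled samples

kde = KernelDensity(bandwidth=0.5).fit(X_unlab)
density = np.exp(kde.score_samples(X_train))  # unlabeled-data density at labeled points
weights = density / density.mean()            # heavier weight in high-density regions

model = SVR(kernel="rbf", C=10.0, epsilon=0.1)
model.fit(X_train, y_train, sample_weight=weights)
print(model.predict(X_unlab[:3]))
```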

  14. Particle Morphology and Size Results from the Smoke Aerosol Measurement Experiment-2

    NASA Technical Reports Server (NTRS)

    Urban, David L.; Ruff, Gary A.; Greenberg, Paul S.; Fischer, David; Meyer, Marit; Mulholland, George; Yuan, Zeng-Guang; Bryg, Victoria; Cleary, Thomas; Yang, Jiann

    2012-01-01

    Results are presented from the Reflight of the Smoke Aerosol Measurement Experiment (SAME-2), which was conducted during Expedition 24 (July-September 2010). The reflight experiment built upon the results of the original flight during Expedition 15 by adding diagnostic measurements and expanding the test matrix. Five different materials representative of those found in spacecraft (Teflon, Kapton, cotton, silicone rubber and Pyrell) were heated to temperatures below the ignition point with conditions controlled to provide repeatable sample surface temperatures and air flow. The air flow past the sample during the heating period ranged from quiescent to 8 cm/s. The smoke was initially collected in an aging chamber to simulate the transport time from the smoke source to the detector. This effective transport time was varied by holding the smoke in the aging chamber for times ranging from 11 to 1800 s. Smoke particle samples were collected on Transmission Electron Microscope (TEM) grids for post-flight analysis. The TEM grids were analyzed to observe the particle morphology and size parameters. The diagnostics included a prototype two-moment smoke detector and three different measures of moments of the particle size distribution. These moment diagnostics were used to determine the particle number concentration (zeroth moment), the diameter concentration (first moment), and the mass concentration (third moment). These statistics were combined to determine the diameter of average mass and the count mean diameter; by assuming a log-normal distribution, the geometric mean diameter and geometric standard deviation can also be calculated. Overall, the majority of the average smoke particle sizes were found to be in the 200 nm to 400 nm range, with some quiescent cases producing substantially larger particles.
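    The moment relationships described above can be written compactly; the sketch below assumes a log-normal size distribution and uses made-up moment values to recover the count mean diameter, diameter of average mass, geometric mean diameter, and geometric standard deviation.

```python
# Minimal sketch of the moment relationships described above, assuming a
# log-normal size distribution; the moment values are made up for illustration.
import math

M0 = 2.0e13   # number concentration, particles per m^3   (assumed)
M1 = 6.0e6    # diameter concentration, m per m^3          (assumed)
M3 = 1.2e-6   # third diameter moment, m^3 per m^3         (assumed)

cmd = M1 / M0                 # count mean diameter
dam = (M3 / M0) ** (1 / 3)    # diameter of average mass

s2 = math.log(dam / cmd)          # ln^2(GSD) under the log-normal assumption
gsd = math.exp(math.sqrt(s2))     # geometric standard deviation
dg = cmd * math.exp(-0.5 * s2)    # geometric mean (count median) diameter

print(f"CMD = {cmd*1e9:.0f} nm, d_am = {dam*1e9:.0f} nm, "
      f"dg = {dg*1e9:.0f} nm, GSD = {gsd:.2f}")
```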

  15. Determination of the Thermal Properties of Sands as Affected by Water Content, Drainage/Wetting, and Porosity Conditions for Sands With Different Grain Sizes

    NASA Astrophysics Data System (ADS)

    Smits, K. M.; Sakaki, T.; Limsuwat, A.; Illangasekare, T. H.

    2009-05-01

    It is widely recognized that liquid water, water vapor and temperature movement in the subsurface near the land/atmosphere interface are strongly coupled, influencing many agricultural, biological and engineering applications such as irrigation practices, the assessment of contaminant transport and the detection of buried landmines. In these systems, a clear understanding of how variations in water content, soil drainage/wetting history, porosity conditions and grain size affect the soil's thermal behavior is needed; however, consideration of all factors is rare, as very few experimental data showing the effects of these variations are available. In this study, the effect of soil moisture, drainage/wetting history, and porosity on the thermal conductivity of sandy soils with different grain sizes was investigated. For this experimental investigation, several recent sensor-based technologies were compiled into a Tempe cell modified to have a network of sampling ports, continuously monitoring water saturation, capillary pressure, temperature, and soil thermal properties. The water table was established at mid elevation of the cell and then lowered slowly. The initially saturated soil sample was subjected to slow drainage, wetting, and secondary drainage cycles. After liquid water drainage ceased, evaporation was induced at the surface to remove soil moisture from the sample to obtain thermal conductivity data below the residual saturation. For the test soils studied, thermal conductivity increased with increasing moisture content, soil density and grain size, while values measured during drying and wetting were similar. Thermal properties measured in this study were then compared with independent estimates made using empirical models from the literature. These soils will be used in a proposed set of experiments in intermediate scale test tanks to obtain data to validate methods and modeling tools used for landmine detection.

  16. Accounting for twin births in sample size calculations for randomised trials.

    PubMed

    Yelland, Lisa N; Sullivan, Thomas R; Collins, Carmel T; Price, David J; McPhee, Andrew J; Lee, Katherine J

    2018-05-04

    Including twins in randomised trials leads to non-independence or clustering in the data. Clustering has important implications for sample size calculations, yet few trials take this into account. Estimates of the intracluster correlation coefficient (ICC), or the correlation between outcomes of twins, are needed to assist with sample size planning. Our aims were to provide ICC estimates for infant outcomes, describe the information that must be specified in order to account for clustering due to twins in sample size calculations, and develop a simple tool for performing sample size calculations for trials including twins. ICCs were estimated for infant outcomes collected in four randomised trials that included twins. The information required to account for clustering due to twins in sample size calculations is described. A tool that calculates the sample size based on this information was developed in Microsoft Excel and in R as a Shiny web app. ICC estimates ranged between -0.12, indicating a weak negative relationship, and 0.98, indicating a strong positive relationship between outcomes of twins. Example calculations illustrate how the ICC estimates and sample size calculator can be used to determine the target sample size for trials including twins. Clustering among outcomes measured on twins should be taken into account in sample size calculations to obtain the desired power. Our ICC estimates and sample size calculator will be useful for designing future trials that include twins. Publication of additional ICCs is needed to further assist with sample size planning for future trials. © 2018 John Wiley & Sons Ltd.
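    A commonly used approximation for a mix of singletons and twin pairs inflates the independence-based sample size by the design effect 1 + p·ICC, where p is the proportion of infants expected to come from twin pairs; the sketch below implements that adjustment and is not necessarily identical to the authors' Excel/Shiny calculator.

```python
# Minimal sketch: inflate an independence-based sample size by the design
# effect 1 + p*ICC for a mix of singletons and twin pairs (p = proportion of
# infants from twin pairs). A common approximation, not necessarily the
# authors' Excel/Shiny calculator.
import math

def adjusted_n(n_independent, prop_twin_infants, icc):
    design_effect = 1 + prop_twin_infants * icc
    return math.ceil(n_independent * design_effect)

print(adjusted_n(300, 0.20, 0.5))  # e.g., 300 infants, 20% twins, ICC = 0.5 -> 330
```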

  17. [Estimated prevalence of autism spectrum disorders in the Canary Islands].

    PubMed

    Fortea Sevilla, M S; Escandell Bermúdez, M O; Castro Sánchez, J J

    2013-12-01

    To make an initial estimate of the prevalence of autism spectrum disorders (ASDs) among children in the province of Las Palmas (Spain). A descriptive study was conducted on 1,796 children between 18 and 30 months of age, all part of the Child Health Surveillance of the Canary Islands, more specifically the province of Las Palmas, with a population of 1,090,605. The parents of the children involved completed the Spanish version of the Modified Checklist for Autism in Toddlers (M-CHAT/ES) in the paediatric clinic. The positive cases were then diagnosed by experts by means of the Autism Diagnostic Interview-Revised (ADI-R) and the Autism Diagnostic Observation Schedule (ADOS). A 0.61% prevalence of ASDs was determined, similar to that reported in previous studies using the same tools. The ratio was six girls for every five boys, contrary to previous studies which suggested that more boys than girls were affected. This may have been due to the sample size, which will have to be increased in future studies to confirm this outcome. A larger sample size, extended to other age ranges, should be used in order to obtain a more reliable estimate of prevalence. As regards the gender ratio, this could be a result of the small size of the sample researched, and should therefore be confirmed by further studies. Copyright © 2012 Asociación Española de Pediatría. Published by Elsevier Espana. All rights reserved.

  18. Sample size determination for mediation analysis of longitudinal data.

    PubMed

    Pan, Haitao; Liu, Suyu; Miao, Danmin; Yuan, Ying

    2018-03-27

    Sample size planning for longitudinal data is crucial when designing mediation studies because sufficient statistical power is not only required in grant applications and peer-reviewed publications, but is essential to reliable research results. However, sample size determination is not straightforward for mediation analysis in longitudinal designs. To facilitate planning the sample size for longitudinal mediation studies with a multilevel mediation model, this article provides, by simulation, the sample sizes required to achieve 80% power under various mediation effect sizes, within-subject correlations, and numbers of repeated measures. The sample size calculation is based on three commonly used mediation tests: Sobel's method, the distribution of the product method, and the bootstrap method. Among the three methods of testing the mediation effects, Sobel's method required the largest sample size to achieve 80% power. Bootstrapping and the distribution of the product method performed similarly and were more powerful than Sobel's method, as reflected by the relatively smaller sample sizes. For all three methods, the sample size required to achieve 80% power depended on the value of the ICC (i.e., within-subject correlation). A larger value of the ICC typically required a larger sample size to achieve 80% power. Simulation results also illustrated the advantage of the longitudinal study design. Sample size tables for the scenarios most often encountered in practice are also provided for convenient use. An extensive simulation study showed that the distribution of the product method and the bootstrap method outperform Sobel's method, but the distribution of the product method is recommended in practice because it requires less computation time than bootstrapping. An R package has been developed for sample size determination with the distribution of the product method in longitudinal mediation study designs.
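    A simulation-based power check for the Sobel test in a simple single-level mediation model can be sketched as below; the article's multilevel longitudinal model is not reproduced, and the path coefficients and sample size are placeholders.

```python
# Minimal sketch: power of the Sobel test for a simple single-level mediation
# model X -> M -> Y, estimated by simulation. The article's multilevel
# longitudinal model is not reproduced; a, b, and n are placeholders.
import numpy as np

def _slope_se(X, y, which=0):
    """OLS slope and standard error for column `which` of X (intercept added)."""
    n = len(y)
    Xd = np.column_stack([np.ones(n), X])
    xtx_inv = np.linalg.inv(Xd.T @ Xd)
    beta = xtx_inv @ Xd.T @ y
    resid = y - Xd @ beta
    sigma2 = resid @ resid / (n - Xd.shape[1])
    se = np.sqrt(sigma2 * np.diag(xtx_inv))
    return beta[which + 1], se[which + 1]

def sobel_power(n=200, a=0.3, b=0.3, n_sim=2000, z_crit=1.96, seed=0):
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sim):
        x = rng.normal(size=n)
        m = a * x + rng.normal(size=n)
        y = b * m + rng.normal(size=n)
        a_hat, sa = _slope_se(x[:, None], m)               # M ~ X
        b_hat, sb = _slope_se(np.column_stack([m, x]), y)  # Y ~ M + X
        z = (a_hat * b_hat) / np.sqrt(a_hat**2 * sb**2 + b_hat**2 * sa**2)
        hits += abs(z) > z_crit
    return hits / n_sim

print(sobel_power())  # empirical power at the placeholder settings
```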

  19. Comparison and Field Validation of Binomial Sampling Plans for Oligonychus perseae (Acari: Tetranychidae) on Hass Avocado in Southern California.

    PubMed

    Lara, Jesus R; Hoddle, Mark S

    2015-08-01

    Oligonychus perseae Tuttle, Baker, & Abatiello is a foliar pest of 'Hass' avocados [Persea americana Miller (Lauraceae)]. The recommended action threshold is 50-100 motile mites per leaf, but this count range and other ecological factors associated with O. perseae infestations limit the application of enumerative sampling plans in the field. Consequently, a comprehensive modeling approach was implemented to compare the practical application of various binomial sampling models for decision-making of O. perseae in California. An initial set of sequential binomial sampling models were developed using three mean-proportion modeling techniques (i.e., Taylor's power law, maximum likelihood, and an empirical model) in combination with two-leaf infestation tally thresholds of either one or two mites. Model performance was evaluated using a robust mite count database consisting of >20,000 Hass avocado leaves infested with varying densities of O. perseae and collected from multiple locations. Operating characteristic and average sample number results for sequential binomial models were used as the basis to develop and validate a standardized fixed-size binomial sampling model with guidelines on sample tree and leaf selection within blocks of avocado trees. This final validated model requires a leaf sampling cost of 30 leaves and takes into account the spatial dynamics of O. perseae to make reliable mite density classifications for a 50-mite action threshold. Recommendations for implementing this fixed-size binomial sampling plan to assess densities of O. perseae in commercial California avocado orchards are discussed. © The Authors 2015. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
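    Once the probability p that a leaf carries at least the tally threshold of mites is known, the operating characteristic of a fixed-size binomial plan follows directly from the binomial distribution; the sketch below illustrates this without reproducing the paper's mean-proportion models (e.g., Taylor's power law), and the decision count and p grid are illustrative.

```python
# Minimal sketch: operating characteristic (OC) of a fixed-size binomial plan
# that classifies a block as "above the action threshold" when at least c of
# n sampled leaves carry >= T motile mites. The mean-proportion link (e.g.,
# Taylor's power law) is not reproduced; n, c, and the p grid are illustrative.
import numpy as np
from scipy.stats import binom

n_leaves, c = 30, 12                 # leaves per block, decision count (assumed)
p = np.linspace(0.05, 0.95, 10)      # probability a leaf is infested (>= T mites)

oc = binom.cdf(c - 1, n_leaves, p)   # P(classify "below threshold" | p)
for pi, oci in zip(p, oc):
    print(f"p = {pi:.2f}  ->  P(no-treat decision) = {oci:.3f}")
```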

  20. Public Opinion Polls, Chicken Soup and Sample Size

    ERIC Educational Resources Information Center

    Nguyen, Phung

    2005-01-01

    Cooking and tasting chicken soup in three different pots of very different size serves to demonstrate that it is the absolute sample size that matters the most in determining the accuracy of the findings of the poll, not the relative sample size, i.e. the size of the sample in relation to its population.

  1. Cascades in the Threshold Model for varying system sizes

    NASA Astrophysics Data System (ADS)

    Karampourniotis, Panagiotis; Sreenivasan, Sameet; Szymanski, Boleslaw; Korniss, Gyorgy

    2015-03-01

    A classical model in opinion dynamics is the Threshold Model (TM), which aims to model the spread of a new opinion based on the social drive of peer pressure. Under the TM a node adopts a new opinion only when the fraction of its first neighbors possessing that opinion exceeds a pre-assigned threshold. Cascades in the TM depend on multiple parameters, such as the number and selection strategy of the initially active nodes (initiators), and the threshold distribution of the nodes. For a uniform threshold in the network there is a critical fraction of initiators for which a transition from small to large cascades occurs, which for ER graphs is largely independent of the system size. Here, we study the spread contribution of each newly assigned initiator under the TM for different initiator selection strategies for synthetic graphs of various sizes. We observe that for ER graphs, when large cascades occur, the spread contribution of the added initiator at the transition point is independent of the system size, while the contribution of the rest of the initiators converges to zero at infinite system size. This property is used for the identification of large transitions for various threshold distributions. Supported in part by ARL NS-CTA, ARO, ONR, and DARPA.
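    To make the update rule concrete, the sketch below runs a single Threshold Model cascade on an ER graph with a uniform threshold and randomly chosen initiators; it is illustrative only and not the authors' code, and sweeping the initiator fraction merely hints at where a transition may occur.

```python
# Minimal sketch: one Threshold Model cascade on an ER graph with a uniform
# threshold phi and randomly chosen initiators (illustrative; not the authors' code).
import random
import networkx as nx

def cascade_size(n=10_000, avg_deg=10, phi=0.4, frac_initiators=0.15, seed=0):
    random.seed(seed)
    g = nx.fast_gnp_random_graph(n, avg_deg / (n - 1), seed=seed)
    active = set(random.sample(list(g.nodes), int(frac_initiators * n)))
    while True:  # repeat synchronous sweeps until no further adoptions
        newly = {v for v in g.nodes
                 if v not in active and g.degree(v) > 0
                 and sum(nb in active for nb in g.neighbors(v)) / g.degree(v) >= phi}
        if not newly:
            break
        active |= newly
    return len(active) / n

# sweep the initiator fraction to look for a small-to-large cascade transition
for f in (0.05, 0.10, 0.15, 0.20, 0.25):
    print(f, cascade_size(frac_initiators=f))
```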

  2. Analyzing hidden populations online: topic, emotion, and social network of HIV-related users in the largest Chinese online community.

    PubMed

    Liu, Chuchu; Lu, Xin

    2018-01-05

    Traditional survey methods are limited in the study of hidden populations due to hard-to-access properties, including the lack of a sampling frame, sensitivity issues, reporting errors, and small sample sizes. The rapid growth of online communities, whose members interact with one another via the Internet, has generated large amounts of data, offering new opportunities for understanding hidden populations with unprecedented sample sizes and richness of information. In this study, we try to understand the multidimensional characteristics of a hidden population by analyzing the massive data generated in an online community. By elaborately designing crawlers, we retrieved a complete dataset from the "HIV bar," the largest bar related to HIV on the Baidu Tieba platform, for all records from January 2005 to August 2016. Through natural language processing and social network analysis, we explored the psychology, behavior and demands of the online HIV population and examined the network community structure. In HIV communities, the average topic similarity among members is positively correlated with network efficiency (r = 0.70, p < 0.001), indicating that the closer the social distance between members of the community, the more similar their topics. The proportion of negative users in each community is around 60%, weakly correlated with community size (r = 0.25, p = 0.002). Users suspecting initial HIV infection or newly exposed to high-risk behaviors were found to seek help and advice on the social networking platform rather than immediately going to a hospital for blood tests. Online communities have generated copious amounts of data offering new opportunities for understanding hidden populations with unprecedented sample sizes and richness of information. It is recommended that support through online services for HIV/AIDS consultation and diagnosis be improved to avoid privacy concerns and social discrimination in China.

  3. Experimental Investigation of Shock Initiation in Mixtures of Manganese and Sulfur

    NASA Astrophysics Data System (ADS)

    Jette, Francois-Xavier; Goroshin, Sam; Higgins, Andrew

    2009-06-01

    Equimolar mixtures of manganese powder and sulfur at different initial densities were tested in two different types of steel recovery capsules in order to study the shock initiation phenomenon in SHS mixtures. This mixture composition was selected due to the large exothermic heat release of the manganese-sulfur reaction (214 kJ/mol), which causes the reaction to be self-sustaining once initiated. Two different sizes of Mn particles were used for these experiments, 1-5 μm and -325 mesh (44μm or less). The test samples were placed in planar recovery ampoules and a strong shock was delivered via the detonation of a charge of amine-sensitized nitromethane. Various shock strengths were achieved by placing different thicknesses of PMMA attenuator discs between the explosive charge and the ampoule. The results confirmed that shock-induced reactions can be produced in highly non-porous mixtures. It was also found that shock interactions with the side walls of the recovery capsule can play a significant role in the initiation, and that mixtures containing the larger Mn particles were very difficult to initiate in the absence of shock interactions with the capsule walls.

  4. Source contributions to PM10 and arsenic concentrations in Central Chile using positive matrix factorization

    NASA Astrophysics Data System (ADS)

    Hedberg, Emma; Gidhagen, Lars; Johansson, Christer

    Sampling of particles (PM10) was conducted during a one-year period at two rural sites in Central Chile, Quillota and Linares. The samples were analyzed for elemental composition. The data sets have undergone source-receptor analyses in order to estimate the sources and their abundances in the PM10 size fraction, by using the factor analytical method positive matrix factorization (PMF). The analysis showed that PM10 was dominated by soil resuspension at both sites during the summer months, while during winter traffic dominated the particle mass at Quillota and local wood burning dominated the particle mass at Linares. Two copper smelters impacted the Quillota station, and contributed 10% and 16% of PM10 as an average during summer and winter, respectively. One smelter impacted Linares by 8% and 19% of PM10 in the summer and winter, respectively. For arsenic the two smelters accounted for 87% of the monitored arsenic levels at Quillota, and at Linares one smelter contributed 72% of the measured mass. In comparison with PMF, the use of a dispersion model tended to overestimate the smelter contribution to arsenic levels at both sites. The robustness of the PMF model was tested by using randomly reduced data sets, where 85%, 70%, 50% and 33% of the samples were included. In this way the ability of the model to reconstruct the sources initially found by the original data set could be tested. On average for all sources the relative standard deviation increased from 7% to 25% for the variables identifying the sources, when decreasing the data set from 85% to 33% of the samples, indicating that the solution initially found was very stable to begin with. It was also noted, however, that sources due to industrial or combustion processes were more sensitive to the size of the data set than natural sources such as local soil and sea spray.
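
    For readers who want to experiment with this kind of source-receptor analysis, the sketch below uses scikit-learn's NMF as a simplified stand-in for PMF (plain NMF lacks PMF's per-element uncertainty weighting) on a synthetic samples-by-species matrix, and repeats the randomly-reduced-data robustness check in spirit; the matrix, factor count, and reduction fractions are assumptions made for illustration only.

      import numpy as np
      from sklearn.decomposition import NMF

      rng = np.random.default_rng(0)
      # Hypothetical data: 200 PM10 samples x 12 elemental species (non-negative concentrations).
      X = rng.gamma(shape=2.0, scale=1.0, size=(200, 12))

      full = NMF(n_components=4, init="nndsvda", max_iter=1000, random_state=0)
      G = full.fit_transform(X)        # source contributions (samples x factors)
      F = full.components_             # source profiles (factors x species)

      # Robustness check in the spirit of the study: refit on randomly reduced data sets and
      # compare each full-data profile with its best-matching profile from the reduced fit.
      for keep in (0.85, 0.70, 0.50, 0.33):
          idx = rng.choice(X.shape[0], size=int(keep * X.shape[0]), replace=False)
          F_sub = NMF(n_components=4, init="nndsvda", max_iter=1000,
                      random_state=0).fit(X[idx]).components_
          match = [max(np.corrcoef(f, g)[0, 1] for g in F_sub) for f in F]
          print(f"kept {keep:.0%} of samples -> mean best-match profile correlation {np.mean(match):.3f}")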

  5. Effects of Initial Powder Size on the Mechanical Properties and Microstructure of As-Extruded GRCop-84

    NASA Technical Reports Server (NTRS)

    Okoro, Chika L.

    2004-01-01

    GRCop-84 was developed to meet the mechanical and thermal property requirements for advanced regeneratively cooled rocket engine main combustion chamber liners. It is a ternary Cu-Cr-Nb alloy having approximately 8 at% Cr and 4 at% Nb. The chromium and niobium constituents combine to form 14 vol% Cr2Nb, the strengthening phase. The alloy is made by producing GRCop-84 powder through gas atomization and consolidating the powder using extrusion, hot isostatic pressing (HIP) or vacuum plasma spraying (VPS). GRCop-84 has been selected by Rocketdyne, Pratt & Whitney and Aerojet for use in their next generation of rocket engines. GRCop-84 demonstrates favorable mechanical and thermal properties at elevated temperatures. Compared to NARloy-Z, the currently used material in the Space Shuttle, GRCop-84 has approximately twice the yield strength, 10-1000 times the creep life, and 1.5-2.5 times the low cycle fatigue life. The thermal expansion of GRCop-84 is approximately 7.5% less than that of NARloy-Z, which minimizes thermally induced stresses. The thermal conductivity of the two alloys is comparable at low temperature but NARloy-Z has a 20-50 W/mK thermal conductivity advantage at typical rocket engine hot wall temperatures. GRCop-84 is also much more microstructurally stable than NARloy-Z, which translates into better long term stability of mechanical properties. Previous research into metal alloys fabricated by means of powder metallurgy (PM) has demonstrated that initial powder size can affect the microstructural development and mechanical properties of such materials. Grain size, strength, ductility, size of second phases, etc., have all been shown to vary with starting powder size in PM alloys. This work focuses on characterizing the effect of varying starting powder size on the microstructural evolution and mechanical properties of as-extruded GRCop-84. Tensile tests and constant load creep tests were performed on extrusions of four powder meshes: +140 mesh (greater than 105 micron powder size), -140 mesh (less than or equal to 105 microns), -140/+270 mesh (53-105 microns), and -270 mesh (less than or equal to 53 microns). Samples were tested in tension at room temperature and at 500 C (932 F). Creep tests were performed under vacuum at 500 C using a stress of 111 MPa (16.1 ksi). The fracture surfaces of selected samples from both tests were studied using a Scanning Electron Microscope (SEM). The as-extruded materials were also studied, using both optical microscopy and SEM analysis, to characterize changes within the microstructure.

  6. Effects of Grain Size and Twin Layer Thickness on Crack Initiation at Twin Boundaries.

    PubMed

    Zhou, Piao; Zhou, Jianqiu; Zhu, Yongwei; Jiang, E; Wang, Zikun

    2018-04-01

    A theoretical model exploring crack initiation in nanotwinned materials was proposed based on the accumulation of dislocations at twin boundaries. First, a critical crack initiation condition was established considering the number of dislocations piled up at twin boundaries (TBs), the grain size and the twin layer thickness, and a semi-quantitative relationship between the crystallographic orientation and the stacking fault energy was built. In addition, the number of piled-up dislocations was described by introducing strain gradient theory. Based on this model, the effects of grain size and twin lamella thickness on dislocation density and crack initiation at twin boundaries were also discussed. The simulation results demonstrated that the crack initiation resistance can be improved by decreasing the grain size and increasing the twin lamella thickness, which is in agreement with recent experimental findings reported in the literature.

  7. Dispersion and sampling of adult Dermacentor andersoni in rangeland in Western North America.

    PubMed

    Rochon, K; Scoles, G A; Lysyk, T J

    2012-03-01

    A fixed precision sampling plan was developed for off-host populations of adult Rocky Mountain wood tick, Dermacentor andersoni (Stiles) based on data collected by dragging at 13 locations in Alberta, Canada; Washington; and Oregon. In total, 222 site-date combinations were sampled. Each site-date combination was considered a sample, and each sample ranged in size from 86 to 250 10 m2 quadrats. Analysis of simulated quadrats ranging in size from 10 to 50 m2 indicated that the most precise sample unit was the 10 m2 quadrat. Samples taken when abundance < 0.04 ticks per 10 m2 were more likely to not depart significantly from statistical randomness than samples taken when abundance was greater. Data were grouped into ten abundance classes and assessed for fit to the Poisson and negative binomial distributions. The Poisson distribution fit only data in abundance classes < 0.02 ticks per 10 m2, while the negative binomial distribution fit data from all abundance classes. A negative binomial distribution with common k = 0.3742 fit data in eight of the 10 abundance classes. Both the Taylor and Iwao mean-variance relationships were fit and used to predict sample sizes for a fixed level of precision. Sample sizes predicted using the Taylor model tended to underestimate actual sample sizes, while sample sizes estimated using the Iwao model tended to overestimate actual sample sizes. Using a negative binomial with common k provided estimates of required sample sizes closest to empirically calculated sample sizes.
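
    The fixed-precision sample sizes implied by the two mean-variance fits follow directly from the fitted coefficients. A hedged sketch is given below (the coefficients are hypothetical placeholders, not the values fitted in this paper), with precision D expressed as the ratio of standard error to mean.

      def n_taylor(m, a, b, D=0.25):
          """Quadrats needed for fixed precision D (SE/mean) from Taylor's power law s^2 = a * m^b."""
          return a * m ** (b - 2) / D ** 2

      def n_iwao(m, alpha, beta, D=0.25):
          """Quadrats needed for fixed precision D from Iwao's patchiness regression m* = alpha + beta * m,
          which implies s^2 = (alpha + 1) * m + (beta - 1) * m^2."""
          return ((alpha + 1) / m + (beta - 1)) / D ** 2

      # Hypothetical coefficients for illustration only (not the fitted values from this study).
      for m in (0.05, 0.2, 1.0, 5.0):              # mean ticks per 10 m^2 quadrat
          print(f"mean = {m:>4}: Taylor n = {n_taylor(m, a=2.0, b=1.4):7.1f}, "
                f"Iwao n = {n_iwao(m, alpha=0.1, beta=1.8):7.1f}")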

  8. Simple, Defensible Sample Sizes Based on Cost Efficiency

    PubMed Central

    Bacchetti, Peter; McCulloch, Charles E.; Segal, Mark R.

    2009-01-01

    Summary The conventional approach of choosing sample size to provide 80% or greater power ignores the cost implications of different sample size choices. Costs, however, are often impossible for investigators and funders to ignore in actual practice. Here, we propose and justify a new approach for choosing sample size based on cost efficiency, the ratio of a study’s projected scientific and/or practical value to its total cost. By showing that a study’s projected value exhibits diminishing marginal returns as a function of increasing sample size for a wide variety of definitions of study value, we are able to develop two simple choices that can be defended as more cost efficient than any larger sample size. The first is to choose the sample size that minimizes the average cost per subject. The second is to choose sample size to minimize total cost divided by the square root of sample size. This latter method is theoretically more justifiable for innovative studies, but also performs reasonably well and has some justification in other cases. For example, if projected study value is assumed to be proportional to power at a specific alternative and total cost is a linear function of sample size, then this approach is guaranteed either to produce more than 90% power or to be more cost efficient than any sample size that does. These methods are easy to implement, based on reliable inputs, and well justified, so they should be regarded as acceptable alternatives to current conventional approaches. PMID:18482055
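
    A small numerical sketch of the two rules follows, using an assumed cost structure rather than one from the paper. With a purely linear total cost c0 + c1*n, the "total cost divided by sqrt(n)" rule has the closed form n* = c0/c1, while the "average cost per subject" rule needs some nonlinearity in cost to have a finite minimum, so a small quadratic term is added here so that both rules can be compared on the same grid.

      import numpy as np

      # Assumed cost structure: fixed cost c0, per-subject cost c1, plus a small quadratic
      # term c2 reflecting rising recruitment costs at larger sample sizes.
      c0, c1, c2 = 100_000.0, 1_000.0, 5.0
      n = np.arange(10, 2001)
      total_cost = c0 + c1 * n + c2 * n ** 2

      n_rule1 = n[np.argmin(total_cost / n)]             # minimize average cost per subject
      n_rule2 = n[np.argmin(total_cost / np.sqrt(n))]    # minimize total cost / sqrt(n)

      print("Rule 1 (min cost per subject):  n =", n_rule1)   # equals sqrt(c0/c2), ~141 here
      print("Rule 2 (min cost / sqrt(n)):    n =", n_rule2)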

  9. RnaSeqSampleSize: real data based sample size estimation for RNA sequencing.

    PubMed

    Zhao, Shilin; Li, Chung-I; Guo, Yan; Sheng, Quanhu; Shyr, Yu

    2018-05-30

    One of the most important and often neglected components of a successful RNA sequencing (RNA-Seq) experiment is sample size estimation. A few negative binomial model-based methods have been developed to estimate sample size based on the parameters of a single gene. However, thousands of genes are quantified and tested for differential expression simultaneously in RNA-Seq experiments. Thus, additional issues should be carefully addressed, including the false discovery rate for multiple statistical tests and the widely varying read counts and dispersions of different genes. To solve these issues, we developed a sample size and power estimation method named RnaSeqSampleSize, based on the distributions of gene average read counts and dispersions estimated from real RNA-Seq data. Datasets from previous, similar experiments such as The Cancer Genome Atlas (TCGA) can be used as a point of reference. Read counts and their dispersions were estimated from the reference's distribution; using that information, we estimated and summarized the power and sample size. RnaSeqSampleSize is implemented in the R language and can be installed from the Bioconductor website. A user-friendly web graphical interface is provided at http://cqs.mc.vanderbilt.edu/shiny/RnaSeqSampleSize/. RnaSeqSampleSize provides a convenient and powerful way to estimate power and sample size for an RNA-Seq experiment. It is also equipped with several unique features, including estimation for genes or pathways of interest, power curve visualization, and parameter optimization.
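
    Independently of the package, the core idea of power estimation for a single gene under a negative binomial read-count model can be approximated by simulation. The sketch below is a crude stand-in, not the RnaSeqSampleSize algorithm: it simulates NB counts for two groups and applies a t-test on log-transformed counts, with a small alpha mimicking a multiple-testing-adjusted threshold; all parameter values are assumptions.

      import numpy as np
      from scipy import stats

      def nb_power(n, mean0=5.0, fold=2.0, dispersion=0.5, alpha=0.001, n_sim=2000, seed=0):
          """Simulated power for one gene: NB counts, two groups of n, t-test on log2(count + 1).
          A crude stand-in for a negative binomial GLM test; alpha is set low to mimic a
          multiple-testing-adjusted threshold."""
          rng = np.random.default_rng(seed)
          hits = 0
          for _ in range(n_sim):
              # numpy's NB parameterization: number of successes r = 1/dispersion, p = r / (r + mean)
              r = 1.0 / dispersion
              g0 = rng.negative_binomial(r, r / (r + mean0), size=n)
              g1 = rng.negative_binomial(r, r / (r + mean0 * fold), size=n)
              _, p = stats.ttest_ind(np.log2(g0 + 1.0), np.log2(g1 + 1.0))
              hits += p < alpha
          return hits / n_sim

      for n in (5, 10, 20, 40):
          print(f"n per group = {n:>2}: estimated power = {nb_power(n):.2f}")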

  10. The special case of the 2 × 2 table: asymptotic unconditional McNemar test can be used to estimate sample size even for analysis based on GEE.

    PubMed

    Borkhoff, Cornelia M; Johnston, Patrick R; Stephens, Derek; Atenafu, Eshetu

    2015-07-01

    Aligning the method used to estimate sample size with the planned analytic method ensures the sample size needed to achieve the planned power. When using generalized estimating equations (GEE) to analyze a paired binary primary outcome with no covariates, many use an exact McNemar test to calculate sample size. We reviewed the approaches to sample size estimation for paired binary data and compared the sample size estimates on the same numerical examples. We used the hypothesized sample proportions for the 2 × 2 table to calculate the correlation between the marginal proportions to estimate sample size based on GEE. We solved for the inner cell proportions based on the correlation and the marginal proportions to estimate sample size based on the exact McNemar, asymptotic unconditional McNemar, and asymptotic conditional McNemar tests. The asymptotic unconditional McNemar test is a good approximation of the GEE method of Pan. The exact McNemar test is too conservative and yields unnecessarily large sample size estimates compared with all other methods. In the special case of a 2 × 2 table, even when a GEE approach to binary logistic regression is the planned analytic method, the asymptotic unconditional McNemar test can be used to estimate sample size. We do not recommend using an exact McNemar test. Copyright © 2015 Elsevier Inc. All rights reserved.
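
    For reference, a minimal sketch of the asymptotic unconditional McNemar sample size calculation (a Connor-type formula) starting from hypothesized discordant cell proportions of the 2 × 2 table; the example proportions below are hypothetical.

      import math
      from scipy.stats import norm

      def n_mcnemar_unconditional(p10, p01, alpha=0.05, power=0.80):
          """Asymptotic unconditional McNemar sample size (number of pairs) for paired binary data.
          p10 and p01 are the hypothesized discordant cell proportions of the 2x2 table."""
          delta = p10 - p01                     # difference in marginal proportions
          psi = p10 + p01                       # total discordant proportion
          za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
          n = (za * math.sqrt(psi) + zb * math.sqrt(psi - delta ** 2)) ** 2 / delta ** 2
          return math.ceil(n)

      # Hypothetical example: 25% of pairs discordant favouring treatment, 15% favouring control.
      print(n_mcnemar_unconditional(p10=0.25, p01=0.15))   # pairs required (about 312)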

  11. Fracture morphologies of carbon-black-loaded SBR (styrene-butadiene rubber) subjected to low-cycle, high-stress fatigue. [Styrene-butadiene rubber

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldberg, A.; Lesuer, D.R.; Patt, J.

    Experimental results, together with an analytical model, related to the loss in tensile strength of styrene-butadiene rubber (SBR) loaded with carbon black (CB) that had been subjected to low-cycle, high-stress fatigue tests were presented in a prior paper. The drop in tensile strength relative to that of a virgin sample was considered to be a measure of damage induced during the fatigue test. The present paper is a continuation of this study dealing with the morphological interpretations of the fractured surfaces, whereby the cyclic-tearing behavior, resulting in the damage, is related to the test and material parameters. It was found that failure is almost always initiated in the bulk of a sample at a material flaw. The size and definition of a flaw increase with an increase in carbon-black loading. Initiation flaw sites are enveloped by fan-shaped or penny-shaped regions which develop during cycling. The size and morphology of a fatigue-tear region appears to be independent of the fatigue load or the extent of the damage (strength loss). By contrast, either an increase in cycling load or an increase in damage at constant load increases the definition of the fatigue-region morphology for all formulations of carbon-black. On the finest scale, the morphology can be described in terms of tearing of individual groups of rubber strands, collapsing to form a cell-like structure. 18 refs., 13 figs.

  12. Reporting of sample size calculations in analgesic clinical trials: ACTTION systematic review.

    PubMed

    McKeown, Andrew; Gewandter, Jennifer S; McDermott, Michael P; Pawlowski, Joseph R; Poli, Joseph J; Rothstein, Daniel; Farrar, John T; Gilron, Ian; Katz, Nathaniel P; Lin, Allison H; Rappaport, Bob A; Rowbotham, Michael C; Turk, Dennis C; Dworkin, Robert H; Smith, Shannon M

    2015-03-01

    Sample size calculations determine the number of participants required to have sufficiently high power to detect a given treatment effect. In this review, we examined the reporting quality of sample size calculations in 172 publications of double-blind randomized controlled trials of noninvasive pharmacologic or interventional (ie, invasive) pain treatments published in European Journal of Pain, Journal of Pain, and Pain from January 2006 through June 2013. Sixty-five percent of publications reported a sample size calculation but only 38% provided all elements required to replicate the calculated sample size. In publications reporting at least 1 element, 54% provided a justification for the treatment effect used to calculate sample size, and 24% of studies with continuous outcome variables justified the variability estimate. Publications of clinical pain condition trials reported a sample size calculation more frequently than experimental pain model trials (77% vs 33%, P < .001) but did not differ in the frequency of reporting all required elements. No significant differences in reporting of any or all elements were detected between publications of trials with industry and nonindustry sponsorship. Twenty-eight percent included a discrepancy between the reported number of planned and randomized participants. This study suggests that sample size calculation reporting in analgesic trial publications is usually incomplete. Investigators should provide detailed accounts of sample size calculations in publications of clinical trials of pain treatments, which is necessary for reporting transparency and communication of pre-trial design decisions. In this systematic review of analgesic clinical trials, sample size calculations and the required elements (eg, treatment effect to be detected; power level) were incompletely reported. A lack of transparency regarding sample size calculations may raise questions about the appropriateness of the calculated sample size. Copyright © 2015 American Pain Society. All rights reserved.
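
    As an illustration of what a replicable calculation requires, the sketch below computes a two-arm sample size from the elements the review looks for: the treatment effect to be detected, the variability estimate, the significance level, and the power. All values are made up and the z-approximation is used for simplicity.

      import math
      from scipy.stats import norm

      def n_per_group(delta, sd, alpha=0.05, power=0.90):
          """Approximate n per arm for a two-sample comparison of means (z-approximation)."""
          za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
          return math.ceil(2 * ((za + zb) * sd / delta) ** 2)

      # All required elements stated explicitly, so the calculation can be reproduced:
      # detectable difference of 1.0 pain-scale points, SD 2.5, two-sided alpha 0.05, power 90%.
      print(n_per_group(delta=1.0, sd=2.5, alpha=0.05, power=0.90))   # -> 132 per arm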

  13. Bounds on the sample complexity for private learning and private data release

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kasiviswanathan, Shiva; Beimel, Amos; Nissim, Kobbi

    2009-01-01

    Learning is a task that generalizes many of the analyses that are applied to collections of data, and in particular, collections of sensitive individual information. Hence, it is natural to ask what can be learned while preserving individual privacy. [Kasiviswanathan, Lee, Nissim, Raskhodnikova, and Smith; FOCS 2008] initiated such a discussion. They formalized the notion of private learning, as a combination of PAC learning and differential privacy, and investigated what concept classes can be learned privately. Somewhat surprisingly, they showed that, ignoring time complexity, every PAC learning task could be performed privately with polynomially many samples, and in many natural cases this could even be done in polynomial time. While these results seem to equate non-private and private learning, there is still a significant gap: the sample complexity of (non-private) PAC learning is crisply characterized in terms of the VC-dimension of the concept class, whereas this relationship is lost in the constructions of private learners, which exhibit, generally, a higher sample complexity. Looking into this gap, we examine several private learning tasks and give tight bounds on their sample complexity. In particular, we show strong separations between sample complexities of proper and improper private learners (such separation does not exist for non-private learners), and between sample complexities of efficient and inefficient proper private learners. Our results show that VC-dimension is not the right measure for characterizing the sample complexity of proper private learning. We also examine the task of private data release (as initiated by [Blum, Ligett, and Roth; STOC 2008]), and give new lower bounds on the sample complexity. Our results show that the logarithmic dependence on size of the instance space is essential for private data release.

  14. Improving the Yield of Histological Sampling in Patients With Suspected Colorectal Cancer During Colonoscopy by Introducing a Colonoscopy Quality Assurance Program.

    PubMed

    Gado, Ahmed; Ebeid, Basel; Abdelmohsen, Aida; Axon, Anthony

    2011-08-01

    Masses discovered by clinical examination, imaging or endoscopic studies that are suspicious for malignancy typically require biopsy confirmation before treatment is initiated. Biopsy specimens may fail to yield a definitive diagnosis if the lesion is extensively ulcerated or otherwise necrotic and viable tumor tissue is not obtained on sampling. The diagnostic yield is improved when multiple biopsy samples (BSs) are taken. A colonoscopy quality-assurance program (CQAP) was instituted in 2003 in our institution. The aim of this study was to determine the effect of instituting a CQAP on the yield of histological sampling in patients with suspected colorectal cancer (CRC) during colonoscopy. Initial assessment of colonoscopy practice was performed in 2003. A total of five patients with suspected CRC during colonoscopy were documented in 2003. BSs confirmed CRC in three (60%) patients and were nondiagnostic in two (40%). A quality-improvement process was instituted which required a minimum of six adequately sized BSs from any suspected CRC during colonoscopy. A total of 37 patients for the period 2004-2010 were prospectively assessed. The diagnosis of CRC was confirmed with histological examination of BSs obtained during colonoscopy in 63% of patients in 2004, 60% in 2005, 50% in 2006, 67% in 2007, 100% in 2008, 67% in 2009 and 100% in 2010. The yield of histological sampling increased significantly (p < 0.02) from 61% in 2004-2007 to 92% in 2008-2010. The implementation of a quality assurance and improvement program increased the yield of histological sampling in patients with suspected CRC during colonoscopy.

  15. Statistical distribution of time to crack initiation and initial crack size using service data

    NASA Technical Reports Server (NTRS)

    Heller, R. A.; Yang, J. N.

    1977-01-01

    Crack growth inspection data gathered during the service life of the C-130 Hercules airplane were used in conjunction with a crack propagation rule to estimate the distribution of crack initiation times and of initial crack sizes. A Bayesian statistical approach was used to calculate the fraction of undetected initiation times as a function of the inspection time and the reliability of the inspection procedure used.

  16. The role of underestimating body size for self-esteem and self-efficacy among grade five children in Canada.

    PubMed

    Maximova, Katerina; Khan, Mohammad K A; Austin, S Bryn; Kirk, Sara F L; Veugelers, Paul J

    2015-10-01

    Underestimating body size hinders healthy behavior modification needed to prevent obesity. However, initiatives to improve body size misperceptions may have detrimental consequences on self-esteem and self-efficacy. Using sex-specific multiple mixed-effect logistic regression models, we examined the association of underestimating versus accurate body size perceptions with self-esteem and self-efficacy in a provincially representative sample of 5075 grade five school children. Body size perceptions were defined as the standardized difference between the body mass index (BMI, from measured height and weight) and self-perceived body size (Stunkard body rating scale). Self-esteem and self-efficacy for physical activity and healthy eating were self-reported. Most overweight boys and girls (91% and 83%, respectively) and most obese boys and girls (93% and 90%) underestimated their body size. Underestimating weight was associated with greater self-efficacy for physical activity and healthy eating among normal-weight children (odds ratio: 1.9 and 1.6 for boys, 1.5 and 1.4 for girls) and greater self-esteem among overweight and obese children (odds ratio: 2.0 and 6.2 for boys, 2.0 and 3.4 for girls). Results highlight the importance of developing optimal intervention strategies as part of targeted obesity prevention efforts that de-emphasize the focus on body weight, while improving body size perceptions. Copyright © 2015 Elsevier Inc. All rights reserved.

  17. The Effect of Defects on the Fatigue Initiation Process in Two P/M Superalloys.

    DTIC Science & Technology

    1980-09-01

    determine the effect of defect size, shape, and population on the fatigue initiation process in two high strength P/M superalloys, AF-115 and AF2-1DA. The...to systematically determine the effects of defect size, shape, and population on fatigue. It is true that certain trends have been established...to determine the relative effects of defect size, shape, and population on the crack initiation life of a representative engineering material

  18. Chemical and physical properties affecting strontium distribution coefficients of surficial-sediment samples at the Idaho National Engineering and Environmental Laboratory, Idaho

    USGS Publications Warehouse

    Liszewski, M.J.; Rosentreter, J.J.; Miller, Karl E.; Bartholomay, R.C.

    2000-01-01

    The U.S. Geological Survey and Idaho State University, in cooperation with the U.S. Department of Energy, conducted a study to determine strontium distribution coefficients (K(d)s) of surficial sediments at the Idaho National Engineering and Environmental Laboratory (INEEL). Batch experiments using synthesized aqueous solutions were used to determine K(d)s, which describe the distribution of a solute between the solution and solid phase, of 20 surficial-sediment samples from the INEEL. The K(d)s for the 20 surficial-sediment samples ranged from 36 to 275 ml/g. Many properties of both the synthesized aqueous solutions and sediments used in the experiments also were determined. Solution properties determined were initial and equilibrium concentrations of calcium, magnesium, and strontium, pH and specific conductance, and initial concentrations of potassium and sodium. Sediment properties determined were grain-size distribution, bulk mineralogy, whole-rock major-oxide and strontium and barium concentrations, and Brunauer-Emmett-Teller (BET) surface area. Solution and sediment properties were correlated with strontium K(d)s of the 20 surficial sediments using Pearson correlation coefficients. Solution properties with the strongest correlations with strontium K(d)s were equilibrium pH and equilibrium calcium concentration correlation coefficients, 0.6598 and -0.6518, respectively. Sediment properties with the strongest correlations with strontium K(d)s were manganese oxide (MnO), BET surface area, and the >4.75-mm-grain-size fraction correlation coefficients, 0.7054, 0.7022, and -0.6660, respectively. Effects of solution properties on strontium K(d)s were interpreted as being due to competition among similarly charged and sized cations in solution for strontium-sorption sites; effects of sediment properties on strontium K(d)s were interpreted as being surface-area related. Multivariate analyses of these solution and sediment properties resulted in r2 values of 0.8071 when all five properties were used and 0.8043 when three properties, equilibrium pH, MnO, and BET surface area, were used.
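
    For orientation, a minimal sketch of how a batch distribution coefficient is obtained from initial and equilibrium solution concentrations, followed by a Pearson correlation against a sediment property; all numbers are hypothetical and are not the study's measurements.

      import numpy as np
      from scipy.stats import pearsonr

      def batch_kd(c_init, c_eq, volume_ml, mass_g):
          """Distribution coefficient Kd (mL/g) from a batch experiment: sorbed mass per gram
          of sediment divided by the equilibrium solution concentration."""
          return (c_init - c_eq) / c_eq * volume_ml / mass_g

      # Hypothetical batch results (mg/L strontium), 30 mL of solution on 1 g of sediment.
      c_init = np.array([1.00, 1.00, 1.00, 1.00])
      c_eq   = np.array([0.45, 0.30, 0.20, 0.55])
      kd = batch_kd(c_init, c_eq, volume_ml=30.0, mass_g=1.0)

      # Correlate Kd with a measured sediment property, e.g. BET surface area (m^2/g, hypothetical).
      bet = np.array([12.0, 18.0, 25.0, 9.0])
      r, p = pearsonr(kd, bet)
      print(f"Kd = {np.round(kd, 1)} mL/g, Pearson r vs BET surface area = {r:.3f} (p = {p:.3f})")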

  19. The sonic window: second generation results

    NASA Astrophysics Data System (ADS)

    Walker, William F.; Fuller, Michael I.; Brush, Edward V.; Eames, Matthew D. C.; Owen, Kevin; Ranganathan, Karthik; Blalock, Travis N.; Hossack, John A.

    2006-03-01

    Medical Ultrasound Imaging is widely used clinically because of its relatively low cost, portability, lack of ionizing radiation, and real-time nature. However, even with these advantages ultrasound has failed to permeate the broad array of clinical applications where its use could be of value. A prime example of this untapped potential is the routine use of ultrasound to guide intravenous access. In this particular application existing systems lack the required portability, low cost, and ease-of-use required for widespread acceptance. Our team has been working for a number of years to develop an extremely low-cost, pocket-sized, and intuitive ultrasound imaging system that we refer to as the "Sonic Window." We have previously described the first generation Sonic Window prototype that was a bench-top device using a 1024 element, fully populated array operating at a center frequency of 3.3 MHz. Through a high degree of custom front-end integration combined with multiplexing down to a 2 channel PC based digitizer this system acquired a full set of RF data over a course of 512 transmit events. While initial results were encouraging, this system exhibited limitations resulting from low SNR, relatively coarse array sampling, and relatively slow data acquisition. We have recently begun assembling a second-generation Sonic Window system. This system uses a 3600 element fully sampled array operating at 5.0 MHz with a 300 micron element pitch. This system extends the integration of the first generation system to include front-end protection, pre-amplification, a programmable bandpass filter, four sample and holds, and four A/D converters for all 3600 channels in a set of custom integrated circuits with a combined area smaller than the 1.8 x 1.8 cm footprint of the transducer array. We present initial results from this front-end and present benchmark results from a software beamformer implemented on the Analog Devices BF-561 DSP. We discuss our immediate plans for further integration and testing. This second prototype represents a major reduction in size and forms the foundation of a fully functional, fully integrated, pocket sized prototype.

  20. Kinetic study of ferronickel slag grinding at variation of ball filling and ratio of feed to grinding balls

    NASA Astrophysics Data System (ADS)

    Sanwani, Edy; Ikhwanto, Muhammad

    2017-01-01

    The objective of this paper is to investigate the effect of ball filling and ratio of feed to grinding balls on the kinetic of grinding of ferronickel slag in a laboratory scale ball mill. The experiments were started by crushing the ferronickel slag samples using a roll crusher to produce -3 mesh (-6.7 mm) product. This product, after sampling and sample dividing processes, was then used as feed for grinding process. The grinding was performed with variations of ball filling and ratio of feed to grinding balls for 150 minutes. At every certain time interval, particle size analysis was carried out on the grinding product. The results of the experiments were also used to develop linear regression model of the effect of grinding variables on the P80 of the product. Based on this study, it was shown that P80 values of the grinding products declined sharply until 70 minutes of grinding time due to the dominant mechanism of impact breakage and then decreased slowly after 70 minutes until 150 minutes of grinding time due to dominant mechanism of attrition breakage. Kinetics study of the grinding process on variations of grinding ball filling showed that the optimum rate of formation of fine particles for 20%, 30%, 40% and 50% mill volume was achieved at a particle size of 400 µm in which the best initial rate of formation occurred at 50% volume of mill. At the variations of ratio of feed to grinding balls it was shown that the optimum rate of grinding for the ratio of 1:10, 1: 8 and 1: 6 was achieved at a particle size of 400 µm and for the ratio of 1: 4 was at 841 µm in which the best initial rate of formation occurred at a 1:10 ratio. In this study, it was also produced two regression models that can predict the P80 value of the grinding product as a function of the variables of grinding time, ball filling and the ratio of the feed to grinding balls.
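
    A sketch of the kind of linear regression model described above, predicting P80 from grinding time, ball filling, and feed-to-ball ratio with scikit-learn; the observations are invented for illustration, since the paper's measurements are not reproduced in the abstract.

      import numpy as np
      from sklearn.linear_model import LinearRegression

      # Hypothetical observations: [grinding time (min), ball filling (% of mill volume),
      # feed-to-ball mass ratio], with P80 (um) as the response.
      X = np.array([[30, 20, 0.10], [70, 20, 0.10], [150, 20, 0.10],
                    [30, 50, 0.10], [70, 50, 0.10], [150, 50, 0.10],
                    [30, 30, 0.25], [70, 30, 0.25], [150, 30, 0.25]])
      y = np.array([1500, 900, 700, 1200, 600, 420, 1700, 1000, 820])

      model = LinearRegression().fit(X, y)
      print("coefficients:", np.round(model.coef_, 2), "intercept:", round(model.intercept_, 1))
      print("predicted P80 at 100 min, 40% filling, ratio 0.125:",
            round(model.predict([[100, 40, 0.125]])[0], 1), "um")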

  1. Laser-induced superhydrophobic grid patterns on PDMS for droplet arrays formation

    NASA Astrophysics Data System (ADS)

    Farshchian, Bahador; Gatabi, Javad R.; Bernick, Steven M.; Park, Sooyeon; Lee, Gwan-Hyoung; Droopad, Ravindranath; Kim, Namwon

    2017-02-01

    We demonstrate a facile single step laser treatment process to render a polydimethylsiloxane (PDMS) surface superhydrophobic. By synchronizing a pulsed nanosecond laser source with a motorized stage, superhydrophobic grid patterns were written on the surface of PDMS. Hierarchical micro and nanostructures were formed in the irradiated areas while non-irradiated areas were covered by nanostructures due to deposition of ablated particles. Arrays of droplets form spontaneously on the laser-patterned PDMS with superhydrophobic grid pattern when the PDMS sample is simply immersed in and withdrawn from water due to different wetting properties of the irradiated and non-irradiated areas. The effects of withdrawal speed and pitch size of superhydrophobic grid on the size of formed droplets were investigated experimentally. The droplet size increases initially with increasing the withdrawal speed and then does not change significantly beyond certain points. Moreover, larger droplets are formed by increasing the pitch size of the superhydrophobic grid. The droplet arrays formed on the laser-patterned PDMS with wettability contrast can be used potentially for patterning of particles, chemicals, and bio-molecules and also for cell screening applications.

  2. An analysis of adaptive design variations on the sequential parallel comparison design for clinical trials.

    PubMed

    Mi, Michael Y; Betensky, Rebecca A

    2013-04-01

    Currently, a growing placebo response rate has been observed in clinical trials for antidepressant drugs, a phenomenon that has made it increasingly difficult to demonstrate efficacy. The sequential parallel comparison design (SPCD) is a clinical trial design that was proposed to address this issue. The SPCD theoretically has the potential to reduce the sample-size requirement for a clinical trial and to simultaneously enrich the study population to be less responsive to the placebo. Because the basic SPCD already reduces the placebo response by removing placebo responders between the first and second phases of a trial, the purpose of this study was to examine whether we can further improve the efficiency of the basic SPCD and whether we can do so when the projected underlying drug and placebo response rates differ considerably from the actual ones. Three adaptive designs that used interim analyses to readjust the length of study duration for individual patients were tested to reduce the sample-size requirement or increase the statistical power of the SPCD. Various simulations of clinical trials using the SPCD with interim analyses were conducted to test these designs through calculations of empirical power. From the simulations, we found that the adaptive designs can recover unnecessary resources spent in the traditional SPCD trial format with overestimated initial sample sizes and provide moderate gains in power. Under the first design, results showed up to a 25% reduction in person-days, with most power losses below 5%. In the second design, results showed up to a 8% reduction in person-days with negligible loss of power. In the third design using sample-size re-estimation, up to 25% power was recovered from underestimated sample-size scenarios. Given the numerous possible test parameters that could have been chosen for the simulations, the study's results are limited to situations described by the parameters that were used and may not generalize to all possible scenarios. Furthermore, dropout of patients is not considered in this study. It is possible to make an already complex design such as the SPCD adaptive, and thus more efficient, potentially overcoming the problem of placebo response at lower cost. Ultimately, such a design may expedite the approval of future effective treatments.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pahl, R. J.; Trott, W. M.; Snedigar, S.

    A series of gas gun tests has been performed to examine contributions to energy release from micron-sized and nanometric aluminum powder added to sieved (212-300 μm) HMX. In the absence of added metal, 4-mm-thick, low-density (64-68% of theoretical maximum density) pressings of the sieved HMX respond to modest shock loading by developing distinctive reactive waves that exhibit both temporal and mesoscale spatial fluctuations. Parallel tests have been performed on samples containing 10% (by mass) aluminum in two particle sizes: 2-μm and 123-nm mean particle diameter, respectively. The finely dispersed aluminum initially suppresses wave growth from HMX reactions; however, after a visible induction period, the added metal drives rapid increases in the transmitted wave particle velocity. Wave profile variations as a function of the aluminum particle diameter are discussed.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Castaneda, Jaime N.; Pahl, Robert J.; Snedigar, Shane

    A series of gas gun tests has been performed to examine contributions to energy release from micron-sized and nanometric aluminum powder added to sieved (212-300 μm) HMX. In the absence of added metal, 4-mm-thick, low-density (64-68% of theoretical maximum density) pressings of the sieved HMX respond to modest shock loading by developing distinctive reactive waves that exhibit both temporal and mesoscale spatial fluctuations. Parallel tests have been performed on samples containing 10% (by mass) aluminum in two particle sizes: 2-μm and 123-nm mean particle diameter, respectively. The finely dispersed aluminum initially suppresses wave growth from HMX reactions; however, after a visible induction period, the added metal drives rapid increases in the transmitted wave particle velocity. Wave profile variations as a function of the aluminum particle diameter are discussed.

  6. Grain growth effects on magnetic properties of Ni0.6Zn0.4Fe2O4 material prepared using mechanically alloyed nanoparticles

    NASA Astrophysics Data System (ADS)

    Syazwan, M. M.; Hapishah, A. N.; Azis, R. S.; Abbas, Z.; Hamidon, M. N.

    2018-06-01

    The effect of grain growth, driven by sintering temperature, on some magnetic properties is reported in this research. Ni0.6Zn0.4Fe2O4 nanoparticles were mechanically alloyed for 6 h and sintered from 600 to 1200 °C in 25 °C increments, with a single sample subjected to the entire sintering scheme. The resulting change in the material was observed after each sintering step. A single phase formed at 600 °C and above, and the peak intensities increased with sintering temperature, indicating increasing crystallinity. Morphological studies showed grain size increasing as the sintering temperature increased. Moreover, the density increased while the porosity decreased with increasing sintering temperature. The saturation induction, Bs, increased with grain size. The coercivity-versus-grain-size plot reveals the critical single-domain-to-multidomain grain size to be about 400 nm. The initial permeability, μi, also increased with grain size. The microstructural grain growth, as revealed for the first time by this research, proceeds as a process governed by multiple activation energy barriers.

  7. Determination of the optimal sample size for a clinical trial accounting for the population size.

    PubMed

    Stallard, Nigel; Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin

    2017-07-01

    The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach either explicitly by assuming that the population size is fixed and known, or implicitly through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size for single and two-arm clinical trials in the general case of a clinical trial with a primary endpoint with a distribution of one parameter exponential family form that optimizes a utility function that quantifies the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or expected size, N* in the case of geometric discounting, becomes large, the optimal trial size is O(N^(1/2)) or O(N*^(1/2)). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with responses with Bernoulli and Poisson distributions, showing that the asymptotic approximations can also be reasonable in relatively small sample sizes. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
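
    A stylized numerical illustration of the square-root scaling (not the paper's exact utility function): with a normal prior on the treatment effect, the expected per-patient regret from choosing the wrong treatment after a trial of size n decays roughly like 1/n, so balancing N times that regret against a linear trial cost gives an optimum growing like sqrt(N). The prior, cost, and decision rule below are all assumptions made for the sketch.

      import numpy as np
      from scipy import integrate, stats

      sigma, tau, cost = 1.0, 0.5, 1.0   # outcome SD, prior SD of the effect, cost per enrolled patient (assumed)

      def expected_regret(n):
          """Per-future-patient expected loss from picking the wrong treatment after a trial of
          size n, when the true effect theta ~ N(0, tau^2) and the choice is made from the sign
          of the observed mean (whose error SD is sigma/sqrt(n))."""
          f = lambda t: abs(t) * stats.norm.cdf(-abs(t) * np.sqrt(n) / sigma) * stats.norm.pdf(t, 0, tau)
          return integrate.quad(f, -6 * tau, 6 * tau)[0]

      grid = np.arange(1, 1201)
      regret = np.array([expected_regret(n) for n in grid])    # does not depend on N
      for N in (1_000, 10_000, 100_000):
          total_loss = N * regret + cost * grid                # regret over N future patients + trial cost
          n_opt = grid[np.argmin(total_loss)]
          print(f"N = {N:>7,}: optimal n = {n_opt:>4}, n / sqrt(N) = {n_opt / np.sqrt(N):.2f}")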

  8. The Effects of Residency and Body Size on Contest Initiation and Outcome in the Territorial Dragon, Ctenophorus decresii

    PubMed Central

    Umbers, Kate D. L.; Osborne, Louise; Keogh, J. Scott

    2012-01-01

    Empirical studies of the determinants of contests have been attempting to unravel the complexity of animal contest behaviour for decades. This complexity requires that experiments incorporate multiple determinants into studies to tease apart their relative effects. In this study we examined the complex contest behaviour of the tawny dragon (Ctenophorus decresii), a territorial agamid lizard, with the specific aim of defining the factors that determine contest outcome. We manipulated the relative size and residency status of lizards in contests to weight their importance in determining contest outcome. We found that size, residency and initiating a fight were all important in determining the outcomes of fights. We also tested whether residency or size was important in predicting the status of the lizard that initiated a fight. We found that residency was the most important factor in predicting fight initiation. We discuss the effects of size and residency status in the context of previous studies on contests in tawny dragons and other animals. Our study provides manipulative behavioural data in support of the overriding effects of residency on initiating fights and winning them. PMID:23077558

  9. Repopulation of calibrations with samples from the target site: effect of the size of the calibration.

    NASA Astrophysics Data System (ADS)

    Guerrero, C.; Zornoza, R.; Gómez, I.; Mataix-Solera, J.; Navarro-Pedreño, J.; Mataix-Beneyto, J.; García-Orenes, F.

    2009-04-01

    Near infrared (NIR) reflectance spectroscopy offers important advantages because it is a non-destructive technique, the pre-treatments needed for samples are minimal, and the spectrum of a sample is obtained in less than 1 minute without the need for chemical reagents. For these reasons, NIR is a fast and cost-effective method. Moreover, NIR allows the analysis of several constituents or parameters simultaneously from the same spectrum once it is obtained. For this, a necessary step is the development of soil spectral libraries (sets of samples analysed and scanned) and calibrations (using multivariate techniques). The calibrations should contain the variability of the target-site soils in which the calibration is to be used. This premise is often not easy to fulfil, especially in recently developed libraries. A classical way to solve this problem is through the repopulation of libraries and the subsequent recalibration of the models. In this work we studied the changes in the accuracy of the predictions as a consequence of the successive addition of samples for repopulation. In general, calibrations with a high number of samples and high diversity are desired. But we hypothesized that calibrations with lower quantities of samples (lower size) will more easily absorb the spectral characteristics of the target site. Thus, we suspected that the size of the calibration (model) that will be repopulated could be important. For this reason we also studied this effect on the accuracy of the predictions of the repopulated models. In this study we used those spectra of our library which contained data on soil Kjeldahl nitrogen (NKj) content (nearly 1500 samples). First, the spectra from the target site were removed from the spectral library. Then, different quantities of samples from the library were selected (representing 5, 10, 25, 50, 75 and 100% of the total library). These samples were used to develop calibrations of different sizes (%). We used partial least squares regression and leave-one-out cross-validation as calibration methods. Two methods were used to select the different quantities (model sizes) of samples: (1) Based on Characteristics of Spectra (BCS), and (2) Based on NKj Values of Samples (BVS). Both methods tried to select representative samples. Each of the calibrations (containing 5, 10, 25, 50, 75 or 100% of the total samples of the library) was repopulated with samples from the target site and then recalibrated (by leave-one-out cross-validation). This procedure was sequential. In each step, 2 samples from the target site were added to the models, which were then recalibrated. This process was repeated 10 times, for a total of 20 samples added. A local model was also created with the 20 samples used for repopulation. The repopulated, non-repopulated and local calibrations were used to predict the NKj content in those samples from the target site not included in the repopulations. To measure the accuracy of the predictions, the r2, RMSEP and slopes were calculated by comparing predicted with analysed NKj values. This scheme was repeated for each of the four target sites studied. In general, few differences were found between results obtained with BCS and BVS models. We observed that the repopulation of models increased the r2 of the predictions in sites 1 and 3. The repopulation caused only minor changes in the r2 of the predictions in sites 2 and 4, maybe due to the high initial values (using non-repopulated models, r2 > 0.90). As a consequence of repopulation, the RMSEP decreased in all the sites except site 2, where a very low RMSEP was obtained before the repopulation (0.4 g×kg-1). The slopes tended to approach 1, but this value was reached only in site 4 and only after repopulation with 20 samples. In sites 3 and 4, accurate predictions were obtained using the local models. Predictions obtained with models of similar size (similar %) were averaged with the aim of describing the main patterns. The r2 of predictions obtained with models of larger size were not more accurate than those obtained with models of lower size. After repopulation, the RMSEP of predictions using models of lower sizes (5, 10 and 25% of the samples of the library) were lower than the RMSEP obtained with larger sizes (75 and 100%), indicating that small models can more easily integrate the variability of the soils from the target site. The results suggest that calibrations of small size could be repopulated and "converted" into local calibrations. According to this, most of the effort can be focused on obtaining highly accurate analytical values for a reduced set of samples (including some samples from the target sites). The patterns observed here are in opposition to the idea of global models. These results could encourage the expansion of this technique, because very large databases seem not to be needed. Future studies with very different samples will help to confirm the robustness of the patterns observed. The authors acknowledge "Bancaja-UMH" for the financial support of the project "NIRPROS".

  10. Requirements for Minimum Sample Size for Sensitivity and Specificity Analysis

    PubMed Central

    Adnan, Tassha Hilda

    2016-01-01

    Sensitivity and specificity analysis is commonly used for screening and diagnostic tests. The main issue researchers face is determining sufficient sample sizes for screening and diagnostic studies. Although formulas for sample size calculation are available, the majority of researchers are not mathematicians or statisticians, so sample size calculation might not be easy for them. This review paper provides sample size tables for sensitivity and specificity analysis. These tables were derived from the formulation of the sensitivity and specificity tests using Power Analysis and Sample Size (PASS) software, based on the desired type I error, power and effect size. Approaches for how to use the tables are also discussed. PMID:27891446
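
    The calculations behind such tables are typically Buderer-type formulas: the number of diseased (or non-diseased) subjects needed to estimate sensitivity (or specificity) within a margin d is inflated by the expected prevalence to give a total sample size. A hedged sketch with illustrative inputs:

      import math
      from scipy.stats import norm

      def n_for_sensitivity(sens, prevalence, d=0.05, alpha=0.05):
          """Total sample size so that sensitivity is estimated within +/- d (Buderer-type formula)."""
          z = norm.ppf(1 - alpha / 2)
          n_diseased = z ** 2 * sens * (1 - sens) / d ** 2
          return math.ceil(n_diseased / prevalence)

      def n_for_specificity(spec, prevalence, d=0.05, alpha=0.05):
          """Total sample size so that specificity is estimated within +/- d."""
          z = norm.ppf(1 - alpha / 2)
          n_healthy = z ** 2 * spec * (1 - spec) / d ** 2
          return math.ceil(n_healthy / (1 - prevalence))

      # Illustrative inputs: expected sensitivity 0.90, specificity 0.85, prevalence 20%.
      print(n_for_sensitivity(0.90, prevalence=0.20))   # -> 692
      print(n_for_specificity(0.85, prevalence=0.20))   # -> 245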

  11. Sample size calculations for randomized clinical trials published in anesthesiology journals: a comparison of 2010 versus 2016.

    PubMed

    Chow, Jeffrey T Y; Turkstra, Timothy P; Yim, Edmund; Jones, Philip M

    2018-06-01

    Although every randomized clinical trial (RCT) needs participants, determining the ideal number of participants that balances limited resources and the ability to detect a real effect is difficult. Focussing on two-arm, parallel group, superiority RCTs published in six general anesthesiology journals, the objective of this study was to compare the quality of sample size calculations for RCTs published in 2010 vs 2016. Each RCT's full text was searched for the presence of a sample size calculation, and the assumptions made by the investigators were compared with the actual values observed in the results. Analyses were only performed for sample size calculations that were amenable to replication, defined as using a clearly identified outcome that was continuous or binary in a standard sample size calculation procedure. The percentage of RCTs reporting all sample size calculation assumptions increased from 51% in 2010 to 84% in 2016. The difference between the values observed in the study and the expected values used for the sample size calculation for most RCTs was usually > 10% of the expected value, with negligible improvement from 2010 to 2016. While the reporting of sample size calculations improved from 2010 to 2016, the expected values in these sample size calculations often assumed effect sizes larger than those actually observed in the study. Since overly optimistic assumptions may systematically lead to underpowered RCTs, improvements in how to calculate and report sample sizes in anesthesiology research are needed.

  12. Comet 81p/Wild 2: The Updated Stardust Coma Dust Fluence Measurement for Smaller (Sub 10-Micrometre) Particles

    NASA Technical Reports Server (NTRS)

    Price, M. C.; Kearsley, A. T.; Burchell, M. J.; Horz, Friedrich; Cole, M. J.

    2009-01-01

    Micrometre and smaller scale dust within cometary comae can be observed by telescopic remote sensing spectroscopy [1] and the particle size and abundance can be measured by in situ spacecraft impact detectors [2]. Initial interpretation of the samples returned from comet 81P/Wild 2 by the Stardust spacecraft [3] appears to show that very fine dust contributes only a small fraction of the solid mass and is also relatively sparse [4], with a low negative power function describing the grain size distribution, contrasting with the apparent abundance indicated by the on-board Dust Flux Monitor Instrument (DFMI) [5] operational during the encounter. For particles above 10 μm diameter there is good correspondence between results from the DFMI and the particle size inferred from experimental calibration [6] of measured aerogel track and aluminium foil crater dimensions (as seen in Figure 4 of [4]). However, divergence between data-sets becomes apparent at smaller sizes, especially submicrometre, where the returned sample data are based upon location and measurement of tiny craters found by electron microscopy of Al foils. Here effects of detection efficiency tail-off at each search magnification can be seen in the down-scale flattening of each scale component, but are reliably compensated by sensible extrapolation between segments. There is also no evidence of malfunction in the operation of DFMI during passage through the coma (S. Green, personal comm.), so can the two data sets be reconciled?

  13. Coarsening of Inter- and Intra-granular Proeutectoid Cementite in an Initially Pearlitic 2C-4Cr Ultrahigh Carbon Steel

    NASA Astrophysics Data System (ADS)

    Hecht, Matthew D.; Picard, Yoosuf N.; Webler, Bryan A.

    2017-05-01

    We have examined spheroidization and coarsening of cementite in an initially pearlitic 2C-4Cr ultrahigh carbon steel containing a cementite network. Coarsening kinetics of spheroidized cementite and growth of denuded zones adjacent to the cementite network were investigated by analyzing particle sizes from digital micrographs of water-quenched steel etched with Nital. Denuded zones grew at a rate proportional to t^(1/4) to t^(1/5). Spheroidization of pearlite was completed within 90 minutes at 1073 K and 1173 K (800 °C and 900 °C), and within 5 minutes at 1243 K (970 °C). Bimodal particle size distributions were identified in most of the samples and were more pronounced at higher temperatures and hold times. Peaks in the distributions were attributed to the coarsening of intragranular and grain boundary particles at different rates. A third, non-coarsening peak of particles was present at 1073 K (800 °C) only and was attributed to particles existing prior to the heat treatment. Particle sizes were plotted vs time to investigate possible coarsening mechanisms. The coarsening exponent for the growth of grain boundary carbides was closest to 4, indicating grain boundary diffusion control. The coarsening exponent was closest to 5 for intragranular carbides, indicating suppression of volumetric diffusion (possibly due to reduced effective diffusivity because of Cr alloying) and control by dislocation diffusion.
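
    The coarsening exponent referred to above is conventionally estimated from the slope of mean particle size versus time on log-log axes, assuming r^n - r0^n = K*t with r much larger than r0. The sketch below illustrates that fit with invented numbers, not the measurements from this study.

    ```python
    # Illustrative sketch (not the authors' data): estimating a coarsening exponent n
    # from mean particle radius r versus hold time t, assuming r^n - r0^n = K*t.
    # When r >> r0, log(r) ~ (1/n)*log(t) + const, so 1/slope estimates n.
    import numpy as np

    t = np.array([5, 15, 30, 60, 90, 180], dtype=float)   # hold time, minutes (hypothetical)
    r = np.array([0.42, 0.55, 0.66, 0.78, 0.86, 1.02])    # mean radius, um (hypothetical)

    slope, intercept = np.polyfit(np.log(t), np.log(r), 1)
    n_est = 1.0 / slope
    print(f"coarsening exponent n ~ {n_est:.1f}")  # n near 4 would suggest grain-boundary diffusion control
    ```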

  14. The Evolution of Grain Size Distribution in Explosive Rock Fragmentation - Sequential Fragmentation Theory Revisited

    NASA Astrophysics Data System (ADS)

    Scheu, B.; Fowler, A. C.

    2015-12-01

    Fragmentation is a ubiquitous phenomenon in many natural and engineering systems. It is the process by which an initially competent medium, solid or liquid, is broken up into a population of constituents. Examples include collisions and impacts of asteroids and meteorites, explosion-driven fragmentation of munitions on a battlefield, fragmentation of magma in a volcanic conduit causing explosive volcanic eruptions, and the break-up of liquid drops. Besides the mechanism of fragmentation, the resulting frequency-size distribution of the generated constituents is of central interest. Initially, such distributions were fitted empirically using lognormal, Rosin-Rammler and Weibull distributions (e.g. Brown & Wohletz 1995). The sequential fragmentation theory (Brown 1989, Wohletz et al. 1989, Wohletz & Brown 1995) and the application of fractal theory to fragmentation products (Turcotte 1986, Perfect 1997, Perugini & Kueppers 2012) attempt to overcome the purely empirical nature of these fits by providing a more physical basis for the applied distribution. Both rely on an at least partially scale-invariant, and thus self-similar, random fragmentation process. Here we provide a stochastic model for the evolution of grain size distribution during the explosion process. Our model is based on laboratory experiments in which volcanic rock samples explode naturally when rapidly depressurized from initial pressures of several MPa to ambient conditions. The physics governing this fragmentation process has been successfully modelled and the observed fragmentation pattern could be numerically reproduced (Fowler et al. 2010). The fragmentation of these natural rocks leads to grain size distributions that vary depending on the experimental starting conditions. Our model provides a theoretical description of these different grain size distributions. It combines a sequential model of the type outlined by Turcotte (1986), generalized to cater for the explosive process considered here, in particular by including in the description of the fracturing events a recipe for the production of fines, as observed in the experiments. To our knowledge, this implementation of a deterministic fracturing process into a stochastic (sequential) model is unique; furthermore, it provides the model with some forecasting power.
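
    As a rough illustration of what a generic stochastic sequential fragmentation process produces (not the specific model of the authors), the sketch below repeatedly splits fragments at random positions until they either survive a break test or fall below a cutoff size, yielding a fragment size distribution.

    ```python
    # Generic stochastic sequential-fragmentation sketch (illustrative only; this is
    # not the Scheu & Fowler model): each fragment above a cutoff breaks with some
    # probability into two pieces at a uniformly random split point, and the process
    # repeats, producing a population of fragment (grain) sizes.
    import random

    def fragment(initial_mass=1.0, p_break=0.7, cutoff=1e-3, seed=42):
        rng = random.Random(seed)
        active, finished = [initial_mass], []
        while active:
            m = active.pop()
            if m > cutoff and rng.random() < p_break:
                f = rng.random()                     # random split fraction
                active.extend([f * m, (1 - f) * m])
            else:
                finished.append(m)                   # fragment survives unbroken
        return finished

    sizes = fragment()
    print(len(sizes), min(sizes), max(sizes))
    ```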

  15. Mexican-American mothers’ initiation and understanding of home oral hygiene for young children

    PubMed Central

    HOEFT, Kristin S.; BARKER, Judith C.; MASTERSON, Erin E.

    2012-01-01

    Purpose To investigate caregiver beliefs and behaviors as key issues in the initiation of home oral hygiene routines. Oral hygiene helps reduce the prevalence of early childhood caries, which is disproportionately high among Mexican-American children. Methods Interviews were conducted with a convenience sample of 48 Mexican-American mothers of young children in a low-income urban neighborhood. Interviews were digitally recorded, translated, transcribed, coded and analyzed using standard qualitative procedures. Results The average age of tooth brushing initiation was 1.8±0.8 years; only a small proportion of parents (13%) initiated oral hygiene in accord with American Dental Association (ADA) recommendations. Mothers initiated 2 forms of oral hygiene: infant oral hygiene and regular tooth brushing. For the 48% of children who participated in infant oral hygiene, mothers were prompted by pediatricians and social service (WIC) professionals. For regular tooth brushing initiation, a set of maternal beliefs exists about when this oral hygiene practice becomes necessary for children. Beliefs are mainly based on a child’s dental maturity, interest, capacity and age/size. Conclusions Most (87%) of the urban Mexican-American mothers in the study do not initiate oral hygiene practices in compliance with ADA recommendations. These findings have implications for educational messages. PMID:19947134

  16. Microstructures and Lattice Preferred Orientations in Experimentally Deformed Granulites

    NASA Astrophysics Data System (ADS)

    Miao, S.; Zhou, Y.

    2017-12-01

    We analysed microstructures and lattice preferred orientations (LPO) in experimentally deformed natural granulites in order to understand the relationship between deformation processes and evolving microstructures. The LPO was measured using the scanning electron microscope (SEM)-based electron backscatter diffraction (EBSD) technique. Microstructures were observed by polarized light microscopy and by orientation contrast in the SEM. Natural granulite samples were collected in the Archean lower crust terrane of the North China Craton. This granulite is composed of 59% plagioclase (Pl) + 21% clinopyroxene (Cpx) + 14% orthopyroxene + 5% opaque minerals + 1% quartz. The water contents of the bulk rocks were in the range 0.10-0.26 wt.%. The average grain sizes of Pl and Cpx were 240 μm and 220 μm, respectively. The samples were deformed in axial compression tests to 7-15% shortening at temperatures ranging from 900 °C to 1150 °C. Microstructural results, in conjunction with other parameters such as stress exponents, indicated that the samples deformed mainly by intragranular microcracking, twinning and dislocation glide, with very little recrystallization. The natural sample, without any macroscopic foliation visible, has a significant initial LPO in Cpx corresponding to an "S-type" fabric with the [010] maximum normal to the foliation plane. Pl also has a pre-existing fabric. We compared the LPO of Cpx and Pl in the experimentally deformed samples with that of the undeformed natural samples. No clear LPO evolution apart from the initial LPO could be attributed to deformation. Even in the temperature range where partial melting occurs (e.g. above 1100 °C), the "S-type" fabric of Cpx is effectively retained. Deformation in the dislocation creep regime neither alters the initial LPO nor produces a new pattern. This is consistent with previous results, which indicated that large strains, of at least 25% shortening, are necessary to overprint a pre-existing LPO in clinopyroxenes.

  17. Fabrication of Titanium-Niobium-Zirconium-Tantalum Alloy (TNZT) Bioimplant Components with Controllable Porosity by Spark Plasma Sintering

    PubMed Central

    Rechtin, Jack; Torresani, Elisa; Ivanov, Eugene; Olevsky, Eugene

    2018-01-01

    Spark Plasma Sintering (SPS) is used to fabricate Titanium-Niobium-Zirconium-Tantalum alloy (TNZT) powder-based bioimplant components with controllable porosity. The developed densification maps show the effects of final SPS temperature, pressure, holding time, and initial particle size on final sample relative density. Correlations between the final sample density and mechanical properties of the fabricated TNZT components are also investigated and microstructural analysis of the processed material is conducted. A densification model is proposed and used to calculate the TNZT alloy creep activation energy. The obtained experimental data can be utilized for the optimized fabrication of TNZT components with specific microstructural and mechanical properties suitable for biomedical applications. PMID:29364165

  18. Effect of ordering of PtCu₃ nanoparticle structure on the activity and stability for the oxygen reduction reaction.

    PubMed

    Hodnik, Nejc; Jeyabharathi, Chinnaiah; Meier, Josef C; Kostka, Alexander; Phani, Kanala L; Rečnik, Aleksander; Bele, Marjan; Hočevar, Stanko; Gaberšček, Miran; Mayrhofer, Karl J J

    2014-07-21

    In this study, the performance-enhancing effect of structural ordering on the oxygen reduction reaction (ORR) is systematically investigated. Two samples of PtCu3 nanoparticles embedded on a graphitic carbon support are carefully prepared with identical initial composition, particle dispersion and size distribution, yet with different degrees of structural ordering. This allows us to eliminate confounding effects and unambiguously relate the improved ORR activity and, more importantly, the enhanced stability to the ordered nanostructure. Interestingly, the electrochemically induced morphological changes are common to both ordered and disordered samples. The observed effect could have a groundbreaking impact on future directions in the rational design of active and stable platinum-alloyed ORR catalysts.

  19. 3-D breast anthropometry of plus-sized women in South Africa.

    PubMed

    Pandarum, Reena; Yu, Winnie; Hunter, Lawrance

    2011-09-01

    Exploratory retail studies in South Africa indicate that plus-sized women experience problems and dissatisfaction with poorly fitting bras. The lack of 3-D anthropometric studies for the plus-size women's bra market initiated this research. 3-D body torso measurements were collected from a convenience sample of 176 plus-sized women in South Africa. 3-D breast measurements extracted from the TC(2) NX12-3-D body scanner 'breast module' software were compared with traditional tape measurements. Regression equations show that the two methods of measurement were highly correlated although, on average, the bra cup size determining factor 'bust minus underbust' obtained from the 3-D method is approximately 11% smaller than that of the manual method. It was concluded that the total bust volume, which correlated with the quadrant volume (r = 0.81), cup length, bust length and bust prominence, should be selected as the overall measure of bust size rather than the traditional bust girth and underbust measurements. STATEMENT OF RELEVANCE: This study contributes new data and adds to the knowledge base of anthropometry and consumer ergonomics on bra fit and support, published in this, the Ergonomics Journal, by Chen et al. (2010) on bra fit and White et al. (2009) on breast support during overground running.

  20. Optical properties of self-assembled ZnTe quantum dots grown by molecular-beam epitaxy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, C.S.; Lai, Y.J.; Chou, W.C.

    2005-02-01

    The morphology and the size-dependent photoluminescence (PL) spectra of the type-II ZnTe quantum dots (QDs) grown in a ZnSe matrix were obtained. The coverage of ZnTe varied from 2.5 to 3.5 monolayers (MLs). The PL peak energy decreased as the dot size increased. Excitation power and temperature-dependent PL spectra are used to characterize the optical properties of the ZnTe quantum dots. For the 2.5- and 3.0-ML samples, the PL peak energy decreased monotonically as the temperature increased. However, for the 3.5-ML sample, the PL peak energy was initially blueshifted and then redshifted as the temperature increased above 40 K. Carrier thermalization and carrier transfer between QDs are used to explain the experimental data. A model of temperature-dependent linewidth broadening is employed to fit the high-temperature data. The activation energies of the 2.5-, 3.0-, and 3.5-ML samples, determined by a simple PL intensity quenching model, were 6.35, 9.40, and 18.87 meV, respectively.
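
    Activation energies of this kind are commonly extracted by fitting the integrated PL intensity to a single-channel thermal quenching expression, I(T) = I0 / (1 + A*exp(-Ea/(kB*T))). The sketch below illustrates such a fit with placeholder data; it is not the analysis or the data of this study.

    ```python
    # Sketch: extracting an activation energy from temperature-dependent PL intensity
    # with a common single-channel quenching model, I(T) = I0 / (1 + A*exp(-Ea/(kB*T))).
    # The data below are synthetic placeholders, not the measured values of this study.
    import numpy as np
    from scipy.optimize import curve_fit

    kB = 8.617e-5  # Boltzmann constant, eV/K

    def quench(T, I0, A, Ea):
        return I0 / (1.0 + A * np.exp(-Ea / (kB * T)))

    T = np.array([10, 20, 30, 40, 60, 80, 100, 120], dtype=float)   # K (hypothetical)
    noise = 1 + 0.02 * np.random.default_rng(0).standard_normal(T.size)
    I = quench(T, I0=1.0, A=50.0, Ea=0.009) * noise                 # synthetic intensities

    popt, _ = curve_fit(quench, T, I, p0=(1.0, 10.0, 0.01))
    print(f"fitted activation energy Ea ~ {popt[2] * 1e3:.1f} meV")
    ```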

  1. In-line monitoring of a pharmaceutical blending process using FT-Raman spectroscopy.

    PubMed

    Vergote, G J; De Beer, T R M; Vervaet, C; Remon, J P; Baeyens, W R G; Diericx, N; Verpoort, F

    2004-03-01

    FT-Raman spectroscopy (in combination with a fibre optic probe) was evaluated as an in-line tool to monitor a blending process of diltiazem hydrochloride pellets and paraffinic wax beads. The mean square of differences (MSD) between two consecutive spectra was used to identify the time required to obtain a homogeneous mixture. A traditional end-sampling thief probe was used to collect samples, followed by HPLC analysis to verify the Raman data. Large variations were seen in the FT-Raman spectra logged during the initial minutes of the blending process using a binary mixture (ratio: 50/50, w/w) of diltiazem pellets and paraffinic wax beads (particle size: 800-1200 μm). The MSD-profiles showed that a homogeneous mixture was obtained after about 15 min blending. HPLC analysis confirmed these observations. The Raman data showed that the mixing kinetics depended on the particle size of the material and on the mixing speed. The results of this study proved that FT-Raman spectroscopy can be successfully implemented as an in-line monitoring tool for blending processes.
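
    The homogeneity criterion described here, the mean square of differences (MSD) between consecutive spectra, reduces to a simple point-by-point calculation. The sketch below illustrates it on dummy spectra; the arrays and names are placeholders, not data from the study.

    ```python
    # Minimal sketch of the mean square of differences (MSD) criterion between two
    # consecutive (pre-processed) spectra; blending is judged homogeneous once the
    # MSD profile levels off near zero. Spectra here are dummy arrays.
    import numpy as np

    def msd(spectrum_a, spectrum_b):
        """Mean square of the point-by-point differences between two spectra."""
        a, b = np.asarray(spectrum_a, float), np.asarray(spectrum_b, float)
        return np.mean((a - b) ** 2)

    # Dummy time series of spectra whose run-to-run variation decays with time.
    spectra = [1.0 + np.exp(-i / 5) * np.random.default_rng(i).normal(size=1024)
               for i in range(10)]
    profile = [msd(spectra[k], spectra[k + 1]) for k in range(len(spectra) - 1)]
    print(profile)   # decreasing values suggest the blend is approaching homogeneity
    ```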

  2. Homogeneity tests of clustered diagnostic markers with applications to the BioCycle Study

    PubMed Central

    Tang, Liansheng Larry; Liu, Aiyi; Schisterman, Enrique F.; Zhou, Xiao-Hua; Liu, Catherine Chun-ling

    2014-01-01

    Diagnostic trials often require the use of a homogeneity test among several markers. Such a test may be necessary to determine the power both during the design phase and in the initial analysis stage. However, no formal method is available for the power and sample size calculation when the number of markers is greater than two and marker measurements are clustered in subjects. This article presents two procedures for testing the accuracy among clustered diagnostic markers. The first procedure is a test of homogeneity among continuous markers based on a global null hypothesis of the same accuracy. The result under the alternative provides the explicit distribution for the power and sample size calculation. The second procedure is a simultaneous pairwise comparison test based on weighted areas under the receiver operating characteristic curves. This test is particularly useful if a global difference among markers is found by the homogeneity test. We apply our procedures to the BioCycle Study designed to assess and compare the accuracy of hormone and oxidative stress markers in distinguishing women with ovulatory menstrual cycles from those without. PMID:22733707

  3. Sample Size and Statistical Conclusions from Tests of Fit to the Rasch Model According to the Rasch Unidimensional Measurement Model (Rumm) Program in Health Outcome Measurement.

    PubMed

    Hagell, Peter; Westergren, Albert

    Sample size is a major factor in statistical null hypothesis testing, which is the basis for many approaches to testing Rasch model fit. Few sample size recommendations for testing fit to the Rasch model concern the Rasch Unidimensional Measurement Models (RUMM) software, which features chi-square and ANOVA/F-ratio based fit statistics, including Bonferroni and algebraic sample size adjustments. This paper explores the occurrence of Type I errors with RUMM fit statistics, and the effects of algebraic sample size adjustments. Data simulated to fit the Rasch model, for 25-item dichotomous scales and sample sizes ranging from N = 50 to N = 2500, were analysed with and without algebraically adjusted sample sizes. Results suggest the occurrence of Type I errors with N ≥ 500, and that Bonferroni correction as well as downward algebraic sample size adjustment are useful to avoid such errors, whereas upward adjustment of smaller samples falsely signals misfit. Our observations suggest that sample sizes around N = 250 to N = 500 may provide a good balance for the statistical interpretation of the RUMM fit statistics studied here with respect to Type I errors and under the assumption of Rasch model fit within the examined frame of reference (i.e., about 25 item parameters well targeted to the sample).
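
    As a hedged illustration only (not RUMM source code), the sketch below shows one simple way an algebraic sample-size adjustment of a chi-square fit statistic is often described: the statistic is rescaled under the assumption that it grows roughly linearly with sample size, and a Bonferroni-corrected alpha is applied across items.

    ```python
    # Illustrative sketch (not RUMM source code) of an algebraic sample-size adjustment
    # for a chi-square item-fit statistic, assuming the statistic scales approximately
    # linearly with sample size. The statistic is rescaled to a nominal sample size
    # before the p-value is computed, with an optional Bonferroni correction across items.
    from scipy.stats import chi2

    def adjusted_fit_p(chi2_obs, df, n_actual, n_nominal, n_items, use_bonferroni=True):
        chi2_adj = chi2_obs * (n_nominal / n_actual)        # downward adjustment if n_nominal < n_actual
        p = chi2.sf(chi2_adj, df)
        alpha = 0.05 / n_items if use_bonferroni else 0.05  # Bonferroni across items
        return chi2_adj, p, p < alpha

    # Hypothetical item statistic from N = 2500 rescaled to a nominal N = 500.
    print(adjusted_fit_p(chi2_obs=28.4, df=9, n_actual=2500, n_nominal=500, n_items=25))
    ```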

  4. Removal of Contaminant Nanoparticles from Wastewater Produced Via Hydrothermal Carbonization by SPIONs

    NASA Astrophysics Data System (ADS)

    Parsapour, Melika

    Hydrothermal carbonization (HTC) is a chemical approach that can be defined as a combined dehydration and decarboxylation process in a wet state. Briefly, the process applies elevated temperature (between 180 and 250 °C) and pressure (around 2 MPa) to convert biomass from aqueous suspension (e.g. sludge, wastewater, natural products, among others) into products in three different phases, including biocoal. Further, during the wet conversion process the high residue content is transformed into nanoparticles that can present well-defined or heterogeneous nanostructures. Although HTC has been known for years, it has attracted attention only recently due to its exclusive product properties and cost-effective production. In fact, HTC has been used in sludge and wastewater treatment plants in some developed countries such as Germany. Nowadays, many scientific groups still investigate the solid products (e.g. biocoal) from HTC. These studies concern the physico-chemical and biological characterization of HTC-generated materials, as well as their potential uses. However, the aqueous products from HTC, which are rich in hydrocarbon derivatives and nanoparticles (NPs), are rarely studied. Our objective is therefore to study the wastewater generated from HTC applied to samples of either glycerin or sugar. Furthermore, we propose a novel treatment strategy to remove the NPs from the wastewater. For this purpose, we have used superparamagnetic iron oxide nanoparticles (SPIONs), owing to their unique physico-chemical properties (magnetic behaviour, adsorption capacity, biocompatibility and eco-friendly degradation), for decontamination of water and wastewater. We synthesized two different nanocomposites based on SPIONs to carry out the magnetic removal of the NPs present in the wastewater. For the first, we synthesized polyethylene-glycol (PEG)-coated SPIONs (SPIONs PEG). The second was a new nanocomposite (SPIONs/GO) obtained from in situ growth of SPIONs over purified graphene oxide (GO), which was afterwards coated with PEG (20000 Da), resulting in SPIONs/GO PEG. As GO has various functional groups with a high affinity for adsorbing contaminants due to their oxygen content, we assume that SPIONs/GO PEG improves the efficiency of the decontamination process compared to SPIONs PEG alone. Initially, we characterized the synthesized SPIONs. Fourier transform infrared spectroscopy (FT-IR) was used to identify the functional groups present in the SPION samples. Atomic force microscopy (AFM) and transmission electron microscopy (TEM) were used to determine the topography and diameter via high-resolution images with fine details of the nanocomposites. Finally, dynamic light scattering (DLS) was used to evaluate the size distribution of the SPIONs in distilled water. All wastewater samples were also characterized before and after treatment. FT-IR was used to determine the functional groups in the initial samples. Ultraviolet-visible spectroscopy (UV-vis) was used to observe the UV absorption of the chemicals. DLS was used for size distribution and density measurements, and the morphology was investigated by AFM. The SPIONs grown on GO, owing to the presence of oxidized groups, showed a better-ordered crystalline structure and a narrower diameter distribution. The glycerin samples treated with SPIONs PEG and SPIONs/GO PEG showed 43% and 38% reductions in contaminants, respectively. For the sugar samples, the reductions were 33% and 60%, respectively. Thus, the obtained results confirm that the nanocomposites can reasonably remove the nano-contaminants from the wastewater samples. However, the decontamination power of the nanocomposites differs according to the chemical structure of the initial biomass.

  5. Analysis of methods commonly used in biomedicine for treatment versus control comparison of very small samples.

    PubMed

    Ristić-Djurović, Jasna L; Ćirković, Saša; Mladenović, Pavle; Romčević, Nebojša; Trbovich, Alexander M

    2018-04-01

    A rough estimate indicated that use of samples of size not larger than ten is not uncommon in biomedical research and that many such studies are limited to strong effects due to sample sizes smaller than six. For data collected from biomedical experiments it is also often unknown whether the mathematical requirements incorporated in the sample comparison methods are satisfied. Computer-simulated experiments were used to examine the performance of methods for qualitative sample comparison and its dependence on the effectiveness of exposure, effect intensity, distribution of studied parameter values in the population, and sample size. The Type I and Type II errors, their average, as well as the maximal errors were considered. A sample size of 9 combined with the t-test at p = 5% ensured errors smaller than 5% even for weak effects. For sample sizes 6-8 the same method enabled detection of weak effects with errors smaller than 20%. If the sample sizes were 3-5, weak effects could not be detected with an acceptable error; however, the smallest maximal error in the most general case that includes weak effects is provided by the standard error of the mean method. The increase of sample size from 5 to 9 led to seven times more accurate detection of weak effects. Strong effects were detected regardless of the sample size and method used. The minimal recommended sample size for biomedical experiments is 9. Use of smaller sizes and the method of their comparison should be justified by the objective of the experiment. Copyright © 2018 Elsevier B.V. All rights reserved.
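
    A computer-simulated experiment of the kind described can be sketched as follows: Type I and Type II error rates of the two-sample t-test are estimated by repeated sampling for several small sample sizes. The effect size, distribution and trial count below are placeholders, not the study's settings.

    ```python
    # Compact sketch of a simulated experiment estimating Type I and Type II error
    # rates of the two-sample t-test (alpha = 5%) for small sample sizes.
    # Effect size and distribution are illustrative assumptions, not the study's.
    import numpy as np
    from scipy.stats import ttest_ind

    def error_rates(n, effect=1.0, sigma=1.0, trials=5000, alpha=0.05, seed=1):
        rng = np.random.default_rng(seed)
        type1 = type2 = 0
        for _ in range(trials):
            control = rng.normal(0.0, sigma, n)
            null_grp = rng.normal(0.0, sigma, n)      # no true effect
            exposed = rng.normal(effect, sigma, n)    # true effect present
            type1 += ttest_ind(control, null_grp).pvalue < alpha
            type2 += ttest_ind(control, exposed).pvalue >= alpha
        return type1 / trials, type2 / trials

    for n in (3, 5, 9):
        a, b = error_rates(n)
        print(f"n={n}: Type I ~ {a:.3f}, Type II ~ {b:.3f}")
    ```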

  6. Statistical power calculations for mixed pharmacokinetic study designs using a population approach.

    PubMed

    Kloprogge, Frank; Simpson, Julie A; Day, Nicholas P J; White, Nicholas J; Tarning, Joel

    2014-09-01

    Simultaneous modelling of dense and sparse pharmacokinetic data is possible with a population approach. To determine the number of individuals required to detect the effect of a covariate, simulation-based power calculation methodologies can be employed. The Monte Carlo Mapped Power method (a simulation-based power calculation methodology using the likelihood ratio test) was extended in the current study to perform sample size calculations for mixed pharmacokinetic studies (i.e. both sparse and dense data collection). A workflow guiding an easy and straightforward pharmacokinetic study design, considering also the cost-effectiveness of alternative study designs, was used in this analysis. Initially, data were simulated for a hypothetical drug and then for the anti-malarial drug, dihydroartemisinin. Two datasets (sampling design A: dense; sampling design B: sparse) were simulated using a pharmacokinetic model that included a binary covariate effect and subsequently re-estimated using (1) the same model and (2) a model not including the covariate effect in NONMEM 7.2. Power calculations were performed for varying numbers of patients with sampling designs A and B. Study designs with statistical power >80% were selected and further evaluated for cost-effectiveness. The simulation studies of the hypothetical drug and the anti-malarial drug dihydroartemisinin demonstrated that the simulation-based power calculation methodology, based on the Monte Carlo Mapped Power method, can be utilised to evaluate and determine the sample size of mixed (part sparsely and part densely sampled) study designs. The developed method can contribute to the design of robust and efficient pharmacokinetic studies.

  7. Fully automatic characterization and data collection from crystals of biological macromolecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Svensson, Olof; Malbet-Monaco, Stéphanie; Popov, Alexander

    A fully automatic system has been developed that performs X-ray centring and characterization of, and data collection from, large numbers of cryocooled crystals without human intervention. Considerable effort is dedicated to evaluating macromolecular crystals at synchrotron sources, even for well established and robust systems. Much of this work is repetitive, and the time spent could be better invested in the interpretation of the results. In order to decrease the need for manual intervention in the most repetitive steps of structural biology projects, initial screening and data collection, a fully automatic system has been developed to mount, locate, centre to the optimal diffraction volume, characterize and, if possible, collect data from multiple cryocooled crystals. Using the capabilities of pixel-array detectors, the system is as fast as a human operator, taking an average of 6 min per sample depending on the sample size and the level of characterization required. Using a fast X-ray-based routine, samples are located and centred systematically at the position of highest diffraction signal and important parameters for sample characterization, such as flux, beam size and crystal volume, are automatically taken into account, ensuring the calculation of optimal data-collection strategies. The system is now in operation at the new ESRF beamline MASSIF-1 and has been used by both industrial and academic users for many different sample types, including crystals of less than 20 µm in the smallest dimension. To date, over 8000 samples have been evaluated on MASSIF-1 without any human intervention.

  8. Evaluation of initial posttrauma cardiovascular levels in association with acute PTSD symptoms following a serious motor vehicle accident.

    PubMed

    Buckley, Beth; Nugent, Nicole; Sledjeski, Eve; Raimonde, A Jay; Spoonster, Eileen; Bogart, Laura M; Delahanty, Douglas L

    2004-08-01

    The present study examined the relationship between heart rate (HR) and blood pressure (BP) levels assessed at multiple time points posttrauma and subsequent acute posttraumatic stress disorder (PTSD) symptoms present at a 1-month follow-up. HR and BP levels were measured in 65 motor vehicle accident (MVA) survivors during Emergency Medical Service transport, upon admission to the trauma unit, for the first 20 min postadmission and on the day of discharge. Hierarchical linear modeling analyses revealed no significant relationships between cardiovascular levels and acute PTSD symptoms. Given the small sample size, these results should be interpreted with caution. However, the present results question the use of initial cardiovascular levels as predictors of subsequent acute PTSD in seriously injured MVA victims.

  9. Critical Parameters of the Initiation Zone for Spontaneous Dynamic Rupture Propagation

    NASA Astrophysics Data System (ADS)

    Galis, M.; Pelties, C.; Kristek, J.; Moczo, P.; Ampuero, J. P.; Mai, P. M.

    2014-12-01

    Numerical simulations of rupture propagation are used to study both earthquake source physics and earthquake ground motion. Under linear slip-weakening friction, artificial procedures are needed to initiate a self-sustained rupture. The concept of an overstressed asperity is often applied, in which the asperity is characterized by its size, shape and overstress. The physical properties of the initiation zone may have significant impact on the resulting dynamic rupture propagation. A trial-and-error approach is often necessary for successful initiation because 2D and 3D theoretical criteria for estimating the critical size of the initiation zone do not provide general rules for designing 3D numerical simulations. Therefore, it is desirable to define guidelines for efficient initiation with minimal artificial effects on rupture propagation. We perform an extensive parameter study using numerical simulations of 3D dynamic rupture propagation assuming a planar fault to examine the critical size of square, circular and elliptical initiation zones as a function of asperity overstress and background stress. For a fixed overstress, we discover that the area of the initiation zone is more important for the nucleation process than its shape. Comparing our numerical results with published theoretical estimates, we find that the estimates by Uenishi & Rice (2004) are applicable to configurations with low background stress and small overstress. None of the published estimates are consistent with numerical results for configurations with high background stress. We therefore derive new equations to estimate the initiation zone size in environments with high background stress. Our results provide guidelines for defining the size of the initiation zone and overstress with minimal effects on the subsequent spontaneous rupture propagation.

  10. Pore formation during dehydration of polycrystalline gypsum observed and quantified in a time-series synchrotron radiation based X-ray micro-tomography experiment

    NASA Astrophysics Data System (ADS)

    Fusseis, F.; Schrank, C.; Liu, J.; Karrech, A.; Llana-Fúnez, S.; Xiao, X.; Regenauer-Lieb, K.

    2011-10-01

    We conducted an in-situ X-ray micro-computed tomography heating experiment at the Advanced Photon Source (USA) to dehydrate an unconfined 2.3 mm diameter cylinder of Volterra Gypsum. We used a purpose-built X-ray transparent furnace to heat the sample to 388 K for a total of 310 min to acquire a three-dimensional time-series tomography dataset comprising nine time steps. The voxel size of 2.2 μm³ proved sufficient to pinpoint reaction initiation and the organization of drainage architecture in space and time. We observed that dehydration commences across a narrow front, which propagates from the margins to the centre of the sample in more than four hours. The advance of this front can be fitted with a square-root function, implying that the initiation of the reaction in the sample can be described as a diffusion process. Novel parallelized computer codes allow quantifying the geometry of the porosity and the drainage architecture from the very large tomographic datasets (6.4 × 10⁹ voxels each) in unprecedented detail. We determined position, volume, shape and orientation of each resolvable pore and tracked these properties over the duration of the experiment. We found that the pore-size distribution follows a power law. Pores tend to be anisotropic but rarely crack-shaped and have a preferred orientation, likely controlled by a pre-existing fabric in the sample. With on-going dehydration, pores coalesce into a single interconnected pore cluster that is connected to the surface of the sample cylinder and provides an effective drainage pathway. Our observations can be summarized in a model in which gypsum is stabilized by thermal expansion stresses and locally increased pore fluid pressures until the dehydration front approaches to within about 100 μm. Then, the internal stresses are released and dehydration happens efficiently, resulting in new pore space. Pressure release, the production of pores and the advance of the front are coupled in a feedback loop. We discuss our findings in the context of previous studies.
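
    The square-root fit of the reaction front mentioned above can be illustrated with a one-parameter least-squares fit of front position against the square root of time, as sketched below with invented numbers (the actual front positions are not reproduced here).

    ```python
    # Illustrative sketch: testing whether the advance of the dehydration front follows
    # a square-root (diffusion-like) law, x(t) = k*sqrt(t), by least-squares fitting
    # front positions picked from a tomographic time series (values here are made up).
    import numpy as np

    t = np.array([35, 70, 105, 140, 175, 210, 245, 280], dtype=float)    # minutes (hypothetical)
    x = np.array([180, 260, 320, 365, 410, 450, 485, 520], dtype=float)  # front advance, um (hypothetical)

    k = np.sum(np.sqrt(t) * x) / np.sum(t)    # closed-form least-squares slope for x = k*sqrt(t)
    residual = x - k * np.sqrt(t)
    print(f"k ~ {k:.1f} um/min^0.5, rms misfit ~ {np.std(residual):.1f} um")
    ```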

  11. High Diversity, Low Disparity and Small Body Size in Plesiosaurs (Reptilia, Sauropterygia) from the Triassic–Jurassic Boundary

    PubMed Central

    Benson, Roger B. J.; Evans, Mark; Druckenmiller, Patrick S.

    2012-01-01

    Invasion of the open ocean by tetrapods represents a major evolutionary transition that occurred independently in cetaceans, mosasauroids, chelonioids (sea turtles), ichthyosaurs and plesiosaurs. Plesiosaurian reptiles invaded pelagic ocean environments immediately following the Late Triassic extinctions. This diversification is recorded by three intensively sampled European fossil faunas, spanning 20 million years (Ma). These provide an unparalleled opportunity to document changes in key macroevolutionary parameters associated with secondary adaptation to pelagic life in tetrapods. A comprehensive assessment focuses on the oldest fauna, from the Blue Lias Formation of Street, and nearby localities, in Somerset, UK (Earliest Jurassic: 200 Ma), identifying three new species representing two small-bodied rhomaleosaurids (Stratesaurus taylori gen. et sp. nov.; Avalonnectes arturi gen. et sp. nov.) and the most basal plesiosauroid, Eoplesiosaurus antiquior gen. et sp. nov. The initial radiation of plesiosaurs was characterised by high, but short-lived, diversity of an archaic clade, Rhomaleosauridae. Representatives of this initial radiation were replaced by derived, neoplesiosaurian plesiosaurs at small-medium body sizes during a more gradual accumulation of morphological disparity. This gradualistic modality suggests that adaptive radiations within tetrapod subclades are not always characterised by the initially high levels of disparity observed in the Paleozoic origins of major metazoan body plans, or in the origin of tetrapods. High rhomaleosaurid diversity immediately following the Triassic-Jurassic boundary supports the gradual model of Late Triassic extinctions, mostly predating the boundary itself. Increase in both maximum and minimum body length early in plesiosaurian history suggests a driven evolutionary trend. However, maximum-likelihood models suggest only passive expansion into higher body size categories. PMID:22438869

  12. Static Grain Growth in Contact Metamorphic Calcite: A Cathodoluminescence Study.

    NASA Astrophysics Data System (ADS)

    Vogt, B.; Heilbronner, R.; Herwegh, M.; Ramseyer, K.

    2009-04-01

    In the Adamello contact aureole, monomineralic Mesozoic limestones were investigated in terms of grain size evolution and compared with the results of numerical modeling performed with the Elle software. The sampled area shows no deformation and therefore represents an appropriate natural laboratory for the study of static grain growth (Herwegh & Berger, 2003). For this purpose, samples were collected at different distances from the contact with the pluton, covering a temperature range between 270 and 630 °C. In these marbles, the grain sizes increase with temperature from 5 µm to about 1 cm as one approaches the contact (Herwegh & Berger, 2003). In some samples, photomicrographs show domains of variable cathodoluminescence (CL) intensity, which are interpreted to represent growth zonations. Microstructures show grains that contain cores and, in some samples, even several growth stages. The cores are usually not centered and the zones are not concentric; they may touch grain boundaries. These zonation patterns are consistent within a given aggregate but differ among samples, even those from the same location. Relative CL intensities depend on the Mn/Fe ratio. We assume that changes in trace amounts of Mn/Fe must have occurred during the grain size evolution, preserving local geochemical trends and their variations with time. Changes in Mn/Fe ratios can be explained either (a) by locally derived fluids (e.g. hydration reactions of sheet-silicate-rich marbles in the vicinity) or (b) by the infiltration of the calcite aggregates by externally derived (magmatic?) fluids. At the present stage, we prefer a regional change in fluid composition (b), because the growth zonations only occur at distances of 750-1250 m from the pluton contact (350-450 °C). Closer to the contact, neither zonations nor cores were found. At larger distances, CL intensities differ from grain to grain, revealing diagenetic CL patterns that were incompletely recrystallized by grain growth. The role of infiltration of magmatic fluids is also manifest in the vicinity of dikes, where intense zonation patterns are prominent in the marbles. The Elle software was developed to simulate microstructural evolution in rocks. The numerical model entitled "Grain boundary sweeping" was performed by M. Jessell and is available at http://www.materialsknowledge.org/elle. It displays the grain size evolution and the development of growth zonations during grain boundary migration of a 2D foam structure. This simulation was chosen because its driving force is the minimization of isotropic surface energies, and it is compared here with the natural microstructures. At the last stage of the simulation the average grain and core sizes have increased. All grains, even the smallest, show growth zonations. Grains can be divided into two groups: (a) initially larger grains, which increase their grain size while maintaining their core size, and (b) initially smaller grains with decreasing grain and core sizes. Group (a) grains show large areas swept by grain boundaries in the direction of small grains. Grain boundaries between large grains move more slowly, and their cores do not touch any grain boundaries. Cores of group (b) grains are in contact with the grain boundary network and are on the way to being consumed. In the numerical model and in the natural example similar features can be observed: the cores are not necessarily centered, the zonations are not necessarily concentric, and some of the cores touch the grain boundary network.
In the simulation, the grain boundary migration velocity between large grains is smaller than that between a large and a small grain. From this we would predict that, given enough time, a well-sorted grain size distribution of increased grain size could be generated. But since many small grains occur, we infer that this equilibrium has not been attained. Analytical results from the natural samples analyzed so far indicate a relatively well-sorted grain size distribution, suggesting a more mature state of static grain growth. In comparison to the simulation, grain and core boundaries in the marbles are not always straight. For lobate grain boundaries the surface area has not been minimized with respect to the grain size. An explanation for this might be grain boundary pinning or a local dynamic overprint. Some cores and growth zones in the investigated calcites show a continuous change in luminescence. This is interpreted as an effect of late diffusion within the grain and/or a continuous change of fluid composition and supply. The absence of zonation in samples close to the contact might be explained by fast grain growth due to high temperatures and/or fast fluid transport, possibly combined with an enhanced component of volume diffusion. Thus concentration variations of Mn/Fe are diminished and not visible in the form of a growth zonation. Herwegh M, Berger A (2003) Differences in grain growth of calcite: a field-based modeling approach. Contr. Min. Pet. 145: 600-611

  13. Online purchases of an expanded range of condom sizes in comparison to current dimensional requirements allowable by US national standards.

    PubMed

    Cecil, Michael; Warner, Lee; Siegler, Aaron J

    2013-11-01

    Across studies, 35-50% of men describe condoms as fitting poorly. Rates of condom use may be inhibited in part due to the inaccessibility of appropriately sized condoms. As regulated medical devices, condom sizes conform to national standards such as those developed by the American Society for Testing and Materials (ASTM) or international standards such as those developed by the International Organisation for Standardisation (ISO). We describe the initial online sales experience of an expanded range of condom sizes and assess uptake in relation to the current required standard dimensions of condoms. Data regarding the initial 1000 sales of an expanded range of condom sizes in the United Kingdom were collected from late 2011 through to early 2012. Ninety-five condom sizes, comprising 14 lengths (83-238mm) and 12 widths (41-69mm), were available. For the first 1000 condom six-pack units that were sold, a total of 83 of the 95 unique sizes were purchased, including all 14 lengths and 12 widths, and both the smallest and largest condoms. Initial condom purchases were made by 572 individuals from 26 countries. Only 13.4% of consumer sales were in the ASTM's allowable range of sizes. These initial sales data suggest consumer interest in an expanded choice of condom sizes that fall outside the range currently allowable by national and international standards organisations.

  14. Nucleation and growth of electrodeposited Mn oxide rods for supercapacitor electrodes

    NASA Astrophysics Data System (ADS)

    Clark, Michael; Ivey, Douglas G.

    2015-09-01

    The nucleation and growth of electrodeposited Mn oxide rods has been investigated by preparing deposits on Au coated Si at varying deposition times between 0.5 s and 10 min. The deposits were investigated using high resolution scanning and transmission electron microscopy. A model for the nucleation and growth of Mn oxide rods has been proposed. Nucleation begins as thin sheets along Au grain boundaries and triple points. As these nucleation sites are consumed, nucleation spreads across the grains. Nucleation of sheets in close proximity causes agglomeration and the formation of rounded particles. Some of these rounded particles then accelerate in growth, initially in all directions and then primarily in the direction normal to the sample surface. Accelerated growth normal to the sample surface leads to the formation of rods. As rods grow, the growth of other particles accelerates and they become rods themselves. Eventually the entire sample surface is covered with rods 15-20 μm long and about 2 μm wide. The sheet-like morphology of the deposits is retained at all stages of deposition. Electron diffraction analysis of 3 s and 6 s deposits shows that the sheets are initially amorphous and then begin to crystallize into a cubic spinel Mn3O4 crystal structure. High resolution imaging of the 6 s sample shows small crystalline regions (~5 nm in size) within an amorphous matrix.

  15. Sample size determination for estimating antibody seroconversion rate under stable malaria transmission intensity.

    PubMed

    Sepúlveda, Nuno; Drakeley, Chris

    2015-04-03

    In the last decade, several epidemiological studies have demonstrated the potential of using seroprevalence (SP) and the seroconversion rate (SCR) as informative indicators of malaria burden in low transmission settings or in populations on the cusp of elimination. However, most studies are designed to control the ensuing statistical inference for parasite rates and not for these alternative malaria burden measures. SP is in essence a proportion and, thus, many methods exist for the respective sample size determination. In contrast, designing a study where SCR is the primary endpoint is not an easy task, because precision and statistical power are affected by the age distribution of a given population. Two sample size calculators for SCR estimation are proposed. The first consists of transforming the confidence interval for SP into the corresponding one for SCR given a known seroreversion rate (SRR). The second calculator extends the first to the most common situation where SRR is unknown. In this situation, data simulation was used together with linear regression in order to study the expected relationship between sample size and precision. The performance of the first sample size calculator was studied in terms of the coverage of the confidence intervals for SCR. The results pointed to potential problems of under- or over-coverage for sample sizes ≤250 in very low and high malaria transmission settings (SCR ≤ 0.0036 and SCR ≥ 0.29, respectively). The correct coverage was obtained for the remaining transmission intensities with sample sizes ≥ 50. Sample size determination was then carried out for cross-sectional surveys using realistic SCRs from past sero-epidemiological studies and typical age distributions from African and non-African populations. For SCR < 0.058, African studies require a larger sample size than their non-African counterparts in order to obtain the same precision. The opposite happens for the remaining transmission intensities. With respect to the second sample size calculator, simulation revealed the likelihood of not having enough information to estimate SRR in low transmission settings (SCR ≤ 0.0108). In that case, the respective estimates tend to underestimate the true SCR. This problem is minimized by sample sizes of no less than 500 individuals. The sample sizes determined by this second method highlighted the prior expectation that, when SRR is not known, sample sizes increase relative to the situation of a known SRR. In contrast to the first sample size calculation, African studies would now require fewer individuals than their counterparts conducted elsewhere, irrespective of the transmission intensity. Although the proposed sample size calculators can be instrumental in designing future cross-sectional surveys, the choice of a particular sample size must be seen as a much broader exercise that involves weighing statistical precision against ethical issues, available human and economic resources, and possible time constraints. Moreover, if the sample size determination is carried out for varying transmission intensities, as done here, the respective sample sizes can also be used in studies comparing sites with different malaria transmission intensities. In conclusion, the proposed sample size calculators are a step towards the design of better sero-epidemiological studies. Their basic ideas show promise for application to the planning of alternative sampling schemes that may target or oversample specific age groups.
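
    A simplified sketch of the core step behind the first calculator is given below, assuming the usual reversible catalytic model for a single representative age a, SP(a) = lam/(lam + rho) * (1 - exp(-(lam + rho)*a)), with a known seroreversion rate rho. Because SP is monotone in the SCR lam, confidence limits for SP map directly onto limits for SCR. This is an illustration under those assumptions, not the authors' calculator, and the numerical values are placeholders.

    ```python
    # Simplified sketch (not the authors' calculator): converting a seroprevalence (SP)
    # estimate and its confidence limits into seroconversion-rate (SCR) values for a
    # single age `a`, assuming a reversible catalytic model with known SRR rho:
    #     SP(a) = lam / (lam + rho) * (1 - exp(-(lam + rho) * a))
    import numpy as np
    from scipy.optimize import brentq

    def sp_from_scr(lam, rho, a):
        return lam / (lam + rho) * (1.0 - np.exp(-(lam + rho) * a))

    def scr_from_sp(sp, rho, a):
        # SP is monotone increasing in lam, so a simple root bracket suffices.
        return brentq(lambda lam: sp_from_scr(lam, rho, a) - sp, 1e-8, 10.0)

    rho, a = 0.01, 20.0                      # assumed SRR (per year) and representative age (years)
    for sp in (0.25, 0.30, 0.35):            # SP point estimate and CI limits (illustrative)
        print(f"SP = {sp:.2f} -> SCR ~ {scr_from_sp(sp, rho, a):.4f} per year")
    ```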

  16. Exploring the effects of temperature and grain size on plumes associated with PDCs through analogue experimentation

    NASA Astrophysics Data System (ADS)

    Mitchell, S. J.; Eychenne, J.; Rust, A.

    2015-12-01

    Pyroclastic density currents (PDCs) often loft upwards into convective, buoyant co-PDC plumes. Recent analogue experiments using a unimodal grain size of 22 ± 6 μm (Andrews & Manga, 2012) have established that plume generation is aided by PDC interaction with a topographic barrier. Here, we have simulated the onset of co-PDC plumes from the collapse of concentrated particle-gas mixtures comprised of unimodal or bimodal grain size distributions (GSD) of glass beads, using combinations of lognormal populations with modes of 35, 195 and 590 μm. The collapse of a mixture, with constant mass 2950 ± 150 g, induced the propagation of a gravity current channelized down a 13° sloping tank; a barrier in the tank caused the gravity current to produce a plume of particles. Experiments were recorded with high speed visible and thermal-infrared cameras. Initial GSD and temperature of the mixture were varied to assess the effects of the addition of a coarser component on plume generation. Analogue co-PDC plumes were only produced when a proportion of fine grains (35 μm) was present in the initial granular mixture. Sampling of the particles entrained in the co-PDC plumes revealed that fine grains (35 μm) are preferentially lofted, although a few coarser particles (195 or 590 μm) are also entrained in the co-PDC plumes and settle closer to the area of uplift. Increasing the initial temperature of the mixture increases plume height measured at 1 and 2s after onset; this is supported by repeat experiments at specific conditions. Bimodal mixtures containing both fine (35 μm) and coarser (195 or 590 μm) grains result in plume heights and initial flow velocities higher than observed in unimodal fine-grained experiments of the same total mass of particles. Repeat experiments identify the natural variability in plume generation under the same nominal conditions, which is likely due to the combined variations of momentum during flow propagation and heat-driven buoyancy, as well as the homogeneity of the initial particle mixture.

  17. Using known populations of pronghorn to evaluate sampling plans and estimators

    USGS Publications Warehouse

    Kraft, K.M.; Johnson, D.H.; Samuelson, J.M.; Allen, S.H.

    1995-01-01

    Although sampling plans and estimators of abundance have good theoretical properties, their performance in real situations is rarely assessed because true population sizes are unknown. We evaluated widely used sampling plans and estimators of population size on 3 known clustered distributions of pronghorn (Antilocapra americana). Our criteria were accuracy of the estimate, coverage of 95% confidence intervals, and cost. Sampling plans were combinations of sampling intensities (16, 33, and 50%), sample selection (simple random sampling without replacement, systematic sampling, and probability proportional to size sampling with replacement), and stratification. We paired sampling plans with suitable estimators (simple, ratio, and probability proportional to size). We used area of the sampling unit as the auxiliary variable for the ratio and probability proportional to size estimators. All estimators were nearly unbiased, but precision was generally low (overall mean coefficient of variation [CV] = 29). Coverage of 95% confidence intervals was only 89% because of the highly skewed distribution of the pronghorn counts and small sample sizes, especially with stratification. Stratification combined with accurate estimates of optimal stratum sample sizes increased precision, reducing the mean CV from 33 without stratification to 25 with stratification; costs increased 23%. Precise results (mean CV = 13) but poor confidence interval coverage (83%) were obtained with simple and ratio estimators when the allocation scheme included all sampling units in the stratum containing most pronghorn. Although areas of the sampling units varied, ratio estimators and probability proportional to size sampling did not increase precision, possibly because of the clumped distribution of pronghorn. Managers should be cautious in using sampling plans and estimators to estimate abundance of aggregated populations.
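
    For reference, the ratio estimator with sampling-unit area as the auxiliary variable takes the simple form sketched below; the counts, areas and total area are invented for illustration and are not the study's data.

    ```python
    # Minimal sketch of the ratio estimator of total abundance with sampling-unit area
    # as the auxiliary variable: T_hat = (sum of counts / sum of sampled areas) * total area.
    import numpy as np

    counts = np.array([0, 12, 3, 0, 45, 7, 0, 22])                 # animals counted in sampled units (hypothetical)
    areas = np.array([4.1, 5.0, 3.8, 4.4, 6.2, 4.9, 4.0, 5.5])     # km^2 of each sampled unit (hypothetical)
    total_area = 420.0                                             # km^2 of all sampling units combined

    r_hat = counts.sum() / areas.sum()      # estimated density (animals per km^2)
    t_hat = r_hat * total_area              # ratio estimate of the population total
    print(f"density ~ {r_hat:.2f} per km^2, population total ~ {t_hat:.0f}")
    ```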

  18. Overcoming time scale and finite size limitations to compute nucleation rates from small scale well tempered metadynamics simulations.

    PubMed

    Salvalaglio, Matteo; Tiwary, Pratyush; Maggioni, Giovanni Maria; Mazzotti, Marco; Parrinello, Michele

    2016-12-07

    Condensation of a liquid droplet from a supersaturated vapour phase is initiated by a prototypical nucleation event. As such it is challenging to compute its rate from atomistic molecular dynamics simulations. In fact at realistic supersaturation conditions condensation occurs on time scales that far exceed what can be reached with conventional molecular dynamics methods. Another known problem in this context is the distortion of the free energy profile associated to nucleation due to the small, finite size of typical simulation boxes. In this work the problem of time scale is addressed with a recently developed enhanced sampling method while contextually correcting for finite size effects. We demonstrate our approach by studying the condensation of argon, and showing that characteristic nucleation times of the order of magnitude of hours can be reliably calculated. Nucleation rates spanning a range of 10 orders of magnitude are computed at moderate supersaturation levels, thus bridging the gap between what standard molecular dynamics simulations can do and real physical systems.

  19. Overcoming time scale and finite size limitations to compute nucleation rates from small scale well tempered metadynamics simulations

    NASA Astrophysics Data System (ADS)

    Salvalaglio, Matteo; Tiwary, Pratyush; Maggioni, Giovanni Maria; Mazzotti, Marco; Parrinello, Michele

    2016-12-01

    Condensation of a liquid droplet from a supersaturated vapour phase is initiated by a prototypical nucleation event. As such it is challenging to compute its rate from atomistic molecular dynamics simulations. In fact at realistic supersaturation conditions condensation occurs on time scales that far exceed what can be reached with conventional molecular dynamics methods. Another known problem in this context is the distortion of the free energy profile associated to nucleation due to the small, finite size of typical simulation boxes. In this work the problem of time scale is addressed with a recently developed enhanced sampling method while contextually correcting for finite size effects. We demonstrate our approach by studying the condensation of argon, and showing that characteristic nucleation times of the order of magnitude of hours can be reliably calculated. Nucleation rates spanning a range of 10 orders of magnitude are computed at moderate supersaturation levels, thus bridging the gap between what standard molecular dynamics simulations can do and real physical systems.

  20. Early Implementation of the Class Size Reduction Initiative.

    ERIC Educational Resources Information Center

    Illig, David C.

    A survey of school districts was conducted to determine the initial progress and problems associated with the 1997 Class Size Reduction (CSR) Initiative. Data reveal that most school districts had enough space for smaller classes for at least two grade levels; small school districts were much less likely to report space constraints. The CSR did…

  1. Monitoring nekton as a bioindicator in shallow estuarine habitats

    USGS Publications Warehouse

    Raposa, K.B.; Roman, C.T.; Heltshe, J.F.

    2003-01-01

    Long-term monitoring of estuarine nekton has many practical and ecological benefits but efforts are hampered by a lack of standardized sampling procedures. This study provides a rationale for monitoring nekton in shallow (< 1 m), temperate, estuarine habitats and addresses some important issues that arise when developing monitoring protocols. Sampling in seagrass and salt marsh habitats is emphasized due to the susceptibility of each habitat to anthropogenic stress and to the abundant and rich nekton assemblages that each habitat supports. Extensive sampling with quantitative enclosure traps that estimate nekton density is suggested. These gears have a high capture efficiency in most habitats and are small enough (e.g., 1 m(2)) to permit sampling in specific microhabitats. Other aspects of nekton monitoring are discussed, including spatial and temporal sampling considerations, station selection, sample size estimation, and data collection and analysis. Developing and initiating long-term nekton monitoring programs will help evaluate natural and human-induced changes in estuarine nekton over time and advance our understanding of the interactions between nekton and the dynamic estuarine environment.

  2. Diffusion NMR methods applied to xenon gas for materials study

    NASA Technical Reports Server (NTRS)

    Mair, R. W.; Rosen, M. S.; Wang, R.; Cory, D. G.; Walsworth, R. L.

    2002-01-01

    We report initial NMR studies of (i) xenon gas diffusion in model heterogeneous porous media and (ii) continuous flow laser-polarized xenon gas. Both areas utilize the pulsed gradient spin-echo (PGSE) techniques in the gas phase, with the aim of obtaining more sophisticated information than just translational self-diffusion coefficients; a brief overview of this area is provided in the Introduction. The heterogeneous or multiple-length scale model porous media consisted of random packs of mixed glass beads of two different sizes. We focus on observing the approach of the time-dependent gas diffusion coefficient, D(t) (an indicator of mean squared displacement), to the long-time asymptote, with the aim of understanding the long-length scale structural information that may be derived from a heterogeneous porous system. We find that D(t) of imbibed xenon gas at short diffusion times is similar for the mixed bead pack and a pack of the smaller sized beads alone, hence reflecting the pore surface area to volume ratio of the smaller bead sample. The approach of D(t) to the long-time limit follows that of a pack of the larger sized beads alone, although the limiting D(t) for the mixed bead pack is lower, reflecting the lower porosity of the sample compared to that of a pack of mono-sized glass beads. The Padé approximation is used to interpolate D(t) data between the short- and long-time limits. Initial studies of continuous flow laser-polarized xenon gas demonstrate velocity-sensitive imaging of much higher flows than can generally be obtained with liquids (20-200 mm s⁻¹). Gas velocity imaging is, however, found to be limited to a resolution of about 1 mm s⁻¹ owing to the high diffusivity of gases compared with liquids. We also present the first gas-phase NMR scattering, or diffusive-diffraction, data, namely flow-enhanced structural features in the echo attenuation data from laser-polarized xenon flowing through a 2 mm glass bead pack. © 2002 John Wiley & Sons, Ltd.

  3. Localized strain measurements of the intervertebral disc annulus during biaxial tensile testing.

    PubMed

    Karakolis, Thomas; Callaghan, Jack P

    2015-01-01

    Both inter-lamellar and intra-lamellar failures of the annulus have been described as potential modes of disc herniation. Attempts to characterize initial lamellar failure of the annulus have involved tensile testing of small tissue samples. The purpose of this study was to evaluate a method of measuring local surface strains through image analysis of a tensile test conducted on an isolated sample of annular tissue in order to enhance future studies of intervertebral disc failure. An annulus tissue sample was biaxially strained to 10%. High-resolution images captured the tissue surface throughout testing. Three test conditions were evaluated: submerged, non-submerged and marker. Surface strains were calculated for the two non-marker conditions based on the motion of virtual tracking points. Tracking algorithm parameters (grid resolution and template size) were varied to determine the effect on estimated strains. Accuracy of point tracking was assessed through a comparison of the non-marker conditions to a condition involving markers placed on the tissue surface. Grid resolution had a larger effect on local strain than template size. Average local strain error ranged from 3% to 9.25% and 0.1% to 2.0%, for the non-submerged and submerged conditions, respectively. Local strain estimation has a relatively high potential for error. Submerging the tissue provided superior strain estimates.
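
    To make the strain-from-tracking step concrete, the sketch below shows one generic way to turn tracked point positions into a local surface strain: fit a deformation gradient to a small neighbourhood of points by least squares and take the Green-Lagrange strain. This illustrates the general technique rather than the authors' tracking algorithm, and the four-point example data are hypothetical.

      import numpy as np

      def local_green_lagrange_strain(ref_pts, def_pts):
          """Fit a 2D deformation gradient F to a neighbourhood of tracked points
          (least squares), then return the Green-Lagrange strain E = 0.5*(F^T F - I).
          ref_pts, def_pts: (N, 2) arrays of reference / deformed coordinates."""
          dX = ref_pts - ref_pts.mean(axis=0)   # centred reference offsets
          dx = def_pts - def_pts.mean(axis=0)   # centred deformed offsets
          # Solve dX @ F^T ≈ dx for F^T in a least-squares sense
          F_T, *_ = np.linalg.lstsq(dX, dx, rcond=None)
          F = F_T.T
          return 0.5 * (F.T @ F - np.eye(2))    # diagonal: axial strains; off-diagonal: shear

      # Hypothetical check: a 10% equibiaxial stretch recovered from four tracked points
      ref = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
      deformed = 1.10 * ref
      print(local_green_lagrange_strain(ref, deformed))   # ~0.105 on the diagonal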

  4. Sample Size Calculations for Population Size Estimation Studies Using Multiplier Methods With Respondent-Driven Sampling Surveys.

    PubMed

    Fearon, Elizabeth; Chabata, Sungai T; Thompson, Jennifer A; Cowan, Frances M; Hargreaves, James R

    2017-09-14

    While guidance exists for obtaining population size estimates using multiplier methods with respondent-driven sampling surveys, we lack specific guidance for making sample size decisions. Our objective is to guide the design of multiplier method population size estimation studies using respondent-driven sampling surveys so as to reduce the random error around the estimate obtained. The population size estimate is obtained by dividing the number of individuals receiving a service or the number of unique objects distributed (M) by the proportion of individuals in a representative survey who report receipt of the service or object (P). We have developed an approach to sample size calculation, interpreting methods to estimate the variance around estimates obtained using multiplier methods in conjunction with research into design effects and respondent-driven sampling. We describe an application to estimate the number of female sex workers in Harare, Zimbabwe. There is high variance in estimates. Random error around the size estimate reflects uncertainty from M and P, particularly when the estimate of P in the respondent-driven sampling survey is low. As expected, sample size requirements are higher when the design effect of the survey is assumed to be greater. We suggest a method for investigating the effects of sample size on the precision of a population size estimate obtained using multiplier methods and respondent-driven sampling. Uncertainty in the size estimate is high, particularly when P is small, so balancing against other potential sources of bias, we advise researchers to consider longer service attendance reference periods and to distribute more unique objects, which is likely to result in a higher estimate of P in the respondent-driven sampling survey. ©Elizabeth Fearon, Sungai T Chabata, Jennifer A Thompson, Frances M Cowan, James R Hargreaves. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 14.09.2017.
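
    As an illustration of the calculation being sized here, the sketch below computes the multiplier estimate N = M / P, an approximate delta-method confidence interval, and the survey size needed to hold the relative error of P (and hence of N) to a target, inflated by an assumed respondent-driven sampling design effect. The formulas are standard textbook approximations rather than the authors' exact procedure, and the example numbers are hypothetical.

      import math

      def multiplier_estimate_ci(M, p_hat, se_p, z=1.96):
          """Multiplier estimate N = M / P with a delta-method confidence interval.
          M is treated as known; se_p should already include any design effect."""
          N_hat = M / p_hat
          se_N = M * se_p / p_hat**2            # |dN/dP| * se_p
          return N_hat, (N_hat - z * se_N, N_hat + z * se_N)

      def rds_sample_size(p_expected, rel_error, design_effect=2.0, z=1.96):
          """Survey size so that z times the relative SE of P stays within rel_error,
          using simple binomial variance inflated by an assumed design effect."""
          n_srs = z**2 * (1 - p_expected) / (rel_error**2 * p_expected)
          return math.ceil(design_effect * n_srs)

      # Hypothetical example: 5000 unique objects distributed, 10% expected coverage,
      # target +/-20% relative error, assumed design effect of 2
      print(rds_sample_size(0.10, 0.20, design_effect=2.0))        # ~1729 respondents
      print(multiplier_estimate_ci(M=5000, p_hat=0.10, se_p=0.01))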

  5. A Theory of the von Weimarn Rules Governing the Average Size of Crystals Precipitated from a Supersaturated Solution

    NASA Technical Reports Server (NTRS)

    Barlow, Douglas A.; Baird, James K.; Su, Ching-Hua

    2003-01-01

    More than 75 years ago, von Weimarn summarized his observations of the dependence of the average crystal size on the initial relative concentration supersaturation prevailing in a solution from which crystals were growing. Since then, his empirically derived rules have become part of the lore of crystal growth. The first of these rules asserts that the average crystal size measured at the end of a crystallization increases as the initial value of the relative supersaturation decreases. The second rule states that for a given crystallization time, the average crystal size passes through a maximum as a function of the initial relative supersaturation. Using a theory of nucleation and growth due to Buyevich and Mansurov, we calculate the average crystal size as a function of the initial relative supersaturation. We confirm the von Weimarn rules for the case where the nucleation rate is proportional to the third power or higher of the relative supersaturation.

  6. Relative efficiency and sample size for cluster randomized trials with variable cluster sizes.

    PubMed

    You, Zhiying; Williams, O Dale; Aban, Inmaculada; Kabagambe, Edmond Kato; Tiwari, Hemant K; Cutter, Gary

    2011-02-01

    The statistical power of cluster randomized trials depends on two sample size components, the number of clusters per group and the number of individuals within clusters (cluster size). Variable cluster sizes are common and this variation alone may have a significant impact on study power. Previous approaches have taken this into account by either adjusting total sample size using a designated design effect or adjusting the number of clusters according to an assessment of the relative efficiency of unequal versus equal cluster sizes. This article defines a relative efficiency of unequal versus equal cluster sizes using noncentrality parameters, investigates properties of this measure, and proposes an approach for adjusting the required sample size accordingly. We focus on comparing two groups with normally distributed outcomes using the t-test, use the noncentrality parameter to define the relative efficiency of unequal versus equal cluster sizes, and show that statistical power depends only on this parameter for a given number of clusters. We calculate the sample size required for an unequal cluster sizes trial to have the same power as one with equal cluster sizes. Relative efficiency based on the noncentrality parameter is straightforward to calculate and easy to interpret. It connects the required mean cluster size directly to the required sample size with equal cluster sizes. Consequently, our approach first determines the sample size requirements with equal cluster sizes for a pre-specified study power and then calculates the required mean cluster size while keeping the number of clusters unchanged. Our approach allows adjustment in mean cluster size alone or simultaneous adjustment in mean cluster size and number of clusters, and is a flexible alternative to and a useful complement to existing methods. Comparison indicated that the relative efficiency defined here is greater than the relative efficiency reported in the literature under some conditions; under other conditions it may be smaller, in which case it underestimates the relative efficiency. The relative efficiency of unequal versus equal cluster sizes defined using the noncentrality parameter thus suggests a sample size approach that is a flexible alternative and a useful complement to existing methods.
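
    For a quick sense of how variable cluster sizes feed into a sample size calculation, the sketch below uses the widely cited design-effect approximation 1 + ((CV² + 1)·m̄ − 1)·ICC for unequal cluster sizes, rather than the noncentrality-parameter measure defined in this article; the numbers in the example are hypothetical.

      import math

      def design_effect_unequal(m_bar, cv, icc):
          """Approximate design effect for a cluster-randomised trial with mean cluster
          size m_bar, coefficient of variation of cluster size cv, and intracluster
          correlation icc (a common approximation, not the paper's noncentrality-based
          relative efficiency)."""
          return 1 + ((cv**2 + 1) * m_bar - 1) * icc

      def clusters_per_arm(delta, sigma, m_bar, cv, icc, alpha=0.05, power=0.80):
          """Clusters per arm for a two-arm comparison of means (normal approximation)."""
          z_a, z_b = 1.959964, 0.841621                              # z_{1-alpha/2}, z_{power}
          n_individual = 2 * (z_a + z_b)**2 * sigma**2 / delta**2    # per arm, ignoring clustering
          deff = design_effect_unequal(m_bar, cv, icc)
          return math.ceil(n_individual * deff / m_bar)

      # Hypothetical example: detect 0.25 SD with ICC 0.05, mean cluster size 20, CV 0.6
      print(clusters_per_arm(delta=0.25, sigma=1.0, m_bar=20, cv=0.6, icc=0.05))   # -> 30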

  7. Detection of early changes in lung-cell cytology by flow-systems analysis techniques. Progress report, January 1--June 30, 1976

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steinkamp, J. A.; Hansen, K. M.; Wilson, J. S.

    1976-08-01

    This report summarizes results of preliminary experiments to develop cytological and biochemical indicators for estimating damage to respiratory epithelium exposed to toxic agents associated with the by-products of nonnuclear energy production using advanced flow-systems cell-analysis technologies. Since initiation of the program one year ago, progress has been made in obtaining adequate numbers of exfoliated lung cells from the Syrian hamster for flow analysis; cytological techniques developed on human exfoliated gynecological samples have been adapted to hamster lung epithelium for obtaining single-cell suspensions; and lung-cell samples have been initially characterized based on DNA content, total protein, nuclear and cytoplasmic size, and multiangle light-scatter measurements. Preliminary results from measurements of the above parameters which recently became available are described in this report. As the flow-systems technology is adapted further to analysis of exfoliated lung cells, measurements of changes in physical and biochemical cellular properties as a function of exposure to toxic agents will be performed.

  8. Data assimilation method based on the constraints of confidence region

    NASA Astrophysics Data System (ADS)

    Li, Yong; Li, Siming; Sheng, Yao; Wang, Luheng

    2018-03-01

    The ensemble Kalman filter (EnKF) is a well-established data assimilation method that is widely used and studied in various fields including meteorology and oceanography. However, due to the limited sample size or an imprecise dynamics model, the forecast error variance is often underestimated, which further leads to the phenomenon of filter divergence. Additionally, the assimilation results in the initial stage are poor if the initial condition settings differ greatly from the true initial state. To address these problems, a variance inflation procedure is usually adopted. In this paper, we propose a new method based on the constraints of a confidence region constructed by the observations, called EnCR, to estimate the inflation parameter of the forecast error variance of the EnKF method. In the new method, the state estimate is more robust to both inaccurate forecast models and initial condition settings. The new method is compared with other adaptive data assimilation methods in the Lorenz-63 and Lorenz-96 models under various model parameter settings. The simulation results show that the new method performs better than the competing methods.
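
    As context for the inflation parameter being estimated, the sketch below shows the generic multiplicative variance inflation step that adaptive schemes of this kind tune: each ensemble member is spread about the ensemble mean so the sample covariance is scaled by the chosen factor. This illustrates the standard inflation mechanism only; the EnCR confidence-region rule for choosing the factor is not reproduced here.

      import numpy as np

      def inflate_ensemble(ensemble, inflation):
          """Multiplicative covariance inflation for an EnKF forecast ensemble.
          ensemble: (n_members, n_state). The sample covariance of the returned
          ensemble equals `inflation` times the original sample covariance."""
          mean = ensemble.mean(axis=0)
          return mean + np.sqrt(inflation) * (ensemble - mean)

      # Hypothetical check: the ensemble spread grows by the requested factor
      rng = np.random.default_rng(0)
      ens = rng.normal(size=(20, 3))
      print(np.trace(np.cov(ens.T)), np.trace(np.cov(inflate_ensemble(ens, 1.5).T)))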

  9. Multiscale Pore Throat Network Reconstruction of Tight Porous Media Constrained by Mercury Intrusion Capillary Pressure and Nuclear Magnetic Resonance Measurements

    NASA Astrophysics Data System (ADS)

    Xu, R.; Prodanovic, M.

    2017-12-01

    Due to the low porosity and permeability of tight porous media, hydrocarbon productivity strongly depends on the pore structure. Effective characterization of pore/throat sizes and reconstruction of their connectivity in tight porous media remains challenging. Having a representative pore throat network, however, is valuable for calculation of other petrophysical properties such as permeability, which is time-consuming and costly to obtain by experimental measurements. Due to a wide range of length scales encountered, a combination of experimental methods is usually required to obtain a comprehensive picture of the pore-body and pore-throat size distributions. In this work, we combine mercury intrusion capillary pressure (MICP) and nuclear magnetic resonance (NMR) measurements by percolation theory to derive pore-body size distribution, following the work by Daigle et al. (2015). However, in their work, the actual pore-throat sizes and the distribution of coordination numbers are not well-defined. To compensate for that, we build a 3D unstructured two-scale pore throat network model initialized by the measured porosity and the calculated pore-body size distributions, with a tunable pore-throat size and coordination number distribution, which we further determine by matching the capillary pressure vs. saturation curve from MICP measurement, based on the fact that the mercury intrusion process is controlled by both the pore/throat size distributions and the connectivity of the pore system. We validate our model by characterizing several core samples from tight Middle East carbonate, and use the network model to predict the apparent permeability of the samples under single phase fluid flow condition. Results show that the permeability we get is in reasonable agreement with the Coreval experimental measurements. The pore throat network we get can be used to further calculate relative permeability curves and simulate multiphase flow behavior, which will provide valuable insights into the production optimization and enhanced oil recovery design.

  10. Microwave-Assisted Synthesis of Silver Vanadium Phosphorus Oxide, Ag2VO2PO4: Crystallite Size Control and Impact on Electrochemistry

    DOE PAGES

    Huang, Jianping; Marschilok, Amy C.; Takeuchi, Esther S.; ...

    2016-03-07

    We study silver vanadium phosphorus oxide, Ag2VO2PO4, which is a promising cathode material for Li batteries due in part to its large capacity and high current capability. Herein, a new synthesis of Ag2VO2PO4 based on microwave heating is presented, where the reaction time is reduced by approximately 100× relative to other reported methods, and the crystallite size is controlled via synthesis temperature, showing a linear correlation of crystallite size with temperature. Notably, under galvanostatic reduction, the Ag2VO2PO4 sample with the smallest crystallite size delivers the highest capacity and shows the highest loaded voltage. Further, pulse discharge tests show a significant resistance decrease during the initial discharge coincident with the formation of Ag metal. Thus, the magnitude of the resistance decrease observed during pulse tests depends on the Ag2VO2PO4 crystallite size, with the largest resistance decrease observed for the smallest crystallite size. Additional electrochemical measurements indicate a quasi-reversible redox reaction involving Li+ insertion/deinsertion, with capacity fade due to structural changes associated with the discharge/charge process. In summary, this work demonstrates a faster synthetic approach for bimetallic polyanionic materials which also provides the opportunity for tuning of electrochemical properties through control of material physical properties such as crystallite size.

  11. An Analytical Model of Tribocharging in Regolith

    NASA Astrophysics Data System (ADS)

    Carter, D. P.; Hartzell, C. M.

    2015-12-01

    Nongravitational forces, including electrostatic forces and cohesion, can drive the behavior of regolith in low gravity environments such as the Moon and asteroids. Regolith is the 'skin' of solid planetary bodies: it is the outer coating that is observed by orbiters and the first material contacted by landers. Triboelectric charging, the phenomenon by which electrical charge accumulates during the collision or rubbing of two surfaces, has been found to occur in initially electrically neutral granular mixtures. Although charge transfer is often attributed to chemical differences between the different materials, charge separation has also been found to occur in mixtures containing grains of a single material, but with a variety of grain sizes. In such cases, the charge always separates according to grain size; typically the smaller grains acquire a more negative charge than the larger grains. Triboelectric charging may occur in a variety of planetary phenomena (including mass wasting and dust storms) as well as during spacecraft-surface interactions (including sample collection and wheel motion). Interactions between charged grains or with the solar wind plasma could produce regolith motion. However, a validated, predictive model of triboelectric charging between dielectric grains has not yet been developed. A model for such size-dependent charge separation will be presented, demonstrating how random collisions between initially electrically neutral grains lead to net migration of electrons toward the smaller grains. The model is applicable to a wide range of single-material granular mixtures, including those with unusual or wildly varying size distributions, and suggests a possible mechanism for the reversal of the usual size-dependent charge polarity described above. This is a significant improvement over existing charge exchange models, which are restricted to two discrete grain sizes and provide severely limited estimates for charge magnitude. We will also discuss the design of an experiment planned to test the charging estimates provided by the model presented and the potential implications for our understanding of regolith behavior.

  12. Retrieving optical constants of glasses with variable iron abundance

    NASA Astrophysics Data System (ADS)

    Carli, C.; Roush, T. L.; Capaccioni, F.; Baraldi, A.

    2013-12-01

    Visible and Near Infrared (VNIR, ~0.4-2.5 μm) spectroscopy is an important tool to explore the surface composition of objects in our Solar System. Using this technique, different minerals have been recognized on the surfaces of solar system bodies. One of the principal products of extrusive volcanism and impact cratering is a glassy component that can be abundant and thus significantly influence the spectral signature of the region investigated. Different types of glasses have been proposed and identified on the lunar surface and in star forming regions near young stellar objects. Here we report an initial effort of retrieving the optical constants of volcanic glasses formed in oxidizing terrestrial-like conditions. We also investigated how those calculations are affected by the grain size distribution. Bidirectional reflectance spectra, obtained with incidence and emission angles of 30° and 0°, respectively, were measured on powders of different grain sizes for four different glassy compositions in the VNIR. Hapke's model of the interaction of light with particulate surfaces was used to determine the imaginary index, k, at each wavelength by iteratively minimizing the difference between measured and calculated reflectance. The basic approach to retrieving the optical constants was to use multiple grain sizes of the same sample and assume all grain sizes are compositionally equivalent. Unless independently known as a function of wavelength, an additional assumption must be made regarding the real index of refraction, n. The median size for each particle size separate was adopted for initially estimating k. Then, iterating the Hapke analysis results with a subtractive Kramers-Kronig analysis, we were able to determine the wavelength dependence of n. For each composition we used the k-values estimated for all the grain sizes to calculate a mean k-value representing that composition. These values were then used to fit the original spectra by only varying the grain sizes. As a separate estimate of the k-values, we will use transmission measurements in the VNIR. Two slabs, with different thicknesses, will be measured for each composition. These data will be used to determine a k value and a comparison between k values obtained from the two different techniques will be discussed.
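
    The per-wavelength retrieval loop described above can be sketched as a simple one-dimensional root find: for each wavelength, adjust k until a forward reflectance model reproduces the measured value. The function model_reflectance below is a hypothetical stand-in for a user-supplied Hapke-type forward model (not implemented here), and the bracketing bounds are assumptions.

      import numpy as np
      from scipy.optimize import brentq

      def retrieve_k(measured_R, wavelengths, grain_size, n_real, model_reflectance,
                     k_lo=1e-8, k_hi=1.0):
          """Estimate the imaginary index k at each wavelength by matching a forward
          reflectance model to the measured bidirectional reflectance.
          `model_reflectance(k, wl, grain_size, n_real)` is a placeholder for a
          Hapke-type forward model supplied by the user."""
          k_est = np.empty(len(measured_R))
          for i, (R, wl) in enumerate(zip(measured_R, wavelengths)):
              # Reflectance typically decreases monotonically with k, so a bracketed
              # root find on (model - measurement) recovers k at this wavelength.
              k_est[i] = brentq(lambda k: model_reflectance(k, wl, grain_size, n_real) - R,
                                k_lo, k_hi)
          return k_est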

  13. Geomorphic field experiment to quantify grain size and biotic influence on riverbed sedimentation dynamics in a dry-season reservoir, Russian River, CA

    NASA Astrophysics Data System (ADS)

    Florsheim, J. L.; Ulrich, C.; Hubbard, S. S.; Borglin, S. E.; Rosenberry, D. O.

    2013-12-01

    An important problem in geomorphology is to differentiate between abiotic and biotic fine sediment deposition on coarse gravel river beds because of the potential for fine sediment to infiltrate and clog the pore space between gravel clasts. Infiltration of fines into gravel substrate is significant because it may reduce permeability; therefore, differentiation of abiotic vs. biotic sediment helps in understanding the causes of such changes. We conducted a geomorphic field experiment during May to November 2012 in the Russian River near Wohler, CA, to quantify biotic influence on riverbed sedimentation in a small temporary reservoir. The reservoir is formed upstream of a small dam inflated during the dry season to enhance water supply pumping from the aquifer below the channel; however, some flow is maintained in the reservoir to facilitate fish outmigration. In the Russian River field area, sediment transport dynamics during storm flows prior to dam inflation created an alternate bar-riffle complex with a coarser gravel surface layer over the relatively finer gravel subsurface. The objective of our work was to link grain size distribution and topographic variation to biotic and abiotic sediment deposition dynamics in this field setting where the summertime dam annually increases flow depth and inundates the bar surfaces. The field experiment investigated fine sediment deposition over the coarser surface sediment on two impounded bars upstream of the reservoir during an approximately five month period when the temporary dam was inflated. The approach included high resolution field surveys of topography, grain size sampling and sediment traps on channel bars, and laboratory analyses of grain size distributions and loss on ignition (LOI) to determine biotic content. Sediment traps were installed at six sites on bars to measure sediment deposited during the period of impoundment. Preliminary results show that fine sediment deposition occurred at all of the sample sites, and is spatially variable--likely influenced by topographic differences that moderate flow over the bars. Traps initially filled with coarse gravel from the bar's surface trapped more fine sediment than traps initially filled with material from the bar's subsurface sediment, suggesting that a gravel bar's armor layer may enhance the source of material available to infiltrate into the channel substrate. LOI analysis indicates that both surface and subsurface samples have organic content ranging between 2 and 4%, following winter storm flows prior to impoundment. In contrast, samples collected after the 5-month impoundment have higher organic content ranging between 5 and 11%. This work aids in differentiating between abiotic and biotic fine sediment deposition in order to understand their relative potential for clogging gravel substrate.

  14. Concentrations of selected constituents in surface-water and streambed-sediment samples collected from streams in and near an area of oil and natural-gas development, south-central Texas, 2011-13

    USGS Publications Warehouse

    Opsahl, Stephen P.; Crow, Cassi L.

    2014-01-01

    During collection of streambed-sediment samples, additional samples from a subset of three sites (the SAR Elmendorf, SAR 72, and SAR McFaddin sites) were processed by using a 63-µm sieve on one aliquot and a 2-mm sieve on a second aliquot for PAH and n-alkane analyses. The purpose of analyzing PAHs and n-alkanes on a sample containing sand, silt, and clay versus a sample containing only silt and clay was to provide data that could be used to determine if these organic constituents had a greater affinity for silt- and clay-sized particles relative to sand-sized particles. The greater concentrations of PAHs in the <63-μm size-fraction samples at all three of these sites are consistent with a greater percentage of binding sites associated with fine-grained (<63 μm) sediment versus coarse-grained (<2 mm) sediment. The larger difference in total PAHs between the <2-mm and <63-μm size-fraction samples at the SAR Elmendorf site might be related to the large percentage of sand in the <2-mm size-fraction sample which was absent in the <63-μm size-fraction sample. In contrast, the <2-mm size-fraction sample collected from the SAR McFaddin site contained very little sand and was similar in particle-size composition to the <63-μm size-fraction sample.

  15. The Superheat Phenomenon in the Combustion of Magnesium Particles

    NASA Technical Reports Server (NTRS)

    Shafirovich, E. IA.; Goldshleger, U. I.

    1992-01-01

    Magnesium is known to be a likely fuel for engines that could work in the CO2 atmospheres of Mars and Venus. The present paper reports temperature measurements of magnesium samples during combustion in CO2. The burning sample temperature increases as the initial sample size decreases. The temperature of the 1-mm samples is 300-400 K higher than the boiling point of magnesium. The stability of the superheated drop is explained by the presence of a porous shell on the surface. An attempt has been made to describe vaporization from the superheated drop by the Knudsen-Langmuir equation. During combustion at high pressure, fragment ejection from the flame is observed in high-speed motion pictures. This phenomenon is shown to be connected with the drop superheat. The repeated fracture of the outer shell formed in the flame ensures the complete burnout of metal particles at high pressure.
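
    For reference, the Knudsen-Langmuir (Hertz-Knudsen) relation referred to above expresses the net evaporation mass flux from the drop surface in terms of the saturation pressure at the surface temperature. The symbols below (evaporation coefficient α, molar mass M, gas constant R, surface temperature T_s, ambient vapor pressure p) are generic, not values taken from this paper:

      % Hertz-Knudsen-Langmuir net evaporation mass flux from the drop surface
      J \;=\; \alpha\,\bigl(p_{\mathrm{sat}}(T_s) - p\bigr)\,\sqrt{\frac{M}{2\pi R T_s}}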

  16. Strategy to obtain axenic cultures from field-collected samples of the cyanobacterium Phormidium animalis.

    PubMed

    Vázquez-Martínez, Guadalupe; Rodriguez, Mario H; Hernández-Hernández, Fidel; Ibarra, Jorge E

    2004-04-01

    An efficient strategy, based on a combination of procedures, was developed to obtain axenic cultures from field-collected samples of the cyanobacterium Phormidium animalis. Samples were initially cultured in solid ASN-10 medium, and a crude separation of major contaminants from P. animalis filaments was achieved by washing in a series of centrifugations and resuspensions in liquid medium. Then, manageable filament fragments were obtained by probe sonication. Fragmentation was followed by forceful washing, using vacuum-driven filtration through an 8-microm pore size membrane and an excess of water. Washed fragments were cultured and treated with a sequential exposure to four different antibiotics. Finally, axenic cultures were obtained from serial dilutions of treated fragments. Monitoring under microscope examination and by inoculation in Luria-Bertani (LB) agar plates indicated either axenicity or the degree of contamination throughout the strategy.

  17. HYPERSAMP - HYPERGEOMETRIC ATTRIBUTE SAMPLING SYSTEM BASED ON RISK AND FRACTION DEFECTIVE

    NASA Technical Reports Server (NTRS)

    De, Salvo L. J.

    1994-01-01

    HYPERSAMP is a demonstration of an attribute sampling system developed to determine the minimum sample size required for any preselected value for consumer's risk and fraction of nonconforming. This statistical method can be used in place of MIL-STD-105E sampling plans when a minimum sample size is desirable, such as when tests are destructive or expensive. HYPERSAMP utilizes the Hypergeometric Distribution and can be used for any fraction nonconforming. The program employs an iterative technique that circumvents the obstacle presented by the factorial of a non-whole number. HYPERSAMP provides the required Hypergeometric sample size for any equivalent real number of nonconformances in the lot or batch under evaluation. Many currently used sampling systems, such as the MIL-STD-105E, utilize the Binomial or the Poisson equations as an estimate of the Hypergeometric when performing inspection by attributes. However, this is primarily because of the difficulty in calculation of the factorials required by the Hypergeometric. Sampling plans based on the Binomial or Poisson equations will result in the maximum sample size possible with the Hypergeometric. The difference in the sample sizes between the Poisson or Binomial and the Hypergeometric can be significant. For example, a lot size of 400 devices with an error rate of 1.0% and a confidence of 99% would require a sample size of 400 (all units would need to be inspected) for the Binomial sampling plan and only 273 for a Hypergeometric sampling plan. The Hypergeometric results in a savings of 127 units, a significant reduction in the required sample size. HYPERSAMP is a demonstration program and is limited to sampling plans with zero defectives in the sample (acceptance number of zero). Since it is only a demonstration program, the sample size determination is limited to sample sizes of 1500 or less. The Hypergeometric Attribute Sampling System demonstration code is a spreadsheet program written for IBM PC compatible computers running DOS and Lotus 1-2-3 or Quattro Pro. This program is distributed on a 5.25 inch 360K MS-DOS format diskette, and the program price includes documentation. This statistical method was developed in 1992.
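
    The zero-acceptance hypergeometric calculation that HYPERSAMP demonstrates is easy to sketch directly; the snippet below searches for the smallest sample size whose probability of containing zero nonconforming units falls below the consumer's risk, and reproduces the 273-unit example quoted above. It is a plain re-implementation of the general idea (with the number of defectives rounded to an integer), not the original spreadsheet code.

      from math import comb, ceil

      def min_sample_size_zero_acceptance(lot_size, fraction_defective, confidence):
          """Smallest n such that drawing zero nonconforming items in the sample gives
          at least `confidence` assurance about the lot, i.e. the hypergeometric
          probability of zero defectives in the sample is <= 1 - confidence."""
          defectives = ceil(fraction_defective * lot_size)
          risk = 1.0 - confidence
          for n in range(1, lot_size + 1):
              p_zero = comb(lot_size - defectives, n) / comb(lot_size, n)
              if p_zero <= risk:
                  return n
          return lot_size

      # Example from the abstract: lot of 400, 1.0% nonconforming, 99% confidence
      print(min_sample_size_zero_acceptance(400, 0.01, 0.99))   # -> 273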

  18. Study samples are too small to produce sufficiently precise reliability coefficients.

    PubMed

    Charter, Richard A

    2003-04-01

    In a survey of journal articles, test manuals, and test critique books, the author found that a mean sample size (N) of 260 participants had been used for reliability studies on 742 tests. The distribution was skewed because the median sample size for the total sample was only 90. The median sample sizes for the internal consistency, retest, and interjudge reliabilities were 182, 64, and 36, respectively. The author presented sample size statistics for the various internal consistency methods and types of tests. In general, the author found that the sample sizes that were used in the internal consistency studies were too small to produce sufficiently precise reliability coefficients, which in turn could cause imprecise estimates of examinee true-score confidence intervals. The results also suggest that larger sample sizes have been used in the last decade compared with those that were used in earlier decades.
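
    To see why the reported sample sizes yield imprecise coefficients, the sketch below computes approximate 95% confidence intervals for a correlation-type reliability coefficient of 0.80 via the Fisher z transformation at several of the sample sizes mentioned above. This is a generic illustration (the transformation strictly applies to Pearson-type coefficients), not a calculation from the article.

      import math

      def reliability_ci(r, n, z=1.96):
          """Approximate confidence interval for a correlation-type coefficient
          using the Fisher z transformation (standard error 1/sqrt(n - 3))."""
          zr = math.atanh(r)
          half_width = z / math.sqrt(n - 3)
          return math.tanh(zr - half_width), math.tanh(zr + half_width)

      for n in (36, 64, 90, 182, 260):
          lo, hi = reliability_ci(0.80, n)
          print(f"n = {n:3d}: 95% CI {lo:.2f} - {hi:.2f}")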

  19. Analysis of Sample Size, Counting Time, and Plot Size from an Avian Point Count Survey on Hoosier National Forest, Indiana

    Treesearch

    Frank R. Thompson; Monica J. Schwalbach

    1995-01-01

    We report results of a point count survey of breeding birds on Hoosier National Forest in Indiana. We determined sample size requirements to detect differences in means and the effects of count duration and plot size on individual detection rates. Sample size requirements ranged from 100 to >1000 points with Type I and II error rates of <0.1 and 0.2. Sample...

  20. Pit initiation on nitinol in simulated physiological solutions.

    PubMed

    Pound, Bruce G

    2018-05-01

    Inclusions appear to play a crucial role in the initiation of pitting on nitinol, but the reason remains unclear. Furthermore, it has not been established whether the type of inclusion is a central factor. In this study, potentiodynamic polarization together with scanning electron microscopy and energy dispersive X-ray spectroscopy were used to provide more insight into the initiation of pits on electropolished nitinol wire. Corrosion was limited to a single primary pit on each of the few wire samples that exhibited breakdown. The pit contained numerous Ti2NiOx inclusions, but secondary pits that developed within the primary pit provided evidence that these inclusions were the sites of pit initiation. Although several theories have been proposed to account for pit initiation at inclusions in mechanically polished and electropolished nitinol, titanium depletion in the adjacent alloy matrix appears to provide the most viable explanation. The key factor appears to be the size of the inclusion and therefore the extent of titanium depletion in the alloy matrix. The type of inclusion evidently plays a secondary role at most. © 2017 Wiley Periodicals, Inc. J Biomed Mater Res Part B: Appl Biomater, 106B: 1605-1610, 2018.

  1. What We Have Learned about Class Size Reduction in California. Capstone Report.

    ERIC Educational Resources Information Center

    Bohrnstedt, George W., Ed.; Stecher, Brian M., Ed.

    This final report on the California Class Size Reduction (CSR) initiative summarizes findings from three earlier reports dating back to 1997. Chapter 1 recaps the history of California's CSR initiative and includes a discussion of what state leaders' expectations were when CSR was passed. The chapter also describes research on class-size reduction…

  2. Price promotions for food and beverage products in a nationwide sample of food stores.

    PubMed

    Powell, Lisa M; Kumanyika, Shiriki K; Isgor, Zeynep; Rimkus, Leah; Zenk, Shannon N; Chaloupka, Frank J

    2016-05-01

    Food and beverage price promotions may be potential targets for public health initiatives but have not been well documented. We assessed prevalence and patterns of price promotions for food and beverage products in a nationwide sample of food stores by store type, product package size, and product healthfulness. We also assessed associations of price promotions with community characteristics and product prices. In-store data collected in 2010-2012 from 8959 food stores in 468 communities spanning 46 U.S. states were used. Differences in the prevalence of price promotions were tested across stores types, product varieties, and product package sizes. Multivariable regression analyses examined associations of presence of price promotions with community racial/ethnic and socioeconomic characteristics and with product prices. The prevalence of price promotions across all 44 products sampled was, on average, 13.4% in supermarkets (ranging from 9.1% for fresh fruits and vegetables to 18.2% for sugar-sweetened beverages), 4.5% in grocery stores (ranging from 2.5% for milk to 6.6% for breads and cereals), and 2.6% in limited service stores (ranging from 1.2% for fresh fruits and vegetables to 4.1% for breads and cereals). No differences were observed by community characteristics. Less-healthy versus more-healthy product varieties and larger versus smaller product package sizes generally had a higher prevalence of price promotion, particularly in supermarkets. On average, in supermarkets, price promotions were associated with 15.2% lower prices. The observed patterns of price promotions warrant more attention in public health food environment research and intervention. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. Infant formula samples: perinatal sources and breast-feeding outcomes at 1 month postpartum.

    PubMed

    Thurston, Amanda; Bolin, Jocelyn H; Chezem, Jo Carol

    2013-01-01

    The purpose was to describe sources of infant formula samples during the perinatal period and assess their associations with breast-feeding outcomes at 1 month postpartum. Subjects included expectant mothers who anticipated breast-feeding at least 1 month. Infant feeding history and sources of formula samples were obtained at 1 month postpartum. Associations between sources and breast-feeding outcomes were assessed using partial correlation. Of the 61 subjects who initiated breast-feeding, most were white (87%), married (75%), college-educated (75%), and planned exclusive breast-feeding (82%). Forty-two subjects (69%) continued breast-feeding at 1 month postpartum. Subjects received formula samples from the hospital (n = 40; 66%), physician's office (n = 10; 16%), and mail (n = 41; 67%). There were no significant correlations between formula samples from the hospital, physician's office, and/or mail and any or exclusive breast-feeding at 1 month (P > .05). In addition to the hospital, a long-standing source of formula samples, mail was also frequently reported as a route for distribution. The lack of statistically significant associations between formula samples and any or exclusive breast-feeding at 1 month may be related to small sample size and unique characteristics of the group studied.

  4. Effects of biochar amendment on geotechnical properties of landfill cover soil.

    PubMed

    Reddy, Krishna R; Yaghoubi, Poupak; Yukselen-Aksoy, Yeliz

    2015-06-01

    Biochar is a carbon-rich product obtained when plant-based biomass is heated in a closed container with little or no available oxygen. Biochar-amended soil has the potential to serve as a landfill cover material that can oxidise methane emissions for two reasons: biochar amendment can increase the methane retention time and also enhance the biological activity that can promote the methanotrophic oxidation of methane. Hydraulic conductivity, compressibility and shear strength are the most important geotechnical properties that are required for the design of effective and stable landfill cover systems, but no studies have been reported on these properties for biochar-amended landfill cover soils. This article presents physicochemical and geotechnical properties of a biochar, a landfill cover soil and biochar-amended soils. Specifically, the effects of amending 5%, 10% and 20% biochar (of different particle sizes as produced, size-20 and size-40) to soil on its physicochemical properties, such as moisture content, organic content, specific gravity and pH, as well as geotechnical properties, such as hydraulic conductivity, compressibility and shear strength, were determined from laboratory testing. Soil or biochar samples were prepared by mixing them with 20% deionised water based on dry weight. Samples of soil amended with 5%, 10% and 20% biochar (w/w) as-is or of different select sizes, were also prepared at 20% initial moisture content. The results show that the hydraulic conductivity of the soil increases, compressibility of the soil decreases and shear strength of the soil increases with an increase in the biochar amendment, and with a decrease in biochar particle size. Overall, the study revealed that biochar-amended soils can possess excellent geotechnical properties to serve as stable landfill cover materials. © The Author(s) 2015.

  5. Synthesis carbon foams prepared from gelatin (CFG) for cadmium ion adsorption

    NASA Astrophysics Data System (ADS)

    Ulfa, M.; Ulfa, D. K.

    2018-01-01

    In this paper, carbon foam from gelatin (CFG) was synthesized by acid-catalyzed carbonization of a gelatin solution under mild conditions using a simple method. Gelatin (Ge) was used as the sacrificial template and carbon source, and sulphuric acid was used as the acid catalyst. The CFG carbon foam sample was characterized by scanning electron microscopy (SEM), nitrogen adsorption-desorption, and FTIR to determine its textural and structural properties. The CFG sample exhibited macroscopic pipe-like channels with pore sizes varying between 30 and 40 μm and surface areas of 60-100 m² g⁻¹. The CFG carbon foams were then tested in adsorption experiments to evaluate their performance in removing Cd(II) ions from aqueous solutions. The adsorption capacity for cadmium was 46.7 mg/g (adsorbent dose 50 mg, initial concentration 50 ppm, contact time 3 h, room temperature, stirring rate 150 rpm), with equilibrium reached at 55 min. The adsorption data fit the Lagergren and the Ho and McKay kinetic equations.
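
    For reference, the two kinetic models named above are usually written as follows (Lagergren pseudo-first-order and Ho-McKay pseudo-second-order, with q_t the uptake at time t, q_e the equilibrium uptake, and k_1, k_2 the rate constants); the record does not report the fitted constants, so the expressions are given in generic form:

      % Lagergren pseudo-first-order model
      \frac{dq_t}{dt} = k_1\,(q_e - q_t) \quad\Longrightarrow\quad \ln(q_e - q_t) = \ln q_e - k_1 t
      % Ho and McKay pseudo-second-order model (linearized form)
      \frac{dq_t}{dt} = k_2\,(q_e - q_t)^2 \quad\Longrightarrow\quad \frac{t}{q_t} = \frac{1}{k_2 q_e^2} + \frac{t}{q_e}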

  6. A Bayesian model for estimating population means using a link-tracing sampling design.

    PubMed

    St Clair, Katherine; O'Connell, Daniel

    2012-03-01

    Link-tracing sampling designs can be used to study human populations that contain "hidden" groups who tend to be linked together by a common social trait. These links can be used to increase the sampling intensity of a hidden domain by tracing links from individuals selected in an initial wave of sampling to additional domain members. Chow and Thompson (2003, Survey Methodology 29, 197-205) derived a Bayesian model to estimate the size or proportion of individuals in the hidden population for certain link-tracing designs. We propose an addition to their model that will allow for the modeling of a quantitative response. We assess properties of our model using a constructed population and a real population of at-risk individuals, both of which contain two domains of hidden and nonhidden individuals. Our results show that our model can produce good point and interval estimates of the population mean and domain means when our population assumptions are satisfied. © 2011, The International Biometric Society.

  7. Outcomes of Nigeria's HIV/AIDS Treatment Program for Patients Initiated on Antiretroviral Treatment between 2004-2012

    PubMed Central

    Odafe, Solomon; Abiri, Oseni; Debem, Henry; Agolory, Simon; Shiraishi, Ray W.; Auld, Andrew F.; Swaminathan, Mahesh; Dokubo, Kainne; Ngige, Evelyn; Asadu, Chukwuemeka; Abatta, Emmanuel; Ellerbrock, Tedd V.

    2016-01-01

    Background The Nigerian Antiretroviral therapy (ART) program started in 2004 and now ranks among the largest in Africa. However, nationally representative data on outcomes have not been reported. Methods We evaluated retrospective cohort data from a nationally representative sample of adults aged ≥15 years who initiated ART during 2004 to 2012. Data were abstracted from 3,496 patient records at 35 sites selected using probability-proportional-to-size (PPS) sampling. Analyses were weighted and controlled for the complex survey design. The main outcome measures were mortality, loss to follow-up (LTFU), and retention (the proportion alive and on ART). Potential predictors of attrition were assessed using competing risk regression models. Results At ART initiation, 66.4 percent (%) were females, median age was 33 years, median weight 56 kg, median CD4 count 161 cells/mm3, and 47.1% had stage III/IV disease. The percentage of patients retained at 12, 24, 36 and 48 months was 81.2%, 74.4%, 67.2%, and 61.7%, respectively. Over 10,088 person-years of ART, mortality, LTFU, and overall attrition (mortality, LTFU, and treatment stop) rates were 1.1 (95% confidence interval (CI): 0.7–1.8), 12.3 (95%CI: 8.9–17.0), and 13.9 (95% CI: 10.4–18.5) per 100 person-years (py) respectively. Highest attrition rates of 55.4/100py were witnessed in the first 3 months on ART. Predictors of LTFU included: lower-than-secondary level education (reference: Tertiary), care in North-East and South-South regions (reference: North-Central), presence of moderate/severe anemia, symptomatic functional status, and baseline weight <45kg. Predictor of mortality was WHO stage higher than stage I. Male sex, severe anemia, and care in a small clinic were associated with both mortality and LTFU. Conclusion Moderate/Advanced HIV disease was predictive of attrition; earlier ART initiation could improve program outcomes. Retention interventions targeting men and those with lower levels of education are needed. Further research to understand geographic and clinic size variations with outcome is warranted. PMID:27829033

  8. A Plan for the Evaluation of California's Class Size Reduction Initiative.

    ERIC Educational Resources Information Center

    Kirst, Michael; Bomstedt, George; Stecher, Brian

    In July 1996, California began its Class Size Reduction (CSR) Initiative. To gauge the effectiveness of this initiative, an analysis of its objectives and an overview of proposed strategies for evaluating CSR are presented here. An outline of the major challenges that stand between CSR and its mission are provided. These include logistical…

  9. 7 CFR 51.1406 - Sample for grade or size determination.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ..., AND STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Sample for Grade Or Size Determination § 51.1406 Sample for grade or size determination. Each sample shall consist of 100 pecans. The...

  10. Characterization of ultra-fine grained aluminum produced by accumulative back extrusion (ABE)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alihosseini, H., E-mail: hamid.alihossieni@gmail.com; Materials Science and Engineering Department, Engineering School, Amirkabir University, Tehran; Faraji, G.

    2012-06-15

    In the present work, the microstructural evolutions and microhardness of AA1050 subjected to one, two and three passes of accumulative back extrusion (ABE) were investigated. The microstructural evolutions were characterized using transmission electron microscopy. The results revealed that applying three passes of accumulative back extrusion led to significant grain refinement. The initial grain size of 47 μm was refined to grains of 500 nm after three passes of ABE. Increasing the number of passes resulted in a further decrease in grain size, better microstructural homogeneity, and an increase in microhardness. The cross-section of the ABEed specimen consisted of two different zones: (i) a shear deformation zone, and (ii) a normal deformation zone. The microhardness measurements indicated that the hardness increased from the initial value of 31 Hv to 67 Hv, verifying the significant microstructural refinement via accumulative back extrusion. Highlights: A significant grain refinement can be achieved in the AA1050 Al alloy by applying ABE. Microstructural homogeneity of ABEed samples increased with the number of ABE cycles. A substantial increase in the hardness, from 31 Hv to 67 Hv, was recorded.

  11. Investigation of the Iterative Phase Retrieval Algorithm for Interferometric Applications

    NASA Astrophysics Data System (ADS)

    Gombkötő, Balázs; Kornis, János

    2010-04-01

    Sequentially recorded intensity patterns reflected from a coherently illuminated diffuse object can be used to reconstruct the complex amplitude of the scattered beam. Several iterative phase retrieval algorithms are known in the literature to obtain the initially unknown phase from these longitudinally displaced intensity patterns. When two sequences are recorded in two different states of a centimeter-sized object in optical setups similar to digital holographic interferometry (but omitting the reference wave), displacement, deformation, or shape measurement is theoretically possible. To do this, the retrieved phase pattern should contain information not only about the intensities and locations of the point sources of the object surface, but their relative phase as well. Not only do experiments require strict mechanical precision to record useful data, but even in simulations several parameters influence the capabilities of iterative phase retrieval, such as the object-to-camera distance range, uniform or varying camera step sequence, speckle field characteristics, and sampling. Experiments were done to demonstrate this principle with a deformable object as large as 5×5 cm as well. Good initial results were obtained in an imaging setup, where the intensity pattern sequences were recorded near the image plane.
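
    The multi-plane iterative phase retrieval idea described above can be sketched as a Gerchberg-Saxton-type loop: propagate a trial field between the measurement planes with the angular spectrum method and, at each plane, replace the computed amplitude with the measured one while keeping the computed phase. This is a generic sketch of the technique, not the authors' specific algorithm; the sampling step dx, wavelength, and plane positions are user-supplied assumptions.

      import numpy as np

      def angular_spectrum(field, wavelength, dx, dz):
          """Propagate a sampled complex field over a distance dz (angular spectrum method)."""
          ny, nx = field.shape
          fx = np.fft.fftfreq(nx, dx)
          fy = np.fft.fftfreq(ny, dx)
          FX, FY = np.meshgrid(fx, fy)
          arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
          kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
          H = np.exp(1j * kz * dz) * (arg > 0)      # evanescent components suppressed
          return np.fft.ifft2(np.fft.fft2(field) * H)

      def multiplane_phase_retrieval(intensities, z_positions, wavelength, dx, n_iter=50):
          """Recover the complex field at the first plane from longitudinally displaced
          intensity recordings by cycling through the planes and enforcing the measured
          amplitudes at each one (Gerchberg-Saxton-type iteration)."""
          field = np.sqrt(intensities[0]).astype(complex)     # start with a flat phase
          for _ in range(n_iter):
              for j in range(1, len(intensities)):
                  field = angular_spectrum(field, wavelength, dx,
                                           z_positions[j] - z_positions[j - 1])
                  field = np.sqrt(intensities[j]) * np.exp(1j * np.angle(field))
              # return to the first plane and re-impose its measured amplitude
              field = angular_spectrum(field, wavelength, dx,
                                       z_positions[0] - z_positions[-1])
              field = np.sqrt(intensities[0]) * np.exp(1j * np.angle(field))
          return field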

  12. The dynamic failure behavior of tungsten heavy alloys subjected to transverse loads

    NASA Astrophysics Data System (ADS)

    Tarcza, Kenneth Robert

    Tungsten heavy alloys (WHA), a category of particulate composites used in defense applications as kinetic energy penetrators, have been studied for many years. Even so, their dynamic failure behavior is not fully understood and cannot be predicted by numerical models presently in use. In this experimental investigation, a comprehensive understanding of the high-rate transverse-loading fracture behavior of WHA has been developed. Dynamic fracture events spanning a range of strain rates and loading conditions were created via mechanical testing and used to determine the influence of surface condition and microstructure on damage initiation, accumulation, and sample failure under different loading conditions. Using standard scanning electron microscopy metallographic and fractographic techniques, sample surface condition is shown to be extremely influential to the manner in which WHA fails, causing a fundamental change from externally to internally nucleated failures as surface condition is improved. Surface condition is characterized using electron microscopy and surface profilometry. Fracture surface analysis is conducted using electron microscopy, and linear elastic fracture mechanics is used to understand the influence of surface condition, specifically initial flaw size, on sample failure behavior. Loading conditions leading to failure are deduced from numerical modeling and experimental observation. The results highlight parameters and considerations critical to the understanding of dynamic WHA fracture and the development of dynamic WHA failure models.
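
    The linear elastic fracture mechanics argument referred to above can be summarized by the standard stress-intensity relation; the geometry factor Y, applied stress σ, flaw depth a, and fracture toughness K_IC are generic symbols rather than values reported in this work:

      % Stress intensity for a surface flaw of depth a under applied stress sigma
      K_I = Y\,\sigma\,\sqrt{\pi a}, \qquad \text{failure when } K_I \ge K_{IC}
      % Equivalently, the critical initial flaw size at a given applied stress
      a_c = \frac{1}{\pi}\left(\frac{K_{IC}}{Y\,\sigma}\right)^{2}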

  13. Modeling initiation trains based on HMX and TATB

    NASA Astrophysics Data System (ADS)

    Drake, R. C.; Maisey, M.

    2017-01-01

    There will always be a requirement to reduce the size of initiation trains. However, as the size is reduced the performance characteristics can be compromised. A detailed science-based understanding of the processes (ignition and growth to detonation) which determine the performance characteristics is required to enable compact and robust initiation trains to be designed. To assess the use of numerical models in the design of initiation trains a modeling study has been undertaken, with the aim of understanding the initiation of TATB and HMX charges by a confined, surface mounted detonator. The effect of detonator diameter and detonator confinement on the formation of dead zones in the acceptor explosives has been studied. The size of dead zones can be reduced by increasing the diameter of the detonator and by increasing the impedance of the confinement. The implications for the design of initiation trains are discussed.

  14. Growth of rutile TiO2 on the convex surface of nanocylinders: from nanoneedles to nanorods and their electrochemical properties

    NASA Astrophysics Data System (ADS)

    Kong, Junhua; Wei, Yuefan; Zhao, Chenyang; Toh, Meng Yew; Yee, Wu Aik; Zhou, Dan; Phua, Si Lei; Dong, Yuliang; Lu, Xuehong

    2014-03-01

    In this work, bundles of rutile TiO2 nanoneedles/nanorods are hydrothermally grown on carbon nanofibers (CNFs), forming free-standing mats consisting of three dimensional hierarchical nanostructures (TiO2-on-CNFs). Morphologies and structures of the TiO2-on-CNFs are studied using a field-emission scanning electron microscope (FESEM), transmission electron microscope (TEM), X-ray diffractometer (XRD) and thermogravimetric analyzer (TGA). Their electrochemical properties as electrodes in lithium ion batteries (LIBs) are investigated and correlated with the morphologies and structures. It is shown that the lateral size of the TiO2 nanoneedles/nanorods ranges from a few nanometers to tens of nanometers, and increases with the hydrothermal temperature. Small interspaces are observed between individual nanoneedles/nanorods, which are due to the diverging arrangement of nanoneedles/nanorods induced by growing on the convex surface of nanocylinders. It is found that the growth process can be divided into two stages: initial growth on the CNF surface and further growth upon re-nucleation on the TiO2 bundles formed in the initial growth stage. In order to achieve good electrochemical performance in LIBs, the size of the TiO2 nanostructures needs to be small enough to ensure complete alloying and fast charge transport, while the further growth stage has to be avoided to realize direct attachment of TiO2 nanostructures on the CNFs, facilitating electron transport. The sample obtained after hydrothermal treatment at 130 °C for 2 h (TiO2-130-2) shows the above features and hence exhibits the best cyclability and rate capacity among all samples; the cyclability and rate capacity of TiO2-130-2 are also superior to those of other rutile TiO2-based LIB electrodes.

  15. The quality of the reported sample size calculations in randomized controlled trials indexed in PubMed.

    PubMed

    Lee, Paul H; Tse, Andy C Y

    2017-05-01

    There are limited data on the quality of reporting of information essential for replication of the calculation as well as the accuracy of the sample size calculation. We examine the current quality of reporting of the sample size calculation in randomized controlled trials (RCTs) published in PubMed and examine the variation in reporting across study design, study characteristics, and journal impact factor. We also reviewed the targeted sample size reported in trial registries. We reviewed and analyzed all RCTs published in December 2014 in journals indexed in PubMed. The 2014 Impact Factors for the journals were used as proxies for their quality. Of the 451 analyzed papers, 58.1% reported an a priori sample size calculation. Nearly all papers provided the level of significance (97.7%) and desired power (96.6%), and most of the papers reported the minimum clinically important effect size (73.3%). The median (interquartile range) percentage difference between the reported and recalculated sample sizes was 0.0% (IQR -4.6% to 3.0%). The accuracy of the reported sample size was better for studies published in journals that endorsed the CONSORT statement and journals with an impact factor. A total of 98 papers had provided a targeted sample size on trial registries and about two-thirds of these papers (n=62) reported a sample size calculation, but only 25 (40.3%) had no discrepancy with the reported number in the trial registries. The reporting of the sample size calculation in RCTs published in PubMed-indexed journals and trial registries was poor. The CONSORT statement should be more widely endorsed. Copyright © 2016 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.
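
    For context, the calculation these trials are expected to report is typically the standard two-group comparison of means; the sketch below uses the usual normal-approximation formula with a significance level, power, assumed SD, and minimum clinically important difference. The example numbers are hypothetical.

      import math
      from scipy.stats import norm

      def n_per_group(delta, sd, alpha=0.05, power=0.80):
          """Normal-approximation sample size per group for comparing two means:
          n = 2 * ((z_{1-alpha/2} + z_{power}) * sd / delta)^2."""
          z_a = norm.ppf(1 - alpha / 2)
          z_b = norm.ppf(power)
          return math.ceil(2 * ((z_a + z_b) * sd / delta) ** 2)

      # Hypothetical example: detect a 5-unit difference, SD 10, two-sided 5% alpha, 80% power
      print(n_per_group(delta=5, sd=10))   # -> 63 per group, before any dropout inflation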

  16. Distribution of the two-sample t-test statistic following blinded sample size re-estimation.

    PubMed

    Lu, Kaifeng

    2016-05-01

    We consider the blinded sample size re-estimation based on the simple one-sample variance estimator at an interim analysis. We characterize the exact distribution of the standard two-sample t-test statistic at the final analysis. We describe a simulation algorithm for the evaluation of the probability of rejecting the null hypothesis at a given treatment effect. We compare the blinded sample size re-estimation method with two unblinded methods with respect to the empirical type I error, the empirical power, and the empirical distribution of the standard deviation estimator and final sample size. We characterize the type I error inflation across the range of standardized non-inferiority margins for non-inferiority trials, and derive the adjusted significance level to ensure type I error control for a given sample size of the internal pilot study. We show that the adjusted significance level increases as the sample size of the internal pilot study increases. Copyright © 2016 John Wiley & Sons, Ltd.
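
    The blinded re-estimation step being analysed can be sketched as follows: at the interim look, the pooled (one-sample) standard deviation of all outcomes, computed without using treatment labels, replaces the planning SD in the usual sample size formula. This is a minimal sketch of the general procedure under hypothetical example values, not the paper's exact distributional results or adjusted significance levels.

      import numpy as np
      from scipy.stats import norm

      def blinded_reestimated_n(pooled_interim, delta, alpha=0.05, power=0.80, n_min=0):
          """Re-estimate the per-group sample size using the blinded one-sample SD of
          the pooled interim data (treatment labels are not used). Note that this SD
          is biased upward by the treatment effect, which is inherent to blinding."""
          sd_blinded = np.std(pooled_interim, ddof=1)
          z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
          n = int(np.ceil(2 * (z * sd_blinded / delta) ** 2))
          return max(n, n_min)

      # Hypothetical internal pilot: 30 observations per arm, true SD 10, true effect 5
      rng = np.random.default_rng(1)
      pilot = np.concatenate([rng.normal(0.0, 10, 30), rng.normal(5.0, 10, 30)])
      print(blinded_reestimated_n(pilot, delta=5, n_min=60))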

  17. ENHANCEMENT OF LEARNING ON SAMPLE SIZE CALCULATION WITH A SMARTPHONE APPLICATION: A CLUSTER-RANDOMIZED CONTROLLED TRIAL.

    PubMed

    Ngamjarus, Chetta; Chongsuvivatwong, Virasakdi; McNeil, Edward; Holling, Heinz

    2017-01-01

    Sample size determination is usually taught on a theoretical basis and is difficult to understand. Using a smartphone application to teach sample size calculation ought to be more attractive to students than using lectures only. This study compared levels of understanding of sample size calculation between participants attending a lecture only and those attending a lecture combined with a smartphone application for calculating sample sizes, explored factors affecting post-test scores after the training, and investigated participants' attitudes toward the sample size application. A cluster-randomized controlled trial involving a number of health institutes in Thailand was carried out from October 2014 to March 2015. A total of 673 professional participants were enrolled and randomly allocated to one of two groups: 341 participants in 10 workshops to the control group and 332 participants in 9 workshops to the intervention group. Lectures on sample size calculation were given to the control group, while lectures supplemented with a smartphone application were given to the intervention group. Participants in the intervention group showed better learning of sample size calculation (2.7 points out of a maximum of 10 points, 95% CI: 2.4 - 2.9) than participants in the control group (1.6 points, 95% CI: 1.4 - 1.8). Participants conducting research projects had a higher post-test score than those who did not plan to conduct research projects (0.9 point, 95% CI: 0.5 - 1.4). The majority of the participants had a positive attitude towards the use of a smartphone application for learning sample size calculation.

  18. Thermal conductivity measurements of particulate materials under Martian conditions

    NASA Technical Reports Server (NTRS)

    Presley, M. A.; Christensen, P. R.

    1993-01-01

    The mean particle diameter of surficial units on Mars has been approximated by applying thermal inertia determinations from the Mariner 9 Infrared Radiometer and the Viking Infrared Thermal Mapper data together with thermal conductivity measurements. Several studies have used this approximation to characterize surficial units and infer their nature and possible origin. Such interpretations are possible because previous measurements of the thermal conductivity of particulate materials have shown that particle size significantly affects thermal conductivity under martian atmospheric pressures. The transfer of thermal energy due to collisions of gas molecules is the predominant mechanism of thermal conductivity in porous systems for gas pressures above about 0.01 torr. At martian atmospheric pressures the mean free path of the gas molecules becomes greater than the effective distance over which conduction takes place between the particles. Gas particles are then more likely to collide with the solid particles than they are with each other. The average heat transfer distance between particles, which is related to particle size, shape and packing, thus determines how fast heat will flow through a particulate material. The derived one-to-one correspondence of thermal inertia to mean particle diameter implies a certain homogeneity in the materials analyzed. Yet the samples used were often characterized by fairly wide ranges of particle sizes with little information about the possible distribution of sizes within those ranges. Interpretation of thermal inertia data is further limited by the lack of data on other effects on the interparticle spacing relative to particle size, such as particle shape, bimodal or polymodal mixtures of grain sizes and formation of salt cements between grains. To address these limitations and to provide a more comprehensive set of thermal conductivities vs. particle size, a linear heat source apparatus, similar to that of Cremers, was assembled to provide a means of measuring the thermal conductivity of particulate samples. In order to concentrate on the dependence of the thermal conductivity on particle size, initial runs will use spherical glass beads that are precision sieved into relatively small size ranges and thoroughly washed.
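
    The link drawn here between thermal inertia and particle size runs through the thermal conductivity k, since thermal inertia is defined as I = sqrt(k * rho * c). A minimal sketch evaluating that definition for hypothetical regolith values (illustrative numbers only, not measurements from this work):

    ```python
    import math

    def thermal_inertia(k, rho, c):
        """Thermal inertia I = sqrt(k * rho * c) in SI units (J m^-2 K^-1 s^-1/2).

        k   : thermal conductivity, W m^-1 K^-1
        rho : bulk density, kg m^-3
        c   : specific heat capacity, J kg^-1 K^-1
        """
        return math.sqrt(k * rho * c)

    # Hypothetical fine particulate regolith: low conductivity under a thin atmosphere.
    print(thermal_inertia(k=0.01, rho=1300.0, c=800.0))   # ~ 102
    # Coarser material conducts better, so the inferred thermal inertia rises.
    print(thermal_inertia(k=0.05, rho=1600.0, c=800.0))   # ~ 253
    ```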

  19. Simulated space weathering of Fe- and Mg-rich aqueously altered minerals using pulsed laser irradiation

    NASA Astrophysics Data System (ADS)

    Kaluna, H. M.; Ishii, H. A.; Bradley, J. P.; Gillis-Davis, J. J.; Lucey, P. G.

    2017-08-01

    Simulated space weathering experiments on volatile-rich carbonaceous chondrites (CCs) have resulted in contrasting spectral behaviors (e.g. reddening vs bluing). The aim of this work is to investigate the origin of these contrasting trends by simulating space weathering on a subset of minerals found in these meteorites. We use pulsed laser irradiation to simulate micrometeorite impacts on aqueously altered minerals and observe their spectral and physical evolution as a function of irradiation time. Irradiation of the mineral lizardite, a Mg-phyllosilicate, produces a small degree of reddening and darkening, but a pronounced reduction in band depths with increasing irradiation. In comparison, irradiation of an Fe-rich aqueously altered mineral assemblage composed of cronstedtite, pyrite and siderite, produces significant darkening and band depth suppression. The spectral slopes of the Fe-rich assemblage initially redden then become bluer with increasing irradiation time. Post-irradiation analyses of the Fe-rich assemblage using scanning and transmission electron microscopy reveal the presence of micron sized carbon-rich particles that contain notable fractions of nitrogen and oxygen. Radiative transfer modeling of the Fe-rich assemblage suggests that nanometer sized metallic iron (npFe0) particles result in the initial spectral reddening of the samples, but the increasing production of micron sized carbon particles (μpC) results in the subsequent spectral bluing. The presence of npFe0 and the possible catalytic nature of cronstedtite, an Fe-rich phyllosilicate, likely promotes the synthesis of these carbon-rich, organic-like compounds. These experiments indicate that space weathering processes may enable organic synthesis reactions on the surfaces of volatile-rich asteroids. Furthermore, Mg-rich and Fe-rich aqueously altered minerals are dominant at different phases of the aqueous alteration process. Thus, the contrasting spectral slope evolution between the Fe- and Mg-rich samples in these experiments may indicate that space weathering trends of volatile-rich asteroids have a compositional dependency that could be used to determine the aqueous histories of asteroid parent bodies.

  20. Selected engineering properties and applications of EPS geofoam

    NASA Astrophysics Data System (ADS)

    Elragi, Ahmed Fouad

    Expanded polystyrene (EPS) geofoam is a lightweight material that has been used in engineering applications since at least the 1950s. Its density is about a hundredth of that of soil. It has good thermal insulation properties with stiffness and compression strength comparable to medium clay. It is utilized in reducing settlement below embankments, sound and vibration damping, reducing lateral pressure on substructures, reducing stresses on rigid buried conduits and related applications. This study starts with an overview of EPS geofoam. EPS manufacturing processes are described, followed by a review of engineering properties reported in previous research. Standards and design manuals applicable to EPS are presented. Selected EPS geofoam engineering applications are discussed with examples. State-of-the-art experimental work was performed on different sizes of EPS specimens under different loading rates to better understand the behavior of the material. The effects of creep, sample size, strain rate and cyclic loading on the stress-strain response are studied. Equations for the initial modulus and the strength of the material under compression for different strain rates are presented. The initial modulus and Poisson's ratio are discussed in detail. The effect of sample size on creep behavior is examined. Three EPS projects are shown in this study. The creep behavior of the largest EPS geofoam embankment fill is shown. Results from laboratory tests, mathematical modeling and field records are compared to each other. Field records of a geofoam-stabilized slope are compared to finite difference analysis results. Lateral stress reduction on an EPS backfill retaining structure is analyzed. The study ends with a discussion on two promising properties of EPS geofoam: the damping ability and the compressibility of this material. Finite element analysis, finite difference analysis and lab results are included in this discussion. Together with the rest of the study, this discussion points towards the main conclusion that EPS geofoam is a promising material for a wide range of civil engineering applications.

  1. Developing the Noncentrality Parameter for Calculating Group Sample Sizes in Heterogeneous Analysis of Variance

    ERIC Educational Resources Information Center

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2011-01-01

    Sample size determination is an important issue in planning research. In the context of one-way fixed-effect analysis of variance, the conventional sample size formula cannot be applied for the heterogeneous variance cases. This study discusses the sample size requirement for the Welch test in the one-way fixed-effect analysis of variance with…

  2. Sample Size Determination for Regression Models Using Monte Carlo Methods in R

    ERIC Educational Resources Information Center

    Beaujean, A. Alexander

    2014-01-01

    A common question asked by researchers using regression models is, What sample size is needed for my study? While there are formulae to estimate sample sizes, their assumptions are often not met in the collected data. A more realistic approach to sample size determination requires more information such as the model of interest, strength of the…
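
    The Monte Carlo approach this entry describes (implemented by its author in R) amounts to simulating data from the model of interest, estimating empirical power for candidate sample sizes, and keeping the smallest n that reaches the target power. A hedged Python sketch of that general idea, with hypothetical model parameters rather than the article's examples:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    def empirical_power(n, slope=0.3, sigma=1.0, alpha=0.05, reps=2000):
        """Monte Carlo power for detecting a simple regression slope with n observations."""
        hits = 0
        for _ in range(reps):
            x = rng.normal(size=n)
            y = slope * x + rng.normal(scale=sigma, size=n)
            result = stats.linregress(x, y)
            hits += result.pvalue < alpha
        return hits / reps

    # Scan candidate sample sizes and keep the smallest one reaching 80% power.
    for n in (40, 60, 80, 100, 120):
        p = empirical_power(n)
        print(n, round(p, 2))
        if p >= 0.80:
            break
    ```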

  3. Nomogram for sample size calculation on a straightforward basis for the kappa statistic.

    PubMed

    Hong, Hyunsook; Choi, Yunhee; Hahn, Seokyung; Park, Sue Kyung; Park, Byung-Joo

    2014-09-01

    Kappa is a widely used measure of agreement. However, it may not be straightforward in some situations, such as sample size calculation, due to the kappa paradox: high agreement but low kappa. Hence, it seems reasonable in sample size calculation to consider the level of agreement under a certain marginal prevalence in terms of a simple proportion of agreement rather than a kappa value. Therefore, sample size formulae and nomograms using a simple proportion of agreement rather than a kappa under certain marginal prevalences are proposed. A sample size formula was derived using the kappa statistic under the common correlation model and a goodness-of-fit statistic. The nomogram for the sample size formula was developed using SAS 9.3. Sample size formulae using a simple proportion of agreement instead of a kappa statistic were produced, along with nomograms that eliminate the inconvenience of using a mathematical formula. A nomogram for sample size calculation with a simple proportion of agreement should be useful in the planning stages when the focus of interest is on testing the hypothesis of interobserver agreement involving two raters and nominal outcome measures. Copyright © 2014 Elsevier Inc. All rights reserved.

  4. Population ecology of breeding Pacific common eiders on the Yukon-Kuskokwim Delta, Alaska

    USGS Publications Warehouse

    Wilson, Heather M.; Flint, Paul L.; Powell, Abby N.; Grand, J. Barry; Moral, Christine L.

    2012-01-01

    Populations of Pacific common eiders (Somateria mollissima v-nigrum) on the Yukon-Kuskokwim Delta (YKD) in western Alaska declined by 50–90% from 1957 to 1992 and then stabilized at reduced numbers from the early 1990s to the present. We investigated the underlying processes affecting their population dynamics by collection and analysis of demographic data from Pacific common eiders at 3 sites on the YKD (1991–2004) for 29 site-years. We examined variation in components of reproduction, tested hypotheses about the influence of specific ecological factors on life-history variables, and investigated their relative contributions to local population dynamics. Reproductive output was low and variable, both within and among individuals, whereas apparent survival of adult females was high and relatively invariant (0.89 ± 0.005). All reproductive parameters varied across study sites and years. Clutch initiation dates ranged from 4 May to 28 June, with peak (modal) initiation occurring on 26 May. Females at an island study site consistently initiated clutches 3–5 days earlier in each year than those on 2 mainland sites. Population variance in nest initiation date was negatively related to the peak, suggesting increased synchrony in years of delayed initiation. On average, total clutch size (laid) ranged from 4.8 to 6.6 eggs, and declined with date of nest initiation. After accounting for partial predation and non-viability of eggs, average clutch size at hatch ranged from 2.0 to 5.8 eggs. Within seasons, daily survival probability (DSP) of nests was lowest during egg-laying and late-initiation dates. Estimated nest survival varied considerably across sites and years (mean = 0.55, range: 0.06–0.92), but process variance in nest survival was relatively low (0.02, CI: 0.01–0.05), indicating that most variance was likely attributed to sampling error. We found evidence that observer effects may have reduced overall nest survival by 0.0–0.36 across site-years. Study sites with lower sample sizes and more frequent visitations appeared to experience greater observer effects. In general, Pacific common eiders exhibited high spatio-temporal variance in reproductive components. Larger clutch sizes and high nest survival at early initiation dates suggested directional selection favoring early nesting. However, stochastic environmental effects may have precluded response to this apparent selection pressure. Our results suggest that females breeding early in the season have the greatest reproductive value, as these birds lay the largest clutches and have the highest probability of successfully hatching. We developed stochastic, stage-based, matrix population models that incorporated observed spatio-temporal (process) variance and co-variation in vital rates, and projected the stable stage distribution and population growth rate (λ). We used perturbation analyses to examine the relative influence of changes in vital rates on λ and variance decomposition to assess the proportion of variation in λ explained by process variation in each vital rate. In addition to matrix-based λ, we estimated λ using capture–recapture approaches, and log-linear regression. We found the stable age distribution for Pacific common eiders was weighted heavily towards experienced adult females (≥4 yr of age), and all calculations of λ indicated that the YKD population was stable to slightly increasing (λmatrix = 1.02, CI: 1.00–1.04; λreverse-capture–recapture = 1.05, CI: 0.99–1.11; λlog-linear = 1.04, CI: 0.98–1.10).
Perturbation analyses suggested the population would respond most dramatically to changes in adult female survival (relative influence of adult survival was 1.5 times that of fecundity), whereas retrospective variation in λ was primarily explained by fecundity parameters (60%), particularly duckling survival (42%). Among components of fecundity, sensitivities were highest for duckling survival, suggesti
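
    The stage-based matrix projection used above boils down to finding the dominant eigenvalue of a projection matrix (λ) and its associated right eigenvector (the stable stage distribution). The sketch below shows that computation on a small hypothetical female-only matrix; the vital rates are invented for illustration and are not the study's estimates:

    ```python
    import numpy as np

    # Hypothetical 3-stage (juvenile, subadult, adult) female-only projection matrix:
    # entry (i, j) = contribution of stage j this year to stage i next year.
    A = np.array([
        [0.00, 0.20, 0.60],   # fecundity (female offspring per female)
        [0.30, 0.00, 0.00],   # juvenile -> subadult survival
        [0.00, 0.70, 0.89],   # subadult -> adult transition and adult survival
    ])

    eigvals, eigvecs = np.linalg.eig(A)
    dominant = np.argmax(eigvals.real)

    lam = eigvals[dominant].real                 # asymptotic population growth rate
    w = np.abs(eigvecs[:, dominant].real)
    stable_stage = w / w.sum()                   # stable stage distribution

    print(f"lambda = {lam:.3f}")
    print("stable stage distribution:", np.round(stable_stage, 3))
    ```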

  5. Sample size calculation in cost-effectiveness cluster randomized trials: optimal and maximin approaches.

    PubMed

    Manju, Md Abu; Candel, Math J J M; Berger, Martijn P F

    2014-07-10

    In this paper, the optimal sample sizes at the cluster and person levels for each of two treatment arms are obtained for cluster randomized trials where the cost-effectiveness of treatments on a continuous scale is studied. The optimal sample sizes maximize the efficiency or power for a given budget or minimize the budget for a given efficiency or power. Optimal sample sizes require information on the intra-cluster correlations (ICCs) for effects and costs, the correlations between costs and effects at individual and cluster levels, the ratio of the variance of effects translated into costs to the variance of the costs (the variance ratio), sampling and measuring costs, and the budget. When planning a study, information on the model parameters is usually not available. To overcome this local optimality problem, the current paper also presents maximin sample sizes. The maximin sample sizes turn out to be rather robust against misspecifying the correlation between costs and effects at the cluster and individual levels but may lose much efficiency when misspecifying the variance ratio. The robustness of the maximin sample sizes against misspecifying the ICCs depends on the variance ratio. The maximin sample sizes are robust under misspecification of the ICC for costs for realistic values of the variance ratio greater than one but not robust under misspecification of the ICC for effects. Finally, we show how to calculate optimal or maximin sample sizes that yield sufficient power for a test on the cost-effectiveness of an intervention.
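
    As background to the cluster-level calculations discussed here, the sketch below shows only the standard design-effect inflation for a cluster randomized trial, not the paper's optimal or maximin procedure; all inputs (effect size, ICC, cluster size) are hypothetical:

    ```python
    from math import ceil
    from scipy.stats import norm

    def cluster_rct_sample_size(delta, sigma, icc, cluster_size,
                                alpha=0.05, power=0.80):
        """Per-arm subjects and clusters for a cluster RCT via the design effect."""
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        n_individual = 2 * (sigma / delta) ** 2 * z ** 2   # individually randomized n
        deff = 1 + (cluster_size - 1) * icc                # design effect
        n_per_arm = ceil(n_individual * deff)
        k_clusters = ceil(n_per_arm / cluster_size)
        return n_per_arm, k_clusters

    # Hypothetical: detect a 0.3 SD difference, ICC = 0.05, 20 persons per cluster.
    print(cluster_rct_sample_size(delta=0.3, sigma=1.0, icc=0.05, cluster_size=20))
    ```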

  6. Chemical consequences of the initial diffusional growth of cloud droplets - A clean marine case

    NASA Technical Reports Server (NTRS)

    Twohy, C. H.; Charlson, R. J.; Austin, P. H.

    1989-01-01

    A simple microphysical cloud parcel model and a simple representation of the background marine aerosol are used to predict the concentrations and compositions of droplets of various sizes near cloud base. The aerosol consists of an externally-mixed ammonium bisulfate accumulation mode and a sea-salt coarse particle mode. The difference in diffusional growth rates between the small and large droplets as well as the differences in composition between the two aerosol modes result in substantial differences in solute concentration and composition with size of droplets in the parcel. The chemistry of individual droplets is not, in general, representative of the bulk (volume-weighted mean) cloud water sample. These differences, calculated to occur early in the parcel's lifetime, should have important consequences for chemical reactions such as aqueous phase sulfate production.

  7. Determining suspended sediment particle size information from acoustical and optical backscatter measurements

    NASA Astrophysics Data System (ADS)

    Lynch, James F.; Irish, James D.; Sherwood, Christopher R.; Agrawal, Yogesh C.

    1994-08-01

    During the winter of 1990-1991 an Acoustic BackScatter System (ABSS), five Optical Backscatterance Sensors (OBSs) and a Laser In Situ Settling Tube (LISST) were deployed in 90 m of water off the California coast for 3 months as part of the Sediment Transport Events on Shelves and Slopes (STRESS) experiment. By looking at sediment transport events with both optical (OBS) and acoustic (ABSS) sensors, one obtains information about the size of the particles transported as well as their concentration. Specifically, we employ two different methods of estimating "average particle size". First, we use vertical scattering intensity profile slopes (acoustical and optical) to infer average particle size using a Rouse profile model of the boundary layer and a Stokes law fall velocity assumption. Secondly, we use a combination of optics and acoustics to form a multifrequency (two frequency) inverse for the average particle size. These results are compared to independent observations from the LISST instrument, which measures the particle size spectrum in situ using laser diffraction techniques. Rouse profile based inversions for particle size are found to be in good agreement with the LISST results except during periods of transport event initiation, when the Rouse profile is not expected to be valid. The two frequency inverse, which is boundary layer model independent, worked reasonably during all periods, with average particle sizes correlating well with the LISST estimates. In order to further corroborate the particle size inverses from the acoustical and optical instruments, we also examined size spectra obtained from in situ sediment grab samples and water column samples (suspended sediments), as well as laboratory tank experiments using STRESS sediments. Again, good agreement is noted. The laboratory tank experiment also allowed us to study the acoustical and optical scattering law characteristics of the STRESS sediments. It is seen that, for optics, using the cross sectional area of an equivalent sphere is a very good first approximation whereas for acoustics, which is most sensitive in the region ka ˜ 1, the particle volume itself is best sensed. In concluding, we briefly interpret the history of some STRESS transport events in light of the size distribution and other information available. For one of the events "anomalous" suspended particle size distributions are noted, i.e. larger particles are seen suspended before finer ones. Speculative hypotheses for why this signature is observed are presented.
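
    The Rouse-profile inversion mentioned above relies on a Stokes-law fall velocity for the suspended grains. A minimal sketch of that piece alone (the classical Stokes formula, valid only at small particle Reynolds numbers; the densities, viscosity, and grain sizes below are illustrative values, not STRESS data):

    ```python
    def stokes_settling_velocity(d, rho_s=2650.0, rho_f=1025.0, mu=1.4e-3, g=9.81):
        """Stokes fall velocity (m/s) for a sphere of diameter d (m).

        rho_s, rho_f : grain and fluid densities (kg/m^3); mu : dynamic viscosity (Pa s).
        Valid for small particle Reynolds numbers (roughly d below ~100 um in seawater).
        """
        return (rho_s - rho_f) * g * d ** 2 / (18.0 * mu)

    for d_um in (10, 30, 60):            # hypothetical grain diameters in microns
        w = stokes_settling_velocity(d_um * 1e-6)
        print(f"{d_um:3d} um -> {w * 1000:.3f} mm/s")
    ```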

  8. Sample size determination in group-sequential clinical trials with two co-primary endpoints

    PubMed Central

    Asakura, Koko; Hamasaki, Toshimitsu; Sugimoto, Tomoyuki; Hayashi, Kenichi; Evans, Scott R; Sozu, Takashi

    2014-01-01

    We discuss sample size determination in group-sequential designs with two endpoints as co-primary. We derive the power and sample size within two decision-making frameworks. One is to claim the test intervention’s benefit relative to control when superiority is achieved for the two endpoints at the same interim timepoint of the trial. The other is when the superiority is achieved for the two endpoints at any interim timepoint, not necessarily simultaneously. We evaluate the behaviors of sample size and power with varying design elements and provide a real example to illustrate the proposed sample size methods. In addition, we discuss sample size recalculation based on observed data and evaluate the impact on the power and Type I error rate. PMID:24676799

  9. Approximate sample size formulas for the two-sample trimmed mean test with unequal variances.

    PubMed

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2007-05-01

    Yuen's two-sample trimmed mean test statistic is one of the most robust methods to apply when variances are heterogeneous. The present study develops formulas for the sample size required for the test. The formulas are applicable for the cases of unequal variances, non-normality and unequal sample sizes. Given the specified alpha and the power (1-beta), the minimum sample size needed by the proposed formulas under various conditions is less than is given by the conventional formulas. Moreover, given a specified size of sample calculated by the proposed formulas, simulation results show that Yuen's test can achieve statistical power which is generally superior to that of the approximate t test. A numerical example is provided.

  10. Experimental Study of the Effect of the Initial Spectrum Width on the Statistics of Random Wave Groups

    NASA Astrophysics Data System (ADS)

    Shemer, L.; Sergeeva, A.

    2009-12-01

    The statistics of random water wave field determines the probability of appearance of extremely high (freak) waves. This probability is strongly related to the spectral wave field characteristics. Laboratory investigation of the spatial variation of the random wave-field statistics for various initial conditions is thus of substantial practical importance. Unidirectional nonlinear random wave groups are investigated experimentally in the 300 m long Large Wave Channel (GWK) in Hannover, Germany, which is the biggest facility of its kind in Europe. Numerous realizations of a wave field with the prescribed frequency power spectrum, yet randomly-distributed initial phases of each harmonic, were generated by a computer-controlled piston-type wavemaker. Several initial spectral shapes with identical dominant wave length but different width were considered. For each spectral shape, the total duration of sampling in all realizations was long enough to yield sufficient sample size for reliable statistics. Through all experiments, an effort had been made to retain the characteristic wave height value and thus the degree of nonlinearity of the wave field. Spatial evolution of numerous statistical wave field parameters (skewness, kurtosis and probability distributions) is studied using about 25 wave gauges distributed along the tank. It is found that, depending on the initial spectral shape, the frequency spectrum of the wave field may undergo significant modification in the course of its evolution along the tank; the values of all statistical wave parameters are strongly related to the local spectral width. A sample of the measured wave height probability functions (scaled by the variance of surface elevation) is plotted in Fig. 1 for the initially narrow rectangular spectrum. The results in Fig. 1 resemble findings obtained in [1] for the initial Gaussian spectral shape. The probability of large waves notably surpasses that predicted by the Rayleigh distribution and is the highest at the distance of about 100 m. Acknowledgement This study is carried out in the framework of the EC supported project "Transnational access to large-scale tests in the Large Wave Channel (GWK) of Forschungszentrum Küste (Contract HYDRALAB III - No. 022441). [1] L. Shemer and A. Sergeeva, J. Geophys. Res. Oceans 114, C01015 (2009). Figure 1. Variation along the tank of the measured wave height distribution for rectangular initial spectral shape, the carrier wave period T0=1.5 s.
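
    The Rayleigh benchmark referred to in this abstract has a simple closed form for narrow-banded linear waves: P(H > h) = exp(-h^2 / (8 sigma^2)), where sigma^2 is the variance of the surface elevation (equivalently exp(-2 (h/Hs)^2) with Hs = 4 sigma). A small sketch with hypothetical numbers rather than GWK measurements:

    ```python
    import math

    def rayleigh_exceedance(h, sigma):
        """Probability that an individual wave height exceeds h (narrow-band linear theory)."""
        return math.exp(-h ** 2 / (8.0 * sigma ** 2))

    sigma = 0.25                      # hypothetical surface-elevation std dev (m), Hs ~ 1 m
    Hs = 4.0 * sigma
    for factor in (1.0, 1.5, 2.0):    # H > 2*Hs is a common "freak wave" criterion
        h = factor * Hs
        print(f"P(H > {factor:.1f} Hs) = {rayleigh_exceedance(h, sigma):.2e}")
    ```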

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jomekian, A.; Faculty of Chemical Engineering, Iran University of Science and Technology; Behbahani, R.M., E-mail: behbahani@put.ac.ir

    Ultra-porous ZIF-8 particles were synthesized using PEO/PA6-based poly(ether-block-amide) (Pebax 1657) as a structure-directing agent. Structural properties of ZIF-8 samples prepared under different synthesis parameters were investigated by laser particle size analysis, XRD, N2 adsorption analysis, BJH and BET tests. The overall results showed that: (1) The mean pore size of all ZIF-8 samples increased remarkably (from 0.34 nm to 1.1–2.5 nm) compared to conventionally synthesized ZIF-8 samples. (2) An exceptional BET surface area of 1869 m²/g was obtained for a ZIF-8 sample with a mean pore size of 2.5 nm. (3) Applying high concentrations of Pebax 1657 to the synthesis solution led to higher surface area, larger pore size and smaller particle size for ZIF-8 samples. (4) Both an increase in temperature and a decrease in the molar ratio of MeIM/Zn²⁺ increased the particle size, pore size, pore volume, crystallinity and BET surface area of all investigated samples. Highlights: • The pore size of ZIF-8 samples synthesized with Pebax 1657 increased remarkably. • A BET surface area of 1869 m²/g was obtained for a ZIF-8 sample synthesized with Pebax. • An increase in temperature had an increasing effect on the textural properties of ZIF-8 samples. • A decrease in MeIM/Zn²⁺ had an increasing effect on the textural properties of ZIF-8 samples.

  12. Persistence and Bioavailability of DDT in a Coastal Salt Marsh

    NASA Astrophysics Data System (ADS)

    Rowlett, K.; Weathers, N.; Morrison, A.; White, H. K.

    2016-02-01

    DDT (dichlorodiphenyltrichloroethane) was a widely-used pesticide in the United States throughout the 1900s. In 1972, the EPA banned the use of DDT due to fears of severe bioaccumulation and toxicity in animals. However, the compound persists in measurable quantities in the environment, leading to questions surrounding its current bioavailability in key ecosystems such as coastal marshes. For this study a sediment core was collected in 2015 from a salt marsh in Dover, Delaware and the sediments and plant matter were analyzed for the presence of DDT and three of its main biological metabolites: DDD, DDE, and DDMU (collectively, DDX). Samples were extracted in toluene and analyzed for DDX via gas chromatography with mass spectrometry (GC/MS) operated in selected ion monitoring (SIM) mode. The initial down-core profile revealed that the maximum concentration of DDX in both plant matter (>1mm in size) and sediments (<250µm in size) was at 22-30cm below the marsh surface, corresponding to the time of DDT application, as determined by 210Pb-dating. After initial analysis of the concentration of DDX in the sediment core, a passive sampling method using low-density polyethylene (LDPE) was employed to measure the bioavailability of the DDX compounds in the collected sediments. Bioavailability experiments with LDPE are ongoing and results will be discussed. This study will contribute to our overall understanding of the persistence of DDT in the environment by further elucidating the association of DDX compounds with plants and sedimentary material as well as their bioavailability with respect to these associations.

  13. Is There a Disk of Satellites around the Milky Way?

    NASA Astrophysics Data System (ADS)

    Maji, Moupiya; Zhu, Qirong; Marinacci, Federico; Li, Yuexing

    2017-07-01

    The “disk of satellites” (DoS) around the Milky Way is a highly debated topic with conflicting interpretations of observations and their theoretical models. We perform a comprehensive analysis of all of the dwarfs detected in the Milky Way and find that the DoS structure depends strongly on the plane identification method and the sample size. In particular, we demonstrate that a small sample size can artificially produce a highly anisotropic spatial distribution and a strong clustering of the angular momentum of the satellites. Moreover, we calculate the evolution of the 11 classical satellites with proper motion measurements and find that the thin DoS in which they currently reside is transient. Furthermore, we analyze two cosmological simulations using the same initial conditions of a Milky-Way-sized galaxy, an N-body run with dark matter only, and a hydrodynamic one with both baryonic and dark matter, and find that the hydrodynamic simulation produces a more anisotropic distribution of satellites than the N-body one. Our results suggest that an anisotropic distribution of satellites in galaxies can originate from baryonic processes in the hierarchical structure formation model, but the claimed highly flattened, coherently rotating DoS of the Milky Way may be biased by the small-number selection effect. These findings may help resolve the contradictory claims of DoS in galaxies and the discrepancy among numerical simulations.
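
    The claim that a small sample can artificially produce a highly anisotropic ("planar") satellite distribution is easy to illustrate with a quick Monte Carlo: draw isotropic points on a sphere, measure the flattening of each draw from the eigenvalues of the shape tensor, and watch the apparent minor-to-major axis ratio shrink as the sample gets smaller. This is only a hedged sketch of the statistical point, not the paper's actual plane-fitting pipeline:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def axis_ratio(n):
        """Minor-to-major axis ratio (c/a) of n points drawn isotropically on a sphere."""
        v = rng.normal(size=(n, 3))
        v /= np.linalg.norm(v, axis=1, keepdims=True)   # unit vectors, isotropic
        shape_tensor = v.T @ v / n                      # 3x3 second-moment tensor
        eig = np.sort(np.linalg.eigvalsh(shape_tensor))
        return np.sqrt(eig[0] / eig[-1])                # -> 1 for perfect isotropy

    for n in (11, 30, 100, 1000):
        ratios = [axis_ratio(n) for _ in range(2000)]
        print(f"n = {n:4d}: median c/a = {np.median(ratios):.2f}")
    ```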

  14. Understanding improved osteoblast behavior on select nanoporous anodic alumina

    PubMed Central

    Ni, Siyu; Li, Changyan; Ni, Shirong; Chen, Ting; Webster, Thomas J

    2014-01-01

    The aim of this study was to prepare different sized porous anodic alumina (PAA) and examine preosteoblast (MC3T3-E1) attachment and proliferation on such nanoporous surfaces. In this study, PAA with tunable pore sizes (25 nm, 50 nm, and 75 nm) were fabricated by a two-step anodizing procedure in oxalic acid. The surface morphology and elemental composition of PAA were characterized by field emission scanning electron microscopy and X-ray photoelectron spectroscopy analysis. The nanopore arrays on all of the PAA samples were highly regular. X-ray photoelectron spectroscopy analysis suggested that the chemistry of PAA and flat aluminum surfaces were similar. However, contact angles were significantly greater on all of the PAA compared to flat aluminum substrates, which consequently altered protein adsorption profiles. The attachment and proliferation of preosteoblasts were determined for up to 7 days in culture using field emission scanning electron microscopy and a Cell Counting Kit-8. Results showed that nanoporous surfaces did not enhance initial preosteoblast attachment, whereas preosteoblast proliferation dramatically increased when the PAA pore size was either 50 nm or 75 nm compared to all other samples (P<0.05). Thus, this study showed that one can alter surface energy of aluminum by modifying surface nano-roughness alone (and not changing chemistry) through an anodization process to improve osteoblast density, and, thus, should be further studied as a bioactive interface for orthopedic applications. PMID:25045263

  15. Centrifugal Pump Effect on Average Particle Diameter of Oil-Water Emulsion

    NASA Astrophysics Data System (ADS)

    Morozova, A.; Eskin, A.

    2017-11-01

    In this paper we examine the fragmentation of oil-water emulsion particles in a turbulent flow created by a centrifugal pump. We studied the influence of the emulsion preparation time on the particle size of the oil products and the dependence of the centrifugal pump's emulsifying capacity on the initial emulsion dispersion. The investigated emulsion contained M-100 fuel oil and tap water; it was sprayed with a nozzle in a gas-water flare. After preparation of the emulsion, the centrifugal pump was turned on and emulsion samples were taken before and after passing through the pump at 15, 30 and 45 minutes of spraying. To determine the effect of the centrifugal pump on the dispersion of the oil-water emulsion, the mean particle diameter of the emulsion was determined by optical microscopy before and after passing through the pump. A dispersion analysis of the particles contained in the emulsion was carried out with a laser diffraction analyzer. By analyzing images of the emulsion samples, it was determined that the particle size of the oil products decreases after operation of the centrifugal pump. This result is also confirmed by the analyzer's particle size distribution, in which the content of fine particles with a diameter of less than 10 μm increased from 12% to 23%. The particle size of the petroleum products also decreases with increasing emulsion preparation time.

  16. Is There a Disk of Satellites around the Milky Way?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maji, Moupiya; Zhu, Qirong; Li, Yuexing

    2017-07-01

    The “disk of satellites” (DoS) around the Milky Way is a highly debated topic with conflicting interpretations of observations and their theoretical models. We perform a comprehensive analysis of all of the dwarfs detected in the Milky Way and find that the DoS structure depends strongly on the plane identification method and the sample size. In particular, we demonstrate that a small sample size can artificially produce a highly anisotropic spatial distribution and a strong clustering of the angular momentum of the satellites. Moreover, we calculate the evolution of the 11 classical satellites with proper motion measurements and find that the thin DoS in which they currently reside is transient. Furthermore, we analyze two cosmological simulations using the same initial conditions of a Milky-Way-sized galaxy, an N-body run with dark matter only, and a hydrodynamic one with both baryonic and dark matter, and find that the hydrodynamic simulation produces a more anisotropic distribution of satellites than the N-body one. Our results suggest that an anisotropic distribution of satellites in galaxies can originate from baryonic processes in the hierarchical structure formation model, but the claimed highly flattened, coherently rotating DoS of the Milky Way may be biased by the small-number selection effect. These findings may help resolve the contradictory claims of DoS in galaxies and the discrepancy among numerical simulations.

  17. Effects of Calibration Sample Size and Item Bank Size on Ability Estimation in Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Sahin, Alper; Weiss, David J.

    2015-01-01

    This study aimed to investigate the effects of calibration sample size and item bank size on examinee ability estimation in computerized adaptive testing (CAT). For this purpose, a 500-item bank pre-calibrated using the three-parameter logistic model with 10,000 examinees was simulated. Calibration samples of varying sizes (150, 250, 350, 500,…

  18. Decadal climate prediction in the large ensemble limit

    NASA Astrophysics Data System (ADS)

    Yeager, S. G.; Rosenbloom, N. A.; Strand, G.; Lindsay, K. T.; Danabasoglu, G.; Karspeck, A. R.; Bates, S. C.; Meehl, G. A.

    2017-12-01

    In order to quantify the benefits of initialization for climate prediction on decadal timescales, two parallel sets of historical simulations are required: one "initialized" ensemble that incorporates observations of past climate states and one "uninitialized" ensemble whose internal climate variations evolve freely and without synchronicity. In the large ensemble limit, ensemble averaging isolates potentially predictable forced and internal variance components in the "initialized" set, but only the forced variance remains after averaging the "uninitialized" set. The ensemble size needed to achieve this variance decomposition, and to robustly distinguish initialized from uninitialized decadal predictions, remains poorly constrained. We examine a large ensemble (LE) of initialized decadal prediction (DP) experiments carried out using the Community Earth System Model (CESM). This 40-member CESM-DP-LE set of experiments represents the "initialized" complement to the CESM large ensemble of 20th century runs (CESM-LE) documented in Kay et al. (2015). Both simulation sets share the same model configuration, historical radiative forcings, and large ensemble sizes. The twin experiments afford an unprecedented opportunity to explore the sensitivity of DP skill assessment, and in particular the skill enhancement associated with initialization, to ensemble size. This talk will highlight the benefits of a large ensemble size for initialized predictions of seasonal climate over land in the Atlantic sector as well as predictions of shifts in the likelihood of climate extremes that have large societal impact.

  19. Assessing the Potential of Using Biochar as a Soil Conditioner

    NASA Astrophysics Data System (ADS)

    Glazunova, D. M.; Kuryntseva, P. A.; Selivanovskaya, S. Y.; Galitskaya, P. Y.

    2018-01-01

    Biochar is a product of pyrolysis of biomass such as plant tissues, manures, sewage sludge, the organic fraction of municipal solid wastes, etc. Nowadays, biochar is being discussed as an alternative fertilizer that improves the air and water balance of the soil and provides soil microbiota with slow-releasing biogenic elements. Many factors such as initial substrate properties, pyrolysis temperature and regime may influence biochar characteristics. In this study, characteristics of two biochars prepared from chicken manure (ChM) and sewage sludge (SS) at 550 °C were analyzed in order to reveal their agricultural potential. It was found that the ChM biochar had a pH value of 5.80±0.21, which was 1.6 units lower than the pH of the SS sample. The electrical conductivity of the ChM sample was 6 times higher than that of the SS sample, being 6.42±0.30 mS·cm-1 and 1.02±0.10 mS·cm-1, respectively. The cation exchange capacity was estimated to be 7.6±0.26 and 45±0.14 cmol·kg-1 in the ChM and SS samples, respectively. In the ChM sample, the total organic carbon content was 24.93±3.2%, nearly twice that of the SS sample (12.36±4.1%), whereas the total nitrogen content was estimated to be 0.33±0.03% and 0.10±0.01% for the ChM and SS samples, respectively. Using scanning electron microscopy and laser particle size distribution analysis, it was shown that the SS sample was more homogeneous in its structure and consisted of smaller particles of 1 to 200 μm, with particles of 10 to 100 μm being the most frequent, while the ChM sample was nonhomogeneous and its particle size varied between 2 and 2000 μm. To observe the influence on plants, 1% of biochar was added to soil, and wheat seeds were planted. The germination index for soil treated with SS biochar was estimated to be 97%, while that for soil treated with ChM biochar was lower, at about 78%.

  20. Sample size calculations for case-control studies

    Cancer.gov

    This R package can be used to calculate the required sample size for unconditional multivariate analyses of unmatched case-control studies. The sample sizes are for a scalar exposure effect, such as binary, ordinal or continuous exposures. The sample sizes can also be computed for scalar interaction effects. The analyses account for the effects of potential confounder variables that are also included in the multivariate logistic model.

  1. Impaction grafted bone chip size effect on initial stability in an acetabular model: Mechanical evaluation.

    PubMed

    Holton, Colin; Bobak, Peter; Wilcox, Ruth; Jin, Zhongmin

    2013-01-01

    Acetabular bone defect reconstruction is an increasing problem for surgeons with patients undergoing complex primary or revision total hip replacement surgery. Impaction bone grafting is one technique that has favourable long-term clinical outcome results for patients who undergo this reconstruction method for acetabular bone defects. Creating initial mechanical stability of the impaction bone graft in this technique is known to be the key factor in achieving a favourable implant survival rate. Different sizes of bone chips were used in this technique to investigate if the size of bone chips used affected initial mechanical stability of a reconstructed acetabulum. Twenty acetabular models were created in total. Five control models were created with a cemented cup in a normal acetabulum. Then five models in each of three groups of bone chip size were constructed. The three groups had an acetabular protrusion defect reconstructed using either 2-4 mm³, 10 mm³ or 20 mm³ bone chips for impaction grafting reconstruction. The models underwent compression loading up to 9500 N and displacement within the acetabular model was measured, indicating the initial mechanical stability. This study reveals that, although not statistically significant, the models grafted with the largest (20 mm³) bone chips have an inferior maximum stiffness compared to those with the medium (10 mm³) bone chips. Our study suggests that 10 mm³ bone chips provide better initial mechanical stability compared to smaller or larger bone chips. We dismissed the previously held opinion that the biggest practically possible graft is best for acetabular bone graft impaction.

  2. System health monitoring using multiple-model adaptive estimation techniques

    NASA Astrophysics Data System (ADS)

    Sifford, Stanley Ryan

    Monitoring system health for fault detection and diagnosis by tracking system parameters concurrently with state estimates is approached using a new multiple-model adaptive estimation (MMAE) method. This novel method is called GRid-based Adaptive Parameter Estimation (GRAPE). GRAPE expands existing MMAE methods by using new techniques to sample the parameter space. GRAPE expands on MMAE with the hypothesis that sample models can be applied and resampled without relying on a predefined set of models. GRAPE is initially implemented in a linear framework using Kalman filter models. A more generalized GRAPE formulation is presented using extended Kalman filter (EKF) models to represent nonlinear systems. GRAPE can handle both time invariant and time varying systems as it is designed to track parameter changes. Two techniques are presented to generate parameter samples for the parallel filter models. The first approach is called selected grid-based stratification (SGBS). SGBS divides the parameter space into equally spaced strata. The second approach uses Latin Hypercube Sampling (LHS) to determine the parameter locations and minimize the total number of required models. LHS is particularly useful when the parameter dimensions grow. Adding more parameters does not require the model count to increase for LHS. Each resample is independent of the prior sample set other than the location of the parameter estimate. SGBS and LHS can be used for both the initial sample and subsequent resamples. Furthermore, resamples are not required to use the same technique. Both techniques are demonstrated for both linear and nonlinear frameworks. The GRAPE framework further formalizes the parameter tracking process through a general approach for nonlinear systems. These additional methods allow GRAPE to either narrow the focus to converged values within a parameter range or expand the range in the appropriate direction to track the parameters outside the current parameter range boundary. Customizable rules define the specific resample behavior when the GRAPE parameter estimates converge. Convergence itself is determined from the derivatives of the parameter estimates using a simple moving average window to filter out noise. The system can be tuned to match the desired performance goals by making adjustments to parameters such as the sample size, convergence criteria, resample criteria, initial sampling method, resampling method, confidence in prior sample covariances, sample delay, and others.
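
    Of the two parameter-sampling techniques described (SGBS and LHS), Latin Hypercube Sampling is the one that keeps the model count flat as the number of parameters grows. A small sketch of LHS using SciPy's quasi-Monte Carlo module, illustrating only the sampling step and not the GRAPE filter bank itself; the parameter names and bounds are hypothetical:

    ```python
    import numpy as np
    from scipy.stats import qmc

    # Two uncertain parameters to track, e.g. a stiffness and a damping coefficient.
    l_bounds = [0.5, 0.01]       # hypothetical lower bounds
    u_bounds = [2.0, 0.10]       # hypothetical upper bounds

    sampler = qmc.LatinHypercube(d=2, seed=7)
    unit_samples = sampler.random(n=8)                    # 8 points in [0, 1)^2
    params = qmc.scale(unit_samples, l_bounds, u_bounds)  # map onto the parameter ranges

    # Each row would seed one parallel (extended) Kalman filter model.
    for i, (k, c) in enumerate(params):
        print(f"model {i}: k = {k:.3f}, c = {c:.4f}")

    # Adding a third parameter only changes d=3; n can stay the same, which is the
    # property highlighted above for growing parameter dimensions.
    ```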

  3. Neuropsychological tests for predicting cognitive decline in older adults

    PubMed Central

    Baerresen, Kimberly M; Miller, Karen J; Hanson, Eric R; Miller, Justin S; Dye, Richelin V; Hartman, Richard E; Vermeersch, David; Small, Gary W

    2015-01-01

    Summary Aim To determine neuropsychological tests likely to predict cognitive decline. Methods A sample of nonconverters (n = 106) was compared with those who declined in cognitive status (n = 24). Significant univariate logistic regression prediction models were used to create multivariate logistic regression models to predict decline based on initial neuropsychological testing. Results Rey–Osterrieth Complex Figure Test (RCFT) Retention predicted conversion to mild cognitive impairment (MCI) while baseline Buschke Delay predicted conversion to Alzheimer’s disease (AD). Due to group sample size differences, additional analyses were conducted using a subsample of demographically matched nonconverters. Analyses indicated RCFT Retention predicted conversion to MCI and AD, and Buschke Delay predicted conversion to AD. Conclusion Results suggest RCFT Retention and Buschke Delay may be useful in predicting cognitive decline. PMID:26107318

  4. Sol-gel synthesis of nanosized titanium dioxide at various pH of the initial solution

    NASA Astrophysics Data System (ADS)

    Dorosheva, I. B.; Valeeva, A. A.; Rempel, A. A.

    2017-09-01

    Titanium dioxide (TiO2) was synthesized by the sol-gel method at different values of pH = 3, 7, 8, 9, or 10. X-ray phase analysis showed that in the acid route an anatase phase crystallized, while in the alkaline route an amorphous TiO2 phase was obtained. After annealing for 4 hours at 350 °C, all samples were transformed into the anatase phase. The particle size in the different samples varies from 7 to 49 nm depending on the pH. The diffuse reflection spectra revealed a high value of the band gap, in the range from 3.2 to 3.7 eV, which narrowed after annealing to the range from 3.2 to 3.5 eV.

  5. Low-cycle fatigue of Fe-20%Cr alloy processed by equal- channel angular pressing

    NASA Astrophysics Data System (ADS)

    Kaneko, Yoshihisa; Tomita, Ryuji; Vinogradov, Alexei

    2014-08-01

    Low-cycle fatigue properties were investigated for an Fe-20%Cr ferritic stainless steel processed by equal channel angular pressing (ECAP). The Fe-20%Cr alloy billets were processed for one to four passes via Route-Bc. The ECAPed samples were cyclically deformed at a constant plastic strain amplitude ɛpl of 5×10-4 at room temperature in air. After the 1-pass ECAP, low-angle grain boundaries were predominantly formed. During the low-cycle fatigue test, the 1-pass sample exhibited rapid softening that continued until fatigue fracture. The fatigue life of the 1-pass sample was shorter than that of a coarse-grained sample. After the 4-pass ECAP, the average grain size was reduced to about 1.5 μm. At the initial stage of the low-cycle fatigue tests, the stress amplitude increased with increasing number of ECAP passes. In the samples processed for more than 2 passes, the cyclic softening was relatively moderate. It was found that the fatigue life of the ECAPed Fe-20%Cr alloy, except for the 1-pass sample, was improved compared to the coarse-grained sample, even under the strain-controlled fatigue condition.

  6. Sequential sampling: a novel method in farm animal welfare assessment.

    PubMed

    Heath, C A E; Main, D C J; Mullan, S; Haskell, M J; Browne, W J

    2016-02-01

    Lameness in dairy cows is an important welfare issue. As part of a welfare assessment, herd level lameness prevalence can be estimated from scoring a sample of animals, where higher levels of accuracy are associated with larger sample sizes. As the financial cost is related to the number of cows sampled, smaller samples are preferred. Sequential sampling schemes have been used for informing decision making in clinical trials. Sequential sampling involves taking samples in stages, where sampling can stop early depending on the estimated lameness prevalence. When welfare assessment is used for a pass/fail decision, a similar approach could be applied to reduce the overall sample size. The sampling schemes proposed here apply the principles of sequential sampling within a diagnostic testing framework. This study develops three sequential sampling schemes of increasing complexity to classify 80 fully assessed UK dairy farms, each with known lameness prevalence. Using the Welfare Quality herd-size-based sampling scheme, the first 'basic' scheme involves two sampling events. At the first sampling event half the Welfare Quality sample size is drawn, and then depending on the outcome, sampling either stops or is continued and the same number of animals is sampled again. In the second 'cautious' scheme, an adaptation is made to ensure that correctly classifying a farm as 'bad' is done with greater certainty. The third scheme is the only scheme to go beyond lameness as a binary measure and investigates the potential for increasing accuracy by incorporating the number of severely lame cows into the decision. The three schemes are evaluated with respect to accuracy and average sample size by running 100 000 simulations for each scheme, and a comparison is made with the fixed size Welfare Quality herd-size-based sampling scheme. All three schemes performed almost as well as the fixed size scheme but with much smaller average sample sizes. For the third scheme, an overall association between lameness prevalence and the proportion of lame cows that were severely lame on a farm was found. However, as this association was found to not be consistent across all farms, the sampling scheme did not prove to be as useful as expected. The preferred scheme was therefore the 'cautious' scheme for which a sampling protocol has also been developed.
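
    The 'basic' scheme described above can be written as a very small decision rule: score half the Welfare Quality sample, stop if the farm already looks clearly 'good' or clearly 'bad', otherwise score the second half and classify on the combined sample. The sketch below simulates that rule under assumed sample sizes and thresholds; the cut-off values are hypothetical, not the calibrated thresholds from the study:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def basic_two_stage(true_prevalence, n_half=30, threshold=0.15, margin=0.05):
        """Classify a farm as 'bad' if estimated lameness prevalence exceeds threshold.

        Stage 1 scores n_half cows; sampling stops early if the stage-1 estimate is
        clearly below (threshold - margin) or clearly above (threshold + margin).
        Returns (classified_bad, cows_sampled).
        """
        lame1 = rng.binomial(n_half, true_prevalence)
        p1 = lame1 / n_half
        if p1 <= threshold - margin or p1 >= threshold + margin:
            return p1 > threshold, n_half                  # early stop
        lame2 = rng.binomial(n_half, true_prevalence)
        p_total = (lame1 + lame2) / (2 * n_half)
        return p_total > threshold, 2 * n_half

    results = [basic_two_stage(0.20) for _ in range(10000)]
    bad_rate = np.mean([r[0] for r in results])
    avg_n = np.mean([r[1] for r in results])
    print(f"classified 'bad' in {bad_rate:.1%} of runs, average sample size {avg_n:.1f}")
    ```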

  7. Novel joint selection methods can reduce sample size for rheumatoid arthritis clinical trials with ultrasound endpoints.

    PubMed

    Allen, John C; Thumboo, Julian; Lye, Weng Kit; Conaghan, Philip G; Chew, Li-Ching; Tan, York Kiat

    2018-03-01

    To determine whether novel methods of selecting joints through (i) ultrasonography (individualized-ultrasound [IUS] method), or (ii) ultrasonography and clinical examination (individualized-composite-ultrasound [ICUS] method) translate into smaller rheumatoid arthritis (RA) clinical trial sample sizes when compared to existing methods utilizing predetermined joint sites for ultrasonography. Cohen's effect size (ES) was estimated (ES^) and a 95% CI (ES^L, ES^U) calculated on a mean change in 3-month total inflammatory score for each method. Corresponding 95% CIs [nL(ES^U), nU(ES^L)] were obtained on a post hoc sample size reflecting the uncertainty in ES^. Sample size calculations were based on a one-sample t-test as the patient numbers needed to provide 80% power at α = 0.05 to reject a null hypothesis H 0 : ES = 0 versus alternative hypotheses H 1 : ES = ES^, ES = ES^L and ES = ES^U. We aimed to provide point and interval estimates on projected sample sizes for future studies reflecting the uncertainty in our study ES^S. Twenty-four treated RA patients were followed up for 3 months. Utilizing the 12-joint approach and existing methods, the post hoc sample size (95% CI) was 22 (10-245). Corresponding sample sizes using ICUS and IUS were 11 (7-40) and 11 (6-38), respectively. Utilizing a seven-joint approach, the corresponding sample sizes using ICUS and IUS methods were nine (6-24) and 11 (6-35), respectively. Our pilot study suggests that sample size for RA clinical trials with ultrasound endpoints may be reduced using the novel methods, providing justification for larger studies to confirm these observations. © 2017 Asia Pacific League of Associations for Rheumatology and John Wiley & Sons Australia, Ltd.
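
    The sample size logic in this abstract — the n giving 80% power at α = 0.05 in a one-sample t-test against a standardized effect size and its confidence bounds — can be reproduced with standard power routines. A hedged sketch using statsmodels; the effect-size values below are placeholders, not the study's estimates:

    ```python
    from math import ceil
    from statsmodels.stats.power import TTestPower

    analysis = TTestPower()

    # Hypothetical effect-size estimate with its lower and upper 95% CI bounds.
    for label, es in [("ES_hat", 0.62), ("ES_lower", 0.30), ("ES_upper", 0.95)]:
        n = analysis.solve_power(effect_size=es, alpha=0.05, power=0.80,
                                 alternative="two-sided")
        print(f"{label:9s} = {es:.2f} -> n = {ceil(n)}")
    ```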

  8. Effects of tree-to-tree variations on sap flux-based transpiration estimates in a forested watershed

    NASA Astrophysics Data System (ADS)

    Kume, Tomonori; Tsuruta, Kenji; Komatsu, Hikaru; Kumagai, Tomo'omi; Higashi, Naoko; Shinohara, Yoshinori; Otsuki, Kyoichi

    2010-05-01

    To estimate forest stand-scale water use, we assessed how sample sizes affect confidence of stand-scale transpiration (E) estimates calculated from sap flux (Fd) and sapwood area (AS_tree) measurements of individual trees. In a Japanese cypress plantation, we measured Fd and AS_tree in all trees (n = 58) within a 20 × 20 m study plot, which was divided into four 10 × 10 subplots. We calculated E from stand AS_tree (AS_stand) and mean stand Fd (JS) values. Using Monte Carlo analyses, we examined potential errors associated with sample sizes in E, AS_stand, and JS by using the original AS_tree and Fd data sets. Consequently, we defined optimal sample sizes of 10 and 15 for AS_stand and JS estimates, respectively, in the 20 × 20 m plot. Sample sizes greater than the optimal sample sizes did not decrease potential errors. The optimal sample sizes for JS changed according to plot size (e.g., 10 × 10 m and 10 × 20 m), while the optimal sample sizes for AS_stand did not. As well, the optimal sample sizes for JS did not change in different vapor pressure deficit conditions. In terms of E estimates, these results suggest that the tree-to-tree variations in Fd vary among different plots, and that plot size to capture tree-to-tree variations in Fd is an important factor. This study also discusses planning balanced sampling designs to extrapolate stand-scale estimates to catchment-scale estimates.
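
    The Monte Carlo exercise described above amounts to repeatedly drawing subsamples of a given size from the fully measured plot and asking how far the subsample mean strays from the full-plot mean. A minimal sketch of that resampling idea on synthetic sap flux values; the data, distribution, and error criterion are hypothetical:

    ```python
    import numpy as np

    rng = np.random.default_rng(10)

    # Synthetic stand: per-tree mean sap flux densities for 58 trees (arbitrary units).
    fd_all = rng.lognormal(mean=3.0, sigma=0.35, size=58)
    true_mean = fd_all.mean()

    def relative_error(sample_size, reps=5000):
        """Median absolute relative error of the subsample mean vs the full-plot mean."""
        errs = []
        for _ in range(reps):
            sub = rng.choice(fd_all, size=sample_size, replace=False)
            errs.append(abs(sub.mean() - true_mean) / true_mean)
        return np.median(errs)

    for n in (5, 10, 15, 20, 30):
        print(f"n = {n:2d}: median relative error = {relative_error(n):.1%}")
    ```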

  9. On the Link Between Emotionally Driven Impulsivity and Aggression: Evidence From a Validation Study on the Dutch UPPS-P.

    PubMed

    Bousardt, A M C; Noorthoorn, E O; Hoogendoorn, A W; Nijman, H L I; Hummelen, J W

    2018-06-01

    The UPPS-P seems to be a promising instrument for measuring different domains of impulsivity in forensic psychiatric patients. Validation studies of the instrument however, have been conducted only in student groups. In this validation study, three groups completed the Dutch UPPS-P: healthy student ( N = 94) and community ( N = 134) samples and a forensic psychiatric sample ( N = 73). The five-factor structure reported previously could only be substantiated in a confirmatory factor analysis over the combined groups but not in the subsamples. Subgroup sample sizes might be too small to allow such complex analyses. Internal consistency, as assessed by Cronbach's alpha, was high on most subscale and sample combinations. In explaining aggression, especially the initial subscale negative urgency (NU) was related to elevated scores on self-reported aggression in the healthy samples (student and community). The current study is the second study that found a relationship between self-reported NU and aggression highlighting the importance of addressing this behavioural domain in aggression management therapy.

  10. Innovative recruitment using online networks: lessons learned from an online study of alcohol and other drug use utilizing a web-based, respondent-driven sampling (webRDS) strategy.

    PubMed

    Bauermeister, José A; Zimmerman, Marc A; Johns, Michelle M; Glowacki, Pietreck; Stoddard, Sarah; Volz, Erik

    2012-09-01

    We used a web version of Respondent-Driven Sampling (webRDS) to recruit a sample of young adults (ages 18-24) and examined whether this strategy would result in alcohol and other drug (AOD) prevalence estimates comparable to national estimates (National Survey on Drug Use and Health [NSDUH]). We recruited 22 initial participants (seeds) via Facebook to complete a web survey examining AOD risk correlates. Sequential, incentivized recruitment continued until our desired sample size was achieved. After correcting for webRDS clustering effects, we contrasted our AOD prevalence estimates (past 30 days) to NSDUH estimates by comparing the 95% confidence intervals of prevalence estimates. We found comparable AOD prevalence estimates between our sample and NSDUH for the past 30 days for alcohol, marijuana, cocaine, Ecstasy (3,4-methylenedioxymethamphetamine, or MDMA), and hallucinogens. Cigarette use was lower than NSDUH estimates. WebRDS may be a suitable strategy to recruit young adults online. We discuss the unique strengths and challenges that may be encountered by public health researchers using webRDS methods.

  11. Effect of local void morphology on the reaction initiation mechanism in the case of pressed HMX

    NASA Astrophysics Data System (ADS)

    Roy, Sidhartha; Rai, Nirmal; Udaykumar, H. S.

    2017-06-01

    The microstructural characteristics of pressed HMX have a significant effect on its sensitivity under shock loading. The microstructure of pressed HMX contains voids of various orientations and aspect ratios. Subject to shock loading, these voids can collapse, forming hotspots, and initiate a chemical reaction. This work shows how the ignition and growth of the chemical reaction depend on the local microstructural features of the voids. Morphological quantities such as size, aspect ratio and orientation are extracted from real microstructural images of Class III and Class V pressed HMX. These morphological quantities are correlated with the ignition and growth rates of the chemical reaction. The dependence of the sensitivity of a given HMX sample on the local morphological features shows that these local features can create a microscale physical response.

  12. Effect of Pore Clogging on Kinetics of Lead Uptake by Clinoptilolite.

    PubMed

    Inglezakis; Diamandis; Loizidou; Grigoropoulou

    1999-07-01

    The kinetics of lead-sodium ion exchange using pretreated natural clinoptilolite are investigated, more specifically the influence of agitation (0, 210, and 650 rpm) on the limiting step of the overall process, for particle sizes of 0.63-0.8 and 0.8-1 mm at ambient temperature and initial lead solutions of 500 mg l-1 without pH adjustment. The isotopic exchange model is found to fit the ion exchange process. Particle diffusion is shown to be the controlling step for both particle sizes under agitation, while in the absence of agitation film diffusion is shown to control. The ion exchange process effective diffusion coefficients are calculated and found to depend strongly on particle size in the case of agitation at 210 rpm and only slightly on particle size at 650 rpm. Lead uptake rates are higher for smaller particles only at rigorous agitation, while at mild agitation the results are reversed. These facts are due to partial clogging of the pores of the mineral during the grinding process. This is verified through comparison of lead uptake rates for two samples of the same particle size, one of which is rigorously washed for a certain time before being exposed to the ion exchange. Copyright 1999 Academic Press.

  13. Monthly ENSO Forecast Skill and Lagged Ensemble Size

    PubMed Central

    DelSole, T.; Tippett, M.K.; Pegion, K.

    2018-01-01

    Abstract The mean square error (MSE) of a lagged ensemble of monthly forecasts of the Niño 3.4 index from the Climate Forecast System (CFSv2) is examined with respect to ensemble size and configuration. Although the real‐time forecast is initialized 4 times per day, it is possible to infer the MSE for arbitrary initialization frequency and for burst ensembles by fitting error covariances to a parametric model and then extrapolating to arbitrary ensemble size and initialization frequency. Applying this method to real‐time forecasts, we find that the MSE consistently reaches a minimum for a lagged ensemble size between one and eight days, when four initializations per day are included. This ensemble size is consistent with the 8–10 day lagged ensemble configuration used operationally. Interestingly, the skill of both ensemble configurations is close to the estimated skill of the infinite ensemble. The skill of the weighted, lagged, and burst ensembles are found to be comparable. Certain unphysical features of the estimated error growth were tracked down to problems with the climatology and data discontinuities. PMID:29937973

  14. Linking Initial Microstructure to ORR Related Property Degradation in SOFC Cathode: A Phase Field Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lei, Y.; Cheng, T. -L.; Wen, Y. H.

    Microstructure evolution driven by thermal coarsening is an important factor for the loss of oxygen reduction reaction rates in SOFC cathodes. In this work, the effect of the initial microstructure on the microstructure evolution in an SOFC cathode is investigated using a recently developed phase field model. Specifically, we tune the phase fraction, the average grain size, the standard deviation of the grain size and the grain shape in the initial microstructure, and explore their effect on the evolution of the grain size, the density of triple phase boundary, the specific surface area and the effective conductivity in LSM-YSZ cathodes. It is found that the degradation rate of TPB density and SSA of LSM is lower with less LSM phase fraction (with constant porosity assumed) and greater average grain size, while the degradation rate of effective conductivity can also be tuned by adjusting the standard deviation of the grain size distribution and the grain aspect ratio. The implication of this study for the design of an optimal initial microstructure of SOFC cathodes is discussed.

  15. Linking Initial Microstructure to ORR Related Property Degradation in SOFC Cathode: A Phase Field Simulation

    DOE PAGES

    Lei, Y.; Cheng, T. -L.; Wen, Y. H.

    2017-07-05

    Microstructure evolution driven by thermal coarsening is an important factor for the loss of oxygen reduction reaction rates in SOFC cathodes. In this work, the effect of the initial microstructure on the microstructure evolution in an SOFC cathode is investigated using a recently developed phase field model. Specifically, we tune the phase fraction, the average grain size, the standard deviation of the grain size and the grain shape in the initial microstructure, and explore their effect on the evolution of the grain size, the density of triple phase boundary, the specific surface area and the effective conductivity in LSM-YSZ cathodes. It is found that the degradation rate of TPB density and SSA of LSM is lower with less LSM phase fraction (with constant porosity assumed) and greater average grain size, while the degradation rate of effective conductivity can also be tuned by adjusting the standard deviation of the grain size distribution and the grain aspect ratio. The implication of this study for the design of an optimal initial microstructure of SOFC cathodes is discussed.

  16. Monthly ENSO Forecast Skill and Lagged Ensemble Size

    NASA Astrophysics Data System (ADS)

    Trenary, L.; DelSole, T.; Tippett, M. K.; Pegion, K.

    2018-04-01

    The mean square error (MSE) of a lagged ensemble of monthly forecasts of the Niño 3.4 index from the Climate Forecast System (CFSv2) is examined with respect to ensemble size and configuration. Although the real-time forecast is initialized 4 times per day, it is possible to infer the MSE for arbitrary initialization frequency and for burst ensembles by fitting error covariances to a parametric model and then extrapolating to arbitrary ensemble size and initialization frequency. Applying this method to real-time forecasts, we find that the MSE consistently reaches a minimum for a lagged ensemble size between one and eight days, when four initializations per day are included. This ensemble size is consistent with the 8-10 day lagged ensemble configuration used operationally. Interestingly, the skill of both ensemble configurations is close to the estimated skill of the infinite ensemble. The skills of the weighted, lagged, and burst ensembles are found to be comparable. Certain unphysical features of the estimated error growth were tracked down to problems with the climatology and data discontinuities.
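
    The dependence of MSE on lagged-ensemble size can be illustrated with a toy parametric error model (not the covariance fit used for CFSv2): error variance grows linearly with initialization lag and all members share a common error correlation. With the hypothetical numbers below, the MSE of the equal-weight ensemble mean has an interior minimum, mirroring the qualitative behaviour described above.

```python
import numpy as np

def lagged_ensemble_mse(n_members, growth=0.07, corr=0.3):
    """MSE of the equal-weight mean of a lagged ensemble under a toy error model:
    the member initialized k steps earlier has error variance 1 + growth*k, and any
    two members share a common error correlation `corr`.  All values hypothetical."""
    var = 1.0 + growth * np.arange(n_members)
    sd = np.sqrt(var)
    cov = corr * np.outer(sd, sd)              # cross-member error covariances
    np.fill_diagonal(cov, var)                 # each member's own error variance
    w = np.full(n_members, 1.0 / n_members)    # equal-weight lagged ensemble mean
    return float(w @ cov @ w)

mse = {L: lagged_ensemble_mse(L) for L in range(1, 41)}
best = min(mse, key=mse.get)
print(best, mse[best])   # interior minimum: old members reduce noise but add error
```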

  17. Y3Fe5O12 nanoparticulate garnet ferrites: Comprehensive study on the synthesis and characterization fabricated by various routes

    NASA Astrophysics Data System (ADS)

    Niaz Akhtar, Majid; Azhar Khan, Muhammad; Ahmad, Mukhtar; Murtaza, G.; Raza, Rizwan; Shaukat, S. F.; Asif, M. H.; Nasir, Nadeem; Abbas, Ghazanfar; Nazir, M. S.; Raza, M. R.

    2014-11-01

    The effects of synthesis methods such as sol-gel (SG), self-combustion (SC) and modified conventional mixed oxide (MCMO) on the structure, morphology and magnetic properties of Y3Fe5O12 garnet ferrites have been studied in the present work. The samples of Y3Fe5O12 were sintered at 950 °C and 1150 °C (by the SG and SC methods); for the MCMO route the sintering was done at 1350 °C for 6 h. The synthesized samples were investigated using X-ray diffraction (XRD) analysis, field emission scanning electron microscopy (FESEM), an impedance network analyzer and transmission electron microscopy (TEM). The structural analysis reveals that the samples are single phase and show variations in particle size and cell volume among the preparation routes. FESEM and TEM images show that grain size increases from 40 nm to 100 nm with increasing sintering temperature. Magnetic measurements reveal that the garnet ferrite synthesized by the sol-gel method has a high initial permeability (60.22) and low magnetic loss (0.0004) compared to the garnet ferrite samples synthesized by the self-combustion and MCMO methods. The M-H loops exhibit very low coercivity, which enables the use of these materials in relay and switching device fabrication. Thus, garnet nanoferrites with low magnetic loss prepared by different methods may open new horizons for the electronics industry in high-frequency applications.

  18. Sample size and power calculations for detecting changes in malaria transmission using antibody seroconversion rate.

    PubMed

    Sepúlveda, Nuno; Paulino, Carlos Daniel; Drakeley, Chris

    2015-12-30

    Several studies have highlighted the use of serological data in detecting a reduction in malaria transmission intensity. These studies have typically used serology as an adjunct measure and no formal examination of sample size calculations for this approach has been conducted. A sample size calculator is proposed for cross-sectional surveys using data simulation from a reverse catalytic model assuming a reduction in seroconversion rate (SCR) at a given change point before sampling. This calculator is based on logistic approximations for the underlying power curves to detect a reduction in SCR in relation to the hypothesis of a stable SCR for the same data. Sample sizes are illustrated for a hypothetical cross-sectional survey from an African population assuming a known or unknown change point. Overall, data simulation demonstrates that power is strongly affected by assuming a known or unknown change point. Small sample sizes are sufficient to detect strong reductions in SCR, but invariantly lead to poor precision of estimates for current SCR. In this situation, sample size is better determined by controlling the precision of SCR estimates. Conversely larger sample sizes are required for detecting more subtle reductions in malaria transmission but those invariantly increase precision whilst reducing putative estimation bias. The proposed sample size calculator, although based on data simulation, shows promise of being easily applicable to a range of populations and survey types. Since the change point is a major source of uncertainty, obtaining or assuming prior information about this parameter might reduce both the sample size and the chance of generating biased SCR estimates.
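
    A compressed sketch of such a simulation-based calculation, under stated assumptions (a fixed seroreversion rate, a known change point, a uniform age distribution and hypothetical SCR values, none taken from the paper): cross-sectional surveys are simulated from a reverse catalytic model with a step change in SCR, stable-SCR and change-point models are fitted by maximum likelihood, and power is the rejection rate of the likelihood ratio test.

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(1)

RHO = 0.01          # assumed seroreversion rate (fixed, hypothetical)
CHANGE_POINT = 5.0  # years before the survey at which SCR dropped (assumed known)

def seroprev_constant(age, lam, rho=RHO):
    """Reverse catalytic model with a single, stable seroconversion rate (SCR)."""
    return lam / (lam + rho) * (1.0 - np.exp(-(lam + rho) * age))

def seroprev_change(age, lam_old, lam_new, t=CHANGE_POINT, rho=RHO):
    """Piecewise model: SCR drops from lam_old to lam_new t years before sampling."""
    young = seroprev_constant(age, lam_new, rho)
    p_then = seroprev_constant(np.maximum(age - t, 0.0), lam_old, rho)
    old = (p_then * np.exp(-(lam_new + rho) * t)
           + lam_new / (lam_new + rho) * (1.0 - np.exp(-(lam_new + rho) * t)))
    return np.where(age <= t, young, old)

def neg_loglik(params, age, pos, model):
    p = np.clip(model(age, *params), 1e-9, 1.0 - 1e-9)
    return -np.sum(pos * np.log(p) + (1 - pos) * np.log(1.0 - p))

def power(n, lam_old=0.10, lam_new=0.05, n_sim=200, alpha=0.05):
    """Fraction of simulated surveys in which a likelihood ratio test rejects
    the stable-SCR model in favour of the change-point model."""
    crit = stats.chi2.ppf(1 - alpha, df=1)
    hits = 0
    for _ in range(n_sim):
        age = rng.uniform(1.0, 60.0, size=n)        # assumed age distribution
        pos = rng.random(n) < seroprev_change(age, lam_old, lam_new)
        fit0 = optimize.minimize(neg_loglik, x0=[0.05],
                                 args=(age, pos, seroprev_constant),
                                 bounds=[(1e-4, 5.0)])
        fit1 = optimize.minimize(neg_loglik, x0=[0.10, 0.05],
                                 args=(age, pos, seroprev_change),
                                 bounds=[(1e-4, 5.0)] * 2)
        hits += 2.0 * (fit0.fun - fit1.fun) > crit
    return hits / n_sim

for n in (100, 250, 500, 1000):
    print(f"n = {n:4d}: estimated power = {power(n):.2f}")
```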

  19. Small sample sizes in the study of ontogenetic allometry; implications for palaeobiology

    PubMed Central

    Vavrek, Matthew J.

    2015-01-01

    Quantitative morphometric analyses, particularly ontogenetic allometry, are common methods used in quantifying shape, and changes therein, in both extinct and extant organisms. Due to incompleteness and the potential for restricted sample sizes in the fossil record, palaeobiological analyses of allometry may encounter higher rates of error. Differences in sample size between fossil and extant studies and any resulting effects on allometric analyses have not been thoroughly investigated, and a logical lower threshold to sample size is not clear. Here we show that studies based on fossil datasets have smaller sample sizes than those based on extant taxa. A similar pattern between vertebrates and invertebrates indicates this is not a problem unique to either group, but common to both. We investigate the relationship between sample size, ontogenetic allometric relationship and statistical power using an empirical dataset of skull measurements of modern Alligator mississippiensis. Across a variety of subsampling techniques, used to simulate different taphonomic and/or sampling effects, smaller sample sizes gave less reliable and more variable results, often with the result that allometric relationships will go undetected due to Type II error (failure to reject the null hypothesis). This may result in a false impression of fewer instances of positive/negative allometric growth in fossils compared to living organisms. These limitations are not restricted to fossil data and are equally applicable to allometric analyses of rare extant taxa. No mathematically derived minimum sample size for ontogenetic allometric studies is found; rather results of isometry (but not necessarily allometry) should not be viewed with confidence at small sample sizes. PMID:25780770
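
    The subsampling logic can be prototyped in a few lines on synthetic data (the slope, scatter and sample sizes below are invented, not the Alligator measurements): draw subsamples of size n, regress log skull size on log body size, and count how often the slope can be distinguished from the isometric value of 1.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical "population": log skull size vs log body size with a known
# allometric slope of 1.15 and modest biological scatter (all values invented).
N_POP = 500
log_body = rng.uniform(1.0, 3.0, N_POP)
log_skull = 0.2 + 1.15 * log_body + rng.normal(0.0, 0.08, N_POP)

def detects_allometry(n, alpha=0.05):
    """Subsample n individuals and test whether the OLS slope differs from 1 (isometry)."""
    idx = rng.choice(N_POP, size=n, replace=False)
    fit = stats.linregress(log_body[idx], log_skull[idx])
    t = (fit.slope - 1.0) / fit.stderr          # H0: slope = 1
    p = 2.0 * stats.t.sf(abs(t), df=n - 2)
    return p < alpha

for n in (5, 10, 20, 40, 80):
    rate = np.mean([detects_allometry(n) for _ in range(2000)])
    print(f"n = {n:3d}: allometry detected in {rate:.0%} of subsamples")
```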

  20. [The preparation and characterization of fine dusts carried out in the Clinica del Lavoro di Milano in support of experimental studies].

    PubMed

    Occella, E; Maddalon, G; Peruzzo, G F; Foà, V

    1999-01-01

    This paper aims to illustrate the conditions selected at the Clinica del Lavoro of the University of Milan to prepare and analyze a large number of fine dust samples produced over a period of about 50 years, which were initially used for studies performed within the Clinic's own facilities and, since 1956, were sent to other Italian and overseas laboratories (Luxembourg, UK, Germany, Norway, Sweden, South Korea, USA). The total quantity of material distributed (with maximum size 7-10 microns) was about 2 kg and consisted of the following mineral and artificial compounds: quartz, HF-treated quartz, tridymite, HF-treated tridymite, cristobalite, chromite, anthracite, quartz sand for foundry moulds, sand from the Libyan desert, vitreous silica, pumice, cement, as well as small quantities of metallic oxides, organic resins, chrysotile, crocidolite, and fibres (vitreous, cotton and polyamide). About half of the entire quantity of dusts produced consisted of partially HF-treated tridymite. Initially, research on the etiology of silicosis used quartz dust samples, simply sieved or ventilated (consisting of classes finer than 0.04 mm, containing a 15-20% respirable fraction). From 1956 to 1960 the dusts were produced by manual grinding in an agate mortar, below about 10 microns, starting from quartz from Quincinetto (near Ivrea, Province of Turin), containing about 99.5% quartz: particle size and composition were checked using an optical-petrographic technique, with identification of the free and total silica content. Subsequently, the dusts used for biological research were obtained by grinding coarse material with a cast iron pestle and planetary mills, with agate and corundum jars. The grinding products were sized by means of centrifugal classification, using the selector developed by N. Zurlo, ensuring control of dust size both optically and by means of wet levigators and hydraulic classifiers (in cooperation with the Institute of Mines of Turin Polytechnic School). After 1990, pestles and rotating drum mills with autogenic grinding load were used for grinding: the size of the treated samples was reduced to 0.05 mm and an extremely fine fraction was extracted, smaller than 7-10 microns, which was used for pneumoconioses research. The characterization of the dust produced was in any case achieved by means of preliminary examination under the optical microscope (polarized light, sometimes supplemented with phase contrast), followed by quantitative analysis using chemical/petrographic, chemical diffraction or, more commonly, petrographic/diffraction techniques. Microscopic examination, if necessary supplemented with photo-micrography, was also used for particle size control, for numerical counting and subsequent reference to weight proportion. For all operational procedures the essential data on instruments and methods are reported.
During studies on production, separation of fine dusts and their characterization, partly performed with support from the European Community (EEC/European Coal and Steel Commission), the following topics in particular were addressed: connections between particle size and free silica content in the measurable dust size fraction of the grinding products and in airborne dusts; characteristics of the dusts and risk indices in Italian iron and pyrite mines; possibility of abatement of the ultrafine classes of airborne dusts in pneumatically filled stopes by the addition of salts; comparison of the latest dust selectors used within the European Community; influence of the grinding methods on the results of fibrous and soft mineral measurement using X-ray diffraction analysis.

  1. Initial rupture of earthquakes in the 1995 Ridgecrest, California sequence

    USGS Publications Warehouse

    Mori, J.; Kanamori, H.

    1996-01-01

    Close examination of the P waves from earthquakes ranging in size across several orders of magnitude shows that the shape of the initiation of the velocity waveforms is independent of the magnitude of the earthquake. A model in which earthquakes of all sizes have similar rupture initiation can explain the data. This suggests that it is difficult to estimate the eventual size of an earthquake from the initial portion of the waveform. Previously reported curvature seen in the beginning of some velocity waveforms can be largely explained as the effect of anelastic attenuation; thus there is little evidence for a departure from models of simple rupture initiation that grow dynamically from a small region. The results of this study indicate that any "precursory" radiation at seismic frequencies must emanate from a source region no larger than the equivalent of a M0.5 event (i.e. a characteristic length of ~10 m). The size of the nucleation region for magnitude 0 to 5 earthquakes thus is not resolvable with the standard seismic instrumentation deployed in California. Copyright 1996 by the American Geophysical Union.

  2. Experimental investigation of infiltration in soil with occurrence of preferential flow and air trapping

    NASA Astrophysics Data System (ADS)

    Snehota, Michal; Jelinkova, Vladimira; Sacha, Jan; Cislerova, Milena

    2015-04-01

    Recently, a number of infiltration experiments have not confirmed the validity of the standard Richards theory of flow in soils with a wide pore size distribution. Water flow in such soils under near-saturated conditions often exhibits preferential flow and temporal instability of the saturated hydraulic conductivity. An intact sample of coarse sandy loam from the Cambisol series, containing a naturally developed, vertically connected macropore, was investigated during recurrent ponding infiltration (RPI) experiments conducted over a period of 30 hours. The RPI experiment consisted of two ponded infiltration runs, each followed by free gravitational draining of the sample. A three-dimensional neutron tomography (NT) image of the dry sample was acquired before the infiltration began. The dynamics of the wetting front advancement was investigated by a sequence of neutron radiography (NR) images. Analysis of the NR images showed that the water front moved preferentially through the macropore at an approximate speed of 2 mm/sec, a significantly faster pace than the 0.3 mm/sec wetting advancement in the surrounding soil matrix. After the water started to flow out of the sample, changes in the local water content distribution were evaluated quantitatively by subtracting the NT image of the dry sample from subsequent tomography images. As a next stage, the experiment was repeated on a composed sample packed of ceramic and coarse sand. A series of infiltration runs was conducted in the sample with different initial water contents. The neutron tomography data quantitatively showed that, both in the natural soil sample containing the macropore and in the composed sample, air was gradually transported from the region of fine soil matrix to the macropores or to the coarser material. The accumulation of air bubbles in the large pores affected the hydraulic conductivity of the sample, reducing it to as little as 50% of the initial value. This supports the hypothesis of a strong influence of the amount and spatial distribution of entrapped air on infiltration into heterogeneous soils. The research was supported by the Czech Science Foundation Project No. 14-03691S.

  3. Correlation between structure and compressive strength in a reticulated glass-reinforced hydroxyapatite foam.

    PubMed

    Callcut, S; Knowles, J C

    2002-05-01

    Glass-reinforced hydroxyapatite (HA) foams were produced by reticulated foam technology, using a polyurethane template with two different pore size distributions. The mechanical properties were evaluated and the structure analyzed through density measurements, image analysis, X-ray diffraction (XRD) and scanning electron microscopy (SEM). For the mechanical properties, the use of a glass significantly improved the ultimate compressive strength (UCS), as did the use of a second coating. All the samples tested showed the classic three regions characteristic of an elastic brittle foam. From the density measurements, after application of a correction to compensate for the closed porosity, the bulk and apparent density showed a 1 : 1 correlation. When relative bulk density was plotted against UCS, a non-linear relationship was found, characteristic of an isotropic open-celled material. It was found by image analysis that the pore size distribution did not change and there was no degradation of the macrostructure when replicating the ceramic from the initial polyurethane template during processing. However, the pore size distributions did shift to a lower size by about 0.5 mm due to the firing process. The ceramic foams were found to exhibit mechanical properties typical of isotropic open cellular foams.

  4. Role of Pb for Ag growth on H-passivated Si(1 0 0) surfaces

    NASA Astrophysics Data System (ADS)

    Mathew, S.; Satpati, B.; Joseph, B.; Dev, B. N.

    2005-08-01

    We have deposited Ag on hydrogen passivated Si(1 0 0) surfaces under high vacuum conditions at room temperature. The deposition, followed by annealing at 250 °C for 30 min, produced silver islands of an average lateral size 36±14 nm. Depositing a small amount of Pb prior to Ag deposition reduced the average island size to 14±5 nm. A small amount of Pb, initially present at the Ag-Si interface, is found to be segregating to the surface of Ag after annealing. Both these aspects, namely, reduction of the island size and Pb floating on the Ag surface conform to the surfactant action of Pb. Samples have been characterized by transmission electron microscopy (TEM) and Rutherford backscattering spectroscopy (RBS). A selective etching process that preferentially removes Pb, in conjunction with RBS, was used to detect surface segregation of Pb involving depth scales below the resolution of conventional RBS. The annealing and etching process leaves only smaller Ag islands on the surface with complete removal of Pb. Ag growth in the presence of Pb leads to smaller Ag islands with a narrower size distribution.

  5. Compression-induced stacking fault tetrahedra around He bubbles in Al

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shao, Jian-Li, E-mail: shao-jianli@iapcm.ac.cn; Wang, Pei; He, An-Min

    Classical molecular dynamics methods are used to simulate the uniform compression of fcc Al containing He bubbles. The formation of stacking fault tetrahedra (SFTs) during the collapse of He bubbles is found, and their dependence on the initial He bubble size (0.6–6 nm in diameter) is presented. Our simulations indicate only elastic deformation in the samples for He bubble sizes of no more than 2 nm. With increasing He bubble size, we detect several small SFTs forming on the surface of the He bubble (3 nm), as well as two intercrossed SFTs around the He bubbles (4–6 nm). All these SFTs are observed to be stable under further compression, though some SF networks may appear outside the SFTs (5–6 nm). Furthermore, dynamic analysis of the SFTs shows that the yield pressure increases near-linearly with the initial He bubble pressure, and the potential energy of Al atoms inside the SFTs is lower than outside because of their inward gliding. In addition, the pressure increments of 2–6 nm He bubbles with strain are less than that of Al, which provides the opportunity for He bubble collapse and SFT formation. Note that the current work only considers the case in which the number ratio between He atoms and Al vacancies is 1:1.

  6. Probing the end of reionization with the near zones of z ≳ 6 QSOs

    NASA Astrophysics Data System (ADS)

    Keating, Laura C.; Haehnelt, Martin G.; Cantalupo, Sebastiano; Puchwein, Ewald

    2015-11-01

    QSO near zones are an important probe of the ionization state of the intergalactic medium (IGM) at z ~ 6-7, at the end of reionization. We present here high-resolution cosmological 3D radiative transfer simulations of QSO environments for a wide range of host halo masses, 10^10-10^12.5 M⊙. Our simulated near zones reproduce both the overall decrease of observed near-zone sizes at 6 < z < 7 and their scatter. The observable near-zone properties in our simulations depend only very weakly on the mass of the host halo. The size of the H II region expanding into the IGM is generally limited by (super-)Lyman Limit systems loosely associated with (low-mass) dark matter haloes. This leads to a strong dependence of near-zone size on direction and drives the large observed scatter. In the simulation centred on our most massive host halo, many sightlines show strong red damping wings even for initial volume averaged neutral hydrogen fractions as low as ~10^-3. For QSO lifetimes long enough to allow growth of the central supermassive black hole while optically bright, we can reproduce the observed near zone of ULAS J1120+0641 only with an IGM that is initially neutral. Our results suggest that larger samples of z > 7 QSOs will provide important constraints on the evolution of the neutral hydrogen fraction and thus on how late reionization ends.

  7. Biostatistics Series Module 5: Determining Sample Size

    PubMed Central

    Hazra, Avijit; Gogtay, Nithya

    2016-01-01

    Determining the appropriate sample size for a study, whatever its type, is a fundamental aspect of biomedical research. An adequate sample ensures that the study will yield reliable information, regardless of whether the data ultimately suggests a clinically important difference between the interventions or elements being studied. The probability of Type 1 and Type 2 errors, the expected variance in the sample and the effect size are the essential determinants of sample size in interventional studies. Any method for deriving a conclusion from experimental data carries with it some risk of drawing a false conclusion. Two types of false conclusion may occur, called Type 1 and Type 2 errors, whose probabilities are denoted by the symbols α and β. A Type 1 error occurs when one concludes that a difference exists between the groups being compared when, in reality, it does not. This is akin to a false positive result. A Type 2 error occurs when one concludes that a difference does not exist when, in reality, a difference does exist, and it is equal to or larger than the effect size defined by the alternative to the null hypothesis. This may be viewed as a false negative result. When considering the risk of Type 2 error, it is more intuitive to think in terms of power of the study or (1 − β). Power denotes the probability of detecting a difference when a difference does exist between the groups being compared. Smaller α or larger power will increase sample size. Conventional acceptable values for power and α are 80% or above and 5% or below, respectively, when calculating sample size. Increasing variance in the sample tends to increase the sample size required to achieve a given power level. The effect size is the smallest clinically important difference that is sought to be detected and, rather than statistical convention, is a matter of past experience and clinical judgment. Larger samples are required if smaller differences are to be detected. Although the principles are long known, historically, sample size determination has been difficult, because of relatively complex mathematical considerations and numerous different formulas. However, of late, there has been remarkable improvement in the availability, capability, and user-friendliness of power and sample size determination software. Many can execute routines for determination of sample size and power for a wide variety of research designs and statistical tests. With the drudgery of mathematical calculation gone, researchers must now concentrate on determining appropriate sample size and achieving these targets, so that study conclusions can be accepted as meaningful. PMID:27688437
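
    For the common case of comparing two group means, the determinants listed above combine into the familiar normal-approximation formula n per group = 2(z(1-α/2) + z(1-β))²σ²/δ²; a minimal sketch with made-up blood-pressure numbers follows.

```python
from scipy.stats import norm

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for comparing two means."""
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided Type 1 error
    z_beta = norm.ppf(power)            # power = 1 - beta
    return 2 * ((z_alpha + z_beta) * sigma / delta) ** 2

# Hypothetical example: detect a 5 mmHg mean difference with SD 10 mmHg.
print(n_per_group(delta=5, sigma=10))   # ~63 patients per group
```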

  8. Sample size and power for cost-effectiveness analysis (part 1).

    PubMed

    Glick, Henry A

    2011-03-01

    Basic sample size and power formulae for cost-effectiveness analysis have been established in the literature. These formulae are reviewed and the similarities and differences between sample size and power for cost-effectiveness analysis and for the analysis of other continuous variables such as changes in blood pressure or weight are described. The types of sample size and power tables that are commonly calculated for cost-effectiveness analysis are also described and the impact of varying the assumed parameter values on the resulting sample size and power estimates is discussed. Finally, the way in which the data for these calculations may be derived are discussed.

  9. Estimation of sample size and testing power (Part 4).

    PubMed

    Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo

    2012-01-01

    Sample size estimation is necessary for any experimental or survey research. An appropriate estimation of sample size based on known information and statistical knowledge is of great significance. This article introduces methods of sample size estimation of difference test for data with the design of one factor with two levels, including sample size estimation formulas and realization based on the formulas and the POWER procedure of SAS software for quantitative data and qualitative data with the design of one factor with two levels. In addition, this article presents examples for analysis, which will play a leading role for researchers to implement the repetition principle during the research design phase.

  10. [Formal sample size calculation and its limited validity in animal studies of medical basic research].

    PubMed

    Mayer, B; Muche, R

    2013-01-01

    Animal studies are highly relevant for basic medical research, although their usage is discussed controversially in public. Thus, an optimal sample size for these projects should be aimed at from a biometrical point of view. Statistical sample size calculation is usually the appropriate methodology in planning medical research projects. However, required information is often not valid or only available during the course of an animal experiment. This article critically discusses the validity of formal sample size calculation for animal studies. Within the discussion, some requirements are formulated to fundamentally regulate the process of sample size determination for animal experiments.

  11. Effects of cadmium on growth, metamorphosis and gonadal sex differentiation in tadpoles of the African clawed frog, Xenopus laevis

    USGS Publications Warehouse

    Sharma, Bibek; Patino, R.

    2009-01-01

    Xenopus laevis larvae were exposed to cadmium (Cd) at 0, 1, 8, 85 or 860 μg L-1 in FETAX medium from 0 to 86 d postfertilization. Premetamorphic tadpoles were sampled on day 31; pre and prometamorphic tadpoles on day 49; and frogs (NF stage 66) between days 50 and 86. Survival, snout-vent length (SVL), tail length, total length, hindlimb length (HLL), initiation of metamorphic climax, size at and completion of metamorphosis, and gonadal condition and sex ratio (assessed histologically) were determined. Survival was unaffected by Cd until day 49, but increased mortality was observed after day 49 at 860 μg Cd L-1. On day 31, when tadpoles were in early premetamorphosis, inhibitory effects on tadpole growth were observed only at 860 μg Cd L-1. On day 49, when most tadpoles were in late premetamorphosis/early prometamorphosis, reductions in SVL, HLL and total length were observed at 8 and 860 but not 85 μg L-1, thus creating a U-shaped size distribution at 0-85 μg Cd L-1. However, this U-shaped size pattern was not evident in postmetamorphic individuals. In fact, frog size at completion of metamorphosis was slightly smaller at 85 μg Cd L-1 relative to control animals. These observations confirmed a recent report of a Cd concentration-dependent bimodal growth pattern in late-premetamorphic Xenopus tadpoles, but also showed that growth responses to varying Cd concentrations change with development. The fraction of animals initiating or completing metamorphosis during days 50-86 was reduced in a Cd concentration-dependent manner. Testicular histology and population sex ratios were unaffected by Cd suggesting that, unlike mammals, Cd is not strongly estrogenic in Xenopus tadpoles. © 2009 Elsevier Ltd.

  12. Effects of cadmium on growth, metamorphosis and gonadal sex differentiation in tadpoles of the African clawed frog, Xenopus laevis

    USGS Publications Warehouse

    Sharma, Bibek; Patino, Reynaldo

    2009-01-01

    Xenopus laevis larvae were exposed to cadmium (Cd) at 0, 1, 8, 85 or 860 mu g L(-1) in FETAX medium from 0 to 86 d postfertilization. Premetamorphic tadpoles were sampled on day 31; pre and prometamorphic tadpoles on day 49; and frogs (NF stage 66) between days 50 and 86. Survival, snout-vent length (SVL), tail length, total length, hindlimb length (HLL), initiation of metamorphic climax, size at and completion of metamorphosis, and gonadal condition and sex ratio (assessed histologically) were determined. Survival was unaffected by Cd until day 49, but increased mortality was observed after day 49 at 860 mu g Cd L(-1). On day 31, when tadpoles were in early premetamorphosis, inhibitory effects on tadpole growth were observed only at 860 mu g Cd L(-1). On day 49, when most tadpoles were in late premetamorphosis/early prometamorphosis, reductions in SVL, HLL and total length were observed at 8 and 860 but not 85 mu g L(-1), thus creating a U-shaped size distribution at 0-85 mu g Cd L(-1). However, this U-shaped size pattern was not evident in postmetamorphic individuals. In fact, frog size at completion of metamorphosis was slightly smaller at 85 mu g Cd L(-1) relative to control animals. These observations confirmed a recent report of a Cd concentration-dependent bimodal growth pattern in late-premetamorphic Xenopus tadpoles, but also showed that growth responses to varying Cd concentrations change with development. The fraction of animals initiating or completing metamorphosis during days 50-86 was reduced in a Cd concentration-dependent manner. Testicular histology and population sex ratios were unaffected by Cd suggesting that, unlike mammals, Cd is not strongly estrogenic in Xenopus tadpoles.

  13. Sample Size Determination for One- and Two-Sample Trimmed Mean Tests

    ERIC Educational Resources Information Center

    Luh, Wei-Ming; Olejnik, Stephen; Guo, Jiin-Huarng

    2008-01-01

    Formulas to determine the necessary sample sizes for parametric tests of group comparisons are available from several sources and appropriate when population distributions are normal. However, in the context of nonnormal population distributions, researchers recommend Yuen's trimmed mean test, but formulas to determine sample sizes have not been…

  14. The influence of pore textures on the permeability of volcanic rocks

    NASA Astrophysics Data System (ADS)

    Mueller, S.; Spieler, O.; Scheu, B.; Dingwell, D.

    2006-12-01

    The permeability of a porous medium is strongly dependent on its porosity, as a higher proportion of pore volume is generally expected to lead to a greater probability of pore interconnectedness and the formation of a fluid-flow providing pathway. However, the relationship between permeability and porosity is not a unique one, as many other textural parameters may play an important role and substantially affect gas flow properties. Among these parameters are (a) the connection geometry (i.e. intergranular pore spaces in clastic sediments vs. bubble interconnections), (b) the pore sizes, (c) pore shape and (d) pore size distribution. The gas permeability of volcanic rocks may influence various eruptive processes. The transition from a quiescent degassing dome to rock failure (fragmentation) may, for example, be controlled by the rock's permeability, in as much as it affects the speed by which a gas overpressure in vesicles is reduced in response to decompression. It is therefore essential to understand and quantify influences of different pore textures on the degassing properties of volcanic rocks, as well as investigate the effects of permeability on eruptive processes. Using a modified shock-tube-based fragmentation apparatus, we have measured unsteady-state permeability at a high initial pressure differential. Following sudden decompression above the rock cylinder, pressurized gas flows through the sample in a steel autoclave. A transient 1D filtration code has been developed to calculate permeability using the experimental pressure decay curve within a defined volume below the sample. An external furnace around the autoclave and the use of compressed salt as sealant allows also measurements at high temperatures up to 800 °C. Over 130 permeability measurements have been performed on samples of different volcanic settings, covering a wide range of porosity. The results show a general positive relationship between porosity and permeability with a high data scatter. Analysis of the samples eruptive origin as well as the pore sizes, shapes and size distribution allow an estimation of the contribution of various textural effects to the overall permeability.

  15. Implementation of a quality improvement initiative in Belgian diabetic foot clinics: feasibility and initial results.

    PubMed

    Doggen, Kris; Van Acker, Kristien; Beele, Hilde; Dumont, Isabelle; Félix, Patricia; Lauwers, Patrick; Lavens, Astrid; Matricali, Giovanni A; Randon, Caren; Weber, Eric; Van Casteren, Viviane; Nobels, Frank

    2014-07-01

    This article aims to describe the implementation and initial results of an audit-feedback quality improvement initiative in Belgian diabetic foot clinics. Using self-developed software and questionnaires, diabetic foot clinics collected data in 2005, 2008 and 2011, covering characteristics, history and ulcer severity, management and outcome of the first 52 patients presenting with a Wagner grade ≥ 2 diabetic foot ulcer or acute neuropathic osteoarthropathy that year. Quality improvement was encouraged by meetings and by anonymous benchmarking of diabetic foot clinics. The first audit-feedback cycle was a pilot study. Subsequent audits, with a modified methodology, had increasing rates of participation and data completeness. Over 85% of diabetic foot clinics participated and 3372 unique patients were sampled between 2005 and 2011 (3312 with a diabetic foot ulcer and 111 with acute neuropathic osteoarthropathy). Median age was 70 years, median diabetes duration was 14 years and 64% were men. Of all diabetic foot ulcers, 51% were plantar and 29% were both ischaemic and deeply infected. Ulcer healing rate at 6 months significantly increased from 49% to 54% between 2008 and 2011. Management of diabetic foot ulcers varied between diabetic foot clinics: 88% of plantar mid-foot ulcers were off-loaded (P10-P90: 64-100%), and 42% of ischaemic limbs were revascularized (P10-P90: 22-69%) in 2011. A unique, nationwide quality improvement initiative was established among diabetic foot clinics, covering ulcer healing, lower limb amputation and many other aspects of diabetic foot care. Data completeness increased, thanks in part to questionnaire revision. Benchmarking remains challenging, given the many possible indicators and limited sample size. The optimized questionnaire allows future quality of care monitoring in diabetic foot clinics. Copyright © 2014 John Wiley & Sons, Ltd.

  16. Composite grain size sensitive and grain size insensitive creep of bischofite, carnallite and mixed bischofite-carnallite-halite salt rock

    NASA Astrophysics Data System (ADS)

    Muhammad, Nawaz; de Bresser, Hans; Peach, Colin; Spiers, Chris

    2016-04-01

    Deformation experiments have been conducted on rock samples of the valuable magnesium and potassium salts bischofite and carnallite, and on mixed bischofite-carnallite-halite rocks. The samples have been machined from a natural core from the northern part of the Netherlands. The main aim was to produce constitutive flow laws that can be applied at the in situ conditions that hold in the undissolved wall rock of caverns resulting from solution mining. The experiments were triaxial compression tests carried out at true in situ conditions of 70 °C temperature and 40 MPa confining pressure. A typical experiment consisted of a few steps at constant strain rate, in the range 10^-5 to 10^-8 s^-1, interrupted by periods of stress relaxation. During the constant strain rate part of the test, the sample was deformed until a steady (or near steady) state of stress was reached. This usually required about 2-4% of shortening. Then the piston was arrested and the stress on the sample was allowed to relax until the diminishing force on the sample reached the limits of the load cell resolution, usually at a strain rate in the order of 10^-9 s^-1. The duration of each relaxation step was a few days. Carnallite was found to be 4-5 times stronger than bischofite. The bischofite-carnallite-halite mixtures, in their turn, were stronger than carnallite, and hence substantially stronger than pure bischofite. For bischofite as well as carnallite, we observed that during stress relaxation the stress exponent n of a conventional power law changed from ~5 at a strain rate of 10^-5 s^-1 to ~1 at 10^-9 s^-1. The absolute strength of both materials remained higher if relaxation started at a higher stress, i.e. at a faster strain rate. We interpret this as indicating a difference in microstructure at the initiation of the relaxation, notably a smaller grain size related to dynamic recrystallization during the constant strain rate step. The data thus suggest that there is a gradual change in deformation mechanism with decreasing strain rate for both bischofite and carnallite, from grain size insensitive (GSI) dislocation creep at the higher strain rates to grain size sensitive (GSS, i.e. pressure solution) creep at slow strain rates. We can speculate about the composite GSI-GSS nature of the constitutive laws describing the creep of the salt materials.

  17. Spatio-temporal Evolution of Velocity Structure, Concentration and Grain-size Stratification within Experimental Particulate Gravity Flows: Potential Input Parameters for Numerical Models

    NASA Astrophysics Data System (ADS)

    McCaffrey, W.; Choux, C.; Baas, J.; Haughton, P.

    2001-12-01

    Little is known about the combined spatio-temporal evolution of velocity structure, concentration and grain size stratification within particulate gravity currents. Yet these data are of primary importance for numerical model validation, prior to application to natural flows such as pyroclastic density currents and turbidity currents. A comprehensive study was carried out on a series of experimental particulate gravity flows of 5% by volume initial concentration. The sediment analogue was polydisperse silica flour (mean grain size ~8 microns). A uniform 30 liter suspension was prepared in an overhead reservoir, then allowed to drain (in about one minute) into a flume 10 m long and 0.3 m wide, water-filled to a depth of 0.3 m. Each flow was siphoned continuously for 52 s at 5 different heights (spaced evenly from 0.6 to 4.6 cm) with samples collected at a frequency of 0.25 Hz, generating 325 samples for grain-size and concentration analysis. Simultaneously, six 4-MHz UDVP (Ultrasonic Doppler Velocity Profiling) probes recorded the horizontal component of flow velocity. All but the highest probe were positioned at the same height as the siphons. The sampling location was shifted 1.32 m down-current for each of five nominally identical flows, yielding sample locations at 1.32, 2.64, 3.96, 5.28 and 6.60 m from the inlet point. These data can be combined to give both the temporal and spatial evolution of a single idealised flow. The concentration data can be used to define the structure of the flow. The flow first propagated as a jet, then became stratified. The length of the head increased with increasing distance from the reservoir (although the head propagation velocity was uniform). The maximum concentration was located at the base of the flow towards the rear of the head. Grain-size analysis showed that the head was enriched in coarse particles even at the most distal sampling location. Distinct flow stratification developed at a distance between 1.3 m and 2.6 m from the reservoir. In the body of the current, the suspended sediment was normally graded, whereas the tail exhibited inverse grading. This inverse grading may be linked to coarse particles in the head being swept upwards and backwards, then falling back into the body of the current. Alternatively, body turbulence may inhibit the settling of coarse particles. Turbulence may also explain the presence of coarse particles in the flow's head, with turbulence intensity apparently correlated with the flow competence.

  18. The cost of large numbers of hypothesis tests on power, effect size and sample size.

    PubMed

    Lazzeroni, L C; Ray, A

    2012-01-01

    Advances in high-throughput biology and computer science are driving an exponential increase in the number of hypothesis tests in genomics and other scientific disciplines. Studies using current genotyping platforms frequently include a million or more tests. In addition to the monetary cost, this increase imposes a statistical cost owing to the multiple testing corrections needed to avoid large numbers of false-positive results. To safeguard against the resulting loss of power, some have suggested sample sizes on the order of tens of thousands that can be impractical for many diseases or may lower the quality of phenotypic measurements. This study examines the relationship between the number of tests on the one hand and power, detectable effect size or required sample size on the other. We show that once the number of tests is large, power can be maintained at a constant level, with comparatively small increases in the effect size or sample size. For example at the 0.05 significance level, a 13% increase in sample size is needed to maintain 80% power for ten million tests compared with one million tests, whereas a 70% increase in sample size is needed for 10 tests compared with a single test. Relative costs are less when measured by increases in the detectable effect size. We provide an interactive Excel calculator to compute power, effect size or sample size when comparing study designs or genome platforms involving different numbers of hypothesis tests. The results are reassuring in an era of extreme multiple testing.
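
    Under a simple z-test approximation with Bonferroni correction, the quoted ratios can be reproduced directly; the sketch below is a generic back-of-the-envelope check of that scaling, not the authors' calculator.

```python
from scipy.stats import norm

def relative_n(m_tests, alpha=0.05, power=0.80):
    """Sample size needed for m Bonferroni-corrected z-tests, relative to one test,
    at a fixed effect size (normal approximation)."""
    z_beta = norm.ppf(power)
    z_one = norm.ppf(1 - alpha / 2)               # single test
    z_m = norm.ppf(1 - alpha / (2 * m_tests))     # m tests, Bonferroni-corrected
    return ((z_m + z_beta) / (z_one + z_beta)) ** 2

print(relative_n(10))                        # ~1.7: ~70% larger n for 10 tests vs 1
print(relative_n(1e7) / relative_n(1e6))     # ~1.13: ~13% larger n for 10^7 vs 10^6 tests
```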

  19. Constraining particle size-dependent plume sedimentation from the 17 June 1996 eruption of Ruapehu Volcano, New Zealand, using geophysical inversions

    NASA Astrophysics Data System (ADS)

    Klawonn, M.; Frazer, L. N.; Wolfe, C. J.; Houghton, B. F.; Rosenberg, M. D.

    2014-03-01

    Weak subplinian-plinian plumes pose frequent hazards to populations and aviation, yet many key parameters of these particle-laden plumes are, to date, poorly constrained. This study recovers the particle size-dependent mass distribution along the trajectory of a well-constrained weak plume by inverting the dispersion process of tephra fallout. We use the example of the 17 June 1996 Ruapehu eruption in New Zealand and base our computations on mass per unit area tephra measurements and grain size distributions at 118 sample locations. Comparisons of particle fall times and time of sampling collection, as well as observations during the eruption, reveal that particles smaller than 250 μm likely settled as aggregates. For simplicity we assume that all of these fine particles fell as aggregates of constant size and density, whereas we assume that large particles fell as individual particles at their terminal velocity. Mass fallout along the plume trajectory follows distinct trends between larger particles (d≥250 μm) and the fine population (d<250 μm) that are likely due to the two different settling behaviors (aggregate settling versus single-particle settling). In addition, we computed the resulting particle size distribution within the weak plume along its axis and find that the particle mode shifts from an initial 1φ mode to a 2.5φ mode 10 km from the vent and is dominated by a 2.5 to 3φ mode 10-180 km from vent, where the plume reaches the coastline and we do not have further field constraints. The computed particle distributions inside the plume provide new constraints on the mass transport processes within weak plumes and improve previous models. The distinct decay trends between single-particle settling and aggregate settling may serve as a new tool to identify particle sizes that fell as aggregates for other eruptions.

  20. The Statistics and Mathematics of High Dimension Low Sample Size Asymptotics.

    PubMed

    Shen, Dan; Shen, Haipeng; Zhu, Hongtu; Marron, J S

    2016-10-01

    The aim of this paper is to establish several deep theoretical properties of principal component analysis for multiple-component spike covariance models. Our new results reveal an asymptotic conical structure in critical sample eigendirections under the spike models with distinguishable (or indistinguishable) eigenvalues, when the sample size and/or the number of variables (or dimension) tend to infinity. The consistency of the sample eigenvectors relative to their population counterparts is determined by the ratio between the dimension and the product of the sample size with the spike size. When this ratio converges to a nonzero constant, the sample eigenvector converges to a cone, with a certain angle to its corresponding population eigenvector. In the High Dimension, Low Sample Size case, the angle between the sample eigenvector and its population counterpart converges to a limiting distribution. Several generalizations of the multi-spike covariance models are also explored, and additional theoretical results are presented.

  1. Influence of item distribution pattern and abundance on efficiency of benthic core sampling

    USGS Publications Warehouse

    Behney, Adam C.; O'Shaughnessy, Ryan; Eichholz, Michael W.; Stafford, Joshua D.

    2014-01-01

    Core sampling is a commonly used method to estimate benthic item density, but little information exists about factors influencing the accuracy and time-efficiency of this method. We simulated core sampling in a Geographic Information System framework by generating points (benthic items) and polygons (core samplers) to assess how sample size (number of core samples), core sampler size (cm2), distribution of benthic items, and item density affected the bias and precision of estimates of density, the detection probability of items, and the time-costs. When items were distributed randomly versus clumped, bias decreased and precision increased with increasing sample size and increased slightly with increasing core sampler size. Bias and precision were only affected by benthic item density at very low values (500–1,000 items/m2). Detection probability (the probability of capturing ≥ 1 item in a core sample if it is available for sampling) was substantially greater when items were distributed randomly as opposed to clumped. Taking more small diameter core samples was always more time-efficient than taking fewer large diameter samples. We are unable to present a single, optimal sample size, but provide information for researchers and managers to derive optimal sample sizes dependent on their research goals and environmental conditions.
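
    A stripped-down version of this kind of simulation (illustrative plot size, core area and densities, not the GIS workflow of the study) scatters items either randomly or in clumps, throws circular core samplers at the plot, and summarizes the bias and spread of the resulting density estimates.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_cores(n_cores, core_area_cm2, density_per_m2,
                   plot_side_m=5.0, clumped=False, n_rep=500):
    """Monte Carlo bias/precision of benthic density estimates from circular cores."""
    core_radius = np.sqrt(core_area_cm2 / 1e4 / np.pi)           # core radius in metres
    n_items = int(density_per_m2 * plot_side_m ** 2)
    estimates = np.empty(n_rep)
    for r in range(n_rep):
        if clumped:
            centres = rng.uniform(0, plot_side_m, size=(10, 2))  # 10 clumps
            pts = centres[rng.integers(0, 10, n_items)] + rng.normal(0, 0.2, (n_items, 2))
            pts %= plot_side_m                                   # wrap to stay inside the plot
        else:
            pts = rng.uniform(0, plot_side_m, size=(n_items, 2))
        cores = rng.uniform(0, plot_side_m, size=(n_cores, 2))
        d2 = ((pts[None, :, :] - cores[:, None, :]) ** 2).sum(axis=-1)
        counts = (d2 <= core_radius ** 2).sum(axis=1)            # items caught per core
        estimates[r] = counts.mean() / (np.pi * core_radius ** 2)
    return estimates.mean() - density_per_m2, estimates.std()    # bias, precision (SD)

for clumped in (False, True):
    for n in (5, 10, 20, 40):
        bias, sd = simulate_cores(n, core_area_cm2=50, density_per_m2=1000, clumped=clumped)
        print(f"clumped={clumped!s:5s}  cores={n:2d}  bias={bias:8.1f}  SD={sd:7.1f}")
```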

  2. SU-G-TeP3-14: Three-Dimensional Cluster Model in Inhomogeneous Dose Distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wei, J; Penagaricano, J; Narayanasamy, G

    2016-06-15

    Purpose: We aim to investigate 3D cluster formation in inhomogeneous dose distributions to search for new models predicting radiation tissue damage, potentially leading to a new optimization paradigm for radiotherapy planning. Methods: The aggregation of voxels in the organ at risk (OAR) receiving dose higher than a preset threshold was chosen as the cluster, whose connectivity dictates the cluster structure. Upon selection of the dose threshold, the fractional density, defined as the fraction of voxels in the organ eligible to be part of the cluster, was determined from the dose volume histogram (DVH). A Monte Carlo method was implemented to establish a case pertinent to the corresponding DVH. Ones and zeros were randomly assigned to each OAR voxel with the sampling probability equal to the fractional density. Ten thousand samples were randomly generated to ensure a sufficient number of cluster sets. A recursive cluster searching algorithm was developed to analyze the cluster with various connectivity choices, such as 1-, 2-, and 3-connectivity. The mean size of the largest cluster (MSLC) from the Monte Carlo samples was taken to be a function of the fractional density. Various OARs from clinical plans were included in the study. Results: The intensive Monte Carlo study demonstrates the anticipated inverse relationship between the MSLC and the cluster connectivity, and the cluster size does not change linearly with fractional density regardless of the connectivity type. A transition of the MSLC from an initially slow increase to exponential growth was observed as the fractional density increased from low to high. The cluster sizes were found to vary within a large range and to be relatively independent of the OARs. Conclusion: The Monte Carlo study revealed that the cluster size could serve as a suitable index of tissue damage (percolation cluster) and that the clinical outcome of the same DVH might potentially differ.
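
    The cluster search itself can be prototyped with off-the-shelf connected-component labelling; the sketch below (arbitrary grid size and densities, not the clinical OAR data) switches voxels on with probability equal to the fractional density and reports the mean size of the largest cluster for the strictest and loosest 3D connectivity.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(7)

def mean_largest_cluster(fractional_density, shape=(30, 30, 30), n_samples=200,
                         connectivity=1):
    """Mean size of the largest connected cluster of 'hot' voxels when each voxel
    is switched on independently with probability = fractional_density."""
    # connectivity 1 -> 6-neighbour (faces only); 3 -> 26-neighbour (faces, edges, corners)
    structure = ndimage.generate_binary_structure(3, connectivity)
    sizes = np.empty(n_samples)
    for i in range(n_samples):
        voxels = rng.random(shape) < fractional_density
        labels, n_lab = ndimage.label(voxels, structure=structure)
        sizes[i] = 0 if n_lab == 0 else np.bincount(labels.ravel())[1:].max()
    return sizes.mean()

for rho in (0.05, 0.15, 0.25, 0.35):
    print(rho,
          mean_largest_cluster(rho, connectivity=1),   # strictest connectivity
          mean_largest_cluster(rho, connectivity=3))   # loosest connectivity
```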

  3. Moment and maximum likelihood estimators for Weibull distributions under length- and area-biased sampling

    Treesearch

    Jeffrey H. Gove

    2003-01-01

    Many of the most popular sampling schemes used in forestry are probability proportional to size methods. These methods are also referred to as size biased because sampling is actually from a weighted form of the underlying population distribution. Length- and area-biased sampling are special cases of size-biased sampling where the probability weighting comes from a...

  4. Optimal spatial sampling techniques for ground truth data in microwave remote sensing of soil moisture

    NASA Technical Reports Server (NTRS)

    Rao, R. G. S.; Ulaby, F. T.

    1977-01-01

    The paper examines optimal sampling techniques for obtaining accurate spatial averages of soil moisture, at various depths and for cell sizes in the range 2.5-40 acres, with a minimum number of samples. Both simple random sampling and stratified sampling procedures are used to reach a set of recommended sample sizes for each depth and for each cell size. Major conclusions from statistical sampling test results are that (1) the number of samples required decreases with increasing depth; (2) when the total number of samples cannot be prespecified or the moisture in only one single layer is of interest, then a simple random sample procedure should be used which is based on the observed mean and SD for data from a single field; (3) when the total number of samples can be prespecified and the objective is to measure the soil moisture profile with depth, then stratified random sampling based on optimal allocation should be used; and (4) decreasing the sensor resolution cell size leads to fairly large decreases in sample sizes with stratified sampling procedures, whereas only a moderate decrease is obtained in simple random sampling procedures.
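
    The two designs compared in conclusions (2) and (3) can be illustrated with the standard sample size formulas; the soil moisture standard deviations, stratum weights and margin of error below are invented numbers, not values from the paper.

      import numpy as np
      from scipy.stats import norm

      def n_simple_random(sd, margin, conf=0.95):
          """Sample size for estimating a mean by simple random sampling (no finite-population correction)."""
          z = norm.ppf(0.5 + conf / 2)
          return int(np.ceil((z * sd / margin) ** 2))

      def n_stratified_neyman(weights, sds, margin, conf=0.95):
          """Total n and per-stratum allocation under Neyman (optimal) allocation."""
          z = norm.ppf(0.5 + conf / 2)
          weights, sds = np.asarray(weights), np.asarray(sds)
          n_total = int(np.ceil((z * np.sum(weights * sds) / margin) ** 2))
          alloc = np.ceil(n_total * weights * sds / np.sum(weights * sds)).astype(int)
          return n_total, alloc

      # Three strata covering 50/30/20 % of a cell with moisture SDs of 4, 6 and 9 %,
      # and a target margin of error of 1.5 % moisture:
      print(n_simple_random(sd=6.0, margin=1.5))
      print(n_stratified_neyman([0.5, 0.3, 0.2], [4.0, 6.0, 9.0], margin=1.5))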

  5. A rate-based transcutaneous CO2 sensor for noninvasive respiration monitoring.

    PubMed

    Chatterjee, M; Ge, X; Kostov, Y; Luu, P; Tolosa, L; Woo, H; Viscardi, R; Falk, S; Potts, R; Rao, G

    2015-05-01

    The pain and risk of infection associated with invasive blood sampling for blood gas measurements necessitate the search for reliable noninvasive techniques. In this work we developed a novel rate-based noninvasive method for a safe and fast assessment of respiratory status. A small sampler was built to collect the gases diffusing out of the skin. It was connected to a CO2 sensor through gas-impermeable tubing. During a measurement, the CO2 initially present in the sampler was first removed by purging it with nitrogen. The gases in the system were then recirculated between the sampler and the CO2 sensor, and the CO2 diffusion rate into the sampler was measured. Because the measurement is based on the initial transcutaneous diffusion rate, reaching mass transfer equilibrium and heating the skin are no longer required, thus making it much faster and safer than the traditional method. A series of designed experiments was performed to analyze the effect of measurement parameters such as sampler size, measurement location, subject position, and movement. After the factor analysis tests, the prototype was sent to a level IV NICU for a clinical trial. The results show that the measured initial rate of increase in CO2 partial pressure is linearly correlated with the corresponding arterial blood gas measurements. The new approach can be used as a trending tool, making frequent blood sampling unnecessary for respiratory status monitoring.
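
    The rate-based idea lends itself to a very small data-processing sketch: fit the initial slope of the recirculated-gas CO2 signal, then calibrate it linearly against paired arterial values. The window length, signal model and calibration numbers below are assumptions for illustration only, not device specifications.

      import numpy as np

      def initial_co2_rate(t_s, pco2_mmHg, window_s=60.0):
          """Slope (mmHg/s) of a straight-line fit over the first window_s seconds after purging."""
          mask = t_s <= window_s
          slope, _intercept = np.polyfit(t_s[mask], pco2_mmHg[mask], 1)
          return slope

      # Synthetic sensor trace: linear rise plus noise
      t = np.linspace(0.0, 120.0, 121)
      pco2 = 0.03 * t + np.random.default_rng(0).normal(0.0, 0.05, t.size)
      print(initial_co2_rate(t, pco2))                       # ~0.03 mmHg/s

      # Calibration: regress arterial pCO2 on initial rates from paired measurements
      rates = np.array([0.021, 0.034, 0.046, 0.058])         # hypothetical sensor rates (mmHg/s)
      arterial = np.array([32.0, 41.0, 55.0, 63.0])          # paired blood-gas pCO2 (mmHg)
      gain, offset = np.polyfit(rates, arterial, 1)
      print(f"pCO2 ≈ {gain:.1f} * rate + {offset:.1f}")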

  6. Determination on Damage Mechanism of the Planet Gear of Heavy Vehicle Final Drive

    NASA Astrophysics Data System (ADS)

    Ramdan, RD; Setiawan, R.; Sasmita, F.; Suratman, R.; Taufiqulloh

    2018-02-01

    The works focus on the investigation of damage mechanism of fractured in the form of spalling of the planet gears from the final drive assembly of 160-ton heavy vehicles. The objective of this work is to clearly understand the mechanism of damage. The work is the first stage of the on-going research on the remaining life estimation of such gears. The understanding of the damage mechanism is critical in order to provide accurate estimate of the gear’s remaining life with observed initial damage. The analysis was performed based on the metallurgy laboratory works, including visual observation, macro-micro fractography by optical stereo and optical microscope and micro-vickers hardness test. From visual observation it was observed pitting that form lining defect at common position, which is at gear flank position. From spalling sample it was observed ratchet mark at the boundary between macro pitting and the edge of fractured parts. Further observation on the cross-section of the samples by optical microscope confirm that initial micro pitting occur without spalling of the case hardened surface. Spalling occur when pitting achieve certain critical size, and occur at multiple initiation site of crack propagation. From the present research it was concluded that pitting was resulted due to repeated contact fatigue. In addition, development of micro to macro pitting as well as spalling occur at certain direction towards the top of the gear teeth.

  7. Effects of sample size on estimates of population growth rates calculated with matrix models.

    PubMed

    Fiske, Ian J; Bruna, Emilio M; Bolker, Benjamin M

    2008-08-28

    Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda: Jensen's inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated if sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more-realistic inverse J-shaped population structure exacerbated this bias. However, our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities.
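
    The sampling-variance effect described above can be reproduced with a toy simulation: draw estimated vital rates for increasing sample sizes, build a small stage-structured matrix and compare its dominant eigenvalue with the true lambda. The two-stage matrix and parameter values are illustrative assumptions, not the study's demographic data.

      import numpy as np

      rng = np.random.default_rng(2)
      true_surv = np.array([0.5, 0.8])        # juvenile and adult survival
      true_fert = 1.2                         # adult fertility

      def lam(surv, fert):
          """Dominant eigenvalue of a simple two-stage projection matrix."""
          A = np.array([[0.0, fert],
                        [surv[0], surv[1]]])
          return np.max(np.real(np.linalg.eigvals(A)))

      true_lambda = lam(true_surv, true_fert)

      for n in (10, 25, 50, 100, 500):
          lams = []
          for _ in range(2000):
              s_hat = rng.binomial(n, true_surv) / n            # estimated survival rates
              f_hat = rng.poisson(true_fert, size=n).mean()     # estimated fertility
              lams.append(lam(s_hat, f_hat))
          print(n, np.mean(lams) - true_lambda)                 # bias in lambda at sample size n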

  8. Results of Characterization and Retrieval Testing on Tank 241-C-109 Heel Solids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Callaway, William S.

    Eight samples of heel solids from tank 241-C-109 were delivered to the 222-S Laboratory for characterization and dissolution testing. After being drained thoroughly, one-half to two-thirds of the solids were off-white to tan solids that, visually, were fairly evenly graded in size from coarse silt (30-60 μm) to medium pebbles (8-16 mm). The remaining solids were mostly strongly cemented aggregates ranging from coarse pebbles (16-32 mm) to fine cobbles (6-15 cm) in size. Solid phase characterization and chemical analysis indicated that the air-dry heel solids contained ≈58 wt% gibbsite [Al(OH)₃] and ≈37 wt% natrophosphate [Na₇F(PO₄)₂·19H₂O]. The strongly cemented aggregates were mostly fine-grained gibbsite cemented with additional gibbsite. Dissolution testing was performed on two test samples. One set of tests was performed on large pieces of aggregate solids removed from the heel solids samples. The other set of dissolution tests was performed on a composite sample prepared from well-drained, air-dry heel solids that were crushed to pass a 1/4-in. sieve. The bulk density of the composite sample was 2.04 g/mL. The dissolution tests included water dissolution followed by caustic dissolution testing. In each step of the three-step water dissolution tests, a volume of water approximately equal to 3 times the initial volume of the test solids was added. In each step, the test samples were gently but thoroughly mixed for approximately 2 days at an average ambient temperature of 25 °C. The caustic dissolution tests began with the addition of sufficient 49.6 wt% NaOH to the water dissolution residues to provide ≈3.1 moles of OH for each mole of Al estimated to have been present in the starting composite sample and ≈2.6 moles of OH for each mole of Al potentially present in the starting aggregate sample. Metathesis of gibbsite to sodium aluminate was then allowed to proceed over 10 days of gentle mixing of the test samples at temperatures ranging from 26-30 °C. The metathesized sodium aluminate was then dissolved by addition of volumes of water approximately equal to 1.3 times the volumes of caustic added to the test slurries. Aluminate dissolution was allowed to proceed for 2 days at ambient temperatures of ≈29 °C. Overall, the sequential water and caustic dissolution tests dissolved and removed 80.0 wt% of the tank 241-C-109 crushed heel solids composite test sample. The 20 wt% of solids remaining after the dissolution tests were 85-88 wt% gibbsite. If the density of the residual solids was approximately equal to that of gibbsite, they represented ≈17 vol% of the initial crushed solids composite test sample. In the water dissolution tests, addition of a volume of water ≈6.9 times the initial volume of the crushed solids composite was sufficient to dissolve and recover essentially all of the natrophosphate present. The ratio of the weight of water required to dissolve the natrophosphate solids to the estimated weight of natrophosphate present was 8.51. The Environmental Simulation Program (OLI Systems, Inc., Morris Plains, New Jersey) predicts that an 8.36 w/w ratio would be required to dissolve the estimated weight of natrophosphate present in the absence of other components of the heel solids. Only minor amounts of Al-bearing solids were removed from the composite solids in the water dissolution tests.
The caustic metathesis/aluminate dissolution test sequence, executed at temperatures ranging from 27-30 °C, dissolved and recovered ≈69 wt% of the gibbsite estimated to have been present in the initial crushed heel solids composite. This level of gibbsite recovery is consistent with that measured in previous scoping tests on the dissolution of gibbsite in strong caustic solutions. Overall, the sequential water and caustic dissolution tests dissolved and removed 80.3 wt% of the tank 241-C-109 aggregate solids test sample. The residual solids were 92-95 wt% gibbsite. Only a minor portion (≈4.5 wt%) of the aggregate solids was dissolved and recovered in the water dissolution test. Other than some smoothing caused by continuous mixing, the aggregates were essentially unaffected by the water dissolution tests. During the caustic metathesis/aluminate dissolution test sequence, ≈81 wt% of the gibbsite estimated to have been present in the aggregate solids was dissolved and recovered. The pieces of aggregate were significantly reduced in size but persisted as distinct pieces of solids. The increased level of gibbsite recovery, as compared to that for the crushed heel solids composite, suggests that the way the gibbsite solids and caustic solution are mixed is a key determinant of the overall efficiency of gibbsite dissolution and recovery. The liquids recovered after the caustic dissolution tests on the crushed solids composite and the aggregate solids were observed for 170 days. No precipitation of gibbsite was observed. The distribution of particle sizes in the residual solids recovered following the dissolution tests on the crushed heel solids composite was characterized. Wet sieving indicated that 21.4 wt% of the residual solids were >710 μm in size, and laser light scattering indicated that the median equivalent spherical diameter in the <710-μm solids was 35 μm. The settling behavior of the residual solids following the large-scale dissolution tests was also studied. When dispersed at a concentration of ≈1 vol% in water, ≈24 wt% of the residual solids settled at a rate >0.43 in./s; ≈68 wt% settled at rates between 0.02 and 0.43 in./s; and ≈7 wt% settled slower than 0.02 in./s.

  9. A comparative study of removal of fluoride from contaminated water using shale collected from different coal mines in India.

    PubMed

    Biswas, Gargi; Dutta, Manjari; Dutta, Susmita; Adhikari, Kalyan

    2016-05-01

    Low-cost water defluoridation is one of the most important issues throughout the world. In the present study, shale, a coal mine waste, is employed as a novel and low-cost adsorbent to abate fluoride from simulated solution. Shale samples were collected from Mahabir colliery (MBS) and Sonepur Bazari colliery (SBS) of Raniganj coalfield in West Bengal, India, and used to remove fluoride. To increase the adsorption efficiency, shale samples were heat activated at a higher temperature, and samples obtained at 550 °C are denoted as heat-activated Mahabir colliery shale (HAMBS550) and heat-activated Sonepur Bazari colliery shale (HASBS550), respectively. To prove the fluoride adsorption onto different shale samples and ascertain its mechanism, natural shale samples, heat-activated shale samples, and their fluoride-loaded forms were characterized using scanning electron microscopy, energy dispersive X-ray analysis, X-ray diffraction study, and Fourier transform infrared spectroscopy. The effect of different parameters such as pH, adsorbent dose, size of particles, and initial concentration of fluoride was investigated during fluoride removal in a batch contactor. Lower pH gave better adsorption in the batch study, but such acidic water is not suitable for direct consumption. However, the increase of the pH of the solution from 3.2 to 6.8 and 7.2 during the fluoride removal process with HAMBS550 and HASBS550, respectively, confirms the applicability of the treated water for domestic purposes. HAMBS550 and HASBS550 show maximum removal of 88.3 and 88.5 %, respectively, at an initial fluoride concentration of 10 mg/L, pH 3, and adsorbent dose of 70 g/L.

  10. Fabrication of Natural Uranium UO₂ Disks (Phase II): Texas A&M Work for Others Summary Document

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerczak, Tyler J.; Baldwin, Charles A.; Schmidlin, Joshua E.

    The steps to fabricate natural UO₂ disks for an irradiation campaign led by Texas A&M University are outlined. The process was initiated with stoichiometry adjustment of the parent U₃O₈ powder. The next stage of sample preparation involved exploratory pellet pressing and sintering to achieve the desired natural UO₂ pellet densities. Ideal densities were achieved through the use of a bimodal powder size blend. The steps involved with disk fabrication are also presented, describing the coring and thinning process executed to achieve the final dimensions.

  11. Microfiltration of raw whole milk to select fractions with different fat globule size distributions: process optimization and analysis.

    PubMed

    Michalski, M C; Leconte, N; Briard-Bion, V; Fauquant, J; Maubois, J L; Goudédranche, H

    2006-10-01

    We present an extensive description and analysis of a microfiltration process patented in our laboratory to separate different fractions of the initial milk fat globule population according to the size of the native milk fat globules (MFG). We used nominal membrane pore sizes of 2 to 12 microm and a specially designed pilot rig. Using this process with whole milk [whose MFG have a volume mean diameter (d43) = 4.2 +/- 0.2 microm] and appropriate membrane pore size and hydrodynamic conditions, we collected 2 extremes of the initial milk fat globule distribution consisting of 1) a retentate containing large MFG of d43 = 5 to 7.5 microm (with up to 250 g/kg of fat, up to 35% of initial milk fat, and up to 10% of initial milk volume), and 2) a permeate containing small MFG of d43 = 0.9 to 3.3 microm (with up to 16 g/kg of fat, up to 30% of initial milk fat, and up to 83% of initial milk volume and devoid of somatic cells). We checked that the process did not mechanically damage the MFG by measuring their zeta-potential. This new microfiltration process, avoiding milk aging, appears to be more efficient than gravity separation in selecting native MFG of different sizes. As we summarize from previous and new results showing that the physico-chemical and technological properties of native milk fat globules vary according to their size, the use of different fat globule fractions appears to be advantageous regarding the quality of cheeses and can lead to new dairy products with adapted properties (sensory, functional, and perhaps nutritional).

  12. Surface properties of heat-induced soluble soy protein aggregates of different molecular masses.

    PubMed

    Guo, Fengxian; Xiong, Youling L; Qin, Fang; Jian, Huajun; Huang, Xiaolin; Chen, Jie

    2015-02-01

    Suspensions (2% and 5%, w/v) of soy protein isolate (SPI) were heated at 80, 90, or 100 °C for different time periods to produce soluble aggregates of different molecular sizes to investigate the relationship between particle size and surface properties (emulsions and foams). Soluble aggregates generated in these model systems were characterized by gel permeation chromatography and sodium dodecyl sulfate-polyacrylamide gel electrophoresis. Heat treatment increased surface hydrophobicity, induced SPI aggregation via hydrophobic interaction and disulfide bonds, and formed soluble aggregates of different sizes. Heating of 5% SPI always promoted large-size aggregate (LA; >1000 kDa) formation irrespective of temperature, whereas the aggregate size distribution in 2% SPI was temperature dependent: the LA fraction progressively rose with temperature (80→90→100 °C), corresponding to the attenuation of medium-size aggregates (MA; 670 to 1000 kDa) initially abundant at 80 °C. Heated SPI with abundant LA (>50%) promoted foam stability. LA also exhibited excellent emulsifying activity and stabilized emulsions by promoting the formation of small oil droplets covered with a thick interfacial protein layer. However, despite a similar influence on emulsion stability, MA enhanced foaming capacity but were less capable of stabilizing emulsions than LA. The functionality variation between heated SPI samples is clearly related to the distribution of aggregates that differ in molecular size and surface activity. The findings may encourage further research to develop functional SPI aggregates for various commercial applications. © 2015 Institute of Food Technologists®

  13. Trap elimination and reduction of size dispersion due to aging in CdSₓSe₁₋ₓ quantum dots

    NASA Astrophysics Data System (ADS)

    Verma, Abhishek; Nagpal, Swati; Pandey, Praveen K.; Bhatnagar, P. K.; Mathur, P. C.

    2007-12-01

    Quantum dots of CdSₓSe₁₋ₓ embedded in a borosilicate glass matrix have been grown using a double-step annealing method. Optical characterization of the quantum dots has been done through the combinative analysis of optical absorption and photoluminescence spectroscopy at room temperature. A decreasing trend of photoluminescence intensity with aging has been observed and is attributed to trap elimination. The changes in particle size, size distribution, number of quantum dots, volume fraction, trap-related phenomena and Gibbs free energy of the quantum dots have been explained on the basis of the diffusion-controlled growth process, which continues with the passage of time. For a typical case, it was found that after 24 months of aging, the average radii increased from 3.05 to 3.12 nm with an increase in the number of quantum dots by 190%, and the size dispersion decreased from 10.8% to 9.9%. For this sample, the initial size range of the quantum dots was 2.85 to 3.18 nm. After that no significant change was found in these parameters for the next 12 months. This shows that the system attains an almost stable nature after 24 months of aging. It was also observed that the size dispersion in quantum dots reduces with increasing annealing duration, but at the cost of the quantum confinement effect. Therefore, a trade-off has to be made between size dispersion and quantum confinement.

  14. On sample size and different interpretations of snow stability datasets

    NASA Astrophysics Data System (ADS)

    Schirmer, M.; Mitterer, C.; Schweizer, J.

    2009-04-01

    Interpretations of snow stability variations need an assessment of the stability itself, independent of the scale investigated in the study. Studies on stability variations at a regional scale have often chosen stability tests such as the Rutschblock test or combinations of various tests in order to detect differences in aspect and elevation. The question arises how capable such stability interpretations are of supporting reliable conclusions. There are at least three possible error sources: (i) the variance of the stability test itself; (ii) the stability variance at an underlying slope scale; and (iii) the possibility that the stability interpretation is not directly related to the probability of skier triggering. Various stability interpretations have been proposed in the past that provide partly different results. We compared a subjective one based on expert knowledge with a more objective one based on a measure derived from comparing skier-triggered slopes vs. slopes that have been skied but not triggered. In this study, the uncertainties are discussed and their effects on regional-scale stability variations are quantified in a pragmatic way. An existing dataset with very large sample sizes was revisited. This dataset contained the variance of stability at a regional scale for several situations. The question to be answered was how many measurements are needed to obtain results (mainly stability differences in aspect or elevation) similar to those from the complete dataset. The optimal sample size was obtained in several ways: (i) assuming a nominal data scale, the sample size was determined for a given test, significance level and power, using the mean and standard deviation of the complete dataset; with this method it can also be determined whether the complete dataset itself has an appropriate sample size. (ii) Smaller subsets were created with aspect distributions similar to the large dataset. We used 100 different subsets for each sample size. Statistical variations found in the complete dataset were also tested on the smaller subsets using the Mann-Whitney or the Kruskal-Wallis test, and for each subset size the number of subsets in which the significance level was reached was counted. For these tests no nominal data scale was assumed. (iii) For the same subsets described above, the distribution of the aspect median was determined, and we counted how often this distribution was substantially different from the distribution obtained with the complete dataset. Since two valid stability interpretations were available (an objective and a subjective interpretation as described above), the effect of the arbitrary choice of interpretation on spatial variability results was tested. In over one third of the cases the two interpretations came to different results. The effect of these differences was studied with a method similar to that described in (iii): the distribution of the aspect median was determined for subsets of the complete dataset using both interpretations and compared against each other as well as against the results of the complete dataset. For the complete dataset the two interpretations showed mainly identical results. Therefore the subset size was determined from the point at which the results of the two interpretations converged.
A universal result for the optimal subset size cannot be presented since results differed between the situations contained in the dataset. The optimal subset size thus depends on the stability variation in a given situation, which is unknown initially. There are indications that for some situations even the complete dataset might not be large enough. At a subset size of approximately 25, the significant differences between aspect groups (as determined using the whole dataset) were only obtained in one out of five situations. In some situations, up to 20% of the subsets showed a substantially different distribution of the aspect median. Thus, in most cases, 25 measurements (which can be achieved by six two-person teams in one day) did not allow reliable conclusions to be drawn.
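
    A pragmatic sketch of approach (ii) is given below: repeatedly draw subsets from a stability dataset and count how often a Mann-Whitney test still detects the difference between two aspect groups that is present in the complete dataset. The simulated scores, group labels and subset sizes are assumptions for illustration, not the study's data.

      import numpy as np
      from scipy.stats import mannwhitneyu

      rng = np.random.default_rng(3)

      # Simulated "complete dataset": ordinal stability scores for two aspect groups
      north = rng.integers(1, 6, size=400) + 1       # shifted to be slightly more stable
      south = rng.integers(1, 6, size=400)

      def detection_rate(subset_size, n_subsets=100, alpha=0.05):
          """Fraction of subsets in which the aspect difference remains significant."""
          hits = 0
          for _ in range(n_subsets):
              a = rng.choice(north, size=subset_size, replace=False)
              b = rng.choice(south, size=subset_size, replace=False)
              if mannwhitneyu(a, b, alternative="two-sided").pvalue < alpha:
                  hits += 1
          return hits / n_subsets

      for m in (10, 25, 50, 100):
          print(m, detection_rate(m))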

  15. Patterns of initial management of node-negative breast cancer in two Canadian provinces

    PubMed Central

    Goel, V; Olivotto, I; Hislop, T G; Sawka, C; Coldman, A; Holowaty, E J

    1997-01-01

    OBJECTIVE: To describe the patterns of initial management of node-negative breast cancer in Ontario and British Columbia and to compare the characteristics of the patients and tumours and of the physicians and hospitals involved in management. DESIGN: Retrospective, population-based, cohort study. PARTICIPANTS: All 942 newly diagnosed cases of node-negative breast cancer in 1991 in British Columbia and a random sample of 938 newly diagnosed cases in Ontario in the same year. OUTCOME MEASURES: Number and proportion of patients with newly diagnosed node-negative breast cancer who received breast-conserving surgery (BCS) or mastectomy and who received radiation therapy after BCS. RESULTS: BCS was used in 413 cases (43.8%) in British Columbia and in 634 cases (67.6%) in Ontario (p < 0.001). After BCS, radiation therapy was received by 378 patients (91.5% of those who had undergone BCS) in British Columbia and 479 patients (75.6% of those who had undergone BCS) in Ontario (p < 0.001). In both provinces, lower patient age, smaller tumour size, a noncentral unifocal tumour, absence of extensive ductal carcinoma in situ and initial surgery by a surgeon with an academic affiliation were associated with greater use of BCS. Lower patient age and larger tumour size were associated with greater use of radiation therapy after BCS in both provinces. CONCLUSION: Patient, tumour and physician factors are associated with the choice of initial management of breast cancer in these two Canadian provinces. However, the differences in management between the two provinces are only partly explained by these factors. Other possible explanations, such as the presence of provincial guidelines, differences in the organization of the health care system or differences in patient preference, require further research. PMID:9006561

  16. Determining Sample Size for Accurate Estimation of the Squared Multiple Correlation Coefficient.

    ERIC Educational Resources Information Center

    Algina, James; Olejnik, Stephen

    2000-01-01

    Discusses determining sample size for estimation of the squared multiple correlation coefficient and presents regression equations that permit determination of the sample size for estimating this parameter for up to 20 predictor variables. (SLD)

  17. Investigations of rapid thermal annealing induced structural evolution of ZnO: Ge nanocomposite thin films via GISAXS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ceylan, Abdullah, E-mail: aceylanabd@yahoo.com; Ozcan, Yusuf; Orujalipoor, Ilghar

    2016-06-07

    In this work, we present in-depth structural investigations of nanocomposite ZnO:Ge thin films by utilizing a state-of-the-art grazing-incidence small-angle X-ray scattering (GISAXS) technique. The samples have been deposited by sequential r.f. and d.c. sputtering of ZnO and Ge thin film layers, respectively, on single crystal Si(100) substrates. Transformation of Ge layers into Ge nanoparticles (Ge-np) has been initiated by ex-situ rapid thermal annealing of as-prepared thin film samples at 600 °C for 30, 60, and 90 s under forming gas atmosphere. Special attention has been paid to the effects of reactive and non-reactive growth of ZnO layers on the structural evolution of Ge-np. GISAXS analyses have been performed via cylindrical and spherical form factor calculations for different nanostructure types. Variations of the size, shape, and distributions of both ZnO and Ge nanostructures have been determined. The GISAXS results are not only remarkably consistent with the electron microscopy observations but also provide additional information on the large-scale size and shape distribution of the nanostructured components.

  18. Effect of copper sulphate treatment on natural phytoplanktonic communities.

    PubMed

    Le Jeune, Anne-Hélène; Charpin, Marie; Deluchat, Véronique; Briand, Jean-François; Lenain, Jean-François; Baudu, Michel; Amblard, Christian

    2006-12-01

    Copper sulphate treatment is widely used as a global and empirical method to remove or control phytoplankton blooms without precise description of the impact on phytoplanktonic populations. The effects of two copper sulphate treatments on natural phytoplanktonic communities, sampled in the spring and summer seasons, were assessed by indoor mesocosm experiments. The initial copper-complexing capacity of each water sample was evaluated before each treatment. The copper concentrations applied were 80 microg l(-1) and 160 microg l(-1) of copper, below and above the water complexation capacity, respectively. The phytoplanktonic biomass recovered within a few days after treatment. The highest copper concentration, which generated a highly toxic environment, caused a global decrease in phytoplankton diversity, and led to the development and dominance of nanophytoplanktonic Chlorophyceae. In mesocosms treated with 80 microg l(-1) of copper, the effect on phytoplanktonic community size-class structure and composition was dependent on seasonal variation. This could be related to differences in community composition, and thus to species sensitivity to copper and to differences in copper bioavailability between spring and summer. Both treatments significantly affected cyanobacterial biomass and caused changes in the size-class structure and composition of phytoplanktonic communities which may imply modifications of the ecosystem structure and function.

  19. [Practical aspects regarding sample size in clinical research].

    PubMed

    Vega Ramos, B; Peraza Yanes, O; Herrera Correa, G; Saldívar Toraya, S

    1996-01-01

    Knowledge of the right sample size lets us judge whether the results published in medical papers come from a suitable design and whether their conclusions are supported by the statistical analysis. To estimate the sample size we must consider the type I error, type II error, variance, the size of the effect, and the significance level and power of the test. To decide which mathematical formula to use, we must define what kind of study we have: a prevalence study, a study of mean values, or a comparative one. In this paper we explain some basic topics of statistics and describe four simple examples of sample size estimation.
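
    For the common case mentioned above of comparing two means, the quantities listed (type I error, power, variance and effect size) combine into the standard normal-approximation formula n = 2(z_{1-alpha/2} + z_{1-beta})^2 sigma^2 / delta^2 per group; a minimal sketch follows, with the numerical example chosen arbitrarily.

      from math import ceil
      from scipy.stats import norm

      def n_per_group(sigma, delta, alpha=0.05, power=0.80):
          """Per-group sample size for a two-sample comparison of means (normal approximation)."""
          z_a = norm.ppf(1 - alpha / 2)
          z_b = norm.ppf(power)
          return ceil(2 * (z_a + z_b) ** 2 * sigma ** 2 / delta ** 2)

      # Detect a 5-unit difference with SD 10 at alpha = 0.05 and 80% power
      print(n_per_group(sigma=10, delta=5))   # about 63 per group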

  20. Breaking Free of Sample Size Dogma to Perform Innovative Translational Research

    PubMed Central

    Bacchetti, Peter; Deeks, Steven G.; McCune, Joseph M.

    2011-01-01

    Innovative clinical and translational research is often delayed or prevented by reviewers’ expectations that any study performed in humans must be shown in advance to have high statistical power. This supposed requirement is not justifiable and is contradicted by the reality that increasing sample size produces diminishing marginal returns. Studies of new ideas often must start small (sometimes even with an N of 1) because of cost and feasibility concerns, and recent statistical work shows that small sample sizes for such research can produce more projected scientific value per dollar spent than larger sample sizes. Renouncing false dogma about sample size would remove a serious barrier to innovation and translation. PMID:21677197

  1. What is the optimum sample size for the study of peatland testate amoeba assemblages?

    PubMed

    Mazei, Yuri A; Tsyganov, Andrey N; Esaulov, Anton S; Tychkov, Alexander Yu; Payne, Richard J

    2017-10-01

    Testate amoebae are widely used in ecological and palaeoecological studies of peatlands, particularly as indicators of surface wetness. To ensure data are robust and comparable it is important to consider methodological factors which may affect results. One significant question which has not been directly addressed in previous studies is how sample size (expressed here as number of Sphagnum stems) affects data quality. In three contrasting locations in a Russian peatland we extracted samples of differing size, analysed testate amoebae and calculated a number of widely-used indices: species richness, Simpson diversity, compositional dissimilarity from the largest sample and transfer function predictions of water table depth. We found that there was a trend for larger samples to contain more species across the range of commonly-used sample sizes in ecological studies. Smaller samples sometimes failed to produce counts of testate amoebae often considered minimally adequate. It seems likely that analyses based on samples of different sizes may not produce consistent data. Decisions about sample size need to reflect trade-offs between logistics, data quality, spatial resolution and the disturbance involved in sample extraction. For most common ecological applications we suggest that samples of more than eight Sphagnum stems are likely to be desirable. Copyright © 2017 Elsevier GmbH. All rights reserved.
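
    The effect of pooling more stems on the indices mentioned above can be illustrated with a simple resampling sketch; the 40-taxon community, the counts per stem and the Dirichlet abundances are invented for the example and are not the study's data.

      import numpy as np

      rng = np.random.default_rng(4)
      true_props = rng.dirichlet(np.ones(40) * 0.3)        # hypothetical uneven community

      def indices(n_stems, count_per_stem=40):
          """Species richness and Simpson diversity from counts pooled over n_stems stems."""
          counts = rng.multinomial(n_stems * count_per_stem, true_props)
          p = counts[counts > 0] / counts.sum()
          richness = int((counts > 0).sum())
          simpson = 1.0 - float(np.sum(p ** 2))            # Simpson diversity, 1 - D
          return richness, simpson

      for stems in (1, 2, 4, 8, 16):
          print(stems, indices(stems))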

  2. Alpha-spectrometry and fractal analysis of surface micro-images for characterisation of porous materials used in manufacture of targets for laser plasma experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aushev, A A; Barinov, S P; Vasin, M G

    2015-06-30

    We present the results of employing the alpha-spectrometry method to determine the characteristics of porous materials used in targets for laser plasma experiments. It is shown that the energy spectrum of alpha-particles, after their passage through porous samples, allows one to determine the distribution of their path length in the foam skeleton. We describe the procedure of deriving such a distribution, excluding both the distribution broadening due to the statistical nature of the alpha-particle interaction with an atomic structure (straggling) and hardware effects. The fractal analysis of micro-images is applied to the same porous surface samples that have been studied by alpha-spectrometry. The fractal dimension and size distribution of the number of the foam skeleton grains are obtained. Using the data obtained, a distribution of the total foam skeleton thickness along a chosen direction is constructed. It roughly coincides with the path length distribution of alpha-particles within a range of larger path lengths. It is concluded that the combined use of the alpha-spectrometry method and fractal analysis of images will make it possible to determine the size distribution of foam skeleton grains (or pores). The results can be used as initial data in theoretical studies on propagation of the laser and X-ray radiation in specific porous samples. (laser plasma)

  3. Modeling aboveground biomass of Tamarix ramosissima in the Arkansas River Basin of Southeastern Colorado, USA

    USGS Publications Warehouse

    Evangelista, P.; Kumar, S.; Stohlgren, T.J.; Crall, A.W.; Newman, G.J.

    2007-01-01

    Predictive models of aboveground biomass of nonnative Tamarix ramosissima of various sizes were developed using destructive sampling techniques on 50 individuals and four 100-m2 plots. Each sample was measured for average height (m) of stems and canopy area (m2) prior to cutting, drying, and weighing. Five competing regression models (P < 0.05) were developed to estimate aboveground biomass of T. ramosissima using average height and/or canopy area measurements and were evaluated using Akaike's Information Criterion corrected for small sample size (AICc). Our best model (AICc = -148.69, ΔAICc = 0) successfully predicted T. ramosissima aboveground biomass (R2 = 0.97) and used average height and canopy area as predictors. Our 2nd-best model, using the same predictors, was also successful in predicting aboveground biomass (R2 = 0.97, AICc = -131.71, ΔAICc = 16.98). A 3rd model demonstrated high correlation between only aboveground biomass and canopy area (R2 = 0.95), while 2 additional models found high correlations between aboveground biomass and average height measurements only (R2 = 0.90 and 0.70, respectively). These models illustrate how simple field measurements, such as height and canopy area, can be used in allometric relationships to accurately predict aboveground biomass of T. ramosissima. Although a correction factor may be necessary for predictions at larger scales, the models presented will prove useful for many research and management initiatives.
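
    The model-selection step can be sketched as ordinary least squares plus AICc; the fake height, canopy and biomass data and the log-log model form below are assumptions for illustration, not the authors' measurements.

      import numpy as np

      rng = np.random.default_rng(5)
      n = 50
      height = rng.uniform(0.5, 4.0, n)                    # m
      canopy = rng.uniform(0.2, 12.0, n)                   # m2
      log_biomass = 0.8 * np.log(height) + 0.9 * np.log(canopy) + rng.normal(0, 0.2, n)

      def aicc(y, X):
          """Gaussian AICc for an OLS fit of y on the columns of X (intercept added)."""
          X = np.column_stack([np.ones_like(y), X])
          beta, *_ = np.linalg.lstsq(X, y, rcond=None)
          rss = np.sum((y - X @ beta) ** 2)
          k = X.shape[1] + 1                               # coefficients plus error variance
          n_obs = len(y)
          aic = n_obs * np.log(rss / n_obs) + 2 * k
          return aic + 2 * k * (k + 1) / (n_obs - k - 1)

      models = {
          "height + canopy": np.column_stack([np.log(height), np.log(canopy)]),
          "canopy only": np.log(canopy)[:, None],
          "height only": np.log(height)[:, None],
      }
      scores = {name: aicc(log_biomass, X) for name, X in models.items()}
      best = min(scores.values())
      for name, s in scores.items():
          print(f"{name}: AICc = {s:.2f}, ΔAICc = {s - best:.2f}")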

  4. Effect of the Grain Size of the Initial Structure of 1565chM Alloy on the Structure and Properties of the Joints Fabricated by Friction Stir Welding

    NASA Astrophysics Data System (ADS)

    Ovchinnikov, V. V.; Drits, A. M.; Gureeva, M. A.; Malov, D. V.

    2017-12-01

    The effect of the initial grain size in the structure of the aluminum 1565chM alloy on the mechanical properties of the welded joints formed by friction stir welding and on the grain size in the weld core is studied. It is shown that the design of the tool and, especially, the parameters of the screw groove exert a great effect on the grain size in the weld core.

  5. Sample Size and Allocation of Effort in Point Count Sampling of Birds in Bottomland Hardwood Forests

    Treesearch

    Winston P. Smith; Daniel J. Twedt; Robert J. Cooper; David A. Wiedenfeld; Paul B. Hamel; Robert P. Ford

    1995-01-01

    To examine sample size requirements and optimum allocation of effort in point count sampling of bottomland hardwood forests, we computed minimum sample sizes from variation recorded during 82 point counts (May 7-May 16, 1992) from three localities containing three habitat types across three regions of the Mississippi Alluvial Valley (MAV). Also, we estimated the effect...

  6. Monitoring Species of Concern Using Noninvasive Genetic Sampling and Capture-Recapture Methods

    DTIC Science & Technology

    2016-11-01

    ABBREVIATIONS: AICc, Akaike's Information Criterion with small sample size correction; AZGFD, Arizona Game and Fish Department; BMGR, Barry M. Goldwater...; MNKA, Minimum Number Known Alive; N, Abundance; Ne, Effective Population Size; NGS, Noninvasive Genetic Sampling; NGS-CR, Noninvasive Genetic... parameter estimates from capture-recapture models require sufficient sample sizes, capture probabilities and low capture biases. For NGS-CR, sample

  7. On Using a Pilot Sample Variance for Sample Size Determination in the Detection of Differences between Two Means: Power Consideration

    ERIC Educational Resources Information Center

    Shieh, Gwowen

    2013-01-01

    The a priori determination of a proper sample size necessary to achieve some specified power is an important problem encountered frequently in practical studies. To establish the needed sample size for a two-sample "t" test, researchers may conduct the power analysis by specifying scientifically important values as the underlying population means…

  8. Freezing of gait and fall detection in Parkinson's disease using wearable sensors: a systematic review.

    PubMed

    Silva de Lima, Ana Lígia; Evers, Luc J W; Hahn, Tim; Bataille, Lauren; Hamilton, Jamie L; Little, Max A; Okuma, Yasuyuki; Bloem, Bastiaan R; Faber, Marjan J

    2017-08-01

    Despite the large number of studies that have investigated the use of wearable sensors to detect gait disturbances such as Freezing of gait (FOG) and falls, there is little consensus regarding appropriate methodologies for how to optimally apply such devices. Here, an overview of the use of wearable systems to assess FOG and falls in Parkinson's disease (PD) and validation performance is presented. A systematic search in the PubMed and Web of Science databases was performed using a group of concept key words. The final search was performed in January 2017, and articles were selected based upon a set of eligibility criteria. In total, 27 articles were selected. Of those, 23 related to FOG and 4 to falls. FOG studies were performed in either laboratory or home settings, with sample sizes ranging from 1 PD up to 48 PD presenting Hoehn and Yahr stages 2 to 4. The shin was the most common sensor location and accelerometer was the most frequently used sensor type. Validity measures ranged from 73-100% for sensitivity and 67-100% for specificity. Falls and fall risk studies were all home-based, including sample sizes of 1 PD up to 107 PD, mostly using one sensor containing accelerometers, worn at various body locations. Despite the promising validation initiatives reported in these studies, they were all performed with relatively small sample sizes, and there was a significant variability in outcomes measured and results reported. Given these limitations, the validation of sensor-derived assessments of PD features would benefit from more focused research efforts, increased collaboration among researchers, aligning data collection protocols, and sharing data sets.

  9. Influence of recovery temperature in a 21-2N deformed stainless steel

    NASA Astrophysics Data System (ADS)

    De Ita, A.; Ugalde, P.; Flores, D.

    2017-01-01

    We present the influence of a high heat-treatment temperature on a nitrogen austenitic stainless steel deformed by cold compression to 10 different percentages. The steel contains high chromium (19.25 %), nickel (1.5 %) and nitrogen (0.2 %). Typical applications for this alloy are automobile parts and special valves, owing to its excellent mechanical properties and corrosion resistance. The samples, produced by hot rolling, were given a homogenization treatment at 975 °C for 45 minutes and subsequently deformed by cold compression to ten different levels, from 3 % to 22 %. These samples were then heat treated at 750 °C for 1, 2 and 4 hours, respectively. To observe the microstructure, all samples were examined metallographically, and their Rockwell C hardness was also measured. The initial sample has an austenitic matrix with a small amount of precipitates and a 42 RC average hardness. The homogenized sample had a hardness of 39 RC. The deformed samples increased their hardness to a maximum of 49 RC. The heat-treated samples showed a lower hardness for longer treatment times, with high dispersion. The decrease in hardness is due to the elimination of residual stresses and the increasing size of precipitates.

  10. Augmenting the logrank test in the design of clinical trials in which non-proportional hazards of the treatment effect may be anticipated.

    PubMed

    Royston, Patrick; Parmar, Mahesh K B

    2016-02-11

    Most randomized controlled trials with a time-to-event outcome are designed assuming proportional hazards (PH) of the treatment effect. The sample size calculation is based on a logrank test. However, non-proportional hazards are increasingly common. At analysis, the estimated hazards ratio with a confidence interval is usually presented. The estimate is often obtained from a Cox PH model with treatment as a covariate. If non-proportional hazards are present, the logrank and equivalent Cox tests may lose power. To safeguard power, we previously suggested a 'joint test' combining the Cox test with a test of non-proportional hazards. Unfortunately, a larger sample size is needed to preserve power under PH. Here, we describe a novel test that unites the Cox test with a permutation test based on restricted mean survival time. We propose a combined hypothesis test based on a permutation test of the difference in restricted mean survival time across time. The test involves the minimum of the Cox and permutation test P-values. We approximate its null distribution and correct it for correlation between the two P-values. Using extensive simulations, we assess the type 1 error and power of the combined test under several scenarios and compare with other tests. We investigate powering a trial using the combined test. The type 1 error of the combined test is close to nominal. Power under proportional hazards is slightly lower than for the Cox test. Enhanced power is available when the treatment difference shows an 'early effect', an initial separation of survival curves which diminishes over time. The power is reduced under a 'late effect', when little or no difference in survival curves is seen for an initial period and then a late separation occurs. We propose a method of powering a trial using the combined test. The 'insurance premium' offered by the combined test to safeguard power under non-PH represents about a single-digit percentage increase in sample size. The combined test increases trial power under an early treatment effect and protects power under other scenarios. Use of restricted mean survival time facilitates testing and displaying a generalized treatment effect.
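
    The restricted mean survival time (RMST) ingredient of the combined test can be sketched as the area under the Kaplan-Meier curve up to a horizon tau, compared between arms with a permutation test. This is a simplified illustration only: it omits the Cox component, the minimum-p-value construction and the correlation correction described above, and all function names are ad hoc.

      import numpy as np

      def rmst(time, event, tau):
          """Area under the Kaplan-Meier curve up to tau (event = 1, censored = 0)."""
          order = np.argsort(time)
          time, event = time[order], event[order]
          at_risk = len(time)
          surv, t_prev, area = 1.0, 0.0, 0.0
          for t, d in zip(time, event):
              if t > tau:
                  break
              area += surv * (t - t_prev)
              if d:
                  surv *= 1.0 - 1.0 / at_risk
              at_risk -= 1
              t_prev = t
          return area + surv * (tau - t_prev)

      def rmst_permutation_test(time, event, arm, tau, n_perm=2000, seed=0):
          """Two-sided permutation p-value for the between-arm RMST difference."""
          rng = np.random.default_rng(seed)
          def diff(a):
              return rmst(time[a == 1], event[a == 1], tau) - rmst(time[a == 0], event[a == 0], tau)
          observed = diff(arm)
          perms = np.array([diff(rng.permutation(arm)) for _ in range(n_perm)])
          p = (np.sum(np.abs(perms) >= abs(observed)) + 1) / (n_perm + 1)
          return observed, p

      # Hypothetical usage with numpy arrays time (months), event (0/1) and arm (0/1):
      # obs_diff, p_value = rmst_permutation_test(time, event, arm, tau=24.0)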

  11. Sampling for area estimation: A comparison of full-frame sampling with the sample segment approach

    NASA Technical Reports Server (NTRS)

    Hixson, M.; Bauer, M. E.; Davis, B. J. (Principal Investigator)

    1979-01-01

    The author has identified the following significant results. Full-frame classifications of wheat and non-wheat for eighty counties in Kansas were repetitively sampled to simulate alternative sampling plans. Evaluation of four sampling schemes involving different numbers of samples and different size sampling units shows that the precision of the wheat estimates increased as the segment size decreased and the number of segments was increased. Although the average bias associated with the various sampling schemes was not significantly different, the maximum absolute bias was directly related to the size of the sampling unit.

  12. Settling Efficiency of Urban Particulate Matter Transported by Stormwater Runoff.

    PubMed

    Carbone, Marco; Penna, Nadia; Piro, Patrizia

    2015-09-01

    The main purpose of control measures in urban areas is to retain particulate matter washed out by stormwater over impermeable surfaces. In stormwater control measures, particulate matter removal typically occurs via sedimentation. Settling column tests were performed to examine the settling efficiency of such units using monodisperse and heterodisperse particulate matter (for which the particle size distributions were measured and modelled by the cumulative gamma distribution). To investigate the dependence of settling efficiency on the characteristics of the particulate matter, a variant of evolutionary polynomial regression (EPR), a Microsoft Excel function based on the multi-objective EPR technique (EPR-MOGA) called EPR MOGA XL, was used as a data-mining strategy. The results from this study have shown that settling efficiency is a function of the initial total suspended solids (TSS) concentration and of the median diameter (d50 index) obtained from the particle size distributions (PSDs) of the samples.
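
    Fitting the cumulative gamma distribution to a measured particle size distribution and reading off d50 can be done in a few lines; the sieve diameters and cumulative fractions below are invented for illustration.

      import numpy as np
      from scipy.optimize import curve_fit
      from scipy.stats import gamma

      diam_um = np.array([10, 25, 50, 75, 100, 150, 250, 400], dtype=float)
      cum_frac = np.array([0.08, 0.22, 0.45, 0.60, 0.71, 0.84, 0.95, 0.99])

      def gamma_cdf(d, shape, scale):
          return gamma.cdf(d, a=shape, scale=scale)

      (shape, scale), _ = curve_fit(gamma_cdf, diam_um, cum_frac, p0=(1.5, 60.0), bounds=(0, np.inf))
      d50 = gamma.ppf(0.5, a=shape, scale=scale)            # median diameter of the fitted PSD
      print(f"shape = {shape:.2f}, scale = {scale:.1f} um, d50 = {d50:.1f} um")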

  13. Evaluation of Aluminum Participation in the Development of Reactive Waves in Shock Compressed HMX

    NASA Astrophysics Data System (ADS)

    Pahl, R. J.; Trott, W. M.; Snedigar, S.; Castañeda, J. N.

    2006-07-01

    A series of gas gun tests has been performed to examine contributions to energy release from micron-sized and nanometric aluminum powder added to sieved (212-300μm) HMX. In the absence of added metal, 4-mm-thick, low-density (64-68% of theoretical maximum density) pressings of the sieved HMX respond to modest shock loading by developing distinctive reactive waves that exhibit both temporal and mesoscale spatial fluctuations. Parallel tests have been performed on samples containing 10% (by mass) aluminum in two particle sizes: 2-μm and 123-nm mean particle diameter, respectively. The finely dispersed aluminum initially suppresses wave growth from HMX reactions; however, after a visible induction period, the added metal drives rapid increases in the transmitted wave particle velocity. Wave profile variations as a function of the aluminum particle diameter are discussed.

  14. Line-imaging velocimetry for observing spatially heterogeneous mechanical and chemical responses in plastic bonded explosives during impact.

    PubMed

    Bolme, C A; Ramos, K J

    2013-08-01

    A line-imaging velocity interferometer was implemented on a single-stage light gas gun to probe the spatial heterogeneity of mechanical response, chemical reaction, and initiation of detonation in explosives. The instrument is described in detail, and then data are presented on several shock-compressed materials to demonstrate the instrument performance on both homogeneous and heterogeneous samples. The noise floor of this diagnostic was determined to be 0.24 rad with a shot on elastically compressed sapphire. The diagnostic was then applied to two heterogeneous plastic bonded explosives: 3,3(')-diaminoazoxyfurazan (DAAF) and PBX 9501, where significant spatial velocity heterogeneity was observed during the build up to detonation. In PBX 9501, the velocity heterogeneity was consistent with the explosive grain size, however in DAAF, we observed heterogeneity on a much larger length scale than the grain size that was similar to the imaging resolution of the instrument.

  15. Line-imaging velocimetry for observing spatially heterogeneous mechanical and chemical responses in plastic bonded explosives during impact

    NASA Astrophysics Data System (ADS)

    Bolme, C. A.; Ramos, K. J.

    2013-08-01

    A line-imaging velocity interferometer was implemented on a single-stage light gas gun to probe the spatial heterogeneity of mechanical response, chemical reaction, and initiation of detonation in explosives. The instrument is described in detail, and then data are presented on several shock-compressed materials to demonstrate the instrument performance on both homogeneous and heterogeneous samples. The noise floor of this diagnostic was determined to be 0.24 rad with a shot on elastically compressed sapphire. The diagnostic was then applied to two heterogeneous plastic bonded explosives: 3,3'-diaminoazoxyfurazan (DAAF) and PBX 9501, where significant spatial velocity heterogeneity was observed during the build up to detonation. In PBX 9501, the velocity heterogeneity was consistent with the explosive grain size, however in DAAF, we observed heterogeneity on a much larger length scale than the grain size that was similar to the imaging resolution of the instrument.

  16. Effects of high power ultrasonic vibration on the cold compaction of titanium.

    PubMed

    Fartashvand, Vahid; Abdullah, Amir; Ali Sadough Vanini, Seyed

    2017-05-01

    Titanium has widely been used in the chemical and aerospace industries. In order to overcome the drawbacks of cold compaction of titanium, the process was assisted by an ultrasonic vibration system. For this purpose, a uniaxial ultrasonic-assisted cold powder compaction system was designed and fabricated. The process variables were powder size, compaction pressure and initial powder compact thickness. Density, friction force, ejection force and spring-back of the fabricated samples were measured and studied. The density was observed to improve under the action of ultrasonic vibration. Fine powders consolidated better when ultrasonic vibration was used. Under the ultrasonic action, it is thought that the friction forces between the die walls and the particles, and those among the powder particles themselves, are reduced. Spring-back and ejection force did not change considerably when ultrasonic vibration was used. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. Reproduction of the cold-water coral Primnoella chilensis (Philippi, 1894)

    NASA Astrophysics Data System (ADS)

    Rossin, Ashley M.; Waller, Rhian G.; Försterra, Gunter

    2017-07-01

    This study examined the reproduction of a cold-water coral, Primnoella chilensis (Philippi, 1894), from the Comau and Reñihué fjords in Chilean Patagonia. Samples were collected in September and November of 2012 and April, June, and September of 2013 from three sites within the two fjords. The sexuality, reproductive mode, spermatocyst stage, oocyte size, and fecundity were determined using histological techniques. This species is gonochoristic, with one aberrant hermaphrodite identified in this study. Reproduction was found to be seasonal, with the initiation of oogenesis in September and a suggested broadcast spawning event between June and September. The maximum oocyte size was 752.96 μm, suggesting lecithotrophic larvae. The maximum fecundity was 36 oocytes per polyp. Male individuals were only found in April and June. In June, all four spermatocyst stages were present. This suggests that spermatogenesis requires less time than oogenesis in P. chilensis.

  18. Pandoraviruses: amoeba viruses with genomes up to 2.5 Mb reaching that of parasitic eukaryotes.

    PubMed

    Philippe, Nadège; Legendre, Matthieu; Doutre, Gabriel; Couté, Yohann; Poirot, Olivier; Lescot, Magali; Arslan, Defne; Seltzer, Virginie; Bertaux, Lionel; Bruley, Christophe; Garin, Jérome; Claverie, Jean-Michel; Abergel, Chantal

    2013-07-19

    Ten years ago, the discovery of Mimivirus, a virus infecting Acanthamoeba, initiated a reappraisal of the upper limits of the viral world, both in terms of particle size (>0.7 micrometers) and genome complexity (>1000 genes), dimensions typical of parasitic bacteria. The diversity of these giant viruses (the Megaviridae) was assessed by sampling a variety of aquatic environments and their associated sediments worldwide. We report the isolation of two giant viruses, one off the coast of central Chile, the other from a freshwater pond near Melbourne (Australia), without morphological or genomic resemblance to any previously defined virus families. Their micrometer-sized ovoid particles contain DNA genomes of at least 2.5 and 1.9 megabases, respectively. These viruses are the first members of the proposed "Pandoravirus" genus, a term reflecting their lack of similarity with previously described microorganisms and the surprises expected from their future study.

  19. A feasibility study of X-ray phase-contrast mammographic tomography at the Imaging and Medical beamline of the Australian Synchrotron.

    PubMed

    Nesterets, Yakov I; Gureyev, Timur E; Mayo, Sheridan C; Stevenson, Andrew W; Thompson, Darren; Brown, Jeremy M C; Kitchen, Marcus J; Pavlov, Konstantin M; Lockie, Darren; Brun, Francesco; Tromba, Giuliana

    2015-11-01

    Results are presented of a recent experiment at the Imaging and Medical beamline of the Australian Synchrotron intended to contribute to the implementation of low-dose high-sensitivity three-dimensional mammographic phase-contrast imaging, initially at synchrotrons and subsequently in hospitals and medical imaging clinics. The effect of such imaging parameters as X-ray energy, source size, detector resolution, sample-to-detector distance, scanning and data processing strategies in the case of propagation-based phase-contrast computed tomography (CT) have been tested, quantified, evaluated and optimized using a plastic phantom simulating relevant breast-tissue characteristics. Analysis of the data collected using a Hamamatsu CMOS Flat Panel Sensor, with a pixel size of 100 µm, revealed the presence of propagation-based phase contrast and demonstrated significant improvement of the quality of phase-contrast CT imaging compared with conventional (absorption-based) CT, at medically acceptable radiation doses.

  20. Experimental demonstration of a two-phase population extinction hazard

    PubMed Central

    Drake, John M.; Shapiro, Jeff; Griffen, Blaine D.

    2011-01-01

    Population extinction is a fundamental biological process with applications to ecology, epidemiology, immunology, conservation biology and genetics. Although a monotonic relationship between initial population size and mean extinction time is predicted by virtually all theoretical models, attempts at empirical demonstration have been equivocal. We suggest that this anomaly is best explained with reference to the transient properties of ensembles of populations. Specifically, we submit that under experimental conditions, many populations escape their initially vulnerable state to reach quasi-stationarity, where effects of initial conditions are erased. Thus, populations initialized far from quasi-stationarity may be exposed to a two-phase extinction hazard. An empirical prediction of this theory is that a Cox proportional hazards regression model fitted to the observed survival time distribution of a group of populations will violate the proportional hazards assumption early in the experiment, but not at later times. We report results of two experiments with the cladoceran zooplankton Daphnia magna designed to exhibit this phenomenon. In one experiment, habitat size was also varied. Statistical analysis showed that in one of these experiments there was a transient phase very early on during which the extinction hazard was driven primarily by the initial population size, and that this phase was gradually replaced by a more stable quasi-stationary phase. In the second experiment, only habitat size unambiguously displayed an effect. Analysis of data pooled from both experiments suggests that the overall extinction time distribution in this system results from the mixture of extinctions during the initial rapid phase, during which the effects of initial population size can be considerable, and a longer quasi-stationary phase, during which only habitat size has an effect. These are the first results, to our knowledge, of a two-phase population extinction process. PMID:21429907
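
    The prediction above rests on fitting a Cox proportional hazards model to the population survival times and checking the proportional hazards assumption. A minimal sketch of that check, assuming Python with the lifelines package and toy data (the authors' actual software and data layout are not specified here):

```python
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import proportional_hazard_test

# Toy data: one row per population, with time to extinction (days),
# an event flag (1 = extinct, 0 = censored at study end) and covariates.
df = pd.DataFrame({
    "time":      [12, 18, 30, 45, 60, 75, 90, 90],
    "extinct":   [1,  1,  1,  1,  1,  1,  0,  0],
    "init_size": [4,  2,  8,  2, 16,  4,  8, 16],
    "habitat":   [1,  2,  1,  2,  1,  2,  1,  2],
})

cph = CoxPHFitter().fit(df, duration_col="time", event_col="extinct")
cph.print_summary()

# Schoenfeld-residual-based test of the proportional hazards assumption;
# a violation for init_size early in the experiment would be consistent
# with a transient phase dominated by initial population size.
result = proportional_hazard_test(cph, df, time_transform="rank")
print(result.summary)
```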

  1. Electrical and magnetic properties of nano-sized magnesium ferrite

    NASA Astrophysics Data System (ADS)

    T, Smitha; X, Sheena; J, Binu P.; Mohammed, E. M.

    2015-02-01

    Nano-sized magnesium ferrite was synthesized using sol-gel techniques. Structural characterization was done using an X-ray diffractometer and a Fourier Transform Infrared Spectrometer. A Vibrating Sample Magnetometer was used to record the magnetic measurements. XRD analysis reveals that the prepared sample is single-phase without any impurity. Particle size calculation shows that the average crystallite size of the sample is 19 nm. FTIR analysis confirmed the spinel structure of the prepared samples. The magnetic measurement study shows that the sample is ferromagnetic with a high degree of isotropy. Hysteresis loops were traced at temperatures of 100 K and 300 K. DC electrical resistivity measurements show the semiconducting nature of the sample.
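
    The abstract does not name the method behind the 19 nm crystallite size, but XRD crystallite sizes are commonly obtained from peak broadening with the Scherrer equation; a minimal sketch under that assumption (Python, Cu K-alpha radiation and illustrative peak parameters):

```python
import numpy as np

def scherrer_size(wavelength_nm, fwhm_deg, two_theta_deg, k=0.9):
    """Crystallite size [nm] from XRD peak broadening via the Scherrer equation."""
    beta = np.radians(fwhm_deg)             # peak FWHM in radians
    theta = np.radians(two_theta_deg / 2)   # Bragg angle
    return k * wavelength_nm / (beta * np.cos(theta))

# Illustrative values: Cu K-alpha (0.15406 nm), spinel (311) peak near 2-theta = 35.5
# degrees with a FWHM of 0.45 degrees gives roughly 18-19 nm.
print(round(scherrer_size(0.15406, 0.45, 35.5), 1))
```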

  2. Comparison of Sample Size by Bootstrap and by Formulas Based on Normal Distribution Assumption.

    PubMed

    Wang, Zuozhen

    2018-01-01

    The bootstrapping technique is distribution-independent, which provides an indirect way to estimate the sample size for a clinical trial based on a relatively small sample. In this paper, bootstrap sample size estimation for comparing two parallel-design arms with continuous data is presented for several test types (inequality, non-inferiority, superiority, and equivalence). Sample size calculation by mathematical formulas (under the normal distribution assumption) is also carried out for the same data. The power difference between the two calculation methods is acceptably small for all the test types, showing that the bootstrap procedure is a credible technique for sample size estimation. We then compared the powers determined by the two methods on data that violate the normal distribution assumption. To accommodate this feature of the data, the nonparametric Wilcoxon test was applied to compare the two groups during the bootstrap power estimation. As a result, the power estimated by the normal-distribution-based formula is far larger than that estimated by bootstrap for each specific sample size per group. Hence, for this type of data it is preferable to apply the bootstrap method for sample size calculation from the outset, and to employ the same statistical method during bootstrap sample size estimation as will be used in the subsequent statistical analysis, provided historical data are available that are well representative of the population to which the proposed trial is intended to extrapolate.
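
    A minimal sketch of the bootstrap approach described above, assuming Python with NumPy/SciPy and a hypothetical pilot dataset; following the paper's recommendation for non-normal data, each bootstrap replicate is compared with the Wilcoxon rank-sum (Mann-Whitney) test:

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(42)

# Hypothetical historical/pilot data for the two parallel arms (non-normal).
pilot_a = rng.lognormal(mean=0.0, sigma=0.6, size=40)
pilot_b = rng.lognormal(mean=0.4, sigma=0.6, size=40)

def bootstrap_power(n_per_arm, n_boot=2000, alpha=0.05):
    """Estimate power at a candidate sample size by resampling the pilot data."""
    hits = 0
    for _ in range(n_boot):
        a = rng.choice(pilot_a, size=n_per_arm, replace=True)
        b = rng.choice(pilot_b, size=n_per_arm, replace=True)
        if mannwhitneyu(a, b, alternative="two-sided").pvalue < alpha:
            hits += 1
    return hits / n_boot

# Smallest n per arm reaching ~80% power for a test of inequality.
for n in range(10, 101, 10):
    power = bootstrap_power(n)
    print(n, round(power, 3))
    if power >= 0.80:
        break
```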

  3. Pros-IT CNR: an Italian prostate cancer monitoring project.

    PubMed

    Noale, Marianna; Maggi, Stefania; Artibani, Walter; Bassi, Pier Francesco; Bertoni, Filippo; Bracarda, Sergio; Conti, Giario Natale; Corvò, Renzo; Gacci, Mauro; Graziotti, Pierpaolo; Magrini, Stefano Maria; Maurizi Enrici, Riccardo; Mirone, Vincenzo; Montironi, Rodolfo; Muto, Giovanni; Pecoraro, Stefano; Porreca, Angelo; Ricardi, Umberto; Tubaro, Andrea; Zagonel, Vittorina; Zattoni, Filiberto; Crepaldi, Gaetano

    2017-04-01

    The Pros-IT CNR project aims to monitor a sample of Italian males ≥18 years of age who have been diagnosed in the participating centers with incident prostate cancer, by analyzing their clinical features, treatment protocols and outcome results in relation to quality of life. Pros-IT CNR is an observational, prospective, multicenter study. The National Research Council (CNR), Neuroscience Institute, Aging Branch (Padua) is the promoting center. Ninety-seven centers located throughout Italy were involved. The field study began on September 1, 2014. Eligible subjects were treatment-naïve patients with biopsy-verified prostate cancer. A sample size of 1500 patients was planned. A baseline assessment including anamnestic data, clinical history, risk factors, the initial diagnosis, cancer staging information and quality of life (Italian UCLA Prostate Cancer Index; SF-12 Scale) was completed. Six months after the initial diagnosis, a second assessment evaluating the patient's health status, the treatment carried out, and quality of life will be made. A third assessment, evaluating the treatment follow-up and quality of life, will be made 12 months after the initial diagnosis. The 4th, 5th, 6th and 7th assessments, similar to the third, will be completed 24, 36, 48 and 60 months after the initial diagnosis, respectively, and will also include a Food Frequency Questionnaire and the Physical Activity Scale for the Elderly. The study will provide information on patients' quality of life and its variations over time in relation to the treatments received for prostate cancer.

  4. Sample Size in Qualitative Interview Studies: Guided by Information Power.

    PubMed

    Malterud, Kirsti; Siersma, Volkert Dirk; Guassora, Ann Dorrit

    2015-11-27

    Sample sizes must be determined in qualitative studies, as in quantitative studies, but not by the same means. The prevailing concept for sample size in qualitative studies is "saturation." Saturation is closely tied to a specific methodology, and the term is inconsistently applied. We propose the concept of "information power" to guide adequate sample size for qualitative studies. Information power indicates that the more information the sample holds that is relevant to the actual study, the fewer participants are needed. We suggest that the size of a sample with sufficient information power depends on (a) the aim of the study, (b) sample specificity, (c) use of established theory, (d) quality of dialogue, and (e) analysis strategy. We present a model in which these elements of information and their relevant dimensions are related to information power. Application of this model in the planning and during data collection of a qualitative study is discussed. © The Author(s) 2015.

  5. Large Area and Short-Pulse Shock Initiation of a Tatb/hmx Mixed Explosive

    NASA Astrophysics Data System (ADS)

    Guiji, Wang; Chengwei, Sun; Jun, Chen; Cangli, Liu; Jianheng, Zhao; Fuli, Tan; Ning, Zhang

    2007-12-01

    Large-area, short-pulse shock initiation experiments on a plastic-bonded mixed explosive of TATB (80%) and HMX (15%) have been performed with an electric gun, in which a Mylar flyer 10-19 mm in diameter and 0.05-0.30 mm in thickness was launched by an electrically exploding metallic bridge foil. The cylindrical explosive specimens (Φ16 mm × 8 mm) were initiated by Mylar flyers 0.07-0.20 mm thick, which induced shock pressures in the specimens with durations ranging from 0.029 to 0.109 μs. The experimental data were treated with the DRM (Delayed Robbins-Monro) procedure; the resulting 50%-probability initiation thresholds are 3.398-1.713 km/s in flyer velocity and 13.73-5.23 GPa in shock pressure, respectively, for the different pulse durations. Shock initiation criteria for the explosive at 50% and 100% probabilities are derived. In addition, a 30° wedged sample was tested, and the shock-to-detonation transition (SDT) process emerging on its inclined surface was diagnosed with a device consisting of multiple optical-fiber probes, an optoelectronic transducer and a digital oscilloscope. The Pop plot of the explosive was obtained from the SDT data.
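
    The DRM procedure cited here is a delayed variant of Robbins-Monro stochastic approximation for locating the 50% go/no-go threshold. The sketch below shows only the basic (non-delayed) Robbins-Monro update, with a simulated logistic response standing in for real firing tests; the true threshold and spread are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
TRUE_V50, SPREAD = 2.5, 0.15   # invented "true" threshold (km/s) and spread

def fire(v):
    """Simulated go/no-go outcome: True = detonation (logistic response curve)."""
    p = 1.0 / (1.0 + np.exp(-(v - TRUE_V50) / SPREAD))
    return rng.random() < p

def robbins_monro_v50(v0=3.0, step=0.5, n_shots=40):
    """Basic Robbins-Monro sequence converging towards the 50% threshold."""
    v = v0
    for n in range(1, n_shots + 1):
        y = fire(v)                    # go/no-go result at the current level
        v -= (step / n) * (y - 0.5)    # step down after a go, up after a no-go
    return v

print("estimated V50 ~", round(robbins_monro_v50(), 3), "km/s")
```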

  6. Hot spot-derived shock initiation phenomena in heterogeneous nitromethane

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dattelbaum, Dana M; Sheffield, Stephen A; Stahl, David B

    2009-01-01

    The addition of solid silica particles to gelled nitromethane offers a tractable model system for interrogating the role of impedance mismatches as one type of hot spot 'seed' on the initiation behaviors of explosive formulations. Gas gun-driven plate impact experiments are used to produce well-defined shock inputs into nitromethane-silica mixtures containing size-selected silica beads at 6 wt%. The Pop plots, or relationships between shock input pressure and run distance (or time) to detonation, for mixtures containing small (1-4 μm) and large (40 μm) beads are presented. Overall, the addition of beads was found to influence the shock sensitivity of the mixtures, with the smaller beads being more sensitizing than the larger beads, lowering the shock initiation threshold for the same run distance to detonation compared with neat nitromethane. In addition, the use of embedded electromagnetic gauges provides detailed information pertaining to the mechanism of the build-up to detonation and the associated reactive flow. Of note, an initiation mechanism characteristic of homogeneous liquid explosives, such as nitromethane, was observed in the nitromethane-40 μm diameter silica samples at high shock input pressures, indicating that the influence of hot spots on the initiation process was minimal under these conditions.

  7. Pre- and post-irradiation characterization and properties measurements of ZrC coated surrogate TRISO particles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vasudevamurthy, Gokul; Katoh, Yutai; Hunn, John D

    2010-09-01

    Zirconium carbide is a candidate to either replace or supplement silicon carbide as a coating material in TRISO fuel particles for high-temperature gas-cooled reactor fuels. Six sets of ZrC-coated surrogate microsphere samples, fabricated by the Japan Atomic Energy Agency using the fluidized bed chemical vapor deposition method, were irradiated in the High Flux Isotope Reactor at the Oak Ridge National Laboratory. The developmental samples available for the irradiation experiment were either as-fabricated coated particles or particles that had been heat-treated to simulate the fuel compacting process. Five sets of samples had nominally stoichiometric compositions, with the sixth being richer in carbon (C/Zr = 1.4). The samples were irradiated at 800 and 1250 °C with fast neutron fluences of 2 and 6 dpa. Post-irradiation, the samples were retrieved from the irradiation capsules, and microstructural examination was performed at the Oak Ridge National Laboratory's Low Activation Materials Development and Analysis Laboratory. This work was supported by the US Department of Energy Office of Nuclear Energy's Advanced Gas Reactor program as part of an International Nuclear Energy Research Initiative (INERI) collaboration with Japan. This report includes progress from that INERI collaboration, as well as results of some follow-up examination of the irradiated specimens. Post-irradiation examination included microstructural characterization and nanoindentation hardness/modulus measurements. The examinations revealed grain size enhancement and softening as the primary effects of both heat treatment and irradiation in stoichiometric ZrC with a non-layered, homogeneous grain structure, raising serious concerns about the mechanical suitability of these particular developmental coatings as a replacement for SiC in TRISO fuel. Samples with either free carbon or carbon-rich layers dispersed in the ZrC coatings experienced negligible grain size enhancement during both heat treatment and irradiation. However, these samples experienced irradiation-induced softening similar to the stoichiometric ZrC samples.

  8. Reexamining Sample Size Requirements for Multivariate, Abundance-Based Community Research: When Resources are Limited, the Research Does Not Have to Be.

    PubMed

    Forcino, Frank L; Leighton, Lindsey R; Twerdy, Pamela; Cahill, James F

    2015-01-01

    Community ecologists commonly perform multivariate techniques (e.g., ordination, cluster analysis) to assess patterns and gradients of taxonomic variation. A critical requirement for a meaningful statistical analysis is accurate information on the taxa found within an ecological sample. However, oversampling (too many individuals counted per sample) also comes at a cost, particularly for ecological systems in which identification and quantification are substantially more resource consuming than the field expedition itself. In such systems, an increasingly larger sample size will eventually yield diminishing returns in improving any pattern or gradient revealed by the data, but will also lead to continually increasing costs. Here, we examine 396 datasets: 44 previously published and 352 created datasets. Using meta-analytic and simulation-based approaches, we seek (1) to determine the minimal sample sizes required to produce robust multivariate statistical results when conducting abundance-based community ecology research, and (2) to determine the dataset parameters (i.e., evenness, number of taxa, number of samples) that require larger sample sizes, regardless of resource availability. We found that in the 44 previously published and the 220 created datasets with randomly chosen abundances, a conservative sample size of 58 individuals produced the same multivariate results as all larger sample sizes. However, this minimal number varies as a function of evenness, with increased evenness resulting in increased minimal sample sizes. Sample sizes as small as 58 individuals are thus sufficient for a broad range of multivariate abundance-based research. In cases where resource availability is the limiting factor for conducting a project (e.g., a small university, or limited time to conduct the research project), statistically viable results can still be obtained with less of an investment.
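
    A minimal sketch of the kind of simulation involved: individuals are subsampled from each community sample at increasing counts, Bray-Curtis dissimilarity matrices are recomputed, and their agreement with the full-data matrix is measured with a Mantel-style rank correlation (Python with NumPy/SciPy assumed; the abundance matrix is invented, and this is not the authors' code):

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)

# Hypothetical abundance matrix: 20 community samples x 30 taxa (counts).
full_counts = rng.negative_binomial(n=2, p=0.1, size=(20, 30))

def subsample(counts, n_individuals):
    """Draw n individuals per sample without replacement (multivariate hypergeometric)."""
    return np.array([
        rng.multivariate_hypergeometric(row, min(n_individuals, int(row.sum())))
        for row in counts
    ])

ref = pdist(full_counts, metric="braycurtis")
for n in (25, 50, 58, 100, 200):
    sub = pdist(subsample(full_counts, n), metric="braycurtis")
    rho, _ = spearmanr(ref, sub)
    print(f"n = {n:3d} individuals/sample -> rank correlation with full data: {rho:.3f}")
```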

  9. Dynamics of a black-capped chickadee population, 1958-1983

    USGS Publications Warehouse

    Loery, G.; Nichols, J.D.

    1985-01-01

    The dynamics of a wintering population of Black-capped Chickadees (Parus atricapillus) were studied from 1958-1983 using capture-recapture methods. The Jolly-Seber model was used to obtain annual estimates of population size, survival rate, and recruitment. The average estimated population size over this period was approximately 160 birds. The average estimated number of new birds entering the population each year and alive at the time of sampling was approximately 57. The arithmetic mean annual survival rate estimate was approximately 0.59. We tested hypotheses about possible relationships between these population parameters and (1) the natural introduction of Tufted Titmice (Parus bicolor) to the area, (2) the clear-cutting of portions of nearby red pine (Pinus resinosa) plantations, and (3) natural variations in winter temperatures. The chickadee population exhibited a substantial short-term decline following titmouse establishment, produced by decreases in both survival rate and number of new recruits. Survival rate declined somewhat after the initiation of the pine clear-cutting, but population size was very similar before and after clear-cutting. Weighted least squares analyses provided no evidence of a relationship between survival rate and either of two winter temperature variables.
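
    The Jolly-Seber estimates used here generalize the basic two-sample capture-recapture idea to an open population sampled many times. As a hedged illustration of that basic idea only (the Chapman-corrected Lincoln-Petersen estimator, not the Jolly-Seber model actually used, and with invented numbers):

```python
def chapman_estimate(n1, n2, m2):
    """Chapman-corrected Lincoln-Petersen abundance estimate from two occasions.

    n1 : animals caught and marked on occasion 1
    n2 : animals caught on occasion 2
    m2 : marked animals among the occasion-2 catch
    """
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

# Invented winter banding numbers: 80 banded, 70 caught later, 35 recaptures.
print(round(chapman_estimate(80, 70, 35), 1))   # about 159 birds
```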

  10. Is environmental sustainability a strategic priority for logistics service providers?

    PubMed

    Evangelista, Pietro; Colicchia, Claudia; Creazza, Alessandro

    2017-08-01

    Although an increasing number of third-party logistics service providers (3PLs) regard environmental sustainability as a key area of management, there is still great uncertainty about how 3PLs implement environmental strategies and how they translate green efforts into practice. Through a multiple case study analysis, this paper explores the environmental strategies of a sample of medium-sized 3PLs operating in Italy and the UK, in terms of environmental organizational culture, initiatives, and influencing factors. Our analysis shows that, although environmental sustainability is generally recognised as a strategic priority, a certain degree of diversity in the deployment of environmental strategies still exists. This paper is original in that the extant literature on green strategies of 3PLs provides findings predominantly from a single-country perspective and mainly investigates large/multinational organizations. It also provides indications to help managers of medium-sized 3PLs in positioning their business. This is particularly meaningful in the 3PL industry, where medium-sized organizations contribute significantly to the generated turnover and market value. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Effects of Quartz Particle Size and Sucrose Addition on Melting Behavior of a Melter Feed for High-Level Waste Glass

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marcial, Jose; Hrma, Pavel R; Schweiger, Michael J

    2010-08-11

    The behavior of melter feed (a mixture of nuclear waste and glass-forming additives) during waste-glass processing has a significant impact on the rate of the vitrification process. We studied the effects of silica particle size and sucrose addition on the volumetric expansion (foaming) of a high-alumina feed and the rate of dissolution of silica particles in feed samples heated at 5°C/min up to 1200°C. The initial size of quartz particles in the feed ranged from 5 to 195 µm. The fraction of sucrose added ranged from 0 to 0.20 g per g glass. Extensive foaming occurred only in feeds with 5-μm quartz particles; particles >150 µm formed clusters. Particles of 5 µm completely dissolved by 900°C, whereas particles >150 µm did not fully dissolve even when the temperature reached 1200°C. Sucrose addition had virtually no impact on either foaming or the dissolution of silica particles.

  12. Anticipatory Postural Adjustment During Self-Initiated, Cued, and Compensatory Stepping in Healthy Older Adults and Patients With Parkinson Disease.

    PubMed

    Schlenstedt, Christian; Mancini, Martina; Horak, Fay; Peterson, Daniel

    2017-07-01

    To characterize anticipatory postural adjustments (APAs) across a variety of step initiation tasks in people with Parkinson disease (PD) and healthy subjects. Cross-sectional study. Step initiation was analyzed during self-initiated gait, perceptually cued gait, and compensatory forward stepping after platform perturbation. People with PD were assessed on and off levodopa. University research laboratory. People (N=31) with PD (n=19) and healthy age-matched subjects (n=12). Not applicable. Mediolateral (ML) size of APAs (calculated from center of pressure recordings), step kinematics, and body alignment. With respect to self-initiated gait, the ML size of APAs was significantly larger during the cued condition and significantly smaller during the compensatory condition (P<.001). Healthy subjects and patients with PD did not differ in body alignment during the stance phase prior to stepping. No significant group effect was found for ML size of APAs between healthy subjects and patients with PD. However, the reduction in APA size from cued to compensatory stepping was significantly less pronounced in PD off medication compared with healthy subjects, as indicated by a significant group-by-condition interaction effect (P<.01). No significant differences were found comparing patients with PD on and off medications. Specific stepping conditions had a significant effect on the preparation and execution of step initiation; therefore, APA size should be interpreted with respect to the specific stepping condition. Across-task changes in people with PD were less pronounced compared with healthy subjects. Antiparkinsonian medication did not significantly improve step initiation in this mildly affected PD cohort. Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  13. A Note on Sample Size and Solution Propriety for Confirmatory Factor Analytic Models

    ERIC Educational Resources Information Center

    Jackson, Dennis L.; Voth, Jennifer; Frey, Marc P.

    2013-01-01

    Determining an appropriate sample size for use in latent variable modeling techniques has presented ongoing challenges to researchers. In particular, small sample sizes are known to present concerns over sampling error for the variances and covariances on which model estimation is based, as well as for fit indexes and convergence failures. The…

  14. Space Weathering of Intermediate-Size Soil Grains in Immature Apollo 17 Soil 71061

    NASA Technical Reports Server (NTRS)

    Wentworth, S. J.; Robinson, G. A.; McKay, D. S.

    2005-01-01

    Understanding space weathering, which is caused by micrometeorite impacts, implantation of solar wind gases, radiation damage, chemical effects from solar particles and cosmic rays, interactions with the lunar atmosphere, and sputter erosion and deposition, continues to be a primary objective of lunar sample research. Electron beam studies of space weathering have focused on space weathering effects on individual glasses and minerals from the finest size fractions of lunar soils [1] and patinas on lunar rocks [2]. We are beginning a new study of space weathering of intermediate-size individual mineral grains from lunar soils. For this initial work, we chose an immature soil (see below) in order to maximize the probability that some individual grains are relatively unweathered. The likelihood of identifying a range of relatively unweathered grains in a mature soil is low, and we plan to study grains ranging from pristine to highly weathered in order to determine the progression of space weathering. Future studies will include grains from mature soils. We are currently in the process of documenting splash glass, glass pancakes, craters, and accretionary particles (glass and mineral grains) on plagioclase from our chosen soil using high-resolution field emission scanning electron microscopy (FESEM). These studies are being done concurrently with our studies of patinas on larger lunar rocks [e.g., 3]. One of our major goals is to correlate the evidence for space weathering observed in studies of the surfaces of samples with the evidence demonstrated at higher resolution (TEM) using cross-sections of samples. For example, TEM studies verified the existence of vapor deposits on soil grains [1]; we do not yet know if they can be readily distinguished by surfaces studies of samples. A wide range of textures of rims on soil grains is also clear in TEM [1]; might it be possible to correlate them with specific characteristics of weathering features seen in SEM?

  15. A computer program for sample size computations for banding studies

    USGS Publications Warehouse

    Wilson, K.R.; Nichols, J.D.; Hines, J.E.

    1989-01-01

    Sample sizes necessary for estimating survival rates of banded birds, adults and young, are derived based on specified levels of precision. The banding study can be new or ongoing. The desired coefficient of variation (CV) for annual survival estimates, the CV for mean annual survival estimates, and the length of the study must be specified to compute sample sizes. A computer program is available for computation of the sample sizes, and a description of the input and output is provided.
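
    As a rough, hedged illustration of how a target precision translates into a sample size (a simple binomial approximation for a single annual survival estimate, not the band-recovery models the program actually implements): the CV of a survival estimate from n banded birds is approximately sqrt((1 - S) / (n S)), which can be inverted for n.

```python
import math

def birds_needed(survival, target_cv):
    """Bands needed so that CV(S-hat) <= target_cv under a simple binomial model."""
    return math.ceil((1 - survival) / (survival * target_cv ** 2))

# Example: annual survival around 0.6 and a desired CV of 10% on the estimate
# require roughly 67 banded birds whose fates are known for that year.
print(birds_needed(0.6, 0.10))
```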

  16. Flash Location, Size, and Rates Relative to the Evolving Kinematics and Microphysics of the 29 May 2012 DC3 Supercell Storm

    NASA Astrophysics Data System (ADS)

    MacGorman, D. R.; DiGangi, E.; Ziegler, C.; Biggerstaff, M. I.; Betten, D.; Bruning, E. C.

    2014-12-01

    A supercell thunderstorm was observed on 29 May 2012 during the Deep Convective Clouds and Chemistry (DC3) experiment. This storm was part of a cluster of severe storms and produced 5" hail, an EF-1 tornado, and copious lightning over the course of a few hours. During a period in which flash rates were increasing rapidly, observations were obtained from mobile polarimetric radars and a balloon-borne electric field meter (EFM) and particle imager, while aircraft sampled the chemistry of the inflow and anvil. In addition, the storm was within the domain of the 3-dimensional Oklahoma Lightning Mapping Array (LMA) and the S-band KTLX WSR-88D radar. The focus of this paper is the evolution of flash rates, the location of flash initiations, and the distribution of flash size and flash extent density as they relate to the evolving kinematics and microphysics of the storm for the approximately 30-minute period in which triple-Doppler coverage was available. Besides analyzing reflectivity structure and three-dimensional winds for the entire period, we examine mixing ratios of cloud water, cloud ice, rain, and graupel/hail that have been retrieved by a Lagrangian analysis for three select times, one each at the beginning, middle, and end of the period. Flashes in and around the updraft of this storm were typically small. Flash size tended to increase, and flash rates tended to decrease, as distance from the updraft increased. Although flash initiations were most frequent near the updraft, some flashes were initiated near the edge of 30 dBZ cores and propagated into the anvil. Later, some flashes were initiated in the anvil itself, in vertical cells that formed and became electrified tens of kilometers downshear of the main body of the storm. Considerable lightning structure was inferred to be in regions dominated by cloud ice in the upper part of the storm. The continual small discharges in the overshooting top of the storm tended to be near or within 15 dBZ contours, although occasional discharges appeared to extend above the storm.

  17. Probability of coincidental similarity among the orbits of small bodies - I. Pairing

    NASA Astrophysics Data System (ADS)

    Jopek, Tadeusz Jan; Bronikowska, Małgorzata

    2017-09-01

    The probability of coincidental clustering among the orbits of comets, asteroids and meteoroids depends on many factors, such as the size of the orbital sample searched for clusters and the size of the identified group; it differs for groups of 2, 3, 4, … members. Because the probability of coincidental clustering is assessed by numerical simulation, it also depends on the method used to generate the synthetic orbits. We have tested the impact of some of these factors. For a given size of the orbital sample we have assessed the probability of random pairing among several orbital populations of different sizes, and we have found how these probabilities vary with the size of the orbital samples. Finally, keeping the size of the orbital sample fixed, we have shown that the probability of random pairing can be significantly different for orbital samples obtained by different observation techniques. For the user's convenience, we have also derived several formulae that, for a given size of the orbital sample, can be used to calculate the similarity threshold corresponding to a small probability of coincidental similarity between two orbits.
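
    A minimal sketch of the Monte Carlo idea (Python with NumPy/SciPy assumed): generate synthetic orbit samples of a given size, and record how often at least one pair falls below a similarity threshold purely by chance. The element distributions are invented, and the plain Euclidean distance in (q, e, 2 sin(i/2)) space is a crude stand-in for a proper orbital-similarity D-criterion such as Southworth-Hawkins.

```python
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(7)

def synthetic_orbits(n):
    """Invented distributions of perihelion distance q [AU], eccentricity e, inclination i."""
    q = rng.uniform(0.1, 1.0, n)
    e = rng.uniform(0.3, 1.0, n)
    i = np.radians(rng.uniform(0.0, 40.0, n))
    # Coordinates used by the stand-in similarity measure below.
    return np.column_stack([q, e, 2.0 * np.sin(i / 2.0)])

def pairing_probability(sample_size, threshold, trials=1000):
    """Fraction of random samples containing at least one pair closer than the threshold."""
    hits = sum(pdist(synthetic_orbits(sample_size)).min() < threshold for _ in range(trials))
    return hits / trials

for n in (50, 100, 200):
    print(f"N = {n:3d}: P(coincidental pair) ~ {pairing_probability(n, 0.02):.3f}")
```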

  18. Designing a two-rank acceptance sampling plan for quality inspection of geospatial data products

    NASA Astrophysics Data System (ADS)

    Tong, Xiaohua; Wang, Zhenhua; Xie, Huan; Liang, Dan; Jiang, Zuoqin; Li, Jinchao; Li, Jun

    2011-10-01

    To address the disadvantages of classical sampling plans designed for traditional industrial products, we propose a novel two-rank acceptance sampling plan (TRASP) for the inspection of geospatial data outputs based on the acceptance quality level (AQL). The first-rank sampling plan inspects the lot consisting of map sheets, and the second inspects the lot consisting of features in an individual map sheet. The TRASP design is formulated as an optimization problem with respect to sample size and acceptance number, which covers two lot-size cases. The first case is for a small lot size, with nonconformities modeled by a hypergeometric distribution function, and the second is for a larger lot size, with nonconformities modeled by a Poisson distribution function. The proposed TRASP is illustrated through two empirical case studies. Our analysis demonstrates that (1) the proposed TRASP provides a general approach for quality inspection of geospatial data outputs consisting of non-uniform items and (2) the acceptance sampling plan based on TRASP performs better than other classical sampling plans. It overcomes the drawbacks of percent sampling, i.e., "strictness for large lot size, toleration for small lot size," and those of a national standard used specifically for industrial outputs, i.e., "lots with different sizes corresponding to the same sampling plan."
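
    A minimal sketch of the single-stage (n, c) search that underlies such a design, assuming Python with SciPy: the acceptance probability is hypergeometric for small lots (sampling without replacement) and Poisson-approximated for large lots, and the plan must satisfy a producer's risk at the AQL and a consumer's risk at a limiting quality level. The risk levels and quality levels below are illustrative, not values from the paper.

```python
from scipy.stats import hypergeom, poisson

def accept_prob(n, c, defect_rate, lot_size=None):
    """P(accept the lot) with sample size n and acceptance number c.

    Small lots (lot_size given): hypergeometric, sampling without replacement.
    Large lots (lot_size=None) : Poisson approximation.
    """
    if lot_size is not None:
        defectives = round(lot_size * defect_rate)
        return hypergeom.cdf(c, lot_size, defectives, n)
    return poisson.cdf(c, n * defect_rate)

def find_plan(aql, lq, alpha=0.05, beta=0.10, lot_size=None, n_max=500):
    """Smallest (n, c) with P(accept | AQL) >= 1 - alpha and P(accept | LQ) <= beta."""
    for n in range(1, n_max + 1):
        for c in range(n + 1):
            if (accept_prob(n, c, aql, lot_size) >= 1 - alpha
                    and accept_prob(n, c, lq, lot_size) <= beta):
                return n, c
    return None

# Illustrative quality levels: AQL = 1% nonconforming, limiting quality = 8%.
print("large-lot plan (Poisson):       ", find_plan(0.01, 0.08))
print("small-lot plan (hypergeometric):", find_plan(0.01, 0.08, lot_size=200))
```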

  19. Effect of High Strain-Rate Deformation and Aging Temperature on the Evolution of Structure, Microhardness, and Wear Resistance of Low-Alloyed Cu-Cr-Zr Alloy

    NASA Astrophysics Data System (ADS)

    Kheifets, A. E.; Khomskaya, I. V.; Korshunov, L. G.; Zel'dovich, V. I.; Frolova, N. Yu.

    2018-04-01

    The effect of preliminary high strain-rate deformation, performed via dynamic channel-angular pressing (DCAP), and subsequent annealing on the tribological properties of a dispersion-hardened Cu-0.092 wt % Cr-0.086 wt % Zr alloy has been investigated. It has been shown that the surface-layer material of the alloy with a submicrocrystalline (SMC) structure obtained by the DCAP method can be strengthened by severe plastic deformation under sliding friction, through the formation of a nanocrystalline structure with crystallites 15-60 nm in size. It has also been shown that the SMC structure obtained by the high strain-rate DCAP deformation decreases the wear rate of the samples under sliding friction by a factor of 1.4 compared to the initial coarse-grained state. The maximum values of microhardness and the minimum values of the coefficient of friction and shear strength were obtained in samples subjected to DCAP followed by aging at 400°C. The attained microhardness of 3350 MPa exceeds that of the alloy in the initial coarse-grained state by a factor of five.

  20. Empowering a safer practice: PDAs are integral tools for nursing and health care.

    PubMed

    Hudson, Kathleen; Buell, Virginia

    2011-04-01

    This study's purpose was to assess the characteristics of personal digital assistant (PDA) uptake and use in both clinical and classroom work for baccalaureate student nurses (BSN) within a rural Texas university. Patient care has become more complicated, risk-prone, automated and costly, and efficiencies at the bedside are needed to continue to provide safe and successful care within this environment. A purposive sample of nursing students using PDAs throughout their educational process was followed at three campus sites. The initial sample size was 105 students, followed by 94 students at the end of the first semester and 75 students at curriculum completion at the end of a 2-year period. Students completed structured and open-ended questions to assess their perspectives on PDA usage. Student uptake varied in relation to overall competency, with minimal to high utilization noted, and was influenced by current product costs. PDAs are developing into useful clinical tools by providing quick and important information for safer care. Using bedside PDAs effectively assists with maintaining patient safety, efficiency of care delivery and staff satisfaction. This study evaluates the initial implementation of PDAs by students, our future multitasking nurses. © 2011 The Authors. Journal compilation © 2011 Blackwell Publishing Ltd.
