Sample records for fixed sample size

  1. Comparing fixed sampling with minimizer sampling when using k-mer indexes to find maximal exact matches.

    PubMed

    Almutairy, Meznah; Torng, Eric

    2018-01-01

    Bioinformatics applications and pipelines increasingly use k-mer indexes to search for similar sequences. The major problem with k-mer indexes is that they require lots of memory. Sampling is often used to reduce index size and query time. Most applications use one of two major types of sampling: fixed sampling and minimizer sampling. It is well known that fixed sampling will produce a smaller index, typically by roughly a factor of two, whereas it is generally assumed that minimizer sampling will produce faster query times since query k-mers can also be sampled. However, no direct comparison of fixed and minimizer sampling has been performed to verify these assumptions. We systematically compare fixed and minimizer sampling using the human genome as our database. We use the resulting k-mer indexes for fixed sampling and minimizer sampling to find all maximal exact matches between our database, the human genome, and three separate query sets, the mouse genome, the chimp genome, and an NGS data set. We reach the following conclusions. First, using larger k-mers reduces query time for both fixed sampling and minimizer sampling at a cost of requiring more space. If we use the same k-mer size for both methods, fixed sampling typically requires half as much space whereas minimizer sampling processes queries only slightly faster. If we are allowed to use any k-mer size for each method, then we can choose a k-mer size such that fixed sampling both uses less space and processes queries faster than minimizer sampling. The reason is that although minimizer sampling is able to sample query k-mers, the number of shared k-mer occurrences that must be processed is much larger for minimizer sampling than fixed sampling. In conclusion, we argue that for any application where each shared k-mer occurrence must be processed, fixed sampling is the right sampling method.
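
As a concrete illustration of the two schemes being compared, here is a minimal, hypothetical sketch (not the authors' code): fixed sampling keeps every w-th k-mer start position, while minimizer sampling keeps, for each window of w consecutive k-mers, the position of the smallest k-mer under some ordering (plain lexicographic order here; implementations vary).

```python
def fixed_sample(seq, k, w):
    """Fixed sampling: keep every w-th k-mer start position."""
    return set(range(0, len(seq) - k + 1, w))

def minimizer_sample(seq, k, w):
    """Minimizer sampling: in each window of w consecutive k-mers,
    keep the position of the smallest k-mer (lexicographic order)."""
    kmers = [seq[i:i + k] for i in range(len(seq) - k + 1)]
    kept = set()
    for start in range(len(kmers) - w + 1):
        window = range(start, start + w)
        kept.add(min(window, key=lambda i: kmers[i]))
    return kept

seq = "ACGTACGTGGATCCA"
fixed = fixed_sample(seq, k=4, w=3)     # sampled k-mer start positions
mini = minimizer_sample(seq, k=4, w=3)
```

Because every window must contain a kept position, the minimizer index is denser than the fixed index at the same k and w, which is consistent with the abstract's observation that fixed sampling yields roughly half the index size.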


  3. Developing the Noncentrality Parameter for Calculating Group Sample Sizes in Heterogeneous Analysis of Variance

    ERIC Educational Resources Information Center

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2011-01-01

    Sample size determination is an important issue in planning research. In the context of one-way fixed-effect analysis of variance, the conventional sample size formula cannot be applied for the heterogeneous variance cases. This study discusses the sample size requirement for the Welch test in the one-way fixed-effect analysis of variance with…
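
The abstract is truncated, but the underlying calculation can be sketched. The paper develops a noncentrality-parameter approach for the Welch test; the following is only a cruder large-sample normal approximation, with hypothetical planning values, that shows how unequal variances enter the per-group sample size:

```python
from math import ceil
from statistics import NormalDist

def welch_n_per_group(delta, sd1, sd2, alpha=0.05, power=0.80):
    """Per-group n for a two-group Welch-type comparison with unequal
    variances and equal allocation, via the normal approximation:
    n = (z_{1-alpha/2} + z_power)**2 * (sd1**2 + sd2**2) / delta**2."""
    z = NormalDist()
    za, zb = z.inv_cdf(1 - alpha / 2), z.inv_cdf(power)
    return ceil((za + zb) ** 2 * (sd1 ** 2 + sd2 ** 2) / delta ** 2)

# Hypothetical planning values: detect a mean difference of 5 when the
# two group standard deviations are 8 and 12.
n = welch_n_per_group(delta=5.0, sd1=8.0, sd2=12.0)
```

Note how the heterogeneous variances enter additively, so the conventional equal-variance formula understates n whenever one group is more variable.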

  4. Recommended protocols for sampling macrofungi

    Treesearch

    Gregory M. Mueller; John Paul Schmit; Sabine M. Huhndorf; Leif Ryvarden; Thomas E. O'Dell; D. Jean Lodge; Patrick R. Leacock; Milagro Mata; Loengrin Umaña; Qiuxin (Florence) Wu; Daniel L. Czederpiltz

    2004-01-01

    This chapter discusses several issues regarding recommended protocols for sampling macrofungi: opportunistic sampling of macrofungi, sampling conspicuous macrofungi using fixed-size plots, sampling small Ascomycetes using microplots, and sampling a fixed number of downed logs.

  5. Allocating Sample Sizes to Reduce Budget for Fixed-Effect 2×2 Heterogeneous Analysis of Variance

    ERIC Educational Resources Information Center

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2016-01-01

    This article discusses the sample size requirements for the interaction, row, and column effects, respectively, by forming a linear contrast for a 2×2 factorial design for fixed-effects heterogeneous analysis of variance. The proposed method uses the Welch t test and its corresponding degrees of freedom to calculate the final sample size in a…

  6. Dispersion and sampling of adult Dermacentor andersoni in rangeland in Western North America.

    PubMed

    Rochon, K; Scoles, G A; Lysyk, T J

    2012-03-01

    A fixed-precision sampling plan was developed for off-host populations of the adult Rocky Mountain wood tick, Dermacentor andersoni (Stiles), based on data collected by dragging at 13 locations in Alberta, Canada; Washington; and Oregon. In total, 222 site-date combinations were sampled. Each site-date combination was considered a sample, and each sample ranged in size from 86 to 250 10-m² quadrats. Analysis of simulated quadrats ranging in size from 10 to 50 m² indicated that the most precise sample unit was the 10-m² quadrat. Samples taken when abundance was < 0.04 ticks per 10 m² were more likely not to depart significantly from statistical randomness than samples taken when abundance was greater. Data were grouped into ten abundance classes and assessed for fit to the Poisson and negative binomial distributions. The Poisson distribution fit only data in abundance classes < 0.02 ticks per 10 m², while the negative binomial distribution fit data from all abundance classes. A negative binomial distribution with common k = 0.3742 fit data in eight of the ten abundance classes. Both the Taylor and Iwao mean-variance relationships were fit and used to predict sample sizes for a fixed level of precision. Sample sizes predicted using the Taylor model tended to underestimate actual sample sizes, while sample sizes estimated using the Iwao model tended to overestimate actual sample sizes. Using a negative binomial with common k provided estimates of required sample sizes closest to empirically calculated sample sizes.
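
The two variance-mean models used to predict sample sizes can be sketched directly. For a fixed precision D (standard error divided by mean), Taylor's power law s² = a·mᵇ gives n = a·m^(b-2)/D², and a negative binomial with common k gives n = (1/m + 1/k)/D². The k = 0.3742 below is from the abstract; the Taylor coefficients and the density are illustrative only:

```python
def n_taylor(mean, a, b, D=0.25):
    """Quadrats required under Taylor's power law s**2 = a * m**b
    for fixed precision D = SE/mean."""
    return a * mean ** (b - 2) / D ** 2

def n_negbin(mean, k, D=0.25):
    """Quadrats required under a negative binomial with common k."""
    return (1 / mean + 1 / k) / D ** 2

# k is the common k reported in the abstract; a, b, and the tick
# density are illustrative values, not fitted ones.
n_nb = n_negbin(mean=0.05, k=0.3742, D=0.25)
n_tp = n_taylor(mean=0.05, a=2.0, b=1.3, D=0.25)
```

Both formulas show the key qualitative point of such plans: required sample size falls sharply as mean density rises.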

  7. Post-stratified estimation: with-in strata and total sample size recommendations

    Treesearch

    James A. Westfall; Paul L. Patterson; John W. Coulston

    2011-01-01

    Post-stratification is used to reduce the variance of estimates of the mean. Because the stratification is not fixed in advance, within-strata sample sizes can be quite small. The survey statistics literature provides some guidance on minimum within-strata sample sizes; however, the recommendations and justifications are inconsistent and apply broadly for many...

  8. Influence of preservative and mounting media on the size and shape of monogenean sclerites.

    PubMed

    Fankoua, Severin-Oscar; Bitja Nyom, Arnold R; Bahanak, Dieu Ne Dort; Bilong Bilong, Charles F; Pariselle, Antoine

    2017-08-01

    Based on Cichlidogyrus sp. (Monogenea, Ancyrocephalidae) specimens from Hemichromis sp. hosts, we tested the influence of different methods used to fix and preserve specimens [frozen material, alcohol-preserved, formalin-preserved, and the museum process for fish preservation (fixed in formalin and preserved in alcohol)] and of different media used to mount the slides [tap water, glycerin ammonium picrate (GAP), and Hoyer's medium (HM)] on the size and shape of the sclerotized parts of monogenean specimens. The results show that the use of HM significantly increases the size of haptoral sclerites [marginal hooks I, II, IV, V, and VI; dorsal bar length, width, distance between auricles, and auricle length; ventral bar length and width] and changes their shape [the angle between shaft and guard (outer and inner roots) opens in both ventral and dorsal anchors, the ventral bar becomes much wider, and the dorsal bar less curved]. This influence appears to be reduced when specimens are fixed in formalin. Because the systematics of the Monogenea is based on the size and shape of these sclerotized parts, we recommend the use of GAP as the mounting medium to prevent misidentification or the description of invalid new species; Hoyer's medium should be restricted to monogenean specimens fixed for a long time, which are more shrunken.

  9. Bias and Precision of Measures of Association for a Fixed-Effect Multivariate Analysis of Variance Model

    ERIC Educational Resources Information Center

    Kim, Soyoung; Olejnik, Stephen

    2005-01-01

    The sampling distributions of five popular measures of association with and without two bias adjusting methods were examined for the single factor fixed-effects multivariate analysis of variance model. The number of groups, sample sizes, number of outcomes, and the strength of association were manipulated. The results indicate that all five…

  10. High-Throughput Amplicon-Based Copy Number Detection of 11 Genes in Formalin-Fixed Paraffin-Embedded Ovarian Tumour Samples by MLPA-Seq

    PubMed Central

    Kondrashova, Olga; Love, Clare J.; Lunke, Sebastian; Hsu, Arthur L.; Waring, Paul M.; Taylor, Graham R.

    2015-01-01

    Whilst next generation sequencing can report point mutations in fixed tissue tumour samples reliably, the accurate determination of copy number is more challenging. The conventional Multiplex Ligation-dependent Probe Amplification (MLPA) assay is an effective tool for measurement of gene dosage but is restricted to around 50 targets due to the size resolution of the MLPA probes. By switching from a size-resolved format to a sequence-resolved format, we developed a scalable, high-throughput, quantitative assay. MLPA-seq is capable of detecting deletions, duplications, and amplifications in as little as 5 ng of genomic DNA, including from formalin-fixed paraffin-embedded (FFPE) tumour samples. We show that this method can detect BRCA1, BRCA2, ERBB2 and CCNE1 copy number changes in DNA extracted from snap-frozen and FFPE tumour tissue, with 100% sensitivity and >99.5% specificity. PMID:26569395
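
The dosage logic behind an MLPA-style sequence-resolved assay can be sketched as a ratio of ratios: normalize each target's read count to reference targets within a sample, then compare with a normal control. The gene names, counts, and thresholds below are hypothetical, not the paper's data:

```python
def dosage_ratios(sample_counts, control_counts, ref_targets):
    """Per-target dosage: normalize counts to reference targets within
    each sample, then divide by the same quantity in a normal control.
    Roughly 1.0 = two copies, ~0.5 = heterozygous loss, >=1.5 = gain
    (illustrative thresholds)."""
    def normalized(counts):
        ref = sum(counts[t] for t in ref_targets) / len(ref_targets)
        return {t: c / ref for t, c in counts.items()}
    s, c = normalized(sample_counts), normalized(control_counts)
    return {t: s[t] / c[t] for t in sample_counts}

# Hypothetical amplicon read counts for a tumour sample and a control.
sample = {"BRCA1": 260, "ERBB2": 1480, "REF1": 500, "REF2": 520}
control = {"BRCA1": 500, "ERBB2": 490, "REF1": 510, "REF2": 505}
ratios = dosage_ratios(sample, control, ref_targets=["REF1", "REF2"])
```

The within-sample normalization is what makes the call robust to differences in total sequencing depth between samples.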

  11. Foundational Principles for Large-Scale Inference: Illustrations Through Correlation Mining.

    PubMed

    Hero, Alfred O; Rajaratnam, Bala

    2016-01-01

    When can reliable inference be drawn in the "Big Data" context? This paper presents a framework for answering this fundamental question in the context of correlation mining, with implications for general large-scale inference. In large-scale data applications like genomics, connectomics, and eco-informatics, the dataset is often variable-rich but sample-starved: a regime where the number n of acquired samples (statistical replicates) is far fewer than the number p of observed variables (genes, neurons, voxels, or chemical constituents). Much of recent work has focused on understanding the computational complexity of proposed methods for "Big Data". Sample complexity, however, has received relatively less attention, especially in the setting when the sample size n is fixed and the dimension p grows without bound. To address this gap, we develop a unified statistical framework that explicitly quantifies the sample complexity of various inferential tasks. Sampling regimes can be divided into several categories: 1) the classical asymptotic regime where the variable dimension is fixed and the sample size goes to infinity; 2) the mixed asymptotic regime where both variable dimension and sample size go to infinity at comparable rates; 3) the purely high-dimensional asymptotic regime where the variable dimension goes to infinity and the sample size is fixed. Each regime has its niche, but only the latter regime applies to exa-scale data dimension. We illustrate this high-dimensional framework for the problem of correlation mining, where it is the matrix of pairwise and partial correlations among the variables that is of interest. Correlation mining arises in numerous applications and subsumes the regression context as a special case. We demonstrate various regimes of correlation mining based on the unifying perspective of high-dimensional learning rates and sample complexity for different structured covariance models and different inference tasks.

  12. Conditional Optimal Design in Three- and Four-Level Experiments

    ERIC Educational Resources Information Center

    Hedges, Larry V.; Borenstein, Michael

    2014-01-01

    The precision of estimates of treatment effects in multilevel experiments depends on the sample sizes chosen at each level. It is often desirable to choose sample sizes at each level to obtain the smallest variance for a fixed total cost, that is, to obtain optimal sample allocation. This article extends previous results on optimal allocation to…

  13. Effects of sample size on KERNEL home range estimates

    USGS Publications Warehouse

    Seaman, D.E.; Millspaugh, J.J.; Kernohan, Brian J.; Brundige, Gary C.; Raedeke, Kenneth J.; Gitzen, Robert A.

    1999-01-01

    Kernel methods for estimating home range are being used increasingly in wildlife research, but the effect of sample size on their accuracy is not known. We used computer simulations of 10-200 points/home range and compared accuracy of home range estimates produced by fixed and adaptive kernels with the reference (REF) and least-squares cross-validation (LSCV) methods for determining the amount of smoothing. Simulated home ranges varied from simple to complex shapes created by mixing bivariate normal distributions. We used the size of the 95% home range area and the relative mean squared error of the surface fit to assess the accuracy of the kernel home range estimates. For both measures, the bias and variance approached an asymptote at about 50 observations/home range. The fixed kernel with smoothing selected by LSCV provided the least-biased estimates of the 95% home range area. All kernel methods produced similar surface fit for most simulations, but the fixed kernel with LSCV had the lowest frequency and magnitude of very poor estimates. We reviewed 101 papers published in The Journal of Wildlife Management (JWM) between 1980 and 1997 that estimated animal home ranges. A minority of these papers used nonparametric utilization distribution (UD) estimators, and most did not adequately report sample sizes. We recommend that home range studies using kernel estimates use LSCV to determine the amount of smoothing, obtain a minimum of 30 observations per animal (but preferably ≥50), and report sample sizes in published results.
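
A minimal numpy sketch of a fixed-kernel home range estimate on simulated relocations. For brevity it uses an ad hoc reference-style bandwidth rather than the LSCV selection the study recommends, and a single bivariate-normal home range:

```python
import numpy as np

rng = np.random.default_rng(0)
pts = rng.normal(0.0, 1.0, size=(50, 2))    # simulated relocations

# Ad hoc reference-style bandwidth (the study recommends LSCV instead).
n = len(pts)
h = n ** (-1 / 6) * pts.std(axis=0).mean()

# Evaluate the fixed-kernel utilization distribution (UD) on a grid.
xs = np.linspace(-4, 4, 161)
cell = (xs[1] - xs[0]) ** 2
gx, gy = np.meshgrid(xs, xs)
grid = np.stack([gx.ravel(), gy.ravel()], axis=1)
d2 = ((grid[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
ud = np.exp(-d2 / (2 * h * h)).sum(1)
ud /= ud.sum() * cell                        # normalize to a density

# 95% home range: smallest set of cells holding 95% of the UD mass.
order = np.argsort(ud)[::-1]
mass = np.cumsum(ud[order]) * cell
area95 = cell * (np.searchsorted(mass, 0.95) + 1)
```

With 50 points the area estimate is already in the right ballpark, which mirrors the paper's finding that bias and variance level off near 50 observations per home range.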

  14. The dependence of halo mass on galaxy size at fixed stellar mass using weak lensing

    NASA Astrophysics Data System (ADS)

    Charlton, Paul J. L.; Hudson, Michael J.; Balogh, Michael L.; Khatri, Sumeet

    2017-12-01

    Stellar mass has been shown to correlate with halo mass, with non-negligible scatter. The stellar mass-size and luminosity-size relationships of galaxies also show significant scatter in galaxy size at fixed stellar mass. It is possible that, at fixed stellar mass and galaxy colour, the halo mass is correlated with galaxy size. Galaxy-galaxy lensing allows us to measure the mean masses of dark matter haloes for stacked samples of galaxies. We extend the analysis of the galaxies in the CFHTLenS catalogue by fitting single Sérsic surface brightness profiles to the lens galaxies in order to recover half-light radius values, allowing us to determine halo masses for lenses according to their size. Comparing our halo masses and sizes to baselines for that stellar mass yields a differential measurement of the halo mass-galaxy size relationship at fixed stellar mass, defined as M_h(M_*) ∝ r_eff^η(M_*). We find that, on average, our lens galaxies have η = 0.42 ± 0.12, i.e. larger galaxies live in more massive dark matter haloes. The relationship is strongest for high-mass luminous red galaxies. Investigation of this relationship in hydrodynamical simulations suggests that, at fixed M_*, satellite galaxies have a larger η and greater scatter in the M_h-r_eff relationship compared to central galaxies.
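
The exponent η is essentially the slope of log halo mass against log size offset at fixed stellar mass. A synthetic-data sketch of that fit (values chosen to mimic the quoted η = 0.42, not the CFHTLenS measurement):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stack at fixed stellar mass: sizes scatter about the mean
# size-mass relation, and halo mass follows M_h ∝ r_eff^eta plus noise.
true_eta = 0.42
log_size_offset = rng.normal(0.0, 0.15, size=200)
log_mh = 12.0 + true_eta * log_size_offset + rng.normal(0.0, 0.05, size=200)

# eta is recovered as the slope of a degree-1 least-squares fit.
eta_hat, intercept = np.polyfit(log_size_offset, log_mh, 1)
```

In the real measurement each log M_h comes from a stacked lensing fit rather than per-galaxy data, but the differential slope interpretation is the same.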

  15. Phase-contrast X-ray computed tomography of non-formalin fixed biological objects

    NASA Astrophysics Data System (ADS)

    Takeda, Tohoru; Momose, Atsushi; Wu, Jin; Zeniya, Tsutomu; Yu, Quanwen; Thet-Thet-Lwin; Itai, Yuji

    2001-07-01

    Using a monolithic X-ray interferometer with a view size of 25 mm × 25 mm, phase-contrast X-ray CT (PCCT) was performed on non-formalin-fixed livers of two normal rats and of a rabbit transplanted with VX-2 cancer. PCCT images of liver and cancer lesions closely resembled those obtained from formalin-fixed samples.

  16. A Fixed-Precision Sequential Sampling Plan for the Potato Tuberworm Moth, Phthorimaea operculella Zeller (Lepidoptera: Gelechiidae), on Potato Cultivars.

    PubMed

    Shahbi, M; Rajabpour, A

    2017-08-01

    Phthorimaea operculella Zeller is an important pest of potato in Iran. Spatial distribution and fixed-precision sequential sampling for population estimation of the pest on two potato cultivars, Arinda® and Sante®, were studied in two separate potato fields during two growing seasons (2013-2014 and 2014-2015). Spatial distribution was investigated by Taylor's power law and Iwao's patchiness. Results showed that the spatial distribution of eggs and larvae was random. In contrast to Iwao's patchiness, Taylor's power law provided a highly significant relationship between variance and mean density. Therefore, a fixed-precision sequential sampling plan was developed with Green's model at two precision levels, 0.25 and 0.1. The optimum sample size on the Arinda® and Sante® cultivars at the 0.25 precision level ranged from 151 to 813 and from 149 to 802 leaves, respectively. At the 0.1 precision level, the sample sizes varied from 1054 to 5083 and from 1050 to 5100 leaves for the Arinda® and Sante® cultivars, respectively. Therefore, the optimum sample sizes for these cultivars, with different resistance levels, were not significantly different. According to the calculated stop lines, sampling must continue until the cumulative number of eggs + larvae reaches 15-16 or 96-101 individuals at the 0.25 or 0.1 precision levels, respectively. The performance of the sampling plan was validated by resampling analysis using Resampling for Validation of Sample Plans (RVSP) software. The sampling plan provided in this study can be used to obtain a rapid estimate of pest density with minimal effort.
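
Green's model turns the Taylor coefficients into a stop line: sampling stops once the cumulative count T_n crosses the line for the current sample size n. The Taylor coefficients below are illustrative, as the abstract does not report them:

```python
import math

def green_stop_line(n, a, b, D):
    """Green's stop line derived from Taylor's power law s**2 = a * m**b:
    ln T_n = ln(D**2 / a) / (b - 2) + ((b - 1) / (b - 2)) * ln n,
    the cumulative count at which precision D is reached after n units."""
    log_T = math.log(D ** 2 / a) / (b - 2) + ((b - 1) / (b - 2)) * math.log(n)
    return math.exp(log_T)

# Illustrative Taylor coefficients (not the fitted values from the study).
a, b = 2.0, 1.3
stops = [green_stop_line(n, a, b, D=0.25) for n in (50, 100, 200)]
```

For 1 < b < 2 the stop line declines with n, so a survey either crosses it quickly at high density or accumulates many low-count units before stopping.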

  17. Influence of tree spatial pattern and sample plot type and size on inventory

    Treesearch

    John-Pascall Berrill; Kevin L. O'Hara

    2012-01-01

    Sampling with different plot types and sizes was simulated using tree location maps and data collected in three even-aged coast redwood (Sequoia sempervirens) stands selected to represent uniform, random, and clumped spatial patterns of tree locations. Fixed-radius circular plots, belt transects, and variable-radius plots were installed by...

  18. Characterizing the size distribution of particles in urban stormwater by use of fixed-point sample-collection methods

    USGS Publications Warehouse

    Selbig, William R.; Bannerman, Roger T.

    2011-01-01

    The U.S. Geological Survey, in cooperation with the Wisconsin Department of Natural Resources (WDNR) and in collaboration with the Root River Municipal Stormwater Permit Group, monitored eight urban source areas representing six source-area types in or near Madison, Wis., in an effort to improve the characterization of particle-size distributions in urban stormwater by use of fixed-point sample-collection methods. The source-area types were parking lot, feeder street, collector street, arterial street, rooftop, and mixed use. This information can be used by environmental managers and engineers when selecting the most appropriate control devices for the removal of solids from urban stormwater. Mixed-use and parking-lot study areas had the lowest median particle sizes (42 and 54 µm, respectively), followed by the collector street study area (70 µm). Both the arterial street and institutional roof study areas had similar median particle sizes of approximately 95 µm. Finally, the feeder street study area showed the largest median particle size, nearly 200 µm. The majority of particle mass in four of the six source areas was silt and clay particles less than 32 µm in size. Distributions of particles ranging up to 500 µm were highly variable both within and between source areas. Results of this study suggest that substantial variability in the data can inhibit the development of a single particle-size distribution that is representative of stormwater runoff generated from a single source area or land use. Continued development of improved sample-collection methods, such as the depth-integrated sample arm, may reduce variability in particle-size distributions by mitigating the effect of the sediment bias inherent in a fixed-point sampler.
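
A median particle size such as those quoted above can be read off a cumulative mass-fraction curve by linear interpolation. The sieve data below are hypothetical:

```python
def d50(sizes_um, cum_mass_frac):
    """Median particle size (d50) by linear interpolation on the
    cumulative mass-fraction curve (sizes ascending, fractions 0..1)."""
    for (s0, f0), (s1, f1) in zip(zip(sizes_um, cum_mass_frac),
                                  zip(sizes_um[1:], cum_mass_frac[1:])):
        if f0 <= 0.5 <= f1:
            return s0 + (0.5 - f0) * (s1 - s0) / (f1 - f0)
    raise ValueError("0.5 is not bracketed by the curve")

# Hypothetical sieve fractions for one source area (sizes in micrometres).
sizes = [2, 8, 32, 63, 125, 250, 500]
cum = [0.05, 0.18, 0.42, 0.58, 0.75, 0.90, 1.00]
median_size = d50(sizes, cum)
```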

  19. A multi-stage drop-the-losers design for multi-arm clinical trials.

    PubMed

    Wason, James; Stallard, Nigel; Bowden, Jack; Jennison, Christopher

    2017-02-01

    Multi-arm multi-stage trials can improve the efficiency of the drug development process when multiple new treatments are available for testing. A group-sequential approach can be used in order to design multi-arm multi-stage trials, using an extension to Dunnett's multiple-testing procedure. The actual sample size used in such a trial is a random variable that has high variability. This can cause problems when applying for funding as the cost will also be generally highly variable. This motivates a type of design that provides the efficiency advantages of a group-sequential multi-arm multi-stage design, but has a fixed sample size. One such design is the two-stage drop-the-losers design, in which a number of experimental treatments, and a control treatment, are assessed at a prescheduled interim analysis. The best-performing experimental treatment and the control treatment then continue to a second stage. In this paper, we discuss extending this design to have more than two stages, which is shown to considerably reduce the sample size required. We also compare the resulting sample size requirements to the sample size distribution of analogous group-sequential multi-arm multi-stage designs. The sample size required for a multi-stage drop-the-losers design is usually higher than, but close to, the median sample size of a group-sequential multi-arm multi-stage trial. In many practical scenarios, the disadvantage of a slight loss in average efficiency would be overcome by the huge advantage of a fixed sample size. We assess the impact of delay between recruitment and assessment as well as unknown variance on the drop-the-losers designs.
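
The fixed total sample size of a multi-stage drop-the-losers design follows from the recruitment schedule alone, which is the practical advantage discussed above. A sketch under one illustrative dropping rule (one experimental arm dropped per analysis; the paper's designs may drop differently):

```python
def total_sample_size(K, stage_n):
    """Fixed total N for a drop-the-losers trial with K experimental arms
    plus control: stage j (1-based) recruits stage_n[j-1] patients to each
    of (K - j + 1) experimental arms and to the control arm, with one
    experimental arm dropped after each analysis (illustrative rule)."""
    return sum(n * ((K - j + 1) + 1) for j, n in enumerate(stage_n, start=1))

# Three experimental arms, three stages, 20 patients per arm per stage.
n_dtl = total_sample_size(3, [20, 20, 20])
# A single-stage parallel design recruiting all four arms to 60 each:
n_fixed_parallel = 4 * 60
```

Unlike a group-sequential trial, this total is known at the planning stage, so the budget is not a random variable.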

  20. Foundational Principles for Large-Scale Inference: Illustrations Through Correlation Mining

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hero, Alfred O.; Rajaratnam, Bala

    When can reliable inference be drawn in the "Big Data" context? This article presents a framework for answering this fundamental question in the context of correlation mining, with implications for general large-scale inference. In large-scale data applications like genomics, connectomics, and eco-informatics, the data set is often variable-rich but sample-starved: a regime where the number n of acquired samples (statistical replicates) is far fewer than the number p of observed variables (genes, neurons, voxels, or chemical constituents). Much of recent work has focused on understanding the computational complexity of proposed methods for "Big Data." Sample complexity, however, has received relatively less attention, especially in the setting when the sample size n is fixed, and the dimension p grows without bound. To address this gap, we develop a unified statistical framework that explicitly quantifies the sample complexity of various inferential tasks. Sampling regimes can be divided into several categories: 1) the classical asymptotic regime where the variable dimension is fixed and the sample size goes to infinity; 2) the mixed asymptotic regime where both variable dimension and sample size go to infinity at comparable rates; and 3) the purely high-dimensional asymptotic regime where the variable dimension goes to infinity and the sample size is fixed. Each regime has its niche but only the latter regime applies to exa-scale data dimension. We illustrate this high-dimensional framework for the problem of correlation mining, where it is the matrix of pairwise and partial correlations among the variables that is of interest. Correlation mining arises in numerous applications and subsumes the regression context as a special case. We demonstrate various regimes of correlation mining based on the unifying perspective of high-dimensional learning rates and sample complexity for different structured covariance models and different inference tasks.


  3. Optimizing trial design in pharmacogenetics research: comparing a fixed parallel group, group sequential, and adaptive selection design on sample size requirements.

    PubMed

    Boessen, Ruud; van der Baan, Frederieke; Groenwold, Rolf; Egberts, Antoine; Klungel, Olaf; Grobbee, Diederick; Knol, Mirjam; Roes, Kit

    2013-01-01

    Two-stage clinical trial designs may be efficient in pharmacogenetics research when there is some but inconclusive evidence of effect modification by a genomic marker. Two-stage designs allow stopping early for efficacy or futility and can offer the additional opportunity to enrich the study population to a specific patient subgroup after an interim analysis. This study compared sample size requirements for fixed parallel group, group sequential, and adaptive selection designs with equal overall power and control of the family-wise type I error rate. The designs were evaluated across scenarios that defined the effect sizes in the marker-positive and marker-negative subgroups and the prevalence of marker-positive patients in the overall study population. Effect sizes were chosen to reflect realistic planning scenarios, where at least some effect is present in the marker-negative subgroup. In addition, scenarios were considered in which the assumed 'true' subgroup effects (i.e., the postulated effects) differed from those hypothesized at the planning stage. As expected, both two-stage designs generally required fewer patients than a fixed parallel group design, and the advantage increased as the difference between subgroups increased. The adaptive selection design added little further reduction in sample size, as compared with the group sequential design, when the postulated effect sizes were equal to those hypothesized at the planning stage. However, when the postulated effects deviated strongly in favor of enrichment, the comparative advantage of the adaptive selection design increased, which precisely reflects the adaptive nature of the design. Copyright © 2013 John Wiley & Sons, Ltd.
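
For the fixed parallel-group comparator, the per-arm sample size can be approximated from the prevalence-weighted overall effect. A normal-approximation sketch with hypothetical subgroup effects (not the study's scenarios):

```python
from math import ceil
from statistics import NormalDist

def fixed_design_n(effect_pos, effect_neg, prev, sd=1.0,
                   alpha=0.05, power=0.80):
    """Per-arm n for a fixed parallel-group design powered on the overall
    effect, a prevalence-weighted mix of the marker-positive and
    marker-negative subgroup effects (normal approximation; illustrative)."""
    delta = prev * effect_pos + (1 - prev) * effect_neg
    z = NormalDist()
    za, zb = z.inv_cdf(1 - alpha / 2), z.inv_cdf(power)
    return ceil(2 * (za + zb) ** 2 * sd ** 2 / delta ** 2)

# Hypothetical subgroup effects of 0.5 (marker+) and 0.2 (marker-),
# with 50% marker prevalence.
n_all = fixed_design_n(effect_pos=0.5, effect_neg=0.2, prev=0.5)
```

The diluted overall effect is what makes the fixed design expensive relative to the two-stage designs when the marker-negative effect is small, which is the setting the abstract describes.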

  4. Demonstration of Multi- and Single-Reader Sample Size Program for Diagnostic Studies software.

    PubMed

    Hillis, Stephen L; Schartz, Kevin M

    2015-02-01

    The recently released software Multi- and Single-Reader Sample Size Program for Diagnostic Studies, written by Kevin Schartz and Stephen Hillis, performs sample size computations for diagnostic reader-performance studies. The program computes the sample size needed to detect a specified difference in a reader performance measure between two modalities, when using the analysis methods initially proposed by Dorfman, Berbaum, and Metz (DBM) and Obuchowski and Rockette (OR), and later unified and improved by Hillis and colleagues. A commonly used reader performance measure is the area under the receiver-operating-characteristic curve. The program can be used with typical reader-performance measures that can be estimated parametrically or nonparametrically. The program has an easy-to-use, intuitive step-by-step interface that walks the user through the entry of the needed information. Features of the software include the following: (1) choice of several study designs; (2) choice of inputs obtained from either OR or DBM analyses; (3) choice of three different inference situations: both readers and cases random, readers fixed and cases random, and readers random and cases fixed; (4) choice of two types of hypotheses: equivalence or noninferiority; (5) choice of two output formats: power for specified case and reader sample sizes, or a listing of case-reader combinations that provide a specified power; (6) choice of single or multi-reader analyses; and (7) functionality in Windows, Mac OS, and Linux.

  5. Sampling guidelines for oral fluid-based surveys of group-housed animals.

    PubMed

    Rotolo, Marisa L; Sun, Yaxuan; Wang, Chong; Giménez-Lirola, Luis; Baum, David H; Gauger, Phillip C; Harmon, Karen M; Hoogland, Marlin; Main, Rodger; Zimmerman, Jeffrey J

    2017-09-01

    Formulas and software for calculating sample size for surveys based on individual animal samples are readily available. However, sample size formulas are not available for oral fluids and other aggregate samples that are increasingly used in production settings. Therefore, the objective of this study was to develop sampling guidelines for oral fluid-based porcine reproductive and respiratory syndrome virus (PRRSV) surveys in commercial swine farms. Oral fluid samples were collected in 9 weekly samplings from all pens in 3 barns on one production site beginning shortly after placement of weaned pigs. Samples (n=972) were tested by real-time reverse-transcription PCR (RT-rtPCR) and the binary results analyzed using a piecewise exponential survival model for interval-censored, time-to-event data with misclassification. Thereafter, simulation studies were used to study the barn-level probability of PRRSV detection as a function of sample size, sample allocation (simple random sampling vs fixed spatial sampling), assay diagnostic sensitivity and specificity, and pen-level prevalence. These studies provided estimates of the probability of detection by sample size and within-barn prevalence. Detection using fixed spatial sampling was as good as, or better than, simple random sampling. Sampling multiple barns on a site increased the probability of detection with the number of barns sampled. These results are relevant to PRRSV control or elimination projects at the herd, regional, or national levels, but the results are also broadly applicable to contagious pathogens of swine for which oral fluid tests of equivalent performance are available. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
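
    The reported relationship between sample size and probability of detection can be sketched with a much simpler binomial model than the authors' survival/simulation framework; the prevalence and assay sensitivity values below are hypothetical:

```python
def p_detect(n_pens, prevalence, sensitivity=0.95, specificity=1.0):
    """Probability that at least one of n independently sampled pens
    tests positive (binomial sketch, not the authors' piecewise
    exponential survival model)."""
    p_pos = prevalence * sensitivity + (1 - prevalence) * (1 - specificity)
    return 1 - (1 - p_pos) ** n_pens

# hypothetical 10% pen-level prevalence: detection rises with sample size
curve = {n: round(p_detect(n, prevalence=0.10), 3) for n in (3, 6, 12)}
```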

  6. Validation of fixed sample size plans for monitoring lepidopteran pests of Brassica oleracea crops in North Korea.

    PubMed

    Hamilton, A J; Waters, E K; Kim, H J; Pak, W S; Furlong, M J

    2009-06-01

    The combined action of two lepidopteran pests, Plutella xylostella L. (Plutellidae) and Pieris rapae L. (Pieridae), causes significant yield losses in cabbage (Brassica oleracea variety capitata) crops in the Democratic People's Republic of Korea. Integrated pest management (IPM) strategies for these cropping systems are in their infancy, and sampling plans have not yet been developed. We used statistical resampling to assess the performance of fixed sample size plans (ranging from 10 to 50 plants). First, the precision (D = SE/mean) of the plans in estimating the population mean was assessed. There was substantial variation in achieved D for all sample sizes, and sample sizes of at least 20 and 45 plants were required to achieve the acceptable precision level of D ≤ 0.3 at least 50 and 75% of the time, respectively. Second, the performance of the plans in classifying the population density relative to an economic threshold (ET) was assessed. To account for the different damage potentials of the two species, the ETs were defined in terms of standard insects (SIs), where 1 SI = 1 P. rapae = 5 P. xylostella larvae. The plans were implemented using different ETs for the three growth stages of the crop: precupping (1 SI/plant), cupping (0.5 SI/plant), and heading (4 SI/plant). Improvement in classification certainty with increasing sample size could be seen through the increasing steepness of the operating characteristic curves. Rather than prescribe a particular plan, we suggest that the results of these analyses be used to inform practitioners of the relative merits of the different sample sizes.
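
    The resampling idea, drawing fixed-size samples and checking how often the achieved precision D = SE/mean meets the target, can be sketched as follows; the per-plant field counts are hypothetical, not the Korean cabbage data:

```python
import random

def achieved_precision(counts, n, rng):
    """D = SE/mean for one bootstrap resample of size n."""
    sample = [rng.choice(counts) for _ in range(n)]
    m = sum(sample) / n
    if m == 0:
        return float("inf")
    var = sum((x - m) ** 2 for x in sample) / (n - 1)
    return (var / n) ** 0.5 / m

def prop_meeting_target(counts, n, d_target=0.3, reps=2000, seed=7):
    """Fraction of resamples of size n that achieve D <= d_target."""
    rng = random.Random(seed)
    return sum(achieved_precision(counts, n, rng) <= d_target
               for _ in range(reps)) / reps

# hypothetical aggregated per-plant counts (standard insects)
field = [0] * 40 + [1] * 30 + [2] * 15 + [3] * 8 + [5] * 5 + [8] * 2
p10 = prop_meeting_target(field, 10)
p45 = prop_meeting_target(field, 45)
```

    Larger fixed sample sizes meet the precision target in a higher fraction of resamples, mirroring the paper's comparison of 10- to 50-plant plans.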

  7. Magnetic hyperthermia in water based ferrofluids: Effects of initial susceptibility and size polydispersity on heating efficiency

    NASA Astrophysics Data System (ADS)

    Lahiri, B. B.; Ranoo, Surojit; Muthukumaran, T.; Philip, John

    2018-04-01

    The effects of initial susceptibility and size polydispersity on magnetic hyperthermia efficiency in two water based ferrofluids containing phosphate and TMAOH coated superparamagnetic Fe3O4 nanoparticles were studied. Experiments were performed at a fixed frequency of 126 kHz on four different concentrations of both samples and under different external field amplitudes. It was observed that for field amplitudes beyond 45.0 kAm-1, the maximum temperature rise was in the vicinity of 42°C (hyperthermia limit) which indicated the suitability of the water based ferrofluids for hyperthermia applications. The maximum temperature rise and specific absorption rate were found to vary linearly with square of the applied field amplitudes, in accordance with theoretical predictions. It was further observed that for a fixed sample concentration, specific absorption rate was higher for the phosphate coated samples which was attributed to the higher initial static susceptibility and lower size polydispersity of phosphate coated Fe3O4.
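
    Specific absorption rate is commonly computed from the initial slope of the heating curve; a minimal sketch of that calculation with illustrative (not measured) values:

```python
def sar(specific_heat, sample_mass, magnetic_mass, dT_dt):
    """Initial-slope estimate: SAR = C * (m_sample / m_magnetic) * dT/dt,
    in W per kg of magnetic material. All inputs here are illustrative."""
    return specific_heat * (sample_mass / magnetic_mass) * dT_dt

# hypothetical: 10 g of water-based ferrofluid containing 0.1 g Fe3O4,
# heating at 0.02 K/s in the AC field
sar_w_per_kg = sar(4186.0, 0.010, 0.0001, 0.02)

# per the linear SAR vs H^2 relation reported, doubling the field
# amplitude should quadruple dT/dt, and hence the SAR (ratio ~ 4)
ratio = sar(4186.0, 0.010, 0.0001, 0.08) / sar_w_per_kg
```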

  8. Sequential sampling: a novel method in farm animal welfare assessment.

    PubMed

    Heath, C A E; Main, D C J; Mullan, S; Haskell, M J; Browne, W J

    2016-02-01

    Lameness in dairy cows is an important welfare issue. As part of a welfare assessment, herd level lameness prevalence can be estimated from scoring a sample of animals, where higher levels of accuracy are associated with larger sample sizes. As the financial cost is related to the number of cows sampled, smaller samples are preferred. Sequential sampling schemes have been used for informing decision making in clinical trials. Sequential sampling involves taking samples in stages, where sampling can stop early depending on the estimated lameness prevalence. When welfare assessment is used for a pass/fail decision, a similar approach could be applied to reduce the overall sample size. The sampling schemes proposed here apply the principles of sequential sampling within a diagnostic testing framework. This study develops three sequential sampling schemes of increasing complexity to classify 80 fully assessed UK dairy farms, each with known lameness prevalence. Using the Welfare Quality herd-size-based sampling scheme, the first 'basic' scheme involves two sampling events. At the first sampling event half the Welfare Quality sample size is drawn, and then depending on the outcome, sampling either stops or is continued and the same number of animals is sampled again. In the second 'cautious' scheme, an adaptation is made to ensure that correctly classifying a farm as 'bad' is done with greater certainty. The third scheme is the only scheme to go beyond lameness as a binary measure and investigates the potential for increasing accuracy by incorporating the number of severely lame cows into the decision. The three schemes are evaluated with respect to accuracy and average sample size by running 100 000 simulations for each scheme, and a comparison is made with the fixed size Welfare Quality herd-size-based sampling scheme. All three schemes performed almost as well as the fixed size scheme but with much smaller average sample sizes. 
For the third scheme, an overall association between lameness prevalence and the proportion of lame cows that were severely lame on a farm was found. However, as this association was found to not be consistent across all farms, the sampling scheme did not prove to be as useful as expected. The preferred scheme was therefore the 'cautious' scheme for which a sampling protocol has also been developed.
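
    The 'basic' two-stage scheme can be sketched as a simulation; the pass/fail threshold, stopping margin, and herd prevalence below are assumptions for illustration, not the Welfare Quality parameters:

```python
import random

def two_stage_classify(herd_prev, n_full, threshold=0.15, margin=0.05,
                       rng=None):
    """Score n_full/2 cows; stop if the estimated prevalence is clearly
    on one side of the threshold, otherwise score the remaining half.
    Returns (classified_bad, cows_sampled). Parameters are hypothetical."""
    rng = rng or random.Random()
    n1 = n_full // 2
    lame1 = sum(rng.random() < herd_prev for _ in range(n1))
    est1 = lame1 / n1
    if est1 >= threshold + margin:
        return True, n1              # clearly above threshold: fail early
    if est1 <= threshold - margin:
        return False, n1             # clearly below threshold: pass early
    lame2 = sum(rng.random() < herd_prev for _ in range(n_full - n1))
    return (lame1 + lame2) / n_full >= threshold, n_full

rng = random.Random(11)
runs = [two_stage_classify(0.30, 60, rng=rng) for _ in range(5000)]
frac_flagged = sum(bad for bad, _ in runs) / len(runs)  # true prev 0.30 > 0.15
avg_n = sum(n for _, n in runs) / len(runs)
```

    For a clearly 'bad' herd, most runs stop at the first stage, so the average sample size is close to half the fixed scheme's while classification accuracy stays high.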

  9. Using variance components to estimate power in a hierarchically nested sampling design improving monitoring of larval Devils Hole pupfish

    USGS Publications Warehouse

    Dzul, Maria C.; Dixon, Philip M.; Quist, Michael C.; Dinsomore, Stephen J.; Bower, Michael R.; Wilson, Kevin P.; Gaines, D. Bailey

    2013-01-01

    We used variance components to assess allocation of sampling effort in a hierarchically nested sampling design for ongoing monitoring of early life history stages of the federally endangered Devils Hole pupfish (DHP) (Cyprinodon diabolis). The sampling design for larval DHP included surveys (5 days each spring 2007–2009), events, and plots. Each survey comprised three counting events, in which DHP larvae on nine plots were counted plot by plot. Statistical analysis of larval abundance included three components: (1) evaluation of power from various sample size combinations, (2) comparison of power in fixed and random plot designs, and (3) assessment of yearly differences in the power of the survey. Results indicated that increasing the sample size at the lowest level of sampling represented the most realistic option to increase the survey's power, that fixed plot designs had greater power than random plot designs, and that the power of the larval survey varied by year. This study provides an example of how monitoring efforts may benefit from coupling variance components estimation with power analysis to assess sampling design.
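
    The allocation question can be sketched with a simple additive variance-components approximation of the survey mean's standard error; the component values below are illustrative, not the study's estimates:

```python
def survey_se(s2_event, s2_plot, s2_resid, n_events, n_plots):
    """SE of a survey mean under a simple additive variance-components
    approximation for events crossed with plots (illustrative sketch)."""
    var = (s2_event / n_events
           + s2_plot / n_plots
           + s2_resid / (n_events * n_plots))
    return var ** 0.5

# hypothetical components: lowest-level (residual) variance dominates,
# so extra replication at the lowest level buys the most precision
base = survey_se(0.1, 0.9, 4.0, n_events=3, n_plots=9)
more_events = survey_se(0.1, 0.9, 4.0, n_events=6, n_plots=9)
more_plots = survey_se(0.1, 0.9, 4.0, n_events=3, n_plots=18)
```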

  10. Sampling methods, dispersion patterns, and fixed precision sequential sampling plans for western flower thrips (Thysanoptera: Thripidae) and cotton fleahoppers (Hemiptera: Miridae) in cotton.

    PubMed

    Parajulee, M N; Shrestha, R B; Leser, J F

    2006-04-01

    A 2-yr field study was conducted to examine the effectiveness of two sampling methods (visual and plant washing techniques) for western flower thrips, Frankliniella occidentalis (Pergande), and five sampling methods (visual, beat bucket, drop cloth, sweep net, and vacuum) for cotton fleahopper, Pseudatomoscelis seriatus (Reuter), in Texas cotton, Gossypium hirsutum (L.), and to develop sequential sampling plans for each pest. The plant washing technique gave similar results to the visual method in detecting adult thrips, but it detected a significantly higher number of thrips larvae than visual sampling. Visual sampling detected the highest number of fleahoppers, followed by beat bucket, drop cloth, vacuum, and sweep net sampling, with no significant difference in catch efficiency between the vacuum and sweep net methods. However, based on fixed precision cost reliability, sweep net sampling was the most cost-effective method, followed by vacuum, beat bucket, drop cloth, and visual sampling. Taylor's power law analysis revealed that the field dispersion patterns of both thrips and fleahoppers were aggregated throughout the crop growing season. For thrips management decisions based on visual sampling (0.25 precision), 15 plants were estimated to be the minimum sample size when the estimated population density was one thrips per plant, whereas the minimum sample size was nine plants when thrips density approached 10 thrips per plant. The minimum visual sample size for cotton fleahoppers was 16 plants when the density was one fleahopper per plant, but the sample size decreased rapidly with an increase in fleahopper density, requiring only four plants to be sampled when the density was 10 fleahoppers per plant. Sequential sampling plans were developed and validated with independent data for both thrips and cotton fleahoppers.
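
    Fixed-precision minimum sample sizes of the kind reported follow from Taylor's power law, s² = a·mᵇ, which for precision D = SE/mean gives n = a·m^(b−2)/D²; the a and b values below are hypothetical, not the fitted cotton parameters:

```python
import math

def min_sample_size(mean_density, a, b, precision=0.25):
    """Minimum n for fixed precision D = SE/mean when the variance
    follows Taylor's power law: n = a * m**(b - 2) / D**2."""
    return math.ceil(a * mean_density ** (b - 2) / precision ** 2)

# hypothetical aggregation parameters (b > 1 indicates aggregation)
n_low = min_sample_size(1.0, a=2.5, b=1.4)    # at ~1 insect/plant
n_high = min_sample_size(10.0, a=2.5, b=1.4)  # at ~10 insects/plant
```

    Because b < 2 for aggregated-but-not-extreme dispersion, required sample size falls as density rises, matching the pattern reported for both pests.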

  11. Particles size distribution in diluted magnetic fluids

    NASA Astrophysics Data System (ADS)

    Yerin, Constantine V.

    2017-06-01

    Changes in the particle and aggregate size distribution of diluted kerosene-based magnetic fluids were studied by the dynamic light scattering method. It was found that immediately after dilution a system of aggregates with sizes ranging from 100 to 250-1000 nm forms in the magnetic fluids. Within 50-100 h after dilution, the large aggregates peptize and a stationary particle and aggregate size distribution is established in the sample.

  12. The impact of multiple endpoint dependency on Q and I(2) in meta-analysis.

    PubMed

    Thompson, Christopher Glen; Becker, Betsy Jane

    2014-09-01

    A common assumption in meta-analysis is that effect sizes are independent. When correlated effect sizes are analyzed using traditional univariate techniques, this assumption is violated. This research assesses the impact of dependence arising from treatment-control studies with multiple endpoints on homogeneity measures Q and I(2) in scenarios using the unbiased standardized-mean-difference effect size. Univariate and multivariate meta-analysis methods are examined. Conditions included different overall outcome effects, study sample sizes, numbers of studies, between-outcomes correlations, dependency structures, and ways of computing the correlation. The univariate approach used typical fixed-effects analyses whereas the multivariate approach used generalized least-squares (GLS) estimates of a fixed-effects model, weighted by the inverse variance-covariance matrix. Increased dependence among effect sizes led to increased Type I error rates from univariate models. When effect sizes were strongly dependent, error rates were drastically higher than nominal levels regardless of study sample size and number of studies. In contrast, using GLS estimation to account for multiple-endpoint dependency maintained error rates within nominal levels. Conversely, mean I(2) values were not greatly affected by increased amounts of dependency. Last, we point out that the between-outcomes correlation should be estimated as a pooled within-groups correlation rather than using a full-sample estimator that does not consider treatment/control group membership. Copyright © 2014 John Wiley & Sons, Ltd.
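
    The variance consequence of ignoring dependence can be shown in a two-endpoint toy case: the GLS pooled estimate (1'V⁻¹y)/(1'V⁻¹1) has a larger, correct variance than independence-assuming inverse-variance weighting when the endpoints are positively correlated. The effect sizes, variances, and correlation below are made up:

```python
def gls_pooled(effects, variances, rho):
    """Fixed-effects GLS pooling of two correlated endpoint effects:
    beta = (1'V^-1 y)/(1'V^-1 1), var = 1/(1'V^-1 1), for a 2x2
    covariance with correlation rho (toy example)."""
    (y1, y2), (v1, v2) = effects, variances
    cov = rho * (v1 * v2) ** 0.5
    det = v1 * v2 - cov ** 2
    # elements of V^{-1} for the 2x2 case
    a, b, c = v2 / det, v1 / det, -cov / det
    w1, w2 = a + c, b + c                     # row sums of V^{-1}
    beta = (w1 * y1 + w2 * y2) / (w1 + w2)
    return beta, 1 / (w1 + w2)

beta_ind, var_ind = gls_pooled((0.4, 0.6), (0.04, 0.04), rho=0.0)
beta_dep, var_dep = gls_pooled((0.4, 0.6), (0.04, 0.04), rho=0.6)
```

    Treating the correlated case as independent would report var_ind instead of the larger var_dep, understating uncertainty and inflating Type I error, which is the pattern the simulations show.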

  13. Neither fixed nor random: weighted least squares meta-analysis.

    PubMed

    Stanley, T D; Doucouliagos, Hristos

    2015-06-15

    This study challenges two core conventional meta-analysis methods: fixed effect and random effects. We show how and explain why an unrestricted weighted least squares estimator is superior to conventional random-effects meta-analysis when there is publication (or small-sample) bias and better than a fixed-effect weighted average if there is heterogeneity. Statistical theory and simulations of effect sizes, log odds ratios and regression coefficients demonstrate that this unrestricted weighted least squares estimator provides satisfactory estimates and confidence intervals that are comparable to random effects when there is no publication (or small-sample) bias and identical to fixed-effect meta-analysis when there is no heterogeneity. When there is publication selection bias, the unrestricted weighted least squares approach dominates random effects; when there is excess heterogeneity, it is clearly superior to fixed-effect meta-analysis. In practical applications, an unrestricted weighted least squares weighted average will often provide superior estimates to both conventional fixed and random effects. Copyright © 2015 John Wiley & Sons, Ltd.
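
    The unrestricted WLS idea, keeping the fixed-effect point estimate but letting the regression's residual variance rescale its standard error, can be sketched as follows (a simplified reading of the estimator, with made-up effect sizes):

```python
def uwls(effects, ses):
    """Unrestricted WLS sketch: the point estimate equals the
    fixed-effect inverse-variance average, while the SE is the
    fixed-effect SE scaled by the root mean squared weighted residual,
    so it widens under excess heterogeneity."""
    w = [1 / s ** 2 for s in ses]
    beta = sum(wi * y for wi, y in zip(w, effects)) / sum(w)
    se_fe = (1 / sum(w)) ** 0.5
    k = len(effects)
    # residual variance of the WLS regression of y/s on 1/s
    s2 = sum(((y - beta) / s) ** 2 for y, s in zip(effects, ses)) / (k - 1)
    return beta, se_fe, se_fe * s2 ** 0.5

# hypothetical heterogeneous effects with similar standard errors
effects = [0.10, 0.40, 0.70, 0.20, 0.55]
ses = [0.10, 0.12, 0.15, 0.11, 0.13]
beta, se_fe, se_uwls = uwls(effects, ses)
```

    With no heterogeneity the scaling factor is near 1 and the interval matches fixed-effect meta-analysis; here the heterogeneous effects roughly double the standard error.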

  14. [Sequential sampling plans to Orthezia praelonga Douglas (Hemiptera: Sternorrhyncha, Ortheziidae) in citrus].

    PubMed

    Costa, Marilia G; Barbosa, José C; Yamamoto, Pedro T

    2007-01-01

    Sequential sampling uses samples of variable size and has the advantage of reducing sampling time and costs compared to fixed-size sampling. To support adequate management of orthezia, sequential sampling plans were developed for orchards under low and high infestation. Data were collected in Matão, SP, in commercial stands of the orange variety 'Pêra Rio' at five, nine, and 15 years of age. Twenty samplings were performed in the whole area of each stand by observing the presence or absence of scales on plants, with plots comprising ten plants. After observing that in all three stands the scale population was distributed according to the contagious model, fitting the Negative Binomial Distribution in most samplings, two sequential sampling plans were constructed according to the Sequential Likelihood Ratio Test (SLRT). To construct these plans, an economic threshold of 2% was adopted and the type I and II error probabilities were fixed at alpha = beta = 0.10. Results showed that the maximum expected numbers of samples needed to determine the need for control were 172 and 76 for the stands with low and high infestation, respectively.
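
    Wald-type sequential plans for presence/absence counts reduce to two parallel decision lines; a sketch with hypothetical infested proportions p0 and p1 and the alpha = beta = 0.10 used in the paper:

```python
import math

def sprt_lines(p0, p1, alpha=0.10, beta=0.10):
    """Wald SPRT stop lines for binomial (infested / not infested)
    sampling: after n samples, a cumulative count d above
    slope*n + h_upper triggers control, below slope*n + h_lower stops
    sampling with no control, and in between sampling continues."""
    g = math.log((p1 * (1 - p0)) / (p0 * (1 - p1)))
    slope = math.log((1 - p0) / (1 - p1)) / g
    h_upper = math.log((1 - beta) / alpha) / g
    h_lower = math.log(beta / (1 - alpha)) / g
    return slope, h_lower, h_upper

# hypothetical low/high infestation proportions around a 2% threshold
slope, h_lo, h_hi = sprt_lines(p0=0.01, p1=0.03)
```

    The common slope lies between p0 and p1, and with alpha = beta the two intercepts are symmetric about zero.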

  15. A novel sample size formula for the weighted log-rank test under the proportional hazards cure model.

    PubMed

    Xiong, Xiaoping; Wu, Jianrong

    2017-01-01

    The treatment of cancer has progressed dramatically in recent decades, such that it is no longer uncommon to see a cure or long-term survival in a significant proportion of patients with various types of cancer. To adequately account for the cure fraction when designing clinical trials, cure models should be used. In this article, a sample size formula for the weighted log-rank test is derived under the fixed alternative hypothesis for the proportional hazards cure models. Simulation showed that the proposed sample size formula provides an accurate estimation of sample size for designing clinical trials under the proportional hazards cure models. Copyright © 2016 John Wiley & Sons, Ltd.

  16. Scanning fiber angle-resolved low coherence interferometry

    PubMed Central

    Zhu, Yizheng; Terry, Neil G.; Wax, Adam

    2010-01-01

    We present a fiber-optic probe for Fourier-domain angle-resolved low coherence interferometry for the determination of depth-resolved scatterer size. The probe employs a scanning single-mode fiber to collect the angular scattering distribution of the sample, which is analyzed using the Mie theory to obtain the average size of the scatterers. Depth sectioning is achieved with low coherence Mach–Zehnder interferometry. In the sample arm of the interferometer, a fixed fiber illuminates the sample through an imaging lens and a collection fiber samples the backscattered angular distribution by scanning across the Fourier plane image of the sample. We characterize the optical performance of the probe and demonstrate the ability to execute depth-resolved sizing with subwavelength accuracy by using a double-layer phantom containing two sizes of polystyrene microspheres. PMID:19838271

  17. Isokinetic air sampler

    DOEpatents

    Sehmel, George A.

    1979-01-01

    An isokinetic air sampler includes a filter, a holder for the filter, an air pump for drawing air through the filter at a fixed, predetermined rate, an inlet assembly for the sampler having an inlet opening therein of a size such that isokinetic air sampling is obtained at a particular wind speed, a closure for the inlet opening and means for simultaneously opening the closure and turning on the air pump when the wind speed is such that isokinetic air sampling is obtained. A system incorporating a plurality of such samplers provided with air pumps set to draw air through the filter at the same fixed, predetermined rate and having different inlet opening sizes for use at different wind speeds is included within the ambit of the present invention as is a method of sampling air to measure airborne concentrations of particulate pollutants as a function of wind speed.

  18. Development of a depth-integrated sample arm (DISA) to reduce solids stratification bias in stormwater sampling

    USGS Publications Warehouse

    Selbig, William R.; ,; Roger T. Bannerman,

    2011-01-01

    A new depth-integrated sample arm (DISA) was developed to improve the representation of solids in stormwater, both organic and inorganic, by collecting a water quality sample from multiple points in the water column. Data from this study demonstrate the idea of vertical stratification of solids in storm sewer runoff. Concentrations of suspended sediment in runoff were statistically greater using a fixed rather than multipoint collection system. Median suspended sediment concentrations measured at the fixed location (near the pipe invert) were approximately double those collected using the DISA. In general, concentrations and size distributions of suspended sediment decreased with increasing vertical distance from the storm sewer invert. Coarser particles tended to dominate the distribution of solids near the storm sewer invert as discharge increased. In contrast to concentration and particle size, organic material, to some extent, was distributed homogenously throughout the water column, likely the result of its low specific density, which allows for thorough mixing in less turbulent water.

  19. Development of a depth-integrated sample arm to reduce solids stratification bias in stormwater sampling.

    PubMed

    Selbig, William R; Bannerman, Roger T

    2011-04-01

    A new depth-integrated sample arm (DISA) was developed to improve the representation of solids in stormwater, both organic and inorganic, by collecting a water quality sample from multiple points in the water column. Data from this study demonstrate the idea of vertical stratification of solids in storm sewer runoff. Concentrations of suspended sediment in runoff were statistically greater using a fixed rather than multipoint collection system. Median suspended sediment concentrations measured at the fixed location (near the pipe invert) were approximately double those collected using the DISA. In general, concentrations and size distributions of suspended sediment decreased with increasing vertical distance from the storm sewer invert. Coarser particles tended to dominate the distribution of solids near the storm sewer invert as discharge increased. In contrast to concentration and particle size, organic material, to some extent, was distributed homogenously throughout the water column, likely the result of its low specific density, which allows for thorough mixing in less turbulent water.

  20. Development of a depth-integrated sample arm to reduce solids stratification bias in stormwater sampling

    USGS Publications Warehouse

    Selbig, W.R.; Bannerman, R.T.

    2011-01-01

    A new depth-integrated sample arm (DISA) was developed to improve the representation of solids in stormwater, both organic and inorganic, by collecting a water quality sample from multiple points in the water column. Data from this study demonstrate the idea of vertical stratification of solids in storm sewer runoff. Concentrations of suspended sediment in runoff were statistically greater using a fixed rather than multipoint collection system. Median suspended sediment concentrations measured at the fixed location (near the pipe invert) were approximately double those collected using the DISA. In general, concentrations and size distributions of suspended sediment decreased with increasing vertical distance from the storm sewer invert. Coarser particles tended to dominate the distribution of solids near the storm sewer invert as discharge increased. In contrast to concentration and particle size, organic material, to some extent, was distributed homogenously throughout the water column, likely the result of its low specific density, which allows for thorough mixing in less turbulent water. © 2010 Publishing Technology.

  1. Are power calculations useful? A multicentre neuroimaging study

    PubMed Central

    Suckling, John; Henty, Julian; Ecker, Christine; Deoni, Sean C; Lombardo, Michael V; Baron-Cohen, Simon; Jezzard, Peter; Barnes, Anna; Chakrabarti, Bhismadev; Ooi, Cinly; Lai, Meng-Chuan; Williams, Steven C; Murphy, Declan GM; Bullmore, Edward

    2014-01-01

    There are now many reports of imaging experiments with small cohorts of typical participants that precede large-scale, often multicentre studies of psychiatric and neurological disorders. Data from these calibration experiments are sufficient to make estimates of statistical power and predictions of sample size and minimum observable effect sizes. In this technical note, we suggest how previously reported voxel-based power calculations can support decision making in the design, execution and analysis of cross-sectional multicentre imaging studies. The choice of MRI acquisition sequence, distribution of recruitment across acquisition centres, and changes to the registration method applied during data analysis are considered as examples. The consequences of modification are explored in quantitative terms by assessing the impact on sample size for a fixed effect size and detectable effect size for a fixed sample size. The calibration experiment dataset used for illustration was a precursor to the now complete Medical Research Council Autism Imaging Multicentre Study (MRC-AIMS). Validation of the voxel-based power calculations is made by comparing the predicted values from the calibration experiment with those observed in MRC-AIMS. The effect of non-linear mappings during image registration to a standard stereotactic space on the prediction is explored with reference to the amount of local deformation. In summary, power calculations offer a validated, quantitative means of making informed choices on important factors that influence the outcome of studies that consume significant resources. PMID:24644267
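
    The two directions of the calculation, sample size for a fixed effect size and detectable effect size for a fixed sample size, can be sketched with a two-sample normal approximation (the strict alpha is meant to mimic a voxel-level threshold; all numbers are illustrative, not MRC-AIMS values):

```python
from math import ceil, sqrt
from statistics import NormalDist

_z = NormalDist().inv_cdf

def n_per_group(effect_size, alpha=0.05, power=0.8):
    """Per-group n to detect standardized effect d (two-sample z-approx)."""
    return ceil(2 * (_z(1 - alpha / 2) + _z(power)) ** 2 / effect_size ** 2)

def detectable_effect(n, alpha=0.05, power=0.8):
    """Smallest standardized effect detectable at fixed per-group n."""
    return (_z(1 - alpha / 2) + _z(power)) * sqrt(2 / n)

n = n_per_group(0.5, alpha=0.001)      # per-group n at a voxel-level alpha
d = detectable_effect(n, alpha=0.001)  # recovers ~0.5, the input effect
```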

  2. Are There Differences in Gait Mechanics in Patients With A Fixed Versus Mobile Bearing Total Ankle Arthroplasty? A Randomized Trial.

    PubMed

    Queen, Robin M; Franck, Christopher T; Schmitt, Daniel; Adams, Samuel B

    2017-10-01

    Total ankle arthroplasty (TAA) is an alternative to arthrodesis, but no randomized trial has examined whether a fixed bearing or mobile bearing implant provides improved gait mechanics. We wished to determine if fixed- or mobile-bearing TAA results in a larger improvement in pain scores and gait mechanics from before surgery to 1 year after surgery, and to quantify differences in outcomes using statistical analysis and report the standardized effect sizes for such comparisons. Patients with end-stage ankle arthritis who were scheduled for TAA between November 2011 and June 2013 (n = 40; 16 men, 24 women; average age, 63 years; age range, 35-81 years) were prospectively recruited for this study from a single foot and ankle orthopaedic clinic. During this period, 185 patients underwent TAA, with 144 being eligible to participate in this study. Patients were eligible to participate if they met all study inclusion criteria: no previous diagnosis of rheumatoid arthritis; no contralateral TAA, bilateral ankle arthritis, previous revision TAA, or ankle fusion revision; able to walk without the use of an assistive device; weight less than 250 pounds (114 kg); sagittal or coronal plane deformity less than 15°; no avascular necrosis of the distal tibia; no current neuropathy; age older than 35 years; and no history of a talar neck fracture or an avascular talus. Of the 144 eligible patients, 40 consented to participate in our randomized trial. These 40 patients were randomly assigned to either the fixed (n = 20) or mobile bearing implant group (n = 20).
Walking speed, bilateral peak dorsiflexion angle, peak plantar flexion angle, sagittal plane ankle ROM, peak ankle inversion angle, peak plantar flexion moment, peak plantar flexion power during stance, peak weight acceptance, and propulsive vertical ground reaction force were analyzed during seven self-selected speed level walking trials for 33 participants using an eight-camera motion analysis system and four force plates. Seven patients were not included in the analysis owing to cancelled surgery (one from each group) and five were lost to followup (four with fixed bearing and one with mobile bearing implants). A series of effect-size calculations and two-sample t-tests comparing postoperative and preoperative increases in outcome variables between implant types was used to determine the differences in the magnitude of improvement between the two patient cohorts from before surgery to 1 year after surgery. The sample size in this study enabled us to detect a standardized shift of 1.01 SDs between group means with 80% power and a type I error rate of 5% for all outcome variables in the study. This randomized trial did not reveal any differences in outcomes between the two implant types under study at the sample size collected. In addition to these results, effect size analysis suggests that changes in outcome differ between implant types by less than 1 SD. Detection of the largest change score or observed effect (propulsive vertical ground reaction force [Fixed: 0.1 ± 0.1; 0.0-1.0; Mobile: 0.0 ± 0.1; 0.0-0.0; p = 0.051]) in this study would require a future trial to enroll 66 patients. However, the smallest change score or observed effect (walking speed [Fixed: 0.2 ± 0.3; 0.1-0.4; Mobile: 0.2 ± 0.3; 0.0-0.3; p = 0.742]) requires a sample size of 2336 to detect a significant difference with 80% power at the observed effect sizes. 
To our knowledge, this is the first randomized study to report the observed effect size comparing improvements in outcome measures between fixed and mobile bearing implant types. This study was statistically powered to detect large effects and descriptively analyze observed effect sizes. Based on our results there were no statistically or clinically meaningful differences between the fixed and mobile bearing implants when examining gait mechanics and pain 1 year after TAA. Level II, therapeutic study.

  3. Optimal number of features as a function of sample size for various classification rules.

    PubMed

    Hua, Jianping; Xiong, Zixiang; Lowey, James; Suh, Edward; Dougherty, Edward R

    2005-04-15

Given the joint feature-label distribution, increasing the number of features always results in decreased classification error; however, this is not the case when a classifier is designed via a classification rule from sample data. Typically (but not always), for fixed sample size, the error of a designed classifier decreases and then increases as the number of features grows. The potential downside of using too many features is most critical for small samples, which are commonplace for gene-expression-based classifiers for phenotype discrimination. For fixed sample size and feature-label distribution, the issue is to find an optimal number of features. Since only in rare cases is there a known distribution of the error as a function of the number of features and sample size, this study employs simulation for various feature-label distributions and classification rules, and across a wide range of sample and feature-set sizes. To achieve the desired end, finding the optimal number of features as a function of sample size, it employs massively parallel computation. Seven classifiers are treated: 3-nearest-neighbor, Gaussian kernel, linear support vector machine, polynomial support vector machine, perceptron, regular histogram and linear discriminant analysis. Three Gaussian-based models are considered: linear, nonlinear and bimodal. In addition, real patient data from a large breast-cancer study are considered. To mitigate the combinatorial search for finding optimal feature sets, and to model the situation in which subsets of genes are co-regulated and correlation is internal to these subsets, we assume that the covariance matrix of the features is blocked, with each block corresponding to a group of correlated features. Altogether there are a large number of error surfaces for the many cases. These are provided in full on a companion website, which is meant to serve as a resource for those working with small-sample classification. 
The companion website is available at http://public.tgen.org/tamu/ofs/ Contact: e-dougherty@ee.tamu.edu.
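
    The peaking phenomenon described above can be illustrated with a toy Monte Carlo sketch (not the paper's massively parallel simulation): a nearest-centroid classifier is trained on a small sample, and uninformative features are appended to two informative ones. The distributions, sample sizes, and classifier choice here are illustrative assumptions.

```python
import random

random.seed(42)

def make_data(n, d_signal, d_noise, delta=2.0):
    """Two Gaussian classes: the first d_signal features are shifted by
    delta in class 1; the remaining d_noise features are pure noise."""
    data = []
    for label in (0, 1):
        for _ in range(n):
            x = [random.gauss(delta * label if j < d_signal else 0.0, 1.0)
                 for j in range(d_signal + d_noise)]
            data.append((x, label))
    return data

def nearest_centroid_error(train, test):
    """Train a nearest-centroid classifier and return its test error rate."""
    d = len(train[0][0])
    cent = {0: [0.0] * d, 1: [0.0] * d}
    counts = {0: 0, 1: 0}
    for x, y in train:
        counts[y] += 1
        for j in range(d):
            cent[y][j] += x[j]
    for y in (0, 1):
        cent[y] = [v / counts[y] for v in cent[y]]
    errors = 0
    for x, y in test:
        dist = {c: sum((xj - cj) ** 2 for xj, cj in zip(x, cent[c]))
                for c in (0, 1)}
        errors += ((0 if dist[0] <= dist[1] else 1) != y)
    return errors / len(test)

# Small training sample (10 per class): appending 100 uninformative
# features degrades the designed classifier, even though the underlying
# feature-label distribution has only gained information.
train_few, test_few = make_data(10, 2, 0), make_data(250, 2, 0)
train_many, test_many = make_data(10, 2, 100), make_data(250, 2, 100)
err_few = nearest_centroid_error(train_few, test_few)
err_many = nearest_centroid_error(train_many, test_many)
print(err_few, err_many)  # error typically rises sharply with noise features
```

    With only 10 training samples per class, the noise features swamp the centroid estimates and the test error climbs toward chance, which is the small-sample downside the abstract describes.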

  4. DEVELOPMENT OF AN RH -DENUDED MIE ACTIVE SAMPLING SYSTEM AND TARGETED AEROSOL CALIBRATION

    EPA Science Inventory

The MIE pDR 1200 nephelometer provides time-resolved aerosol concentrations during personal and fixed-site sampling. Active (pumped) operation allows an upper PM2.5 particle size to be defined; however, this dramatically increases the aerosol mass passing through the phot...

  5. 15N in tree rings as a bio-indicator of changing nitrogen cycling in tropical forests: an evaluation at three sites using two sampling methods

    PubMed Central

    van der Sleen, Peter; Vlam, Mart; Groenendijk, Peter; Anten, Niels P. R.; Bongers, Frans; Bunyavejchewin, Sarayudh; Hietz, Peter; Pons, Thijs L.; Zuidema, Pieter A.

    2015-01-01

Anthropogenic nitrogen deposition is currently causing a more than twofold increase of reactive nitrogen input over large areas in the tropics. Elevated 15N abundance (δ15N) in the growth rings of some tropical trees has been hypothesized to reflect an increased leaching of 15N-depleted nitrate from the soil, following anthropogenic nitrogen deposition over the last decades. To find further evidence for altered nitrogen cycling in tropical forests, we measured long-term δ15N values in trees from Bolivia, Cameroon, and Thailand. We used two different sampling methods. In the first, wood samples were taken in a conventional way: from the pith to the bark across the stem of 28 large trees (the “radial” method). In the second, δ15N values were compared across a fixed diameter (the “fixed-diameter” method). We sampled 400 trees that differed widely in size, but measured δ15N in the stem around the same diameter (20 cm dbh) in all trees. As a result, the growth rings formed around this diameter differed in age and allowed a comparison of δ15N values over time with an explicit control for potential size-effects on δ15N values. We found a significant increase of tree-ring δ15N across the stem radius of large trees from Bolivia and Cameroon, but no change in tree-ring δ15N values over time was found in any of the study sites when controlling for tree size. This suggests that radial trends of δ15N values within trees reflect tree ontogeny (size development). However, for the trees from Cameroon and Thailand, low statistical power in the fixed-diameter method prevents us from concluding this with high certainty. For the trees from Bolivia, statistical power in the fixed-diameter method was high, showing that the temporal trend in tree-ring δ15N values in the radial method is primarily caused by tree ontogeny and is unlikely to reflect a change in nitrogen cycling. We therefore stress that tree size must be accounted for before tree-ring δ15N values can be properly interpreted. PMID:25914707

  6. (15)N in tree rings as a bio-indicator of changing nitrogen cycling in tropical forests: an evaluation at three sites using two sampling methods.

    PubMed

    van der Sleen, Peter; Vlam, Mart; Groenendijk, Peter; Anten, Niels P R; Bongers, Frans; Bunyavejchewin, Sarayudh; Hietz, Peter; Pons, Thijs L; Zuidema, Pieter A

    2015-01-01

Anthropogenic nitrogen deposition is currently causing a more than twofold increase of reactive nitrogen input over large areas in the tropics. Elevated (15)N abundance (δ(15)N) in the growth rings of some tropical trees has been hypothesized to reflect an increased leaching of (15)N-depleted nitrate from the soil, following anthropogenic nitrogen deposition over the last decades. To find further evidence for altered nitrogen cycling in tropical forests, we measured long-term δ(15)N values in trees from Bolivia, Cameroon, and Thailand. We used two different sampling methods. In the first, wood samples were taken in a conventional way: from the pith to the bark across the stem of 28 large trees (the "radial" method). In the second, δ(15)N values were compared across a fixed diameter (the "fixed-diameter" method). We sampled 400 trees that differed widely in size, but measured δ(15)N in the stem around the same diameter (20 cm dbh) in all trees. As a result, the growth rings formed around this diameter differed in age and allowed a comparison of δ(15)N values over time with an explicit control for potential size-effects on δ(15)N values. We found a significant increase of tree-ring δ(15)N across the stem radius of large trees from Bolivia and Cameroon, but no change in tree-ring δ(15)N values over time was found in any of the study sites when controlling for tree size. This suggests that radial trends of δ(15)N values within trees reflect tree ontogeny (size development). However, for the trees from Cameroon and Thailand, low statistical power in the fixed-diameter method prevents us from concluding this with high certainty. For the trees from Bolivia, statistical power in the fixed-diameter method was high, showing that the temporal trend in tree-ring δ(15)N values in the radial method is primarily caused by tree ontogeny and is unlikely to reflect a change in nitrogen cycling. We therefore stress that tree size must be accounted for before tree-ring δ(15)N values can be properly interpreted.

  7. Probability of coincidental similarity among the orbits of small bodies - I. Pairing

    NASA Astrophysics Data System (ADS)

    Jopek, Tadeusz Jan; Bronikowska, Małgorzata

    2017-09-01

The probability of coincidental clustering among the orbits of comets, asteroids and meteoroids depends on many factors, such as the size of the orbital sample searched for clusters and the size of the identified group; it differs for groups of 2, 3, 4, … members. Because the probability of coincidental clustering is assessed by numerical simulation, it also depends on the method used to generate the synthetic orbits. We have tested the impact of some of these factors. For a given size of the orbital sample, we have assessed the probability of random pairing among several orbital populations of different sizes. We have found how these probabilities vary with the size of the orbital samples. Finally, keeping the size of the orbital sample fixed, we have shown that the probability of random pairing can differ significantly between orbital samples obtained by different observation techniques. For the user's convenience, we have also derived several formulae that, for a given size of the orbital sample, can be used to calculate the similarity threshold corresponding to a small probability of coincidental similarity between two orbits.

  8. Regression modeling of particle size distributions in urban storm water: advancements through improved sample collection methods

    USGS Publications Warehouse

    Fienen, Michael N.; Selbig, William R.

    2012-01-01

    A new sample collection system was developed to improve the representation of sediment entrained in urban storm water by integrating water quality samples from the entire water column. The depth-integrated sampler arm (DISA) was able to mitigate sediment stratification bias in storm water, thereby improving the characterization of suspended-sediment concentration and particle size distribution at three independent study locations. Use of the DISA decreased variability, which improved statistical regression to predict particle size distribution using surrogate environmental parameters, such as precipitation depth and intensity. The performance of this statistical modeling technique was compared to results using traditional fixed-point sampling methods and was found to perform better. When environmental parameters can be used to predict particle size distributions, environmental managers have more options when characterizing concentrations, loads, and particle size distributions in urban runoff.

  9. Simulation of parametric model towards the fixed covariate of right censored lung cancer data

    NASA Astrophysics Data System (ADS)

    Afiqah Muhamad Jamil, Siti; Asrul Affendi Abdullah, M.; Kek, Sie Long; Ridwan Olaniran, Oyebayo; Enera Amran, Syahila

    2017-09-01

In this study, a simulation procedure was applied to measure the fixed covariate of right-censored data using a parametric survival model. The scale and shape parameters were varied to differentiate the analysis of the parametric regression survival model. Bias, mean bias, and coverage probability were used as performance measures. Different sample sizes of 50, 100, 150, and 200 were employed to distinguish the impact of the parametric regression model on right-censored data. The R statistical software was used to develop the simulation code for right-censored data. The final simulation model was then compared with right-censored lung cancer data from Malaysia. It was found that different values of the shape and scale parameters under different sample sizes helped to improve the simulation strategy for right-censored data, and that the Weibull regression survival model is a suitable fit for the survival data of lung cancer patients in Malaysia.
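
    A minimal sketch of how right-censored survival data of this kind might be generated, assuming Weibull event times and exponential censoring times; the shape, scale, and censoring-rate values below are hypothetical, not the study's fitted values.

```python
import random

random.seed(1)

def simulate_right_censored(n, shape, scale, censor_rate):
    """Draw Weibull event times T and exponential censoring times C;
    observe Y = min(T, C) and the event indicator delta = 1{T <= C}."""
    records = []
    for _ in range(n):
        t = random.weibullvariate(scale, shape)  # event time (alpha=scale, beta=shape)
        c = random.expovariate(censor_rate)      # censoring time
        records.append((min(t, c), 1 if t <= c else 0))
    return records

for n in (50, 100, 150, 200):  # the sample sizes used in the study
    data = simulate_right_censored(n, shape=1.5, scale=2.0, censor_rate=0.2)
    events = sum(d for _, d in data)
    print(n, "events:", events, "censored:", n - events)
```

    Each record carries an observed time and an event indicator, which is the input a Weibull regression survival model expects.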

  10. Fixed-Precision Sequential Sampling Plans for Estimating Alfalfa Caterpillar, Colias lesbia, Egg Density in Alfalfa, Medicago sativa, Fields in Córdoba, Argentina

    PubMed Central

    Serra, Gerardo V.; Porta, Norma C. La; Avalos, Susana; Mazzuferi, Vilma

    2013-01-01

    The alfalfa caterpillar, Colias lesbia (Fabricius) (Lepidoptera: Pieridae), is a major pest of alfalfa, Medicago sativa L. (Fabales: Fabaceae), crops in Argentina. Its management is based mainly on chemical control of larvae whenever the larvae exceed the action threshold. To develop and validate fixed-precision sequential sampling plans, an intensive sampling programme for C. lesbia eggs was carried out in two alfalfa plots located in the Province of Córdoba, Argentina, from 1999 to 2002. Using Resampling for Validation of Sampling Plans software, 12 additional independent data sets were used to validate the sequential sampling plan with precision levels of 0.10 and 0.25 (SE/mean), respectively. For a range of mean densities of 0.10 to 8.35 eggs/sample, an average sample size of only 27 and 26 sample units was required to achieve a desired precision level of 0.25 for the sampling plans of Green and Kuno, respectively. As the precision level was increased to 0.10, average sample size increased to 161 and 157 sample units for the sampling plans of Green and Kuno, respectively. We recommend using Green's sequential sampling plan because it is less sensitive to changes in egg density. These sampling plans are a valuable tool for researchers to study population dynamics and to evaluate integrated pest management strategies. PMID:23909840
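
    Green's (1970) fixed-precision stop line used in plans like the one recommended above follows from Taylor's power law s^2 = a*m^b: sampling stops once the cumulative egg count reaches a threshold that depends on the number of sample units taken. The Taylor coefficients below are hypothetical, not the values fitted in this study.

```python
def green_stop_line(n, a, b, D0):
    """Cumulative-count threshold after n sample units: sampling can stop
    once the running total T_n reaches this value (Green's 1970 plan),
    assuming Taylor's power law s^2 = a * m**b and precision D0 = SE/mean."""
    return (D0 ** 2 * n ** (b - 1) / a) ** (1.0 / (b - 2))

# Hypothetical Taylor coefficients for an aggregated egg distribution.
a, b = 2.0, 1.5
for n in (10, 20, 40, 80):
    print(n, round(green_stop_line(n, a, b, D0=0.25), 1))
```

    At the stop line the achieved precision equals D0 exactly: substituting m = T_n/n into SE/mean = sqrt(a * m**(b-2) / n) recovers D0, which is why the required count falls as more units are sampled.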

  11. Trade off between variable and fixed size normalization in orthogonal polynomials based iris recognition system.

    PubMed

    Krishnamoorthi, R; Anna Poorani, G

    2016-01-01

Iris normalization is an important stage in any iris biometric system, as it tends to reduce the consequences of iris distortion. To compensate for variation in iris size caused by pupil dilation during the iris acquisition process and by camera-to-eyeball distance, two normalization schemes have been proposed in this work. In the first method, the iris region of interest is normalized by converting the iris into a variable-size rectangular model in order to avoid under-sampling near the limbus border. In the second method, the iris region of interest is normalized by converting it into a fixed-size rectangular model in order to avoid dimensional discrepancies between eye images. The performance of the proposed normalization methods is evaluated with orthogonal-polynomial-based iris recognition in terms of FAR, FRR, GAR, CRR and EER.

  12. 75 FR 12175 - Application(s) for Duty-Free Entry of Scientific Instruments

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-15

    ..., micro and nano-sized phenomena from a variety of sources. The samples will be fixed, sectioned and attached to grids to be viewed in the instrument. Justification for Duty-Free Entry: There are no domestic...

  13. Determination of the optimal sample size for a clinical trial accounting for the population size.

    PubMed

    Stallard, Nigel; Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin

    2017-07-01

The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach either explicitly, by assuming that the population size is fixed and known, or implicitly, through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size for single and two-arm clinical trials in the general case of a clinical trial with a primary endpoint with a distribution of one-parameter exponential family form that optimizes a utility function that quantifies the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or expected size, N*, in the case of geometric discounting, becomes large, the optimal trial size is O(N^(1/2)) or O(N*^(1/2)). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with responses with Bernoulli and Poisson distributions, showing that the asymptotic approximations can also be reasonable in relatively small sample sizes. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
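
    The square-root scaling can be illustrated with a toy utility function (a hypothetical form chosen only to show the scaling, not the paper's utility): each of the N - n future patients gains a fixed amount minus an estimation penalty that shrinks like k/n with trial size n, while each trial patient carries a cost.

```python
def expected_utility(n, N, g=1.0, k=1.0, c=0.1):
    """Toy utility: each of the N - n future patients gains g minus an
    estimation penalty k/n that shrinks with trial size n; each trial
    patient costs c. Hypothetical functional form, for illustration only."""
    return (N - n) * (g - k / n) - c * n

def optimal_n(N):
    """Brute-force argmax of the toy utility over admissible trial sizes."""
    return max(range(1, N // 2), key=lambda n: expected_utility(n, N))

n1, n2 = optimal_n(1_000), optimal_n(100_000)
print(n1, n2, n2 / n1)  # the ratio is near sqrt(100) = 10
```

    Setting the derivative of the toy utility to zero gives n* approximately sqrt(N*k/(g+c)), so growing N a hundredfold grows the optimal trial only tenfold, the O(N^(1/2)) behaviour derived in the paper.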

  14. Fixation and chemical analysis of single fog and rain droplets

    NASA Astrophysics Data System (ADS)

    Kasahara, M.; Akashi, S.; Ma, C.-J.; Tohno, S.

Over the last decade, the importance of global environmental problems has been recognized worldwide. Acid rain is one of the most important of these, along with global warming. Understanding the physical and chemical properties of fog and rain droplets is essential to clarify the physical and chemical processes of acid rain and its effects on forests, materials and ecosystems. We examined the physical and chemical properties of single fog and rain droplets by applying a fixation technique. The sampling method and treatment procedure to fix the liquid droplets as solid particles were investigated. Small liquid particles such as fog droplets could easily be fixed within a few minutes by exposure to cyanoacrylate vapor. Large liquid particles such as raindrops were also fixed, but some fixations were imperfect, so a freezing method was applied to fix the large raindrops: frozen droplets remained stable when exposed to cyanoacrylate vapor after freezing. Particle size measurement and elemental analysis of the fixed particles were performed on an individual basis using microscopy, SEM-EDX, particle-induced X-ray emission (PIXE) and micro-PIXE analyses, respectively. The concentration in raindrops depended on the droplet size and on the elapsed time from the beginning of rainfall.

  15. The quantitative LOD score: test statistic and sample size for exclusion and linkage of quantitative traits in human sibships.

    PubMed

    Page, G P; Amos, C I; Boerwinkle, E

    1998-04-01

We present a test statistic, the quantitative LOD (QLOD) score, for testing both linkage and exclusion of quantitative-trait loci in randomly selected human sibships. As with the traditional LOD score, the boundary values of 3 for linkage and -2 for exclusion can be used for the QLOD score. We investigated the sample sizes required for inferring exclusion and linkage, for various combinations of linked genetic variance, total heritability, recombination distance, and sibship size, using fixed-size sampling. The sample sizes required for linkage and exclusion were not qualitatively different and depended on the percentage of variance being linked or excluded and on the total genetic variance. Information regarding linkage and exclusion in sibships larger than size 2 increased approximately as the number of possible pairs, n(n-1)/2, up to sibships of size 6. Increasing the recombination distance (theta) between the marker and the trait loci empirically reduced the power for both linkage and exclusion, approximately as a function of (1-2theta)^4.
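
    The two quantitative relationships stated in the abstract, the pair count n(n-1)/2 and the approximate power attenuation (1-2theta)^4, can be sketched directly:

```python
def sib_pairs(n):
    """Number of sib pairs in a sibship of size n: n(n-1)/2."""
    return n * (n - 1) // 2

def power_attenuation(theta):
    """Approximate factor by which linkage/exclusion information is reduced
    at recombination distance theta: (1 - 2*theta)**4."""
    return (1 - 2 * theta) ** 4

print([sib_pairs(n) for n in range(2, 7)])  # [1, 3, 6, 10, 15]
print(power_attenuation(0.0), power_attenuation(0.1))
```

    A sibship of 6 thus carries roughly fifteen times the pairwise information of a sib pair, while a marker at theta = 0.1 retains only about 41% of the information of a fully linked marker under this approximation.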

  16. Spatial distribution of nymphs of Scaphoideus titanus (Homoptera: Cicadellidae) in grapes, and evaluation of sequential sampling plans.

    PubMed

    Lessio, Federico; Alma, Alberto

    2006-04-01

    The spatial distribution of the nymphs of Scaphoideus titanus Ball (Homoptera Cicadellidae), the vector of grapevine flavescence dorée (Candidatus Phytoplasma vitis, 16Sr-V), was studied by applying Taylor's power law. Studies were conducted from 2002 to 2005, in organic and conventional vineyards of Piedmont, northern Italy. Minimum sample size and fixed precision level stop lines were calculated to develop appropriate sampling plans. Model validation was performed, using independent field data, by means of Resampling Validation of Sample Plans (RVSP) resampling software. The nymphal distribution, analyzed via Taylor's power law, was aggregated, with b = 1.49. A sample of 32 plants was adequate at low pest densities with a precision level of D0 = 0.30; but for a more accurate estimate (D0 = 0.10), the required sample size needs to be 292 plants. Green's fixed precision level stop lines seem to be more suitable for field sampling: RVSP simulations of this sampling plan showed precision levels very close to the desired levels. However, at a prefixed precision level of 0.10, sampling would become too time-consuming, whereas a precision level of 0.25 is easily achievable. How these results could influence the correct application of the compulsory control of S. titanus and Flavescence dorée in Italy is discussed.
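
    Under Taylor's power law s^2 = a*m^b, the minimum fixed sample size for a precision level D0 = SE/mean has a closed form. In the sketch below, b = 1.49 is the value reported above, while the coefficient a and the mean density m are hypothetical stand-ins (the paper's fitted a is not given in the abstract).

```python
def min_sample_size(m, a, b, D0):
    """Fixed sample size needed so that SE/mean <= D0 when the variance
    follows Taylor's power law s^2 = a * m**b, at mean density m."""
    return a * m ** (b - 2) / D0 ** 2

a, b = 3.0, 1.49  # a is hypothetical; b = 1.49 as reported above
m = 0.5           # nymphs per plant (hypothetical low density)
for D0 in (0.30, 0.25, 0.10):
    print(D0, round(min_sample_size(m, a, b, D0), 1))
```

    Because the required n scales as 1/D0^2, tightening the precision from 0.30 to 0.10 multiplies the sample size ninefold, which mirrors the jump from tens to hundreds of plants described in the abstract.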

  17. Group-sequential three-arm noninferiority clinical trial designs

    PubMed Central

    Ochiai, Toshimitsu; Hamasaki, Toshimitsu; Evans, Scott R.; Asakura, Koko; Ohno, Yuko

    2016-01-01

We discuss group-sequential three-arm noninferiority clinical trial designs that include active and placebo controls for evaluating both assay sensitivity and noninferiority. We extend two existing approaches, the fixed margin and fraction approaches, into a group-sequential setting with two decision-making frameworks. We investigate the operating characteristics, including power, Type I error rate, and maximum and expected sample sizes, as design factors vary. In addition, we discuss sample size recalculation and its impact on the power and Type I error rate via a simulation study. PMID:26892481

  18. Detection probability in aerial surveys of feral horses

    USGS Publications Warehouse

    Ransom, Jason I.

    2011-01-01

    Observation bias pervades data collected during aerial surveys of large animals, and although some sources can be mitigated with informed planning, others must be addressed using valid sampling techniques that carefully model detection probability. Nonetheless, aerial surveys are frequently employed to count large mammals without applying such methods to account for heterogeneity in visibility of animal groups on the landscape. This often leaves managers and interest groups at odds over decisions that are not adequately informed. I analyzed detection of feral horse (Equus caballus) groups by dual independent observers from 24 fixed-wing and 16 helicopter flights using mixed-effect logistic regression models to investigate potential sources of observation bias. I accounted for observer skill, population location, and aircraft type in the model structure and analyzed the effects of group size, sun effect (position related to observer), vegetation type, topography, cloud cover, percent snow cover, and observer fatigue on detection of horse groups. The most important model-averaged effects for both fixed-wing and helicopter surveys included group size (fixed-wing: odds ratio = 0.891, 95% CI = 0.850–0.935; helicopter: odds ratio = 0.640, 95% CI = 0.587–0.698) and sun effect (fixed-wing: odds ratio = 0.632, 95% CI = 0.350–1.141; helicopter: odds ratio = 0.194, 95% CI = 0.080–0.470). Observer fatigue was also an important effect in the best model for helicopter surveys, with detection probability declining after 3 hr of survey time (odds ratio = 0.278, 95% CI = 0.144–0.537). Biases arising from sun effect and observer fatigue can be mitigated by pre-flight survey design. 
Other sources of bias, such as those arising from group size, topography, and vegetation can only be addressed by employing valid sampling techniques such as double sampling, mark–resight (batch-marked animals), mark–recapture (uniquely marked and identifiable animals), sightability bias correction models, and line transect distance sampling; however, some of these techniques may still only partially correct for negative observation biases.

  19. Optimizing variable radius plot size and LiDAR resolution to model standing volume in conifer forests

    Treesearch

    Ram Kumar Deo; Robert E. Froese; Michael J. Falkowski; Andrew T. Hudak

    2016-01-01

    The conventional approach to LiDAR-based forest inventory modeling depends on field sample data from fixed-radius plots (FRP). Because FRP sampling is cost intensive, combining variable-radius plot (VRP) sampling and LiDAR data has the potential to improve inventory efficiency. The overarching goal of this study was to evaluate the integration of LiDAR and VRP data....

  20. Are fixed grain size ratios useful proxies for loess sedimentation dynamics? Experiences from Remizovka, Kazakhstan

    NASA Astrophysics Data System (ADS)

    Schulte, Philipp; Sprafke, Tobias; Rodrigues, Leonor; Fitzsimmons, Kathryn E.

    2018-04-01

Loess-paleosol sequences (LPS) are sensitive terrestrial archives of past aeolian dynamics and paleoclimatic changes within the Quaternary. Grain size (GS) analysis is commonly used to interpret aeolian dynamics and climate influences on LPS, based on granulometric parameters such as specific GS classes, ratios of GS classes, and statistical manipulation of GS data. However, the GS distribution of a loess sample is not solely a function of aeolian dynamics; rather, complex polygenetic depositional and post-depositional processes must be taken into account. This study assesses the reliability of fixed GS ratios as proxies for past sedimentation dynamics using the case study of Remizovka in southeast Kazakhstan. Continuous sampling of the upper 8 m of the profile, which shows extremely weak pedogenic alteration and is therefore dominated by primary aeolian activity, indicates that fixed GS ratios do not adequately serve as proxies for loess sedimentation dynamics. We find, through the calculation of single-value parameters, that "true" variations within sensitive GS classes are masked by relative changes of the more frequent classes. Heatmap signatures allow visualization of GS variability within LPS without significant data loss within the measured classes of a sample, or across all measured samples. We also examine the effect of two commonly used laser diffraction devices on GS ratio calculation by duplicate measurements with a Beckman Coulter LS13320 and a Malvern Mastersizer Hydro (MM2000), as well as the applicability and significance of the so-called "twin peak ratio" previously developed on samples from the same section. The LS13320 provides higher-resolution results than the MM2000; nevertheless, the GS ratios related to variations in the silt-sized fraction were comparable. However, we could not detect a twin peak within the coarse silt as detected in the original study using the same device. 
Our GS measurements differ from previous works at Remizovka in several instances, calling into question the interpretation of paleoclimatic implications using GS data alone.

  1. Random versus fixed-site sampling when monitoring relative abundance of fishes in headwater streams of the upper Colorado River basin

    USGS Publications Warehouse

    Quist, M.C.; Gerow, K.G.; Bower, M.R.; Hubert, W.A.

    2006-01-01

Native fishes of the upper Colorado River basin (UCRB) have declined in distribution and abundance due to habitat degradation and interactions with nonnative fishes. Consequently, monitoring populations of both native and nonnative fishes is important for conservation of native species. We used data collected from Muddy Creek, Wyoming (2003-2004), to compare sample size estimates using a random and a fixed-site sampling design to monitor changes in catch per unit effort (CPUE) of native bluehead suckers Catostomus discobolus, flannelmouth suckers C. latipinnis, roundtail chub Gila robusta, and speckled dace Rhinichthys osculus, as well as nonnative creek chub Semotilus atromaculatus and white suckers C. commersonii. When one-pass backpack electrofishing was used, detection of 10% or 25% changes in CPUE (fish/100 m) at 60% statistical power required 50-1,000 randomly sampled reaches among species regardless of sampling design. However, use of a fixed-site sampling design with 25-50 reaches greatly enhanced the ability to detect changes in CPUE. The addition of seining did not appreciably reduce required effort. When detection of 25-50% changes in CPUE of native and nonnative fishes is acceptable, we recommend establishment of 25-50 fixed reaches sampled by one-pass electrofishing in Muddy Creek. Because Muddy Creek has habitat and fish assemblages characteristic of other headwater streams in the UCRB, our results are likely to apply to many other streams in the basin. © Copyright by the American Fisheries Society 2006.
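
    The scale of these sample-size requirements can be sketched with a standard two-sample normal approximation (not the authors' method); the among-reach CV below is hypothetical, while the 60% power level matches the abstract. Required reaches grow with the inverse square of the detectable relative change.

```python
import math
from statistics import NormalDist

def reaches_needed(cv, pct_change, alpha=0.05, power=0.60):
    """Approximate reaches per period to detect a relative change
    pct_change in mean CPUE with a two-sample z-test, given the
    among-reach coefficient of variation (two-sided alpha)."""
    z = NormalDist().inv_cdf
    return math.ceil(2 * (z(1 - alpha / 2) + z(power)) ** 2
                     * cv ** 2 / pct_change ** 2)

# Hypothetical among-reach CV of 1.0 (CPUE data are highly variable):
for change in (0.10, 0.25, 0.50):
    print(change, reaches_needed(1.0, change))
```

    Under these assumptions, detecting a 10% change requires on the order of a thousand reaches while a 50% change needs only a few dozen, the same order-of-magnitude contrast reported above for random sampling.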

  2. The more the heavier? Family size and childhood obesity in the U.S.

    PubMed

    Datar, Ashlesha

    2017-05-01

Childhood obesity remains a top public health concern, and understanding its drivers is important for combating this epidemic. Contemporaneous trends in declining family size and increasing childhood obesity in the U.S. suggest that family size may be a potential contributor, but limited evidence exists. Using data from a national sample of children in the U.S., this study examines whether family size, measured by the number of siblings a child has, is associated with child BMI and obesity, and the possible mechanisms at work. The potential endogeneity of family size is addressed by several complementary approaches, including sequentially introducing a rich set of controls, subgroup analyses, and estimating school fixed-effects and child fixed-effects models. Results suggest that having more siblings is associated with significantly lower BMI and lower likelihood of obesity. Children with siblings have healthier diets and watch less television. Family mealtimes, less eating out, reduced maternal work, and increased adult supervision of children are potential mechanisms through which family size is protective against childhood obesity. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Tree Data (TD)

    Treesearch

    Robert E. Keane

    2006-01-01

    The Tree Data (TD) methods are used to sample individual live and dead trees on a fixed-area plot to estimate tree density, size, and age class distributions before and after fire in order to assess tree survival and mortality rates. This method can also be used to sample individual shrubs if they are over 4.5 ft tall. When trees are larger than the user-specified...

  4. Repeated significance tests of linear combinations of sensitivity and specificity of a diagnostic biomarker

    PubMed Central

    Wu, Mixia; Shu, Yu; Li, Zhaohai; Liu, Aiyi

    2016-01-01

    A sequential design is proposed to test whether the accuracy of a binary diagnostic biomarker meets the minimal level of acceptance. The accuracy of a binary diagnostic biomarker is a linear combination of the marker’s sensitivity and specificity. The objective of the sequential method is to minimize the maximum expected sample size under the null hypothesis that the marker’s accuracy is below the minimal level of acceptance. The exact results of two-stage designs based on Youden’s index and efficiency indicate that the maximum expected sample sizes are smaller than the sample sizes of the fixed designs. Exact methods are also developed for estimation, confidence interval and p-value concerning the proposed accuracy index upon termination of the sequential testing. PMID:26947768
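
    Why a two-stage design lowers the expected sample size under the null can be sketched with an exact binomial calculation: the trial stops after stage 1 unless enough "successes" are observed. The design numbers below are hypothetical, not the paper's optimized boundaries.

```python
from math import comb

def expected_n_two_stage(n1, n2, r1, p):
    """Expected sample size of a two-stage design that stops after n1
    observations unless at least r1 successes are seen (exact binomial)."""
    p_continue = sum(comb(n1, k) * p ** k * (1 - p) ** (n1 - k)
                     for k in range(r1, n1 + 1))
    return n1 + n2 * p_continue

# Hypothetical design: 20 + 20 observations, continue only if >= 12
# successes at stage 1; under H0 the marker performs at p0 = 0.5.
print(round(expected_n_two_stage(20, 20, 12, 0.5), 2))
```

    Under H0 the continuation probability is small, so the expected sample size sits well below the n1 + n2 of the corresponding fixed design, which is the property the sequential method above optimizes.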

  5. Quantifying and Mitigating the Effect of Preferential Sampling on Phylodynamic Inference

    PubMed Central

    Karcher, Michael D.; Palacios, Julia A.; Bedford, Trevor; Suchard, Marc A.; Minin, Vladimir N.

    2016-01-01

    Phylodynamics seeks to estimate effective population size fluctuations from molecular sequences of individuals sampled from a population of interest. One way to accomplish this task formulates an observed sequence data likelihood exploiting a coalescent model for the sampled individuals’ genealogy and then integrating over all possible genealogies via Monte Carlo or, less efficiently, by conditioning on one genealogy estimated from the sequence data. However, when analyzing sequences sampled serially through time, current methods implicitly assume either that sampling times are fixed deterministically by the data collection protocol or that their distribution does not depend on the size of the population. Through simulation, we first show that, when sampling times do probabilistically depend on effective population size, estimation methods may be systematically biased. To correct for this deficiency, we propose a new model that explicitly accounts for preferential sampling by modeling the sampling times as an inhomogeneous Poisson process dependent on effective population size. We demonstrate that in the presence of preferential sampling our new model not only reduces bias, but also improves estimation precision. Finally, we compare the performance of the currently used phylodynamic methods with our proposed model through clinically-relevant, seasonal human influenza examples. PMID:26938243
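
    Sampling times that depend on effective population size can be simulated as an inhomogeneous Poisson process via Lewis-Shedler thinning, a standard algorithm (this sketch is not the authors' code, and the seasonal trajectory and rate below are hypothetical).

```python
import math
import random

random.seed(7)

def effective_pop_size(t):
    """Hypothetical seasonal effective population size trajectory."""
    return 50 + 40 * math.sin(2 * math.pi * t)

def sample_times_by_thinning(beta, t_max):
    """Simulate sampling times from an inhomogeneous Poisson process with
    intensity beta * N_e(t), via Lewis-Shedler thinning."""
    lam_max = beta * 90  # upper bound on the intensity over [0, t_max]
    times, t = [], 0.0
    while True:
        t += random.expovariate(lam_max)       # candidate event
        if t > t_max:
            return times
        if random.random() < beta * effective_pop_size(t) / lam_max:
            times.append(t)                    # accepted sampling time

times = sample_times_by_thinning(beta=1.0, t_max=2.0)
print(len(times), "sampling times over two seasons")
```

    Under this model, more sequences are drawn when the population is large, which is exactly the preferential-sampling dependence that, if ignored, biases the phylodynamic estimates described above.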

  6. Effects of sample size and sampling frequency on studies of brown bear home ranges and habitat use

    USGS Publications Warehouse

    Arthur, Steve M.; Schwartz, Charles C.

    1999-01-01

We equipped 9 brown bears (Ursus arctos) on the Kenai Peninsula, Alaska, with collars containing both conventional very-high-frequency (VHF) transmitters and global positioning system (GPS) receivers programmed to determine an animal's position at 5.75-hr intervals. We calculated minimum convex polygon (MCP) and fixed and adaptive kernel home ranges for randomly-selected subsets of the GPS data to examine the effects of sample size on accuracy and precision of home range estimates. We also compared results obtained by weekly aerial radiotracking versus more frequent GPS locations to test for biases in conventional radiotracking data. Home ranges based on the MCP were 20-606 km2 (mean = 201) for aerial radiotracking data (n = 12-16 locations/bear) and 116-1,505 km2 (mean = 522) for the complete GPS data sets (n = 245-466 locations/bear). Fixed kernel home ranges were 34-955 km2 (mean = 224) for radiotracking data and 16-130 km2 (mean = 60) for the GPS data. Differences between means for radiotracking and GPS data were due primarily to the larger samples provided by the GPS data. Means did not differ between radiotracking data and equivalent-sized subsets of GPS data (P > 0.10). For the MCP, home range area increased and variability decreased asymptotically with number of locations. For the kernel models, both area and variability decreased with increasing sample size. Simulations suggested that the MCP and kernel models required >60 and >80 locations, respectively, for estimates to be both accurate (change in area <1%/additional location) and precise (CV < 50%). Although the radiotracking data appeared unbiased, except for the relationship between area and sample size, these data failed to indicate some areas that likely were important to bears. Our results suggest that the usefulness of conventional radiotracking data may be limited by potential biases and variability due to small samples. 
Investigators that use home range estimates in statistical tests should consider the effects of variability of those estimates. Use of GPS-equipped collars can facilitate obtaining larger samples of unbiased data and improve accuracy and precision of home range estimates.
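The asymptotic growth of MCP area with number of locations is easy to reproduce on simulated data. The sketch below (entirely synthetic, nothing from the paper's data) computes the convex-hull (MCP) area of growing prefixes of random locations, using the standard monotone-chain hull and the shoelace formula:

```python
import random

def convex_hull(points):
    # Andrew's monotone chain; returns hull vertices in counter-clockwise order
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def mcp_area(points):
    # shoelace formula over the hull polygon
    hull = convex_hull(points)
    a = 0.0
    for (x1, y1), (x2, y2) in zip(hull, hull[1:] + hull[:1]):
        a += x1*y2 - x2*y1
    return abs(a) / 2.0

random.seed(1)
locations = [(random.gauss(0, 5), random.gauss(0, 5)) for _ in range(400)]
for n in (15, 60, 240, 400):
    print(n, round(mcp_area(locations[:n]), 1))
```

Because each prefix of locations is nested in the next, the MCP area can only grow with sample size, which mirrors the bias the authors report for small radiotracking samples.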

  7. Sampling design and procedures for fixed surface-water sites in the Georgia-Florida coastal plain study unit, 1993

    USGS Publications Warehouse

    Hatzell, H.H.; Oaksford, E.T.; Asbury, C.E.

    1995-01-01

    The implementation of design guidelines for the National Water-Quality Assessment (NAWQA) Program has resulted in the development of new sampling procedures and the modification of existing procedures commonly used in the Water Resources Division of the U.S. Geological Survey. The Georgia-Florida Coastal Plain (GAFL) study unit began the intensive data collection phase of the program in October 1992. This report documents the implementation of the NAWQA guidelines by describing the sampling design and procedures for collecting surface-water samples in the GAFL study unit in 1993. This documentation is provided for agencies that use water-quality data and for future study units that will be entering the intensive phase of data collection. The sampling design is intended to account for large- and small-scale spatial variations, and temporal variations in water quality for the study area. Nine fixed sites were selected in drainage basins of different sizes and different land-use characteristics located in different land-resource provinces. Each of the nine fixed sites was sampled regularly for a combination of six constituent groups composed of physical and chemical constituents: field measurements, major ions and metals, nutrients, organic carbon, pesticides, and suspended sediments. Some sites were also sampled during high-flow conditions and storm events. Discussion of the sampling procedure is divided into three phases: sample collection, sample splitting, and sample processing. A cone splitter was used to split water samples for the analysis of the sampling constituent groups except organic carbon from approximately nine liters of stream water collected at four fixed sites that were sampled intensively. An example of the sample splitting schemes designed to provide the sample volumes required for each sample constituent group is described in detail. 
Information about onsite sample processing has been organized into a flowchart that describes a pathway for each of the constituent groups.

  8. Young Women’s Dynamic Family Size Preferences in the Context of Transitioning Fertility

    PubMed Central

    Yeatman, Sara; Sennott, Christie; Culpepper, Steven

    2013-01-01

    Dynamic theories of family size preferences posit that they are not a fixed and stable goal but rather are akin to a moving target that changes within individuals over time. Nonetheless, in high-fertility contexts, changes in family size preferences tend to be attributed to low construct validity and measurement error instead of genuine revisions in preferences. To address the appropriateness of this incongruity, the present study examines evidence for the sequential model of fertility among a sample of young Malawian women living in a context of transitioning fertility. Using eight waves of closely spaced data and fixed-effects models, we find that these women frequently change their reported family size preferences and that these changes are often associated with changes in their relationship and reproductive circumstances. The predictability of change gives credence to the argument that ideal family size is a meaningful construct, even in this higher-fertility setting. Changes are not equally predictable across all women, however, and gamma regression results demonstrate that women for whom reproduction is a more distant goal change their fertility preferences in less-predictable ways. PMID:23619999

  9. Young women's dynamic family size preferences in the context of transitioning fertility.

    PubMed

    Yeatman, Sara; Sennott, Christie; Culpepper, Steven

    2013-10-01

    Dynamic theories of family size preferences posit that they are not a fixed and stable goal but rather are akin to a moving target that changes within individuals over time. Nonetheless, in high-fertility contexts, changes in family size preferences tend to be attributed to low construct validity and measurement error instead of genuine revisions in preferences. To address the appropriateness of this incongruity, the present study examines evidence for the sequential model of fertility among a sample of young Malawian women living in a context of transitioning fertility. Using eight waves of closely spaced data and fixed-effects models, we find that these women frequently change their reported family size preferences and that these changes are often associated with changes in their relationship and reproductive circumstances. The predictability of change gives credence to the argument that ideal family size is a meaningful construct, even in this higher-fertility setting. Changes are not equally predictable across all women, however, and gamma regression results demonstrate that women for whom reproduction is a more distant goal change their fertility preferences in less-predictable ways.

  10. RT-PCR analysis of RNA extracted from Bouin-fixed and paraffin-embedded lymphoid tissues.

    PubMed

    Gloghini, Annunziata; Canal, Barbara; Klein, Ulf; Dal Maso, Luigino; Perin, Tiziana; Dalla-Favera, Riccardo; Carbone, Antonino

    2004-11-01

    In the present study, we have investigated whether RNA can be efficiently isolated from Bouin-fixed or formalin-fixed, paraffin-embedded lymphoid tissue specimens. To this aim, we applied a new and simple method that includes the combination of proteinase K digestion and column purification. By this method, we demonstrated that the amplification of long fragments could be accomplished after a pre-heating step before cDNA synthesis associated with the use of enzymes that work at high temperature. By means of PCR using different primers for two examined genes (glyceraldehyde-3-phosphate dehydrogenase [GAPDH]- and CD40), we amplified segments of cDNA obtained by reverse transcription of the isolated RNA extracted from Bouin-fixed or formalin-fixed paraffin-embedded tissues. Amplified fragments of the expected sizes were obtained for both genes tested indicating that this method is suitable for the isolation of high-quality RNA. To explore the possibility for giving accurate real time quantitative RT-PCR results, cDNA obtained from matched frozen, Bouin-fixed and formalin-fixed neoplastic samples (two diffuse large cell lymphomas, one plasmacytoma) was tested for the following target genes: CD40, Aquaporin-3, BLIMP1, IRF4, Syndecan-1. Delta threshold cycle (DeltaC(T)) values for Bouin-fixed and formalin-fixed paraffin-embedded tissues and their correlation with those for frozen samples showed an extremely high correlation (r > 0.90) for all of the tested genes. These results show that the method of RNA extraction we propose is suitable for giving accurate real time quantitative RT-PCR results.

  11. Optical Tweezers for Sample Fixing in Micro-Diffraction Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amenitsch, H.; Rappolt, M.; Sartori, B.

    2007-01-19

In order to manipulate, characterize and measure the micro-diffraction of individual structural elements down to single phospholipid liposomes, we have been using optical tweezers (OT) combined with an imaging microscope. We were able to install the OT system at the microfocus beamline ID13 at the ESRF and trap clusters of about 50 multi-lamellar liposomes (clusters < 10 µm across). Further, we performed a scanning diffraction experiment with a 1 µm beam to demonstrate the fixing capabilities and to confirm the size of the liposome cluster by X-ray diffraction.

  12. Rate-Compatible LDPC Codes with Linear Minimum Distance

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Jones, Christopher; Dolinar, Samuel

    2009-01-01

A recently developed method of constructing protograph-based low-density parity-check (LDPC) codes provides for low iterative decoding thresholds and minimum distances proportional to block sizes, and can be used for various code rates. A code constructed by this method can have either fixed input block size or fixed output block size and, in either case, provides rate compatibility. The method comprises two submethods: one for fixed input block size and one for fixed output block size. The first submethod is useful for applications in which there are requirements for rate-compatible codes that have fixed input block sizes; these are codes in which only the numbers of parity bits are allowed to vary. The fixed-output-block-size submethod is useful for applications in which framing constraints are imposed on the physical layers of affected communication systems. An example of such a system is one that conforms to one of many new wireless-communication standards that involve the use of orthogonal frequency-division modulation.

  13. A Bayesian sequential design with adaptive randomization for 2-sided hypothesis test.

    PubMed

    Yu, Qingzhao; Zhu, Lin; Zhu, Han

    2017-11-01

Bayesian sequential and adaptive randomization designs are gaining popularity in clinical trials thanks to their potential to reduce the number of required participants and save resources. We propose a Bayesian sequential design with adaptive randomization rates so as to more efficiently allocate newly recruited patients to the treatment arms. In this paper, we consider 2-arm clinical trials. Patients are allocated to the 2 arms with a randomization rate chosen to achieve minimum variance for the test statistic. Algorithms are presented to calculate the optimal randomization rate, critical values, and power for the proposed design. Sensitivity analysis is implemented to check the influence on the design of changing the prior distributions. Simulation studies are applied to compare the proposed method and traditional methods in terms of power and actual sample sizes. Simulations show that, when the total sample size is fixed, the proposed design can achieve greater power and/or a smaller actual sample size than the traditional Bayesian sequential design. Finally, we apply the proposed method to a real data set and compare the results with the Bayesian sequential design without adaptive randomization in terms of sample sizes. The proposed method can further reduce the required sample size. Copyright © 2017 John Wiley & Sons, Ltd.
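The paper's algorithms are not reproduced in the abstract. For intuition only, the allocation that minimizes the variance of a two-arm difference in means is the classical Neyman allocation (arm sizes proportional to the arms' standard deviations); a minimal sketch, assuming known per-arm standard deviations s1 and s2:

```python
def neyman_allocation(s1, s2, total_n):
    # fraction of patients sent to arm 1 that minimizes
    # Var(xbar1 - xbar2) = s1^2/n1 + s2^2/n2 subject to n1 + n2 = total_n
    r = s1 / (s1 + s2)
    n1 = round(total_n * r)
    return n1, total_n - n1

def diff_variance(s1, s2, n1, n2):
    # variance of the difference of the two arm means
    return s1**2 / n1 + s2**2 / n2

n1, n2 = neyman_allocation(3.0, 1.0, 100)   # arm 1 is noisier, so it gets more patients
print(n1, n2, diff_variance(3.0, 1.0, n1, n2))
```

With s1 = 3 and s2 = 1, the unequal split (75/25) yields a strictly smaller variance than the balanced 50/50 design, which is the mechanism behind the power gains the abstract describes.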

  14. Extraction of citral oil from lemongrass (Cymbopogon Citratus) by steam-water distillation technique

    NASA Astrophysics Data System (ADS)

    Alam, P. N.; Husin, H.; Asnawi, T. M.; Adisalamun

    2018-04-01

In Indonesia, production of citral oil from lemongrass (Cymbopogon citratus) is done by a traditional technique that gives a low yield. To improve the yield, an appropriate extraction technology is required. In this research, a steam-water distillation technique was applied to extract the essential oil from the lemongrass. The effects of sample particle size and bed volume on yield and quality of the citral oil produced were investigated. A drying and refining time of 2 hours was used as a fixed variable. The results show that the minimum citral oil yield of 0.53% was obtained at a sample particle size of 3 cm and bed volume of 80%, whereas the maximum yield of 1.95% was obtained at a sample particle size of 15 cm and bed volume of 40%. The lowest specific gravity of 0.80 and the highest specific gravity of 0.905 were obtained at a sample particle size of 8 cm with bed volume of 80% and a particle size of 12 cm with bed volume of 70%, respectively. The lowest refractive index of 1.480 and the highest refractive index of 1.495 were obtained at a sample particle size of 8 cm with bed volume of 70% and a sample particle size of 15 cm with bed volume of 40%, respectively. The solubility of the produced citral oil in alcohol was 70% at a ratio of 1:1, and the citral oil concentration obtained was around 79%.

  15. POWER AND SAMPLE SIZE CALCULATIONS FOR LINEAR HYPOTHESES ASSOCIATED WITH MIXTURES OF MANY COMPONENTS USING FIXED-RATIO RAY DESIGNS

    EPA Science Inventory

    Response surface methodology, often supported by factorial designs, is the classical experimental approach that is widely accepted for detecting and characterizing interactions among chemicals in a mixture. In an effort to reduce the experimental effort as the number of compound...

  16. Continuous Time Level Crossing Sampling ADC for Bio-Potential Recording Systems

    PubMed Central

    Tang, Wei; Osman, Ahmad; Kim, Dongsoo; Goldstein, Brian; Huang, Chenxi; Martini, Berin; Pieribone, Vincent A.

    2013-01-01

In this paper we present a fixed-window level-crossing sampling analog-to-digital converter (ADC) for bio-potential recording sensors. This is the first proposed and fully implemented fixed-window level-crossing ADC without local DACs and clocks. The circuit is designed to reduce data size, power, and silicon area in future wireless neurophysiological sensor systems. We built a testing system to measure bio-potential signals and used it to evaluate the performance of the circuit. The bio-potential amplifier offers a gain of 53 dB within a bandwidth of 200 Hz-20 kHz. The input-referred rms noise is 2.8 µV. In the asynchronous level-crossing ADC, the minimum delta resolution is 4 mV. The input signal frequency of the ADC is up to 5 kHz. The system was fabricated using the AMI 0.5 µm CMOS process. The chip size is 1.5 mm by 1.5 mm. The power consumption of the 4-channel system from a 3.3 V supply is 118.8 µW in the static state and 501.6 µW with a 240 kS/s sampling rate. The conversion efficiency is 1.6 nJ/conversion. PMID:24163640
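The chip's circuit details are beyond the abstract. Behaviorally, a fixed-window level-crossing sampler emits a sample only when the input moves at least a fixed delta away from the last sampled value, which is what compresses slowly varying bio-potentials. A minimal software sketch of that behavior (illustrative only, not the paper's implementation):

```python
import math

def level_crossing_sample(signal, delta):
    # emit (index, value) whenever the signal exits the fixed +/-delta
    # window centred on the last sampled value
    samples = [(0, signal[0])]
    last = signal[0]
    for i, v in enumerate(signal[1:], start=1):
        if abs(v - last) >= delta:
            samples.append((i, v))
            last = v
    return samples

# a slow sine sampled uniformly, then resampled by level crossing
wave = [math.sin(2 * math.pi * t / 200) for t in range(1000)]
events = level_crossing_sample(wave, 0.25)
print(len(events), "events from", len(wave), "uniform samples")
```

Unlike uniform sampling, output events appear only where the signal changes, so the data rate scales with signal activity rather than with elapsed time.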

  17. Maximum inflation of the type 1 error rate when sample size and allocation rate are adapted in a pre-planned interim look.

    PubMed

    Graf, Alexandra C; Bauer, Peter

    2011-06-30

    We calculate the maximum type 1 error rate of the pre-planned conventional fixed sample size test for comparing the means of independent normal distributions (with common known variance) which can be yielded when sample size and allocation rate to the treatment arms can be modified in an interim analysis. Thereby it is assumed that the experimenter fully exploits knowledge of the unblinded interim estimates of the treatment effects in order to maximize the conditional type 1 error rate. The 'worst-case' strategies require knowledge of the unknown common treatment effect under the null hypothesis. Although this is a rather hypothetical scenario it may be approached in practice when using a standard control treatment for which precise estimates are available from historical data. The maximum inflation of the type 1 error rate is substantially larger than derived by Proschan and Hunsberger (Biometrics 1995; 51:1315-1324) for design modifications applying balanced samples before and after the interim analysis. Corresponding upper limits for the maximum type 1 error rate are calculated for a number of situations arising from practical considerations (e.g. restricting the maximum sample size, not allowing sample size to decrease, allowing only increase in the sample size in the experimental treatment). The application is discussed for a motivating example. Copyright © 2011 John Wiley & Sons, Ltd.

  18. You Cannot Step Into the Same River Twice: When Power Analyses Are Optimistic.

    PubMed

    McShane, Blakeley B; Böckenholt, Ulf

    2014-11-01

    Statistical power depends on the size of the effect of interest. However, effect sizes are rarely fixed in psychological research: Study design choices, such as the operationalization of the dependent variable or the treatment manipulation, the social context, the subject pool, or the time of day, typically cause systematic variation in the effect size. Ignoring this between-study variation, as standard power formulae do, results in assessments of power that are too optimistic. Consequently, when researchers attempting replication set sample sizes using these formulae, their studies will be underpowered and will thus fail at a greater than expected rate. We illustrate this with both hypothetical examples and data on several well-studied phenomena in psychology. We provide formulae that account for between-study variation and suggest that researchers set sample sizes with respect to our generally more conservative formulae. Our formulae generalize to settings in which there are multiple effects of interest. We also introduce an easy-to-use website that implements our approach to setting sample sizes. Finally, we conclude with recommendations for quantifying between-study variation. © The Author(s) 2014.
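The paper's formulae are not given in the abstract. The toy calculation below illustrates the core point for a two-sample z-test, under the assumption (mine, for illustration) that the replication's true standardized effect is drawn from N(d, tau²) rather than fixed at d:

```python
import random
from statistics import NormalDist

norm = NormalDist()

def power_fixed(d, n, alpha=0.05):
    # standard two-sample z-test power for standardized effect d, n per group
    se = (2.0 / n) ** 0.5
    crit = norm.inv_cdf(1 - alpha / 2)
    return 1 - norm.cdf(crit - d / se)

def power_heterogeneous(d, tau, n, alpha=0.05, reps=20000, seed=1):
    # expected power when the replication's true effect is itself random,
    # i.e. when between-study variation is acknowledged
    rng = random.Random(seed)
    return sum(power_fixed(rng.gauss(d, tau), n, alpha) for _ in range(reps)) / reps

n = 63  # per-group n giving roughly 80% power at d = 0.5 under the standard formula
print(round(power_fixed(0.5, n), 3))
print(round(power_heterogeneous(0.5, 0.2, n), 3))
```

With these illustrative numbers the expected replication power falls well below the nominal 80%, which is the optimism the authors' corrected formulae are designed to remove.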

  19. The relationship between national-level carbon dioxide emissions and population size: an assessment of regional and temporal variation, 1960-2005.

    PubMed

    Jorgenson, Andrew K; Clark, Brett

    2013-01-01

This study examines the regional and temporal differences in the statistical relationship between national-level carbon dioxide emissions and national-level population size. The authors analyze panel data from 1960 to 2005 for a diverse sample of nations, and employ descriptive statistics and rigorous panel regression modeling techniques. Initial descriptive analyses indicate that all regions experienced overall increases in carbon emissions and population size during the 45-year period of investigation, but with notable differences. For carbon emissions, the sample of countries in Asia experienced the largest percent increase, followed by countries in Latin America, Africa, and lastly the sample of relatively affluent countries in Europe, North America, and Oceania combined. For population size, the sample of countries in Africa experienced the largest percent increase, followed by countries in Latin America, Asia, and the combined sample of countries in Europe, North America, and Oceania. Findings for two-way fixed effects panel regression elasticity models of national-level carbon emissions indicate that the estimated elasticity coefficient for population size is much smaller for nations in Africa than for nations in other regions of the world. Regarding potential temporal changes, from 1960 to 2005 the estimated elasticity coefficient for population size decreased by 25% for the sample of Africa countries, 14% for the sample of Asia countries, 6.5% for the sample of Latin America countries, but remained the same in size for the sample of countries in Europe, North America, and Oceania. Overall, while population size continues to be the primary driver of total national-level anthropogenic carbon dioxide emissions, the findings for this study highlight the need for future research and policies to recognize that the actual impacts of population size on national-level carbon emissions differ across both time and region.

  20. fixedTimeEvents: An R package for the distribution of distances between discrete events in fixed time

    NASA Astrophysics Data System (ADS)

    Liland, Kristian Hovde; Snipen, Lars

    When a series of Bernoulli trials occur within a fixed time frame or limited space, it is often interesting to assess if the successful outcomes have occurred completely at random, or if they tend to group together. One example, in genetics, is detecting grouping of genes within a genome. Approximations of the distribution of successes are possible, but they become inaccurate for small sample sizes. In this article, we describe the exact distribution of time between random, non-overlapping successes in discrete time of fixed length. A complete description of the probability mass function, the cumulative distribution function, mean, variance and recurrence relation is included. We propose an associated test for the over-representation of short distances and illustrate the methodology through relevant examples. The theory is implemented in an R package including probability mass, cumulative distribution, quantile function, random number generator, simulation functions, and functions for testing.
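The exact distribution described above can be cross-checked by brute force for small cases. The sketch below (Python, not the package's R API) enumerates every placement of s successes among n trials and tabulates the distances between consecutive successes:

```python
from itertools import combinations
from collections import Counter

def gap_pmf(n, s):
    # brute-force pmf of the distance between consecutive successes when
    # s successes are placed uniformly at random among n discrete trials
    counts = Counter()
    total = 0
    for positions in combinations(range(n), s):
        for a, b in zip(positions, positions[1:]):
            counts[b - a] += 1
            total += 1
    return {d: c / total for d, c in sorted(counts.items())}

pmf = gap_pmf(10, 3)
print(pmf)
```

Even under complete randomness the shortest distances are the most probable (for n = 10, s = 3 the count of gaps of size d is (10 - d)(9 - d)), which is why a test for over-representation of short distances, as the package provides, must compare against this exact null distribution rather than eyeball clustering.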

  1. Maximum type 1 error rate inflation in multiarmed clinical trials with adaptive interim sample size modifications.

    PubMed

    Graf, Alexandra C; Bauer, Peter; Glimm, Ekkehard; Koenig, Franz

    2014-07-01

Sample size modifications in the interim analyses of an adaptive design can inflate the type 1 error rate if test statistics and critical boundaries are used in the final analysis as if no modification had been made. While this is already true for designs with an overall change of the sample size in a balanced treatment-control comparison, the inflation can be much larger if, in addition, a modification of allocation ratios is allowed as well. In this paper, we investigate adaptive designs with several treatment arms compared to a single common control group. Regarding modifications, we consider treatment arm selection as well as modifications of overall sample size and allocation ratios. The inflation is quantified for two approaches: a naive procedure that ignores not only all modifications but also the multiplicity issue arising from the many-to-one comparison, and a Dunnett procedure that ignores modifications but adjusts for the initially started multiple treatments. The maximum inflation of the type 1 error rate for such types of design can be calculated by searching for the "worst-case" scenarios, that is, sample size adaptation rules in the interim analysis that lead to the largest conditional type 1 error rate at any point of the sample space. To show the most extreme inflation, we initially assume unconstrained second-stage sample size modifications leading to a large inflation of the type 1 error rate. Furthermore, we investigate the inflation when putting constraints on the second-stage sample sizes. It turns out that, for example, fixing the sample size of the control group leads to designs controlling the type 1 error rate. © 2014 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Demonstration of Numerical Equivalence of Ensemble and Spectral Averaging in Electromagnetic Scattering by Random Particulate Media

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.; Dlugach, Janna M.; Zakharova, Nadezhda T.

    2016-01-01

The numerically exact superposition T-matrix method is used to model far-field electromagnetic scattering by two types of particulate object. Object 1 is a fixed configuration which consists of N identical spherical particles (with N = 200 or 400) quasi-randomly populating a spherical volume V having a median size parameter of 50. Object 2 is a true discrete random medium (DRM) comprising the same number N of particles randomly moving throughout V. The median particle size parameter is fixed at 4. We show that if Object 1 is illuminated by a quasi-monochromatic parallel beam then it generates a typical speckle pattern having no resemblance to the scattering pattern generated by Object 2. However, if Object 1 is illuminated by a parallel polychromatic beam with a 10% bandwidth then it generates a scattering pattern that is largely devoid of speckles and closely reproduces the quasi-monochromatic pattern generated by Object 2. This result serves to illustrate the capacity of the concept of electromagnetic scattering by a DRM to encompass fixed quasi-random particulate samples provided that they are illuminated by polychromatic light.

  3. Quantitative imaging biomarkers: Effect of sample size and bias on confidence interval coverage.

    PubMed

    Obuchowski, Nancy A; Bullen, Jennifer

    2017-01-01

Introduction: Quantitative imaging biomarkers (QIBs) are being increasingly used in medical practice and clinical trials. An essential first step in the adoption of a quantitative imaging biomarker is the characterization of its technical performance, i.e. precision and bias, through one or more performance studies. Then, given the technical performance, a confidence interval for a new patient's true biomarker value can be constructed. Estimating bias and precision can be problematic because rarely are both estimated in the same study, precision studies are usually quite small, and bias cannot be measured when there is no reference standard. Methods: A Monte Carlo simulation study was conducted to assess factors affecting nominal coverage of confidence intervals for a new patient's quantitative imaging biomarker measurement and for change in the quantitative imaging biomarker over time. Factors considered include sample size for estimating bias and precision, effect of fixed and non-proportional bias, clustered data, and absence of a reference standard. Results: Technical performance studies of a quantitative imaging biomarker should include at least 35 test-retest subjects to estimate precision and 65 cases to estimate bias. Confidence intervals for a new patient's quantitative imaging biomarker measurement constructed under the no-bias assumption provide nominal coverage as long as the fixed bias is <12%. For confidence intervals of the true change over time, linearity must hold and the slope of the regression of the measurements vs. true values should be between 0.95 and 1.05. The regression slope can be assessed adequately as long as fixed multiples of the measurand can be generated. Even small non-proportional bias greatly reduces confidence interval coverage. Multiple lesions in the same subject can be treated as independent when estimating precision. Conclusion: Technical performance studies of quantitative imaging biomarkers require moderate sample sizes in order to provide robust estimates of bias and precision for constructing confidence intervals for new patients. Assumptions of linearity and non-proportional bias should be assessed thoroughly.
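The following toy Monte Carlo does not reproduce the paper's confidence-interval construction or its <12% threshold (which depends on the assumed precision profile); it only illustrates the mechanism the study examines: a bias ignored by the interval shifts the measurements and erodes coverage. All numbers are hypothetical:

```python
import random
from statistics import NormalDist

Z = NormalDist().inv_cdf(0.975)  # 95% two-sided critical value

def coverage(bias_pct, wcv_pct, true_value=100.0, reps=20000, seed=0):
    # empirical coverage of a 95% CI built under the no-bias assumption,
    # when the instrument has a fixed percent bias and within-subject CV
    rng = random.Random(seed)
    sd = true_value * wcv_pct / 100.0
    hits = 0
    for _ in range(reps):
        y = rng.gauss(true_value * (1 + bias_pct / 100.0), sd)
        lo, hi = y - Z * sd, y + Z * sd   # interval ignores the bias
        hits += lo <= true_value <= hi
    return hits / reps

print(round(coverage(0, 10), 3))    # no bias: close to the nominal 0.95
print(round(coverage(10, 10), 3))   # bias equal to one precision sd: degraded
print(round(coverage(30, 10), 3))   # large bias: coverage collapses
```

In this toy the relevant quantity is the bias measured in units of the precision standard deviation; the paper's more favorable tolerance for small fixed bias reflects its specific CI construction and precision assumptions.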

  4. Validation of two-dimensional and three-dimensional measurements of subpleural alveolar size parameters by optical coherence tomography

    PubMed Central

    Warger, William C.; Hostens, Jeroen; Namati, Eman; Birngruber, Reginald; Bouma, Brett E.; Tearney, Guillermo J.

    2012-01-01

Optical coherence tomography (OCT) has been increasingly used for imaging pulmonary alveoli. Only a few studies, however, have quantified individual alveolar areas, and the validity of alveolar volumes represented within OCT images has not been shown. To validate quantitative measurements of alveoli from OCT images, we compared the cross-sectional area, perimeter, volume, and surface area of matched subpleural alveoli from microcomputed tomography (micro-CT) and OCT images of fixed air-filled swine samples. The relative change in size between different alveoli was extremely well correlated (r>0.9, P<0.0001), but OCT images underestimated absolute sizes compared to micro-CT by 27% (area), 7% (perimeter), 46% (volume), and 25% (surface area) on average. We hypothesized that the differences resulted from refraction at the tissue–air interfaces and developed a ray-tracing model that approximates the reconstructed alveolar size within OCT images. Using this model and OCT measurements of the refractive index for lung tissue (1.41 for fresh, 1.53 for fixed), we derived equations to obtain absolute size measurements of superellipse and circular alveoli with the use of predictive correction factors. These methods and results should enable the quantification of alveolar sizes from OCT images in vivo. PMID:23235834

  5. Validation of two-dimensional and three-dimensional measurements of subpleural alveolar size parameters by optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Unglert, Carolin I.; Warger, William C.; Hostens, Jeroen; Namati, Eman; Birngruber, Reginald; Bouma, Brett E.; Tearney, Guillermo J.

    2012-12-01

    Optical coherence tomography (OCT) has been increasingly used for imaging pulmonary alveoli. Only a few studies, however, have quantified individual alveolar areas, and the validity of alveolar volumes represented within OCT images has not been shown. To validate quantitative measurements of alveoli from OCT images, we compared the cross-sectional area, perimeter, volume, and surface area of matched subpleural alveoli from microcomputed tomography (micro-CT) and OCT images of fixed air-filled swine samples. The relative change in size between different alveoli was extremely well correlated (r>0.9, P<0.0001), but OCT images underestimated absolute sizes compared to micro-CT by 27% (area), 7% (perimeter), 46% (volume), and 25% (surface area) on average. We hypothesized that the differences resulted from refraction at the tissue-air interfaces and developed a ray-tracing model that approximates the reconstructed alveolar size within OCT images. Using this model and OCT measurements of the refractive index for lung tissue (1.41 for fresh, 1.53 for fixed), we derived equations to obtain absolute size measurements of superellipse and circular alveoli with the use of predictive correction factors. These methods and results should enable the quantification of alveolar sizes from OCT images in vivo.

  6. Improved Time-Lapsed Angular Scattering Microscopy of Single Cells

    NASA Astrophysics Data System (ADS)

    Cannaday, Ashley E.

By measuring angular scattering patterns from biological samples and fitting them with a Mie theory model, one can estimate the organelle size distribution within many cells. Quantitative organelle sizing of ensembles of cells using this method has been well established. Our goal is to develop the methodology to extend this approach to the single cell level, measuring the angular scattering at multiple time points and estimating the non-nuclear organelle size distribution parameters. The diameters of individual organelle-size beads were successfully extracted using scattering measurements with a minimum deflection angle of 20 degrees. However, the accuracy of size estimates can be limited by the angular range detected. In particular, simulations by our group suggest that, for cell organelle populations with a broader size distribution, the accuracy of size prediction improves substantially if the minimum detection angle is 15 degrees or less. The system was therefore modified to collect scattering angles down to 10 degrees. To confirm experimentally that size predictions will become more stable when lower scattering angles are detected, initial validations were performed on individual polystyrene beads ranging in diameter from 1 to 5 microns. We found that the lower minimum angle enabled the width of this delta-function size distribution to be predicted more accurately. Scattering patterns were then acquired and analyzed from single mouse squamous cell carcinoma cells at multiple time points. The scattering patterns exhibit angular dependencies that look unlike those of any single sphere size, but are well-fit by a broad distribution of sizes, as expected. To determine the fluctuation level in the estimated size distribution due to measurement imperfections alone, formaldehyde-fixed cells were measured. Subsequent measurements on live (non-fixed) cells revealed an order of magnitude greater fluctuation in the estimated sizes compared to fixed cells. With our improved and better-understood approach to single cell angular scattering, we are now capable of reliably detecting changes in organelle size predictions due to biological causes above our measurement error of 20 nm, which enables us to apply our system to future studies of various single cell biological processes.

  7. Comparison and Field Validation of Binomial Sampling Plans for Oligonychus perseae (Acari: Tetranychidae) on Hass Avocado in Southern California.

    PubMed

    Lara, Jesus R; Hoddle, Mark S

    2015-08-01

Oligonychus perseae Tuttle, Baker, & Abatiello is a foliar pest of 'Hass' avocados [Persea americana Miller (Lauraceae)]. The recommended action threshold is 50-100 motile mites per leaf, but this count range and other ecological factors associated with O. perseae infestations limit the application of enumerative sampling plans in the field. Consequently, a comprehensive modeling approach was implemented to compare the practical application of various binomial sampling models for decision-making for O. perseae in California. An initial set of sequential binomial sampling models was developed using three mean-proportion modeling techniques (i.e., Taylor's power law, maximum likelihood, and an empirical model) in combination with two leaf-infestation tally thresholds of either one or two mites. Model performance was evaluated using a robust mite count database consisting of >20,000 Hass avocado leaves infested with varying densities of O. perseae and collected from multiple locations. Operating characteristic and average sample number results for the sequential binomial models were used as the basis to develop and validate a standardized fixed-size binomial sampling model with guidelines on sample tree and leaf selection within blocks of avocado trees. This final validated model requires a sampling cost of 30 leaves and takes into account the spatial dynamics of O. perseae to make reliable mite density classifications for a 50-mite action threshold. Recommendations for implementing this fixed-size binomial sampling plan to assess densities of O. perseae in commercial California avocado orchards are discussed. © The Authors 2015. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved.
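The classification step of such a fixed-size binomial plan can be sketched as follows. This is a minimal illustration only: the tally threshold and critical proportion below are hypothetical placeholders, not the validated values from the study.

```python
# Sketch of the decision rule in a fixed-size binomial sampling plan:
# the density class is inferred from the proportion of sampled leaves
# whose mite count meets a tally threshold, rather than from full
# enumerative counts. Threshold values here are hypothetical.
def classify(leaf_counts, tally_threshold=2, critical_proportion=0.75):
    infested = sum(1 for c in leaf_counts if c >= tally_threshold)
    p = infested / len(leaf_counts)
    decision = "above threshold" if p >= critical_proportion else "below threshold"
    return decision, p

# A fixed sample of 10 leaves (the validated plan in the paper uses 30).
print(classify([0, 5, 3, 0, 8, 2, 1, 4, 6, 0]))
```

Because only presence/absence above the tally threshold is recorded per leaf, the field cost per sample is far lower than counting 50-100 motiles on every leaf.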

  8. Accounting for missing data in the estimation of contemporary genetic effective population size (Ne).

    PubMed

    Peel, D; Waples, R S; Macbeth, G M; Do, C; Ovenden, J R

    2013-03-01

Theoretical models are often applied to population genetic data sets without fully considering the effect of missing data. Researchers can deal with missing data by removing individuals that have failed to yield genotypes and/or by removing loci that have failed to yield allelic determinations, but despite their best efforts, most data sets still contain some missing data. As a consequence, realized sample size differs among loci, and this poses a problem for unbiased methods that must explicitly account for random sampling error. One commonly used solution for the calculation of contemporary effective population size (Ne) is to calculate the effective sample size as an unweighted mean or harmonic mean across loci. This is not ideal because it fails to account for the fact that loci with different numbers of alleles have different information content. Here we consider this problem for genetic estimators of contemporary effective population size (Ne). To evaluate bias and precision of several statistical approaches for dealing with missing data, we simulated populations with known Ne and various degrees of missing data. Across all scenarios, one method of correcting for missing data (fixed-inverse variance-weighted harmonic mean) consistently performed the best for both single-sample and two-sample (temporal) methods of estimating Ne and outperformed some methods currently in widespread use. The approach adopted here may be a starting point to adjust other population genetics methods that include per-locus sample size components. © 2012 Blackwell Publishing Ltd.
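The contrast between an unweighted mean and a harmonic mean of per-locus sample sizes can be sketched as follows. The sample sizes are made-up illustrative numbers, and the sketch deliberately omits the fixed-inverse-variance weighting by locus information content that the authors recommend.

```python
# Per-locus realized sample sizes after removing missing genotypes
# (hypothetical numbers for illustration).
sample_sizes = [50, 48, 30, 50, 45]

# Unweighted arithmetic mean across loci.
arith_mean = sum(sample_sizes) / len(sample_sizes)

# Harmonic mean: dominated by the smallest sample sizes, which is why
# it is often preferred when sampling variance scales as 1/n.
harm_mean = len(sample_sizes) / sum(1.0 / n for n in sample_sizes)

print(round(arith_mean, 2))  # 44.6
print(round(harm_mean, 2))   # 42.96
```

The harmonic mean is pulled down by the poorly genotyped locus (n = 30), so it gives a more conservative effective sample size than the arithmetic mean.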

  9. Collecting a better water-quality sample: Reducing vertical stratification bias in open and closed channels

    USGS Publications Warehouse

    Selbig, William R.

    2017-01-01

Collection of water-quality samples that accurately characterize average particle concentrations and distributions in channels can be complicated by large sources of variability. The U.S. Geological Survey (USGS) developed a fully automated Depth-Integrated Sample Arm (DISA) as a way to reduce bias and improve accuracy in water-quality concentration data. The DISA was designed to integrate with existing autosampler configurations commonly used for the collection of water-quality samples in vertical profile, thereby providing a better representation of average suspended sediment and sediment-associated pollutant concentrations and distributions than traditional fixed-point samplers. In controlled laboratory experiments, known concentrations of suspended sediment ranging from 596 to 1,189 mg/L were injected into a 3-foot-diameter closed channel (circular pipe) with regulated flows ranging from 1.4 to 27.8 ft³/s. Median suspended sediment concentrations in water-quality samples collected using the DISA were within 7 percent of the known, injected value, compared to 96 percent for traditional fixed-point samplers. Field evaluation of this technology in open channel fluvial systems showed median differences between paired DISA and fixed-point samples to be within 3 percent. The range of particle size measured in the open channel was generally that of clay and silt. Differences between the concentration and distribution measured by the two sampler configurations could potentially be much larger in open channels that transport larger particles, such as sand.

  10. The Relationship between National-Level Carbon Dioxide Emissions and Population Size: An Assessment of Regional and Temporal Variation, 1960–2005

    PubMed Central

    Jorgenson, Andrew K.; Clark, Brett

    2013-01-01

This study examines the regional and temporal differences in the statistical relationship between national-level carbon dioxide emissions and national-level population size. The authors analyze panel data from 1960 to 2005 for a diverse sample of nations, and employ descriptive statistics and rigorous panel regression modeling techniques. Initial descriptive analyses indicate that all regions experienced overall increases in carbon emissions and population size during the 45-year period of investigation, but with notable differences. For carbon emissions, the sample of countries in Asia experienced the largest percent increase, followed by countries in Latin America, Africa, and lastly the sample of relatively affluent countries in Europe, North America, and Oceania combined. For population size, the sample of countries in Africa experienced the largest percent increase, followed by countries in Latin America, Asia, and the combined sample of countries in Europe, North America, and Oceania. Findings for two-way fixed effects panel regression elasticity models of national-level carbon emissions indicate that the estimated elasticity coefficient for population size is much smaller for nations in Africa than for nations in other regions of the world. Regarding potential temporal changes, from 1960 to 2005 the estimated elasticity coefficient for population size decreased by 25% for the sample of African countries, 14% for the sample of Asian countries, and 6.5% for the sample of Latin American countries, but remained the same in size for the sample of countries in Europe, North America, and Oceania. Overall, while population size continues to be the primary driver of total national-level anthropogenic carbon dioxide emissions, the findings of this study highlight the need for future research and policies to recognize that the actual impacts of population size on national-level carbon emissions differ across both time and region. PMID:23437323

  11. Experimental light scattering by small particles: first results with a novel Mueller matrix scatterometer

    NASA Astrophysics Data System (ADS)

    Penttilä, Antti; Maconi, Göran; Kassamakov, Ivan; Gritsevich, Maria; Helander, Petteri; Puranen, Tuomas; Hæggström, Edward; Muinonen, Karri

    2017-06-01

    We describe a setup for measuring the full angular Mueller matrix profile of a single mm- to μm-sized sample, and verify the experimental results against a theoretical model. The scatterometer has a fixed or levitating sample, illuminated with a laser beam whose full polarization state is controlled. The scattered light is detected with a combination of wave retarder, linear polarizer, and photomultiplier tube that is attached to a rotational stage. The first results are reported.

  12. Nanoscale imaging of whole cells using a liquid enclosure and a scanning transmission electron microscope.

    PubMed

    Peckys, Diana B; Veith, Gabriel M; Joy, David C; de Jonge, Niels

    2009-12-14

Nanoscale imaging techniques are needed to investigate cellular function at the level of individual proteins and to study the interaction of nanomaterials with biological systems. We imaged whole fixed cells in liquid state with a scanning transmission electron microscope (STEM) using a micrometer-sized liquid enclosure with electron transparent windows providing a wet specimen environment. Wet-STEM images were obtained of fixed E. coli bacteria labeled with gold nanoparticles attached to surface membrane proteins. Mammalian cells (COS7) were incubated with gold-tagged epidermal growth factor and fixed. STEM imaging of these cells resulted in a resolution of 3 nm for the gold nanoparticles. The wet-STEM method has several advantages over conventional imaging techniques. First and most important is the capability to image whole fixed cells in a wet environment with nanometer resolution, which can be used, e.g., to map individual protein distributions in/on whole cells. Second, the sample preparation is compatible with that used for fluorescence microscopy on fixed cells in experiments involving nanoparticles. Third, the system is rather simple and involves only minimal new equipment in an electron microscopy (EM) laboratory.

  13. High spatial variation in population size and symbiotic performance of Rhizobium leguminosarum bv. trifolii with white clover in New Zealand pasture soils.

    PubMed

    Wakelin, Steven; Tillard, Guyléne; van Ham, Robert; Ballard, Ross; Farquharson, Elizabeth; Gerard, Emily; Geurts, Rene; Brown, Matthew; Ridgway, Hayley; O'Callaghan, Maureen

    2018-01-01

Biological nitrogen fixation through the legume-rhizobia symbiosis is important for sustainable pastoral production. In New Zealand, the most widespread and valuable symbiosis occurs between white clover (Trifolium repens L.) and Rhizobium leguminosarum bv. trifolii (Rlt). As variation in the population size (determined by most probable number assays; MPN) and effectiveness of N-fixation (symbiotic potential; SP) of Rlt in soils may affect white clover performance, the extent of variation in these properties was examined at three different spatial scales: (1) from 26 sites across New Zealand, (2) at farm-wide scale, and (3) within single fields. Overall, Rlt populations ranged from 95 to >1 × 10⁸ per g soil, with variation similar at the three spatial scales assessed. For almost all samples, there was no relationship between rhizobia population size and the ability of the population to fix N during legume symbiosis (SP). When compared with the commercial inoculant strain, the SP of soils ranged from 14 to 143% efficacy. The N-fixing ability of rhizobia populations varied more between samples collected from within a single hill country field (0.8 ha) than between 26 samples collected from diverse locations across New Zealand. Correlations between SP and calcium and aluminium content were found at all sites, except within a dairy farm field. Given the general lack of association between SP and MPN, and the high spatial variability of SP at single-field scale, provision of advice for treating legume seed with rhizobia based on field-average MPN counts needs to be carefully considered.

  14. High spatial variation in population size and symbiotic performance of Rhizobium leguminosarum bv. trifolii with white clover in New Zealand pasture soils

    PubMed Central

    Tillard, Guyléne; van Ham, Robert; Ballard, Ross; Farquharson, Elizabeth; Gerard, Emily; Geurts, Rene; Brown, Matthew; Ridgway, Hayley; O’Callaghan, Maureen

    2018-01-01

Biological nitrogen fixation through the legume-rhizobia symbiosis is important for sustainable pastoral production. In New Zealand, the most widespread and valuable symbiosis occurs between white clover (Trifolium repens L.) and Rhizobium leguminosarum bv. trifolii (Rlt). As variation in the population size (determined by most probable number assays; MPN) and effectiveness of N-fixation (symbiotic potential; SP) of Rlt in soils may affect white clover performance, the extent of variation in these properties was examined at three different spatial scales: (1) from 26 sites across New Zealand, (2) at farm-wide scale, and (3) within single fields. Overall, Rlt populations ranged from 95 to >1 × 10⁸ per g soil, with variation similar at the three spatial scales assessed. For almost all samples, there was no relationship between rhizobia population size and the ability of the population to fix N during legume symbiosis (SP). When compared with the commercial inoculant strain, the SP of soils ranged from 14 to 143% efficacy. The N-fixing ability of rhizobia populations varied more between samples collected from within a single hill country field (0.8 ha) than between 26 samples collected from diverse locations across New Zealand. Correlations between SP and calcium and aluminium content were found at all sites, except within a dairy farm field. Given the general lack of association between SP and MPN, and the high spatial variability of SP at single-field scale, provision of advice for treating legume seed with rhizobia based on field-average MPN counts needs to be carefully considered. PMID:29489845

  15. Comparison of Two Methods of RNA Extraction from Formalin-Fixed Paraffin-Embedded Tissue Specimens

    PubMed Central

    Gouveia, Gisele Rodrigues; Ferreira, Suzete Cleusa; Ferreira, Jerenice Esdras; Siqueira, Sheila Aparecida Coelho; Pereira, Juliana

    2014-01-01

    The present study aimed to compare two different methods of extracting RNA from formalin-fixed paraffin-embedded (FFPE) specimens of diffuse large B-cell lymphoma (DLBCL). We further aimed to identify possible influences of variables—such as tissue size, duration of paraffin block storage, fixative type, primers used for cDNA synthesis, and endogenous genes tested—on the success of amplification from the samples. Both tested protocols used the same commercial kit for RNA extraction (the RecoverAll Total Nucleic Acid Isolation Optimized for FFPE Samples from Ambion). However, the second protocol included an additional step of washing with saline buffer just after sample rehydration. Following each protocol, we compared the RNA amount and purity and the amplification success as evaluated by standard PCR and real-time PCR. The results revealed that the extra washing step added to the RNA extraction process resulted in significantly improved RNA quantity and quality and improved success of amplification from paraffin-embedded specimens. PMID:25105117

  16. [Carbon sequestration in soil particle-sized fractions during reversion of desertification at Mu Us Sand land.

    PubMed

    Ma, Jian Ye; Tong, Xiao Gang; Li, Zhan Bin; Fu, Guang Jun; Li, Jiao; Hasier

    2016-11-18

To investigate carbon sequestration in soil particle-size fractions during the reversion of desertification in the Mu Us Sand Land, soil samples were collected from quicksand land and from semi-fixed and fixed sand lands established with shrubs (20-55 years old) and arbors (20-50 years old) in the sand-control region of Yulin, northern Shaanxi Province. The dynamics and sequestration rate of soil organic carbon (SOC) associated with the sand, silt and clay fractions were measured by a physical fractionation method. The results indicated that, compared with the quicksand area, the carbon content of total SOC and of all particle-size fractions at both sand-fixing forest lands showed a significant increasing trend, with the maximum carbon content observed in the top soil layer. From quicksand to fixed sand land with 55-year-old shrubs and 50-year-old arbors, the annual sequestration rate of carbon in the 0-5 cm soil depth was the same for silt, at 0.05 Mg·hm⁻²·a⁻¹. The rate of carbon sequestration in sand was 0.05 and 0.08 Mg·hm⁻²·a⁻¹, and in clay 0.02 and 0.03 Mg·hm⁻²·a⁻¹, at the shrub and arbor lands, respectively. Across all particle fractions, the rate of carbon sequestration in the 0-20 cm soil layer averaged 2.1 times that of the 0-5 cm layer. At these annual rates, the carbon stocks in sand, silt and clay at the two fixed sand lands increased by 6.7, 18.1 and 4.4 times after 50-55 years of reversion from quicksand to fixed sand. In addition, the average contributions of the different particle fractions to the accumulation of total SOC in the 0-20 cm soil were, in order, silt carbon (39.7%) ≈ sand carbon (34.6%) > clay carbon (25.6%). Overall, the soil particle-size fractions had great carbon sequestration potential during the reversion of desertification in the Mu Us Sand Land, with silt and sand being the main fractions for carbon sequestration at both fixed sand lands.

  17. Three Dimensional Imaging of Paraffin Embedded Human Lung Tissue Samples by Micro-Computed Tomography

    PubMed Central

    Scott, Anna E.; Vasilescu, Dragos M.; Seal, Katherine A. D.; Keyes, Samuel D.; Mavrogordato, Mark N.; Hogg, James C.; Sinclair, Ian; Warner, Jane A.; Hackett, Tillie-Louise; Lackie, Peter M.

    2015-01-01

Background Understanding the three-dimensional (3-D) micro-architecture of lung tissue can provide insights into the pathology of lung disease. Micro computed tomography (µCT) has previously been used to elucidate lung 3D histology and morphometry in fixed samples that have been stained with contrast agents or air inflated and dried. However, non-destructive microstructural 3D imaging of formalin-fixed paraffin embedded (FFPE) tissues would facilitate retrospective analysis of extensive archives of FFPE lung samples with linked clinical data. Methods FFPE human lung tissue samples (n = 4) were scanned using a Nikon metrology µCT scanner. Semi-automatic techniques were used to segment the 3D structure of airways and blood vessels. Airspace size (mean linear intercept, Lm) was measured on µCT images and on matched histological sections from the same FFPE samples imaged by light microscopy to validate µCT imaging. Results The µCT imaging protocol provided contrast between tissue and paraffin in FFPE samples (15 mm × 7 mm). Resolution (voxel size 6.7 µm) in the reconstructed images was sufficient for semi-automatic image segmentation of airways and blood vessels as well as quantitative airspace analysis. The scans were also used to scout for regions of interest, enabling time-efficient preparation of conventional histological sections. The Lm measurements from µCT images were not significantly different from those obtained from matched histological sections. Conclusion We demonstrated how non-destructive imaging of routinely prepared FFPE samples by laboratory µCT can be used to visualize and assess the 3D morphology of the lung, including by morphometric analysis. PMID:26030902
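A mean linear intercept (Lm) estimate of the kind used to validate the µCT images can be sketched on a binary tissue mask as follows. The mask values are a toy example, and, unlike standard stereological practice, airspace chords touching the image border are kept for simplicity; the 6.7 µm pixel size matches the reconstructed voxel size reported above.

```python
# Minimal sketch of a mean-linear-intercept (Lm) estimate on a binary
# tissue mask (1 = tissue, 0 = airspace); toy values, not study data.
mask = [
    [1, 0, 0, 1, 0, 0, 0, 1],
    [1, 0, 0, 0, 1, 0, 0, 1],
]
pixel_size_um = 6.7  # voxel edge length reported for the uCT scans

chords = []  # airspace run lengths along horizontal test lines
for row in mask:
    run = 0
    for v in row:
        if v == 0:
            run += 1
        elif run:
            chords.append(run)
            run = 0
    if run:  # run touching the image border (kept here for simplicity)
        chords.append(run)

# Lm = mean airspace chord length, converted to micrometres.
lm_um = pixel_size_um * sum(chords) / len(chords)
print(lm_um)
```

Larger airspaces (e.g. emphysematous tissue) yield longer chords and therefore a larger Lm, which is why the metric is a standard airspace-size readout.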

  18. Status and trends of the rainbow trout population in the Lees Ferry reach of the Colorado River downstream from Glen Canyon Dam, Arizona, 1991–2009

    USGS Publications Warehouse

    Makinster, Andrew S.; Persons, William R.; Avery, Luke A.

    2011-01-01

    The Lees Ferry reach of the Colorado River, a 25-kilometer segment of river located immediately downstream from Glen Canyon Dam, has contained a nonnative rainbow trout (Oncorhynchus mykiss) sport fishery since it was first stocked in 1964. The fishery has evolved over time in response to changes in dam operations and fish management. Long-term monitoring of the rainbow trout population downstream of Glen Canyon Dam is an essential component of the Glen Canyon Dam Adaptive Management Program. A standardized sampling design was implemented in 1991 and has changed several times in response to independent, external scientific-review recommendations and budget constraints. Population metrics (catch per unit effort, proportional stock density, and relative condition) were estimated from 1991 to 2009 by combining data collected at fixed sampling sites during this time period and at random sampling sites from 2002 to 2009. The validity of combining population metrics for data collected at fixed and random sites was confirmed by a one-way analysis of variance by fish-length class size. Analysis of the rainbow trout population metrics from 1991 to 2009 showed that the abundance of rainbow trout increased from 1991 to 1997, following implementation of a more steady flow regime, but declined from about 2000 to 2007. Abundance in 2008 and 2009 was high compared to previous years, which was likely the result of increased early survival caused by improved habitat conditions following the 2008 high-flow experiment at Glen Canyon Dam. Proportional stock density declined between 1991 and 2006, reflecting increased natural reproduction and large numbers of small fish in samples. Since 2001, the proportional stock density has been relatively stable. Relative condition varied with size class of rainbow trout but has been relatively stable since 1991 for fish smaller than 152 millimeters (mm), except for a substantial decrease in 2009. 
Relative condition was more variable for larger size classes, and substantial decreases were observed for the 152-304-mm size class in 2009 and 305-405-mm size class in 2008 that persisted into 2009.

  19. Defining space use and movements of Canada lynx with global positioning system telemetry

    USGS Publications Warehouse

    Burdett, C.L.; Moen, R.A.; Niemi, G.J.; Mech, L.D.

    2007-01-01

Space use and movements of Canada lynx (Lynx canadensis) are difficult to study with very-high-frequency radiocollars. We deployed global positioning system (GPS) collars on 11 lynx in Minnesota to study their seasonal space-use patterns. We estimated home ranges with minimum-convex-polygon and fixed-kernel methods and estimated core areas with area/probability curves. Fixed-kernel home ranges of males (range = 29-522 km2) were significantly larger than those of females (range = 5-95 km2) annually and during the denning season. Some male lynx increased movements during March, the month most influenced by breeding activity. Lynx core areas were predicted by the 60% fixed-kernel isopleth in most seasons. The mean core-area size of males (range = 6-190 km2) was significantly larger than that of females (range = 1-19 km2) annually and during denning. Most female lynx were reproductive animals with reduced movements, whereas males often ranged widely between Minnesota and Ontario. Sensitivity analyses examining the effect of location frequency on home-range size suggest that the home-range sizes of breeding females are less sensitive to sample size than those of males. Longer periods between locations decreased home-range and core-area overlap relative to the home range estimated from daily locations. GPS collars improve our understanding of space use and movements by lynx by increasing the spatial extent and temporal frequency of monitoring and allowing home ranges to be estimated over short periods that are relevant to life-history characteristics. © 2007 American Society of Mammalogists.

  20. Do fixed-dose combination pills or unit-of-use packaging improve adherence? A systematic review.

    PubMed Central

    Connor, Jennie; Rafter, Natasha; Rodgers, Anthony

    2004-01-01

Adequate adherence to medication regimens is central to the successful treatment of communicable and noncommunicable disease. Fixed-dose combination pills and unit-of-use packaging are therapy-related interventions that are designed to simplify medication regimens and so potentially improve adherence. We conducted a systematic review of relevant randomized trials in order to quantify the effects of fixed-dose combination pills and unit-of-use packaging, compared with medications as usually presented, in terms of adherence to treatment and improved outcomes. Only 15 trials met the inclusion criteria; fixed-dose combination pills were investigated in three of these, while unit-of-use packaging was studied in 12 trials. The trials involved treatments for communicable diseases (n = 5), blood pressure lowering medications (n = 3), diabetic patients (n = 1), vitamin supplementation (n = 1) and management of multiple medications by the elderly (n = 5). The results suggested trends towards improved adherence and/or clinical outcomes in all but three of the trials; this reached statistical significance in four out of seven trials reporting a clinically relevant or intermediate end-point, and in seven out of thirteen trials reporting medication adherence. Measures of outcome were, however, heterogeneous, and interpretation was further limited by methodological issues, particularly small sample size, short duration and loss to follow-up. Overall, the evidence suggests that fixed-dose combination pills and unit-of-use packaging are likely to improve adherence in a range of settings, but the limitations of the available evidence mean that uncertainty remains about the size of these benefits. PMID:15654408

  1. Dispersion models and sampling of cacao mirid bug Sahlbergella singularis (Hemiptera: Miridae) on Theobroma Cacao in southern Cameroon.

    PubMed

    Bisseleua, D H B; Vidal, Stefan

    2011-02-01

The spatio-temporal distribution of Sahlbergella singularis Haglund, a major pest of cacao trees (Theobroma cacao) (Malvaceae), was studied for 2 yr in traditional cacao forest gardens in the humid forest area of southern Cameroon. The first objective was to analyze the dispersion of this insect on cacao trees. The second objective was to develop sampling plans based on fixed levels of precision for estimating S. singularis populations. The following models were used to analyze the data: Taylor's power law, Iwao's patchiness regression, the Nachman model, and the negative binomial distribution. Our results document that Taylor's power law was a better fit for the data than the Iwao and Nachman models. Taylor's b and Iwao's β were both significantly >1, indicating that S. singularis aggregated on specific trees. This result was further supported by the calculated common k of 1.75444. Iwao's α was significantly <0, indicating that the basic distribution component of S. singularis was the individual insect. Comparison of the negative binomial (NBD) and Nachman models indicated that the NBD model was appropriate for studying S. singularis distribution. Optimal sample sizes for fixed precision levels of 0.10, 0.15, and 0.25 were estimated with Taylor's regression coefficients. Required sample sizes increased dramatically with increasing levels of precision. This is the first study on S. singularis dispersion in cacao plantations. The sampling plans presented here should be a tool for research on population dynamics and pest management decisions of mirid bugs on cacao. © 2011 Entomological Society of America
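The fixed-precision sample-size calculation from Taylor's power law (s² = a·mᵇ) can be sketched as below, using the widely used Green-type formula n = a·m^(b−2)/D², where D is the precision level (SE/mean). The coefficients a and b here are hypothetical placeholders, not the values fitted in the study.

```python
# Required sample size for a fixed precision level D (= SE/mean),
# derived from Taylor's power law s^2 = a * m^b.
# The coefficients A and B are hypothetical placeholders.
A, B = 2.0, 1.5  # B > 1 indicates an aggregated distribution

def required_sample_size(mean_density, precision):
    return A * mean_density ** (B - 2) / precision ** 2

# Sample size rises steeply as the precision level tightens.
for d in (0.25, 0.15, 0.10):
    print(d, round(required_sample_size(5.0, d), 1))
```

Because n scales as 1/D², halving D roughly quadruples the required sample size, which is why the abstract notes that required sample sizes "increased dramatically" with increasing precision.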

  2. Comparison of point counts and territory mapping for detecting effects of forest management on songbirds

    USGS Publications Warehouse

    Newell, Felicity L.; Sheehan, James; Wood, Petra Bohall; Rodewald, Amanda D.; Buehler, David A.; Keyser, Patrick D.; Larkin, Jeffrey L.; Beachy, Tiffany A.; Bakermans, Marja H.; Boves, Than J.; Evans, Andrea; George, Gregory A.; McDermott, Molly E.; Perkins, Kelly A.; White, Matthew; Wigley, T. Bently

    2013-01-01

    Point counts are commonly used to assess changes in bird abundance, including analytical approaches such as distance sampling that estimate density. Point-count methods have come under increasing scrutiny because effects of detection probability and field error are difficult to quantify. For seven forest songbirds, we compared fixed-radii counts (50 m and 100 m) and density estimates obtained from distance sampling to known numbers of birds determined by territory mapping. We applied point-count analytic approaches to a typical forest management question and compared results to those obtained by territory mapping. We used a before–after control impact (BACI) analysis with a data set collected across seven study areas in the central Appalachians from 2006 to 2010. Using a 50-m fixed radius, variance in error was at least 1.5 times that of the other methods, whereas a 100-m fixed radius underestimated actual density by >3 territories per 10 ha for the most abundant species. Distance sampling improved accuracy and precision compared to fixed-radius counts, although estimates were affected by birds counted outside 10-ha units. In the BACI analysis, territory mapping detected an overall treatment effect for five of the seven species, and effects were generally consistent each year. In contrast, all point-count methods failed to detect two treatment effects due to variance and error in annual estimates. Overall, our results highlight the need for adequate sample sizes to reduce variance, and skilled observers to reduce the level of error in point-count data. Ultimately, the advantages and disadvantages of different survey methods should be considered in the context of overall study design and objectives, allowing for trade-offs among effort, accuracy, and power to detect treatment effects.

  3. Modeling motor vehicle crashes using Poisson-gamma models: examining the effects of low sample mean values and small sample size on the estimation of the fixed dispersion parameter.

    PubMed

    Lord, Dominique

    2006-07-01

There has been considerable research conducted on the development of statistical models for predicting crashes on highway facilities. Despite numerous advancements made for improving the estimation tools of statistical models, the most common probabilistic structure used for modeling motor vehicle crashes remains the traditional Poisson and Poisson-gamma (or Negative Binomial) distribution; when crash data exhibit over-dispersion, the Poisson-gamma model is usually the model of choice most favored by transportation safety modelers. Crash data collected for safety studies often have the unusual attribute of being characterized by low sample mean values. Studies have shown that the goodness-of-fit of statistical models produced from such datasets can be significantly affected. This issue has been defined as the "low mean problem" (LMP). Despite recent developments on methods to circumvent the LMP and test the goodness-of-fit of models developed using such datasets, no work has so far examined how the LMP affects the fixed dispersion parameter of Poisson-gamma models used for modeling motor vehicle crashes. The dispersion parameter plays an important role in many types of safety studies and should, therefore, be reliably estimated. The primary objective of this research project was to verify whether the LMP affects the estimation of the dispersion parameter and, if so, to determine the magnitude of the problem. The secondary objective consisted of determining the effects of an unreliably estimated dispersion parameter on common analyses performed in highway safety studies. To accomplish the objectives of the study, a series of Poisson-gamma distributions were simulated using different values describing the mean, the dispersion parameter, and the sample size.
Three estimators commonly used by transportation safety modelers for estimating the dispersion parameter of Poisson-gamma models were evaluated: the method of moments, the weighted regression, and the maximum likelihood method. In an attempt to complement the outcome of the simulation study, Poisson-gamma models were fitted to crash data collected in Toronto, Ont. characterized by a low sample mean and small sample size. The study shows that a low sample mean combined with a small sample size can seriously affect the estimation of the dispersion parameter, no matter which estimator is used within the estimation process. The probability the dispersion parameter becomes unreliably estimated increases significantly as the sample mean and sample size decrease. Consequently, the results show that an unreliably estimated dispersion parameter can significantly undermine empirical Bayes (EB) estimates as well as the estimation of confidence intervals for the gamma mean and predicted response. The paper ends with recommendations about minimizing the likelihood of producing Poisson-gamma models with an unreliable dispersion parameter for modeling motor vehicle crashes.
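One of the three estimators compared, the method of moments, can be sketched as follows. The crash counts below are toy values, not the Toronto data. With Var = m + m²/k, the estimator is k̂ = m̄²/(s² − m̄), which is undefined whenever the sample variance does not exceed the mean; that is exactly the failure mode that becomes common at low sample means and small sample sizes.

```python
# Method-of-moments estimate of the Poisson-gamma (Negative Binomial)
# dispersion parameter k from Var = m + m^2/k.
# Toy counts, not the Toronto crash data used in the paper.
counts = [0, 1, 0, 2, 0, 0, 3, 1, 0, 4]

n = len(counts)
mean = sum(counts) / n
var = sum((c - mean) ** 2 for c in counts) / (n - 1)

# The estimator fails when var <= mean, which happens increasingly
# often with low sample means and small n: the instability the
# paper documents.
k_hat = mean ** 2 / (var - mean) if var > mean else float("inf")
print(mean, var, k_hat)
```

Small k̂ means heavy over-dispersion; an unreliable k̂ propagates directly into empirical Bayes weights and confidence intervals, which is the downstream concern the study examines.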

  4. Fluid inclusions in Martian samples: Clues to early crustal development and the hydrosphere

    NASA Technical Reports Server (NTRS)

    Brown, Philip E.

    1988-01-01

    Major questions about Mars that could be illuminated by examining fluid inclusions in Martian samples include: (1) the nature, extent and timing of the development (and decline) of the hydrosphere that existed on the planet; and (2) the evolution of the crust. Fluid inclusion analyses of appropriate samples could provide critical data for comparison with data derived from analogous terrestrial studies. For this type of study, sample handling and return requirements are unlikely to be as restrictive as those dictated by the needs of other investigators. The main constraint is that the samples not be subjected to excessively high temperatures. An aqueous fluid inclusion trapped at elevated pressure and temperature will commonly consist of liquid water and water vapor at room temperature. Heating (such as is done in the laboratory to fix P-V-T data for the inclusion) results in moderate pressure increases up to the liquid-vapor homogenization temperature, followed by a sharp increase in pressure with continued heating because the inclusion is effectively a fixed-volume system. This increased pressure can rupture the inclusion; precise limits depend on size, shape, and composition as well as the host material.

  5. Sparse feature learning for instrument identification: Effects of sampling and pooling methods.

    PubMed

    Han, Yoonchang; Lee, Subin; Nam, Juhan; Lee, Kyogu

    2016-05-01

    Feature learning for music applications has recently received considerable attention from many researchers. This paper reports on a sparse feature learning algorithm for musical instrument identification and, in particular, focuses on the effects of the frame sampling techniques used for dictionary learning and of the pooling methods used for feature aggregation. To this end, two frame sampling techniques are examined: fixed sampling and proportional random sampling. Furthermore, the effect of using onset frames is analyzed for both sampling methods. For summarization of the feature activations, a standard-deviation pooling method is used and compared with the commonly used max- and average-pooling techniques. Using more than 47 000 recordings of 24 instruments across various performers, playing styles, and dynamics, a number of tuning parameters are examined, including the analysis frame size, the dictionary size, and the type of frequency scaling, as well as the different sampling and pooling methods. The results show that the combination of proportional sampling and standard-deviation pooling achieves the best overall performance of 95.62%, while the optimal parameter set varies among the instrument classes.
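The three pooling strategies compared above are simple to state in code. This is a minimal illustration of max-, average-, and standard-deviation pooling over a frames-by-features activation matrix; the toy matrix is invented for demonstration and is not from the paper.

```python
import numpy as np

def pool(activations, method):
    """Aggregate frame-wise feature activations (frames x features)
    into a single clip-level feature vector."""
    if method == "max":
        return activations.max(axis=0)
    if method == "average":
        return activations.mean(axis=0)
    if method == "std":
        # standard-deviation pooling: captures temporal variability
        return activations.std(axis=0)
    raise ValueError(f"unknown pooling method: {method}")

# toy activations: feature 1 varies over time, feature 2 is constant
acts = np.array([[0.0, 1.0],
                 [2.0, 1.0],
                 [4.0, 1.0]])
print(pool(acts, "max"))      # [4. 1.]
print(pool(acts, "average"))  # [2. 1.]
print(pool(acts, "std"))      # feature 2 is constant, so its std is 0
```

Note that max- and average-pooling cannot distinguish a steadily active feature from a fluctuating one with the same mean, which is the temporal information standard-deviation pooling retains.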

  6. Relationship of follicle size and concentrations of estradiol among cows exhibiting or not exhibiting estrus during a fixed-time AI protocol

    USDA-ARS?s Scientific Manuscript database

    Cows exhibiting estrus near the time of fixed-time AI had greater pregnancy success than cows showing no estrus. The objective of this study was to determine the relationship between follicle size and peak estradiol concentration between cows that did or did not exhibit estrus during a fixed-time AI...

  7. Nanoscale Imaging of Whole Cells Using a Liquid Enclosure and a Scanning Transmission Electron Microscope

    PubMed Central

    Peckys, Diana B.; Veith, Gabriel M.; Joy, David C.; de Jonge, Niels

    2009-01-01

    Nanoscale imaging techniques are needed to investigate cellular function at the level of individual proteins and to study the interaction of nanomaterials with biological systems. We imaged whole fixed cells in liquid state with a scanning transmission electron microscope (STEM) using a micrometer-sized liquid enclosure with electron transparent windows providing a wet specimen environment. Wet-STEM images were obtained of fixed E. coli bacteria labeled with gold nanoparticles attached to surface membrane proteins. Mammalian cells (COS7) were incubated with gold-tagged epidermal growth factor and fixed. STEM imaging of these cells resulted in a resolution of 3 nm for the gold nanoparticles. The wet-STEM method has several advantages over conventional imaging techniques. Most important is the capability to image whole fixed cells in a wet environment with nanometer resolution, which can be used, e.g., to map individual protein distributions in/on whole cells. The sample preparation is compatible with that used for fluorescent microscopy on fixed cells for experiments involving nanoparticles. Thirdly, the system is rather simple and involves only minimal new equipment in an electron microscopy (EM) laboratory. PMID:20020038

  8. Sample size and classification error for Bayesian change-point models with unlabelled sub-groups and incomplete follow-up.

    PubMed

    White, Simon R; Muniz-Terrera, Graciela; Matthews, Fiona E

    2018-05-01

    Many medical (and ecological) processes involve a change of shape, whereby one trajectory changes into another at a specific time point. There has been little investigation into the study design needed to investigate these models. We consider the class of fixed-effect change-point models with an underlying shape comprising two joined linear segments, also known as broken-stick models. We extend this model to include two sub-groups with different trajectories at the change-point, a change and a no-change class, and also include a missingness model to account for individuals with incomplete follow-up. Through a simulation study, we consider the relationship of sample size to the estimates of the underlying shape, the existence of a change-point, and the classification error of sub-group labels. We use a Bayesian framework to account for the missing labels, and the analysis of each simulation is performed using standard Markov chain Monte Carlo techniques. Our simulation study is inspired by cognitive decline as measured by the Mini-Mental State Examination, where our extended model is appropriate because of the commonly observed mixture of individuals within studies who do or do not exhibit accelerated decline. We find that even for studies of modest size (n = 500, with 50 individuals observed past the change-point) in the fixed-effect setting, a change-point can be detected and reliably estimated across a range of observation errors.
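The underlying broken-stick mean shape is just two joined linear segments, continuous at the change-point. A minimal sketch, with hypothetical MMSE-like parameter values that are not taken from the paper:

```python
import numpy as np

def broken_stick(t, intercept, slope1, slope2, cp):
    """Broken-stick mean trajectory: slope1 before the change-point cp,
    slope2 after it, continuous at cp."""
    t = np.asarray(t, dtype=float)
    return intercept + slope1 * np.minimum(t, cp) + slope2 * np.maximum(t - cp, 0.0)

# Hypothetical "change class" trajectory: slow decline, then accelerated
# decline after a change-point at year 5 (all values invented).
t = np.arange(0, 11)
y = broken_stick(t, intercept=28.0, slope1=-0.2, slope2=-2.0, cp=5.0)
print(y)
```

The "no change" class in the extended model would simply follow a single line (equivalently, `slope2 = slope1`); the simulation study then asks how large n must be to recover `cp`, the slopes, and each individual's class label.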

  9. Sample size calculations for stepped wedge and cluster randomised trials: a unified approach

    PubMed Central

    Hemming, Karla; Taljaard, Monica

    2016-01-01

    Objectives To clarify and illustrate sample size calculations for the cross-sectional stepped wedge cluster randomized trial (SW-CRT) and to present a simple approach for comparing the efficiencies of competing designs within a unified framework. Study Design and Setting We summarize design effects for the SW-CRT, the parallel cluster randomized trial (CRT), and the parallel cluster randomized trial with before and after observations (CRT-BA), assuming cross-sectional samples are selected over time. We present new formulas that enable trialists to determine the required cluster size for a given number of clusters. We illustrate by example how to implement the presented design effects and give practical guidance on the design of stepped wedge studies. Results For a fixed total cluster size, the choice of study design that provides the greatest power depends on the intracluster correlation coefficient (ICC) and the cluster size. When the ICC is small, the CRT tends to be more efficient; when the ICC is large, the SW-CRT tends to be more efficient and can serve as an alternative design when the CRT is an infeasible design. Conclusion Our unified approach allows trialists to easily compare the efficiencies of three competing designs to inform the decision about the most efficient design in a given scenario. PMID:26344808
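The design-effect reasoning above can be illustrated for the parallel CRT with textbook formulas. This sketch uses the standard design effect 1 + (m-1)·ICC and the usual normal-approximation sample size for comparing two means, not the paper's stepped wedge formulas; the effect size, ICC, and cluster counts are invented for illustration.

```python
from statistics import NormalDist

def n_individual(delta, sd, alpha=0.05, power=0.8):
    """Per-arm sample size for a two-sample comparison of means under
    individual randomisation (standard normal-approximation formula)."""
    za = NormalDist().inv_cdf(1 - alpha / 2)
    zb = NormalDist().inv_cdf(power)
    return 2 * (za + zb) ** 2 * (sd / delta) ** 2

def cluster_size_for_k(n_ind, k, icc):
    """Required cluster size m given k clusters per arm, solving
    k*m = n_ind * (1 + (m-1)*icc)  (textbook CRT design effect)."""
    denom = k - n_ind * icc
    if denom <= 0:
        raise ValueError("too few clusters to achieve this power at this ICC")
    return n_ind * (1 - icc) / denom

n_ind = n_individual(delta=0.3, sd=1.0)          # roughly 175 per arm
m = cluster_size_for_k(n_ind, k=20, icc=0.05)
print(f"per-arm individual n = {n_ind:.0f}, cluster size with 20 clusters = {m:.1f}")
```

The rearranged formula makes the abstract's point concrete: for a fixed number of clusters, the required cluster size blows up as the ICC grows, which is the regime where the stepped wedge design can become the more efficient choice.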

  10. A fixed-memory moving, expanding window for obtaining scatter corrections in X-ray CT and other stochastic averages

    NASA Astrophysics Data System (ADS)

    Levine, Zachary H.; Pintar, Adam L.

    2015-11-01

    A simple algorithm for averaging a stochastic sequence of 1D arrays in a moving, expanding window is provided. The samples are grouped in bins which increase exponentially in size so that a constant fraction of the samples is retained at any point in the sequence. The algorithm is shown to have particular relevance for a class of Monte Carlo sampling problems which includes one characteristic of iterative reconstruction in computed tomography. The code is available in the CPC program library in both Fortran 95 and C and is also available in R through CRAN.
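One plausible way to realize exponentially growing bins with fixed memory is a binary-counter merge scheme, sketched below. This is an illustration of the idea only, not the published Fortran 95/C/R implementation.

```python
import numpy as np

class ExpandingWindowMean:
    """Running mean of a stochastic sequence using exponentially sized bins.
    Neighbouring bins of equal size are merged like carries in a binary
    counter, so memory is O(log n); dropping the oldest bins discards a
    bounded fraction of the retained samples. (A sketch of the idea, not
    the published implementation.)"""

    def __init__(self):
        self.bins = []  # list of [count, sum]; counts are powers of two

    def add(self, x):
        self.bins.append([1, np.asarray(x, dtype=float)])
        # merge neighbouring bins of equal size, like carries in binary
        while len(self.bins) >= 2 and self.bins[-1][0] == self.bins[-2][0]:
            c2, s2 = self.bins.pop()
            self.bins[-1][0] += c2
            self.bins[-1][1] = self.bins[-1][1] + s2

    def mean(self, drop_oldest=0):
        """Mean over retained samples, optionally dropping the oldest bins
        to realize a moving (rather than cumulative) window."""
        kept = self.bins[drop_oldest:]
        n = sum(c for c, _ in kept)
        return sum(s for _, s in kept) / n

w = ExpandingWindowMean()
for i in range(16):
    w.add(float(i))
print(len(w.bins), w.mean())  # 16 samples collapse into a single bin; mean 7.5
```

Because the entries can be 1D arrays rather than scalars, the same structure applies directly to the scatter-correction use case, where each sample is a full detector row.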

  11. Evaluation of single and two-stage adaptive sampling designs for estimation of density and abundance of freshwater mussels in a large river

    USGS Publications Warehouse

    Smith, D.R.; Rogala, J.T.; Gray, B.R.; Zigler, S.J.; Newton, T.J.

    2011-01-01

    Reliable estimates of abundance are needed to assess consequences of proposed habitat restoration and enhancement projects on freshwater mussels in the Upper Mississippi River (UMR). Although there is general guidance on sampling techniques for population assessment of freshwater mussels, the actual performance of sampling designs can depend critically on the population density and spatial distribution at the project site. To evaluate various sampling designs, we simulated sampling of populations, which varied in density and degree of spatial clustering. Because of logistics and costs of large river sampling and spatial clustering of freshwater mussels, we focused on adaptive and non-adaptive versions of single and two-stage sampling. The candidate designs performed similarly in terms of precision (CV) and probability of species detection for fixed sample size. Both CV and species detection were determined largely by density, spatial distribution and sample size. However, designs did differ in the rate that occupied quadrats were encountered. Occupied units had a higher probability of selection using adaptive designs than conventional designs. We used two measures of cost: sample size (i.e. number of quadrats) and distance travelled between the quadrats. Adaptive and two-stage designs tended to reduce distance between sampling units, and thus performed better when distance travelled was considered. Based on the comparisons, we provide general recommendations on the sampling designs for the freshwater mussels in the UMR, and presumably other large rivers.

  12. Relationship of follicle size and concentrations of estradiol among cows that do and do not exhibit estrus during a fixed-time AI protocol

    USDA-ARS?s Scientific Manuscript database

    Cows that exhibited estrus around the time of fixed-time AI had greater pregnancy success compared to cows that did not. The objective of this study was to determine the relationship between follicle size and peak estradiol concentration between cows that did or did not exhibit estrus during a fixed...

  13. Comparison and assessment of aerial and ground estimates of waterbird colonies

    USGS Publications Warehouse

    Green, M.C.; Luent, M.C.; Michot, T.C.; Jeske, C.W.; Leberg, P.L.

    2008-01-01

    Aerial surveys are often used to quantify sizes of waterbird colonies; however, these surveys would benefit from a better understanding of associated biases. We compared estimates of breeding pairs of waterbirds, in colonies across southern Louisiana, USA, made from the ground, fixed-wing aircraft, and a helicopter. We used a marked-subsample method for ground-counting colonies to obtain estimates of error and visibility bias. We made comparisons over 2 sampling periods: 1) surveys conducted on the same colonies using all 3 methods during 3-11 May 2005 and 2) an expanded fixed-wing and ground-survey comparison conducted over 4 periods (May and Jun, 2004-2005). Estimates from fixed-wing aircraft were approximately 65% higher than those from ground counts for the overall estimated number of breeding pairs and for both dark- and white-plumaged species. The coefficient of determination between estimates based on ground counts and fixed-wing aircraft was ≤0.40 for most species, and based on the assumption that estimates from the ground were closer to the true count, fixed-wing aerial surveys appeared to overestimate numbers of nesting birds of some species; this bias often increased with the size of the colony. Unlike estimates from fixed-wing aircraft, numbers of nesting pairs estimated from ground and helicopter surveys were very similar for all species we observed. Ground counts by a single observer underestimated the number of breeding pairs by 20% on average. The marked-subsample method provided an estimate of the number of missed nests as well as an estimate of precision. These estimates represent a major advantage of marked-subsample ground counts over aerial methods; however, ground counts are difficult in large or remote colonies. Helicopter surveys and ground counts provide less biased, more precise estimates of breeding pairs than do surveys made from fixed-wing aircraft.
We recommend managers employ ground counts using double observers for surveying waterbird colonies when feasible. Fixed-wing aerial surveys may be suitable to determine colony activity and composition of common waterbird species. The most appropriate combination of survey approaches will be based on the need for precise and unbiased estimates, balanced with financial and logistical constraints.

  14. Management implications of long-term tree growth and mortality rates: A modeling study of big-leaf mahogany (Swietenia macrophylla) in the Brazilian Amazon

    Treesearch

    C.M. Free; R.M. Landis; J. Grogan; M.D. Schulze; M. Lentini; O. Dunisch

    2014-01-01

    Knowledge of tree age-size relationships is essential towards evaluating the sustainability of harvest regulations that include minimum diameter cutting limits and fixed-length cutting cycles. Although many tropical trees form annual growth rings and can be aged from discs or cores, destructive sampling is not always an option for valuable or threatened species. We...

  15. Fat fractal scaling of drainage networks from a random spatial network model

    USGS Publications Warehouse

    Karlinger, Michael R.; Troutman, Brent M.

    1992-01-01

    An alternative quantification of the scaling properties of river channel networks is explored using a spatial network model. Whereas scaling descriptions of drainage networks have previously been presented using a fractal analysis primarily of the channel lengths, we illustrate the scaling of the surface area of the channels defining the network pattern with an exponent which is independent of the fractal dimension but not of the fractal nature of the network. The methodology presented is a fat fractal analysis in which the drainage basin minus the channel area is considered the fat fractal. Random channel networks within a fixed basin area are generated on grids of different scales. The sample channel networks generated by the model have a common outlet of fixed width and a rule of upstream channel narrowing specified by a diameter branching exponent based on hydraulic and geomorphologic principles. Scaling exponents are computed for each sample network on a given grid size and are regressed against network magnitude. Results indicate that the sizes of the exponents are related to the magnitudes of the networks and generally decrease as network magnitude increases. Cases showing differences in scaling exponents at like magnitudes suggest a direction for future work regarding other topologic basin characteristics as potential explanatory variables.

  16. Design considerations for case series models with exposure onset measurement error.

    PubMed

    Mohammed, Sandra M; Dalrymple, Lorien S; Sentürk, Damla; Nguyen, Danh V

    2013-02-28

    The case series model allows for estimation of the relative incidence of events, such as cardiovascular events, within a pre-specified time window after an exposure, such as an infection. The method requires only cases (individuals with events) and controls for all fixed/time-invariant confounders. The measurement error case series model extends the original case series model to handle imperfect data, where the timing of an infection (exposure) is not known precisely. In this work, we propose a method for power/sample size determination for the measurement error case series model. Extensive simulation studies are used to assess the accuracy of the proposed sample size formulas. We also examine the magnitude of the relative loss of power due to exposure onset measurement error, compared with the ideal situation where the time of exposure is measured precisely. To facilitate the design of case series studies, we provide publicly available web-based tools for determining power/sample size for both the measurement error case series model as well as the standard case series model. Copyright © 2012 John Wiley & Sons, Ltd.

  17. Sequential Tests of Multiple Hypotheses Controlling Type I and II Familywise Error Rates

    PubMed Central

    Bartroff, Jay; Song, Jinlin

    2014-01-01

    This paper addresses the following general scenario: A scientist wishes to perform a battery of experiments, each generating a sequential stream of data, to investigate some phenomenon. The scientist would like to control the overall error rate in order to draw statistically-valid conclusions from each experiment, while being as efficient as possible. The between-stream data may differ in distribution and dimension but also may be highly correlated, even duplicated exactly in some cases. Treating each experiment as a hypothesis test and adopting the familywise error rate (FWER) metric, we give a procedure that sequentially tests each hypothesis while controlling both the type I and II FWERs regardless of the between-stream correlation, and only requires arbitrary sequential test statistics that control the error rates for a given stream in isolation. The proposed procedure, which we call the sequential Holm procedure because of its inspiration from Holm’s (1979) seminal fixed-sample procedure, shows simultaneous savings in expected sample size and less conservative error control relative to fixed sample, sequential Bonferroni, and other recently proposed sequential procedures in a simulation study. PMID:25092948

  18. Sampling designs for contaminant temporal trend analyses using sedentary species exemplified by the snails Bellamya aeruginosa and Viviparus viviparus.

    PubMed

    Yin, Ge; Danielsson, Sara; Dahlberg, Anna-Karin; Zhou, Yihui; Qiu, Yanling; Nyberg, Elisabeth; Bignert, Anders

    2017-10-01

    Environmental monitoring typically assumes samples and sampling activities to be representative of the population being studied. Given a limited budget, an appropriate sampling strategy is essential to support the detection of temporal trends of contaminants. In the present study, based on real chemical analysis data on polybrominated diphenyl ethers in snails collected from five subsites in Tianmu Lake, computer simulation was performed to evaluate three sampling strategies by estimating the sample size required to detect an annual change of 5% with a statistical power of 80% or 90% at a significance level of 5%. The results showed that sampling from an arbitrarily selected sampling spot is the worst strategy, requiring many more individual analyses to achieve the above-mentioned criteria than the other two approaches. A fixed sampling site requires the smallest sample size but may not be representative of the intended study object, e.g. a lake, and is also sensitive to changes at that particular sampling site. In contrast, sampling at multiple sites along the shore each year, and using pooled samples when the cost to collect and prepare individual specimens is much lower than the cost of chemical analysis, would be the most robust and cost-efficient strategy in the long run. Using statistical power as the criterion, the results demonstrated quantitatively the consequences of various sampling strategies, and could guide users with respect to the sample sizes required, depending on sampling design, for long-term monitoring programs. Copyright © 2017 Elsevier Ltd. All rights reserved.
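A power simulation of the kind described can be sketched in a few lines: simulate log-normally distributed concentrations declining 5% per year, fit a log-linear trend, and count rejections. All parameter values here (CV, study length, per-year sample sizes) are invented for illustration and are not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)

def power_trend(n_per_year, years=10, annual_change=0.05, cv=0.5,
                reps=1000):
    """Simulated power to detect a 5% annual decline with a log-linear
    regression, assuming log-normal concentrations with a given
    between-sample CV (all values illustrative)."""
    sigma = np.sqrt(np.log(1 + cv**2))          # log-scale sd from the CV
    t = np.repeat(np.arange(years), n_per_year)
    hits = 0
    for _ in range(reps):
        logc = np.log(100.0) + t * np.log(1 - annual_change) \
               + rng.normal(0, sigma, t.size)
        b, a = np.polyfit(t, logc, 1)           # slope, intercept
        resid = logc - (a + b * t)
        se = np.sqrt(resid.var(ddof=2) / ((t - t.mean())**2).sum())
        if abs(b / se) > 1.96:                  # normal approx to the t-test
            hits += 1
    return hits / reps

print(power_trend(5), power_trend(20))
```

Repeating this for each candidate sampling strategy (fixed spot, arbitrary spot, multiple pooled sites), with the between-site variability added to `sigma` as appropriate, gives exactly the required-sample-size comparison the abstract reports.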

  19. Exactly solvable random graph ensemble with extensively many short cycles

    NASA Astrophysics Data System (ADS)

    Aguirre López, Fabián; Barucca, Paolo; Fekom, Mathilde; Coolen, Anthony C. C.

    2018-02-01

    We introduce and analyse ensembles of 2-regular random graphs with a tuneable distribution of short cycles. The phenomenology of these graphs depends critically on the scaling of the ensembles’ control parameters relative to the number of nodes. A phase diagram is presented, showing a second order phase transition from a connected to a disconnected phase. We study both the canonical formulation, where the size is large but fixed, and the grand canonical formulation, where the size is sampled from a discrete distribution, and show their equivalence in the thermodynamical limit. We also compute analytically the spectral density, which consists of a discrete set of isolated eigenvalues, representing short cycles, and a continuous part, representing cycles of diverging size.

  20. Estimation After a Group Sequential Trial.

    PubMed

    Milanzi, Elasma; Molenberghs, Geert; Alonso, Ariel; Kenward, Michael G; Tsiatis, Anastasios A; Davidian, Marie; Verbeke, Geert

    2015-10-01

    Group sequential trials are one important instance of studies for which the sample size is not fixed a priori but rather takes one of a finite set of pre-specified values, dependent on the observed data. Much work has been devoted to the inferential consequences of this design feature. Molenberghs et al (2012) and Milanzi et al (2012) reviewed and extended the existing literature, focusing on a collection of seemingly disparate, but related, settings, namely completely random sample sizes, group sequential studies with deterministic and random stopping rules, incomplete data, and random cluster sizes. They showed that the ordinary sample average is a viable option for estimation following a group sequential trial, for a wide class of stopping rules and for random outcomes with a distribution in the exponential family. Their results are somewhat surprising in the sense that the sample average is not optimal, and further, there does not exist an optimal, or even unbiased, linear estimator. However, the sample average is asymptotically unbiased, both conditionally upon the observed sample size and marginalized over it. By exploiting ignorability they showed that the sample average is the conventional maximum likelihood estimator. They also showed that a conditional maximum likelihood estimator is finite-sample unbiased, but is less efficient than the sample average and has the larger mean squared error. Asymptotically, the sample average and the conditional maximum likelihood estimator are equivalent. This previous work is restricted, however, to the situation in which the random sample size can take only two values, N = n or N = 2n. In this paper, we consider the more practically useful setting of sample sizes in the finite set {n_1, n_2, …, n_L}. It is shown that the sample average is then a justifiable estimator, in the sense that it follows from joint likelihood estimation, and it is consistent and asymptotically unbiased.
We also show why simulations can give the false impression of bias in the sample average when considered conditional upon the sample size. The consequence is that no corrections need to be made to estimators following sequential trials. When small-sample bias is of concern, the conditional likelihood estimator provides a relatively straightforward modification to the sample average. Finally, it is shown that classical likelihood-based standard errors and confidence intervals can be applied, obviating the need for technical corrections.

  1. Measuring factor IX activity of nonacog beta pegol with commercially available one-stage clotting and chromogenic assay kits: a two-center study.

    PubMed

    Bowyer, A E; Hillarp, A; Ezban, M; Persson, P; Kitchen, S

    2016-07-01

    Essentials Validated assays are required to precisely measure factor IX (FIX) activity in FIX products. N9-GP and two other FIX products were assessed in various coagulation assay systems at two sites. Large variations in FIX activity measurements were observed for N9-GP using some assays. One-stage and chromogenic assays accurately measuring FIX activity for N9-GP were identified. Background Measurement of factor IX activity (FIX:C) with activated partial thromboplastin time-based one-stage clotting assays is associated with a large degree of interlaboratory variation in samples containing glycoPEGylated recombinant FIX (rFIX), i.e. nonacog beta pegol (N9-GP). Validation and qualification of specific assays and conditions are necessary for the accurate assessment of FIX:C in samples containing N9-GP. Objectives To assess the accuracy of various one-stage clotting and chromogenic assays for measuring FIX:C in samples containing N9-GP as compared with samples containing rFIX or plasma-derived FIX (pdFIX) across two laboratory sites. Methods FIX:C, in severe hemophilia B plasma spiked with a range of concentrations (from very low, i.e. 0.03 IU mL⁻¹, to high, i.e. 0.90 IU mL⁻¹) of N9-GP, rFIX (BeneFIX), and pdFIX (Mononine), was determined at two laboratory sites with 10 commercially available one-stage clotting assays and two chromogenic FIX:C assays. Assays were performed with a plasma calibrator and different analyzers. Results A high degree of variation in FIX:C measurement was observed for one-stage clotting assays for N9-GP as compared with rFIX or pdFIX. Acceptable N9-GP recovery was observed in the low-concentration to high-concentration samples tested with one-stage clotting assays using SynthAFax or DG Synth, or with chromogenic FIX:C assays. Similar patterns of FIX:C measurement were observed at both laboratory sites, with minor differences probably being attributable to the use of different analyzers.
Conclusions These results suggest that, of the reagents tested, FIX:C in N9-GP-containing plasma samples can be most accurately measured with one-stage clotting assays using SynthAFax or DG Synth, or with chromogenic FIX:C assays. © 2016 International Society on Thrombosis and Haemostasis.

  2. Relative efficiency of unequal versus equal cluster sizes in cluster randomized trials using generalized estimating equation models.

    PubMed

    Liu, Jingxia; Colditz, Graham A

    2018-05-01

    There is growing interest in conducting cluster randomized trials (CRTs). For simplicity in sample size calculation, the cluster sizes are often assumed to be identical across all clusters. However, equal cluster sizes are not guaranteed in practice. Therefore, the relative efficiency (RE) of unequal versus equal cluster sizes has been investigated when testing the treatment effect. One of the most important approaches to analyzing a set of correlated data is the generalized estimating equation (GEE) approach proposed by Liang and Zeger, in which a "working correlation structure" is introduced and the association pattern depends on a vector of association parameters denoted by ρ. In this paper, we utilize GEE models to test the treatment effect in a two-group comparison for continuous, binary, or count data in CRTs. The variances of the estimator of the treatment effect are derived for the different types of outcome. RE is defined as the ratio of the variance of the estimator of the treatment effect for equal cluster sizes to that for unequal cluster sizes. We discuss a commonly used structure in CRTs, the exchangeable structure, and derive simpler formulas for the RE with continuous, binary, and count outcomes. Finally, REs are investigated for several scenarios of cluster size distributions through simulation studies. We propose an adjusted sample size to compensate for the efficiency loss. Additionally, we propose an optimal sample size estimation based on GEE models under a fixed budget, for known and unknown association parameter (ρ) in the working correlation structure within the cluster. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
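Under an exchangeable working correlation, a standard approximation gives each cluster of size m an information contribution of m / (1 + (m-1)ρ) toward a cluster-level treatment contrast. The sketch below uses that textbook result, not the paper's derivations, to illustrate the efficiency loss from unequal cluster sizes; the cluster sizes and ICC values are invented.

```python
import numpy as np

def cluster_information(sizes, icc):
    """Total Fisher information for a cluster-level treatment effect under
    an exchangeable correlation: each cluster of size m contributes
    m / (1 + (m-1)*icc)  (standard GEE/design-effect approximation)."""
    m = np.asarray(sizes, dtype=float)
    return (m / (1 + (m - 1) * icc)).sum()

def relative_efficiency(sizes, icc):
    """Var(equal) / Var(unequal) at the same total sample size and number
    of clusters; values below 1 mean unequal sizes lose efficiency."""
    m_bar = float(np.mean(sizes))
    equal = cluster_information([m_bar] * len(sizes), icc)
    return cluster_information(sizes, icc) / equal

sizes = [10, 10, 10, 10, 100]  # highly variable cluster sizes
for icc in (0.01, 0.05, 0.2):
    print(icc, round(relative_efficiency(sizes, icc), 3))
```

Because m / (1 + (m-1)ρ) is concave in m, Jensen's inequality guarantees this ratio is at most 1: spreading a fixed total sample size unevenly across clusters never gains information, which is what motivates the paper's adjusted sample size.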

  3. Morphological diversity of Trichuris spp. eggs observed during an anthelminthic drug trial in Yunnan, China, and relative performance of parasitologic diagnostic tools.

    PubMed

    Steinmann, Peter; Rinaldi, Laura; Cringoli, Giuseppe; Du, Zun-Wei; Marti, Hanspeter; Jiang, Jin-Yong; Zhou, Hui; Zhou, Xiao-Nong; Utzinger, Jürg

    2015-01-01

    The presence of large Trichuris spp. eggs in human faecal samples is occasionally reported. Such eggs have been described as variant Trichuris trichiura or Trichuris vulpis eggs. Within the frame of a randomised controlled trial, faecal samples collected from 115 Bulang individuals from Yunnan, People's Republic of China were subjected to the Kato-Katz technique (fresh stool samples) and the FLOTAC and ether-concentration techniques (sodium acetate-acetic acid-formalin (SAF)-fixed stool samples). Large Trichuris spp. eggs were noted in faecal samples with a prevalence of 6.1% before and 21.7% after anthelminthic drug administration. The observed prevalence of standard-sized T. trichiura eggs was reduced from 93.0% to 87.0% after treatment. Considerably more cases of large Trichuris spp. eggs and slightly more cases with normal-sized T. trichiura eggs were identified by FLOTAC compared to the ether-concentration technique. No large Trichuris spp. eggs were observed on the Kato-Katz thick smears. Copyright © 2014 Elsevier B.V. All rights reserved.

  4. Scalable boson sampling with time-bin encoding using a loop-based architecture.

    PubMed

    Motes, Keith R; Gilchrist, Alexei; Dowling, Jonathan P; Rohde, Peter P

    2014-09-19

    We present an architecture for arbitrarily scalable boson sampling using two nested fiber loops. The architecture has fixed experimental complexity, irrespective of the size of the desired interferometer, whose scale is limited only by fiber and switch loss rates. The architecture employs time-bin encoding, whereby the incident photons form a pulse train, which enters the loops. Dynamically controlled loop coupling ratios allow the construction of the arbitrary linear optics interferometers required for boson sampling. The architecture employs only a single point of interference and may thus be easier to stabilize than other approaches. The scheme has polynomial complexity and could be realized using demonstrated present-day technologies.

  5. How conservative is Fisher's exact test? A quantitative evaluation of the two-sample comparative binomial trial.

    PubMed

    Crans, Gerald G; Shuster, Jonathan J

    2008-08-15

    The debate as to which statistical methodology is most appropriate for the analysis of the two-sample comparative binomial trial has persisted for decades. Practitioners who favor the conditional method of Fisher, Fisher's exact test (FET), claim that only experimental outcomes containing the same amount of information should be considered when performing analyses. Hence, the total number of successes should be fixed at its observed level in hypothetical repetitions of the experiment. Using conditional methods in clinical settings can pose interpretation difficulties, since results are derived using conditional sample spaces rather than the set of all possible outcomes. Perhaps more importantly from a clinical trial design perspective, this test can be too conservative, resulting in greater resource requirements and more subjects exposed to an experimental treatment. The actual significance level attained by FET (the size of the test) has not been reported in the statistical literature. Berger (J. R. Statist. Soc. D (The Statistician) 2001; 50:79-85) proposed assessing the conservativeness of conditional methods using p-value confidence intervals. In this paper we develop a numerical algorithm that calculates the size of FET for sample sizes, n, up to 125 per group at the two-sided significance level alpha = 0.05. Additionally, this numerical method is used to define new significance levels alpha* = alpha + epsilon, where epsilon is a small positive number, for each n, such that the size of the test is as close as possible to the pre-specified alpha (0.05 for the current work) without exceeding it. Lastly, a sample size and power calculation example is presented, which demonstrates the statistical advantages of implementing the adjustment to FET (using alpha* instead of alpha) in the two-sample comparative binomial trial. 2008 John Wiley & Sons, Ltd
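The size calculation described is straightforward to reproduce for small n: enumerate the rejection region of the two-sided FET (method of small p-values) and maximize the rejection probability over the common success probability under H0. A stdlib-only sketch; the grid resolution and the choice n = 10 are illustrative, and the paper's algorithm may differ in detail.

```python
from math import comb

def fisher_pvalue(x, y, n):
    """Two-sided Fisher exact p-value for x/n vs y/n successes: sum of
    conditional hypergeometric probabilities no larger than that of the
    observed table (method of small p-values)."""
    s = x + y                      # conditioning: total successes fixed
    denom = comb(2 * n, s)
    probs = [comb(n, k) * comb(n, s - k) / denom
             for k in range(max(0, s - n), min(n, s) + 1)]
    p_obs = comb(n, x) * comb(n, s - x) / denom
    # small tolerance guards against floating-point ties
    return sum(p for p in probs if p <= p_obs * (1 + 1e-12))

def fet_size(n, alpha=0.05, grid=101):
    """Actual size of FET: the maximum over p of the unconditional
    rejection probability when both groups share success probability p."""
    reject = [[fisher_pvalue(x, y, n) <= alpha for y in range(n + 1)]
              for x in range(n + 1)]
    size = 0.0
    for p in (i / (grid - 1) for i in range(grid)):
        pmf = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]
        prob = sum(pmf[x] * pmf[y]
                   for x in range(n + 1) for y in range(n + 1)
                   if reject[x][y])
        size = max(size, prob)
    return size

print(round(fet_size(10), 4))  # well below the nominal 0.05
```

The gap between this attained size and the nominal 0.05 is exactly the conservatism the paper quantifies, and raising the threshold to alpha* closes that gap without exceeding 0.05.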

  6. The Impact of Accelerating Faster than Exponential Population Growth on Genetic Variation

    PubMed Central

    Reppell, Mark; Boehnke, Michael; Zöllner, Sebastian

    2014-01-01

    Current human sequencing projects observe an abundance of extremely rare genetic variation, suggesting recent acceleration of population growth. To better understand the impact of such accelerating growth on the quantity and nature of genetic variation, we present a new class of models capable of incorporating faster than exponential growth in a coalescent framework. Our work shows that such accelerated growth affects only the population size in the recent past and thus large samples are required to detect the models’ effects on patterns of variation. When we compare models with fixed initial growth rate, models with accelerating growth achieve very large current population sizes and large samples from these populations contain more variation than samples from populations with constant growth. This increase is driven almost entirely by an increase in singleton variation. Moreover, linkage disequilibrium decays faster in populations with accelerating growth. When we instead condition on current population size, models with accelerating growth result in less overall variation and slower linkage disequilibrium decay compared to models with exponential growth. We also find that pairwise linkage disequilibrium of very rare variants contains information about growth rates in the recent past. Finally, we demonstrate that models of accelerating growth may substantially change estimates of present-day effective population sizes and growth times. PMID:24381333

  8. Optical absorption and TEM studies of silver nanoparticle embedded BaO-CaF₂-P₂O₅ glasses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Narayanan, Manoj Kumar, E-mail: manukokkal01@gmail.com; Shashikala, H. D.

    Silver nanoparticle embedded 30BaO-20CaF₂-50P₂O₅-4Ag₂O-4SnO glasses were prepared by melt-quenching and a subsequent heat-treatment process. Silver-doped glasses were heat treated at temperatures of 500 °C, 525 °C, and 550 °C for a fixed duration of 10 hours to incorporate metal nanoparticles into the glass matrix. The appearance and shift in peak positions of the surface plasmon resonance (SPR) bands in the optical absorption spectra of heat-treated glass samples indicated that both formation and growth of the nanoparticles depended on heat-treatment temperature. The glass sample heat treated at 525 °C showed an SPR peak around 3 eV, which indicated that spherical nanoparticles smaller than 20 nm were formed inside the glass matrix, whereas the sample heat treated at 550 °C showed a size-dependent red shift in the SPR peak due to the presence of silver nanoparticles larger than 20 nm. The size of the nanoparticles calculated using the full-width at half-maximum (FWHM) of the absorption band showed good agreement with the particle size obtained from transmission electron microscopy (TEM) analysis.

  9. Quantifying the impact of time-varying baseline risk adjustment in the self-controlled risk interval design.

    PubMed

    Li, Lingling; Kulldorff, Martin; Russek-Cohen, Estelle; Kawai, Alison Tse; Hua, Wei

    2015-12-01

    The self-controlled risk interval design is commonly used to assess the association between an acute exposure and an adverse event of interest, implicitly adjusting for fixed, non-time-varying covariates. Explicit adjustment needs to be made for time-varying covariates, for example, age in young children. It can be performed via either a fixed or random adjustment. The random-adjustment approach can provide valid point and interval estimates but requires access to individual-level data for an unexposed baseline sample. The fixed-adjustment approach does not have this requirement and will provide a valid point estimate but may underestimate the variance. We conducted a comprehensive simulation study to evaluate their performance. We designed the simulation study using empirical data from the Food and Drug Administration-sponsored Mini-Sentinel Post-licensure Rapid Immunization Safety Monitoring Rotavirus Vaccines and Intussusception study in children 5-36.9 weeks of age. The time-varying confounder is age. We considered a variety of design parameters including sample size, relative risk, time-varying baseline risks, and risk interval length. The random-adjustment approach has very good performance in almost all considered settings. The fixed-adjustment approach can be used as a good alternative when the number of events used to estimate the time-varying baseline risks is at least the number of events used to estimate the relative risk, which is almost always the case. We successfully identified settings in which the fixed-adjustment approach can be used as a good alternative and provided guidelines on the selection and implementation of appropriate analyses for the self-controlled risk interval design. Copyright © 2015 John Wiley & Sons, Ltd.

  10. Technical note: Alternatives to reduce adipose tissue sampling bias.

    PubMed

    Cruz, G D; Wang, Y; Fadel, J G

    2014-10-01

    Understanding the mechanisms by which nutritional and pharmaceutical factors can manipulate adipose tissue growth and development in production animals has direct and indirect effects on the profitability of an enterprise. Adipocyte cellularity (number and size) is a key biological response that is commonly measured in animal science research. The variability and sampling of adipocyte cellularity within a muscle have been addressed in previous studies, but no attempt to critically investigate these issues has been made in the literature. The present study evaluated 2 sampling techniques (random and systematic) in an attempt to minimize sampling bias and to determine the minimum number of samples, from 1 to 15, needed to represent the overall adipose tissue in the muscle. Both sampling procedures were applied to adipose tissue samples dissected from 30 longissimus muscles from cattle finished either on grass or grain. Briefly, adipose tissue samples were fixed with osmium tetroxide, and the size and number of adipocytes were determined by a Coulter Counter. These results were then fit in a finite mixture model to obtain distribution parameters of each sample. To evaluate the benefits of increasing the number of samples and the advantage of the new sampling technique, the concept of acceptance ratio was used; simply stated, the higher the acceptance ratio, the better the representation of the overall population. As expected, a great improvement in the estimation of the overall adipocyte cellularity parameters was observed for both sampling techniques when sample number increased from 1 to 15, with both techniques' acceptance ratios increasing from approximately 3% to 25%. When comparing sampling techniques, the systematic procedure slightly improved parameter estimation. The results suggest that more detailed research using other sampling techniques may provide better estimates for minimum sampling.

  11. A framework for inference about carnivore density from unstructured spatial sampling of scat using detector dogs

    USGS Publications Warehouse

    Thompson, Craig M.; Royle, J. Andrew; Garner, James D.

    2012-01-01

    Wildlife management often hinges upon an accurate assessment of population density. Although undeniably useful, many of the traditional approaches to density estimation such as visual counts, livetrapping, or mark–recapture suffer from a suite of methodological and analytical weaknesses. Rare, secretive, or highly mobile species exacerbate these problems through the reality of small sample sizes and movement on and off study sites. In response to these difficulties, there is growing interest in the use of non-invasive survey techniques, which provide the opportunity to collect larger samples with minimal increases in effort, as well as the application of analytical frameworks that are not reliant on large sample size arguments. One promising survey technique, the use of scat detecting dogs, offers a greatly enhanced probability of detection while at the same time generating new difficulties with respect to non-standard survey routes, variable search intensity, and the lack of a fixed survey point for characterizing non-detection. In order to account for these issues, we modified an existing spatially explicit, capture–recapture model for camera trap data to account for variable search intensity and the lack of fixed, georeferenced trap locations. We applied this modified model to a fisher (Martes pennanti) dataset from the Sierra National Forest, California, and compared the results (12.3 fishers/100 km2) to more traditional density estimates. We then evaluated model performance using simulations at 3 levels of population density. Simulation results indicated that estimates based on the posterior mode were relatively unbiased. We believe that this approach provides a flexible analytical framework for reconciling the inconsistencies between detector dog survey data and density estimation procedures.

  12. Ontogenetic loss of phenotypic plasticity of age at metamorphosis in tadpoles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hensley, F.R.

    1993-12-01

    Amphibian larvae exhibit phenotypic plasticity in size at metamorphosis and duration of the larval period. I used Pseudacris crucifer tadpoles to test two models for predicting tadpole age and size at metamorphosis under changing environmental conditions. The Wilbur-Collins model states that metamorphosis is initiated as a function of a tadpole's size and relative growth rate, and predicts that changes in growth rate throughout the larval period affect age and size at metamorphosis. An alternative model, the fixed-rate model, states that age at metamorphosis is fixed early in larval life, and subsequent changes in growth rate will have no effect on the length of the larval period. My results confirm that food supplies affect both age and size at metamorphosis, but developmental rates became fixed at approximately Gosner (1960) stages 35-37. Neither model completely predicted these results. I suggest that the generally accepted Wilbur-Collins model is improved by incorporating a point of fixed developmental timing. Growth trajectories predicted from this modified model fit the results of this study better than trajectories based on either of the original models. The results of this study suggest a constraint that limits the simultaneous optimization of age and size at metamorphosis. 32 refs., 5 figs., 1 tab.

  13. When Is Rapid On-Site Evaluation Cost-Effective for Fine-Needle Aspiration Biopsy?

    PubMed Central

    Schmidt, Robert L.; Walker, Brandon S.; Cohen, Michael B.

    2015-01-01

    Background: Rapid on-site evaluation (ROSE) can improve adequacy rates of fine-needle aspiration biopsy (FNAB) but increases operational costs. The performance of ROSE relative to fixed sampling depends on many factors. It is not clear when ROSE is less costly than sampling with a fixed number of needle passes. The objective of this study was to determine the conditions under which ROSE is less costly than fixed sampling. Methods: Cost comparison of sampling with and without ROSE using mathematical modeling. Models were based on a societal perspective and used a mechanistic, micro-costing approach. Sampling policies (ROSE, fixed) were compared using the difference in total expected costs per case. Scenarios were based on procedure complexity (palpation-guided or image-guided), adequacy rates (low, high) and sampling protocols (stopping criteria for ROSE and fixed sampling). One-way, probabilistic, and scenario-based sensitivity analysis was performed to determine which variables had the greatest influence on the cost difference. Results: ROSE is favored relative to fixed sampling under the following conditions: (1) the cytologist is accurate, (2) the total variable cost ($/hr) is low, (3) fixed costs ($/procedure) are high, (4) the setup time is long, (5) the time between needle passes for ROSE is low, (6) when the per-pass adequacy rate is low, and (7) ROSE stops after observing one adequate sample. The model is most sensitive to variation in the fixed cost, the per-pass adequacy rate, and the time per needle pass with ROSE. Conclusions: Mathematical modeling can be used to predict the difference in cost between sampling with and without ROSE. PMID:26317785
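The qualitative trade-off can be illustrated with a toy expected-cost model (illustrative only, not the paper's micro-costing model; all parameter names are assumptions): a ROSE policy that stops at the first adequate pass makes a truncated-geometric number of passes, so its expected pass count is (1 - (1-p)^m)/p.

```python
def expected_cost_rose(p_adequate, max_passes, cost_setup, cost_per_pass):
    """Expected cost when sampling stops at the first adequate pass
    (or after max_passes): E[min(Geometric(p), m)] = (1 - (1-p)^m) / p."""
    e_passes = (1 - (1 - p_adequate) ** max_passes) / p_adequate
    return cost_setup + cost_per_pass * e_passes

def expected_cost_fixed(n_passes, cost_setup, cost_per_pass):
    """Expected cost of a fixed protocol that always makes n_passes."""
    return cost_setup + cost_per_pass * n_passes
```

In practice the per-pass cost under ROSE is higher (cytologist time between passes), which is why the comparison in the paper depends on setup time, fixed costs, and the per-pass adequacy rate rather than pass counts alone.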

  14. Detection of melting by X-ray imaging at high pressure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Li; Weidner, Donald J.

    2014-06-15

    The occurrence of partial melting at elevated pressure and temperature is documented in real time through measurement of volume strain induced by a fixed temperature change. Here we present the methodology for measuring volume strains to one part in 10⁻⁴ for mm³-sized samples in situ as a function of time during a step in temperature. By calibrating the system for sample thermal expansion at temperatures lower than the solidus, the onset of melting can be detected when the melting volume increase is of comparable size to the thermal-expansion-induced volume change. We illustrate this technique with a peridotite sample at 1.5 GPa during partial melting. The Re capsule is imaged with a CCD camera at 20 frames/s. Temperature steps of 100 K induce volume strains that triple with melting. The analysis relies on image comparison for strain determination, and the thermal inertia of the sample is clearly seen in the time history of the volume strain. Coupled with a thermodynamic model of the melting, we infer that melting can be identified at the 2 vol.% level.

  15. Impact of rail pressure and biodiesel fueling on the particulate morphology and soot nanostructures from a common-rail turbocharged direct injection diesel engine

    DOE PAGES

    Ye, Peng; Vander Wal, Randy; Boehman, Andre L.; ...

    2014-12-26

    The effect of rail pressure and biodiesel fueling on the morphology of exhaust particulate agglomerates and the nanostructure of primary particles (soot) was investigated with a common-rail turbocharged direct injection diesel engine. The engine was operated at steady state on a dynamometer running at moderate speed with both low (30%) and medium–high (60%) fixed loads, and exhaust particulate was sampled for analysis. Ultra-low sulfur diesel and its 20% v/v blends with soybean methyl ester biodiesel were used. Fuel injection occurred in a single event around top dead center at three different injection pressures. Exhaust particulate samples were characterized with TEM imaging, scanning mobility particle sizing, thermogravimetric analysis, Raman spectroscopy, and XRD analysis. Particulate morphology and oxidative reactivity were found to vary significantly with rail pressure and with biodiesel blend level. Higher biodiesel content led to increases in the primary particle size and oxidative reactivity but did not affect nanoscale disorder in the as-received samples. For particulates generated with higher injection pressures, the initial oxidative reactivity increased, but there was no detectable correlation with primary particle size or nanoscale disorder.

  17. A comparative appraisal of two equivalence tests for multiple standardized effects.

    PubMed

    Shieh, Gwowen

    2016-04-01

    Equivalence testing is recommended as a better alternative to the traditional difference-based methods for demonstrating the comparability of two or more treatment effects. Although equivalence tests of two groups are widely discussed, the natural extensions for assessing equivalence between several groups have not been well examined. This article provides a detailed and schematic comparison of the ANOVA F and the studentized range tests for evaluating the comparability of several standardized effects. Power and sample size appraisals of the two grossly distinct approaches are conducted in terms of a constraint on the range of the standardized means when the standard deviation of the standardized means is fixed. Although neither method is uniformly more powerful, the studentized range test has a clear advantage in the sample size required to achieve a given power when the underlying effect configurations are close to the a priori minimum difference for determining equivalence. For actual application of equivalence tests and advance planning of equivalence studies, both SAS and R computer codes are available as supplementary files to implement the calculations of critical values, p-values, power levels, and sample sizes. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  18. Microscopic histological characteristics of soft tissue sarcomas: analysis of tissue features and electrical resistance.

    PubMed

    Tosi, A L; Campana, L G; Dughiero, F; Forzan, M; Rastrelli, M; Sieni, E; Rossi, C R

    2017-07-01

    Tissue electrical conductivity is correlated with tissue characteristics. In this work, soft tissue sarcomas (STS) excised from patients were evaluated in terms of histological characteristics (cell size and density) and electrical resistance. The electrical resistance was measured using the ex vivo study on soft tissue tumors electrical characteristics (ESTTE) protocol proposed by the authors in order to study the electrical resistance of surgical samples excised from patients in a fixed measurement setup. The measurement setup includes a voltage pulse generator (700 V, 100 µs pulses at 5 kHz, period 200 µs) and an electrode with seven 20 mm needles, equally spaced in a fixed hexagonal geometry. In the ESTTE protocol, the same voltage pulse sequence is applied to each different tumor mass, and the corresponding resistance is evaluated from the voltage and current recorded by the equipment. For each tumor mass, a histological sample of the volume treated by means of voltage pulses was taken for histological analysis. Each mass was studied in order to identify the sarcoma type. For each histological sample, an image at 20× or 40× magnification was acquired. In this work, the electrical resistance measured for each tumor is correlated with tissue characteristics such as the type, size and density of cells. This work presents a preliminary study to explore possible correlations between tissue characteristics and electrical resistance of STS. These results can be helpful to adjust the pulse voltage intensity in order to improve electrochemotherapy efficacy on some histotypes of STS.

  19. Volatility measurement with directional change in Chinese stock market: Statistical property and investment strategy

    NASA Astrophysics Data System (ADS)

    Ma, Junjun; Xiong, Xiong; He, Feng; Zhang, Wei

    2017-04-01

    Stock price fluctuations are studied in this paper from an intrinsic time perspective. Events, directional changes (DC) or overshoots, are used as the time scale of the price time series. Under this directional change law, the corresponding statistical properties and parameter estimation are tested in the Chinese stock market. Furthermore, a directional change trading strategy is proposed for investing in the market portfolio in the Chinese stock market, and both in-sample and out-of-sample performance are compared among the different methods of model parameter estimation. We conclude that the DC method can capture important fluctuations in the Chinese stock market and gain profit due to the statistical property that the average upturn overshoot size is bigger than the average downturn directional change size. The optimal parameter of the DC method is not fixed, and we obtained a 1.8% annual excess return with this DC-based trading strategy.
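A basic directional-change event detector at a fixed fractional threshold can be sketched as follows (illustrative only; the study estimates the threshold rather than fixing it, and additionally tracks overshoot sizes):

```python
def directional_changes(prices, theta):
    """Detect directional-change events: a DC is confirmed when the price
    reverses by more than the fraction `theta` from the running extreme.
    Returns a list of (index, 'up' or 'down') confirmation points."""
    events = []
    up = True            # assume an initial upward trend
    ext = prices[0]      # running extreme since the last event
    for i, p in enumerate(prices[1:], start=1):
        if up:
            if p > ext:
                ext = p                        # new local maximum
            elif p <= ext * (1 - theta):       # downturn DC confirmed
                events.append((i, 'down'))
                up, ext = False, p
        else:
            if p < ext:
                ext = p                        # new local minimum
            elif p >= ext * (1 + theta):       # upturn DC confirmed
                events.append((i, 'up'))
                up, ext = True, p
    return events
```

Sampling the series at these event times, instead of at fixed clock intervals, is what the paper means by an intrinsic time scale.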

  20. A random-sum Wilcoxon statistic and its application to analysis of ROC and LROC data.

    PubMed

    Tang, Liansheng Larry; Balakrishnan, N

    2011-01-01

    The Wilcoxon-Mann-Whitney statistic is commonly used for a distribution-free comparison of two groups. One requirement for its use is that the sample sizes of the two groups are fixed. This is violated in some applications, such as medical imaging studies and diagnostic marker studies; in the former, the violation occurs since the number of correctly localized abnormal images is random, while in the latter the violation is due to some subjects not having observable measurements. For this reason, we propose here a random-sum Wilcoxon statistic for comparing two groups in the presence of ties, and derive its variance as well as its asymptotic distribution for large sample sizes. The proposed statistic includes the regular Wilcoxon rank-sum statistic as a special case. Finally, we apply the proposed statistic for summarizing location response operating characteristic data from a liver computed tomography study, and also for summarizing diagnostic accuracy of biomarker data.
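The regular Wilcoxon rank-sum statistic with midranks for ties, which the proposed random-sum statistic generalises, can be computed directly; a minimal sketch:

```python
def rank_sum(x, y):
    """Wilcoxon rank-sum statistic W for group x: sum of the ranks of x
    in the pooled sample, using midranks (average ranks) for ties."""
    pooled = sorted(x + y)
    ranks = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1                      # [i, j) is a block of tied values
        ranks[pooled[i]] = (i + 1 + j) / 2   # average of ranks i+1 .. j
        i = j
    return sum(ranks[v] for v in x)
```

The random-sum version replaces the fixed group sizes with random ones (e.g. the random number of correctly localized images), which changes the variance but not this basic ranking step.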

  1. Comparison of statistical models to estimate parasite growth rate in the induced blood stage malaria model.

    PubMed

    Wockner, Leesa F; Hoffmann, Isabell; O'Rourke, Peter; McCarthy, James S; Marquart, Louise

    2017-08-25

    The efficacy of vaccines aimed at inhibiting the growth of malaria parasites in the blood can be assessed by comparing the growth rate of parasitaemia in the blood of subjects treated with a test vaccine versus controls. In studies using induced blood stage malaria (IBSM), a type of controlled human malaria infection, parasite growth rate has been measured using models with the intercept on the y-axis fixed to the inoculum size. A set of statistical models was evaluated to determine an optimal methodology to estimate parasite growth rate in IBSM studies. Parasite growth rates were estimated using data from 40 subjects published in three IBSM studies. Data were fitted using 12 statistical models: log-linear, and sine-wave with the period either fixed to 48 h or not fixed; these models were fitted with the intercept either fixed to the inoculum size or not fixed. All models were fitted by individual, and overall by study using a mixed effects model with a random effect for the individual. Log-linear models and sine-wave models, with the period fixed or not fixed, resulted in similar parasite growth rate estimates (within 0.05 log10 parasites per mL/day). Average parasite growth rate estimates for models fitted by individual with the intercept fixed to the inoculum size were substantially lower, by an average of 0.17 log10 parasites per mL/day (range 0.06-0.24), compared with non-fixed intercept models. Variability of parasite growth rate estimates across the three studies analysed was substantially higher (3.5 times) for fixed-intercept models compared with non-fixed intercept models. The same tendency was observed in models fitted overall by study. Modelling data by individual or overall by study had minimal effect on parasite growth estimates. The analyses presented in this report confirm that fixing the intercept to the inoculum size influences parasite growth estimates. The most appropriate statistical model to estimate the growth rate of blood-stage parasites in IBSM studies appears to be a log-linear model fitted by individual and with the intercept estimated in the log-linear regression. Future studies should use this model to estimate parasite growth rates.
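The contrast between fixed- and free-intercept fits is easy to reproduce with ordinary least squares on log10 parasitaemia. A sketch with made-up data (times in days; not values from the studies):

```python
def fit_loglinear(times, log10_parasites, fixed_intercept=None):
    """Least-squares fit of log10 parasitaemia vs time.
    Returns (intercept, slope); the slope is the growth rate in
    log10 parasites per mL per day. If fixed_intercept is given, only
    the slope is estimated (regression through that fixed point)."""
    n = len(times)
    if fixed_intercept is not None:
        # minimise sum (y - a0 - b*t)^2 over b only
        num = sum(t * (y - fixed_intercept)
                  for t, y in zip(times, log10_parasites))
        den = sum(t * t for t in times)
        return fixed_intercept, num / den
    tbar = sum(times) / n
    ybar = sum(log10_parasites) / n
    b = (sum((t - tbar) * (y - ybar)
             for t, y in zip(times, log10_parasites))
         / sum((t - tbar) ** 2 for t in times))
    return ybar - b * tbar, b
```

Forcing the intercept to an inoculum value below the regression's own intercept tilts the fitted line, which is the bias in growth rate estimates the paper reports.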

  2. Application of Aerosol Hygroscopicity Measured at the Atmospheric Radiation Measurement Program's Southern Great Plains Site to Examine Composition and Evolution

    NASA Technical Reports Server (NTRS)

    Gasparini, Roberto; Runjun, Li; Collins, Don R.; Ferrare, Richard A.; Brackett, Vincent G.

    2006-01-01

    A Differential Mobility Analyzer/Tandem Differential Mobility Analyzer (DMA/TDMA) was used to measure submicron aerosol size distributions, hygroscopicity, and occasionally volatility during the May 2003 Aerosol Intensive Operational Period (IOP) at the Central Facility of the Atmospheric Radiation Measurement Program's Southern Great Plains (ARM SGP) site. Hygroscopic growth factor distributions for particles at eight dry diameters ranging from 0.012 micrometers to 0.600 micrometers were measured throughout the study. For a subset of particle sizes, more detailed measurements were occasionally made in which the relative humidity or temperature to which the aerosol was exposed was varied over a wide range. These measurements, in conjunction with backtrajectory clustering, were used to infer aerosol composition and to gain insight into the processes responsible for evolution. The hygroscopic growth of both the smallest and largest particles analyzed was typically less than that of particles with dry diameters of about 0.100 micrometers. It is speculated that condensation of secondary organic aerosol on nucleation mode particles is largely responsible for the minimal hygroscopic growth observed at the smallest sizes considered. Growth factor distributions of the largest particles characterized typically contained a nonhygroscopic mode believed to be composed primarily of dust. A model was developed to characterize the hygroscopic properties of particles within a size distribution mode through analysis of the fixed size hygroscopic growth measurements. The performance of this model was quantified through comparison of the measured fixed size hygroscopic growth factor distributions with those simulated through convolution of the size-resolved concentration contributed by each of the size modes and the mode-resolved hygroscopicity. This transformation from size-resolved hygroscopicity to mode-resolved hygroscopicity facilitated examination of changes in the hygroscopic properties of particles within a size distribution mode that accompanied changes in the sizes of those particles. This model was used to examine three specific cases in which the sampled aerosol evolved slowly over a period of hours or days.

  3. Tissue recommendations for precision cancer therapy using next generation sequencing: a comprehensive single cancer center’s experiences

    PubMed Central

    Hong, Mineui; Bang, Heejin; Van Vrancken, Michael; Kim, Seungtae; Lee, Jeeyun; Park, Se Hoon; Park, Joon Oh; Park, Young Suk; Lim, Ho Yeong; Kang, Won Ki; Sun, Jong-Mu; Lee, Se Hoon; Ahn, Myung-Ju; Park, Keunchil; Kim, Duk Hwan; Lee, Seunggwan; Park, Woongyang; Kim, Kyoung-Mee

    2017-01-01

    To generate accurate next-generation sequencing (NGS) data, the amount and quality of DNA extracted is critical. We analyzed 1564 tissue samples from patients with metastatic or recurrent solid tumors submitted for NGS according to their sample size, acquisition method, organ, and fixation to propose appropriate tissue requirements. Of the 1564 tissue samples, 481 (30.8%) consisted of fresh-frozen (FF) tissue, and 1083 (69.2%) consisted of formalin-fixed paraffin-embedded (FFPE) tissue. We obtained successful NGS results in 95.9% of cases. Of the 481 FF biopsies, 262 tissue samples were from lung, and the mean fragment size was 2.4 mm. Compared to lung, GI tract tumor fragments showed a significantly lower DNA extraction failure rate (2.1% versus 6.1%, p = 0.04). For FFPE biopsy samples, the size of biopsy tissue was similar regardless of tumor type, with a mean of 0.8 × 0.3 cm, and the mean DNA yield per unstained slide was 114 ng. We obtained the highest amount of DNA from the colorectum (2353 ng) and the lowest amount from the hepatobiliary tract (760.3 ng), likely due to a relatively smaller biopsy size, extensive hemorrhage and necrosis, and lower tumor volume. On one unstained slide from FFPE operation specimens, the mean size of the specimen was 2.0 × 1.0 cm, and the mean DNA yield per unstained slide was 1800 ng. In conclusion, we present our experiences on tissue requirements for an appropriate NGS workflow: > 1 mm2 for FF biopsy, > 5 unstained slides for FFPE biopsy, and > 1 unstained slide for FFPE operation specimens, for successful test results in 95.9% of cases. PMID:28477007

  4. Statistical analysis of hydrological response in urbanising catchments based on adaptive sampling using inter-amount times

    NASA Astrophysics Data System (ADS)

    ten Veldhuis, Marie-Claire; Schleiss, Marc

    2017-04-01

    In this study, we introduced an alternative approach for analysis of hydrological flow time series, using an adaptive sampling framework based on inter-amount times (IATs). The main difference with conventional flow time series is the rate at which low and high flows are sampled: the unit of analysis for IATs is a fixed flow amount, instead of a fixed time window. We analysed statistical distributions of flows and IATs across a wide range of sampling scales to investigate the sensitivity of statistical properties such as quantiles, variance, skewness, scaling parameters and flashiness indicators to the sampling scale. We did this based on streamflow time series for 17 (semi)urbanised basins in North Carolina, US, ranging from 13 km2 to 238 km2 in size. Results showed that adaptive sampling of flow time series based on inter-amounts leads to a more balanced representation of low flow and peak flow values in the statistical distribution. While conventional sampling gives a lot of weight to low flows, as these are most ubiquitous in flow time series, IAT sampling gives relatively more weight to high flow values, since given flow amounts are accumulated in shorter times. As a consequence, IAT sampling gives more information about the tail of the distribution associated with high flows, while conventional sampling gives relatively more information about low flow periods. We will present results of statistical analyses across a range of subdaily to seasonal scales and will highlight some interesting insights that can be derived from IAT statistics with respect to basin flashiness and the impact of urbanisation on hydrological response.
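Inter-amount-time sampling can be sketched directly: accumulate flow volume over the series and record the (interpolated) time at which each successive fixed amount is reached. A minimal illustration under a uniform-time-step assumption (not the authors' code):

```python
def inter_amount_times(flows, dt, amount):
    """Inter-amount times: the time needed to accumulate each successive
    fixed `amount` of volume from flow rates sampled every `dt`.
    Crossings are linearly interpolated inside a time step."""
    iats, cum, last_t, target = [], 0.0, 0.0, amount
    for i, q in enumerate(flows):
        step_vol = q * dt
        # a single step with high flow can produce several crossings
        while step_vol > 0 and cum + step_vol >= target:
            frac = (target - cum) / step_vol   # fraction of step to target
            t_cross = i * dt + frac * dt
            iats.append(t_cross - last_t)
            last_t, target = t_cross, target + amount
        cum += step_vol
    return iats
```

High flows yield many short IATs while low-flow periods yield few long ones, which is why the IAT distribution weights peak flows more heavily than fixed-time sampling does.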

  5. Spatial Distribution and Sampling Plans With Fixed Level of Precision for Citrus Aphids (Hom., Aphididae) on Two Orange Species.

    PubMed

    Kafeshani, Farzaneh Alizadeh; Rajabpour, Ali; Aghajanzadeh, Sirous; Gholamian, Esmaeil; Farkhari, Mohammad

    2018-04-02

Aphis spiraecola Patch, Aphis gossypii Glover, and Toxoptera aurantii Boyer de Fonscolombe are three important aphid pests of citrus orchards. In this study, spatial distributions of the aphids on two orange species, Satsuma mandarin and Thomson navel, were evaluated using Taylor's power law and Iwao's patchiness regression. In addition, a fixed-precision sequential sampling plan was developed for each species on each host plant using Green's model at precision levels of 0.25 and 0.1. The results revealed that spatial distribution parameters, and therefore the sampling plan, differed significantly with aphid and host plant species. Taylor's power law provided a better fit to the data than Iwao's patchiness regression. Except for T. aurantii on Thomson navel orange, the spatial distribution patterns of the aphids were aggregated on both citrus species; T. aurantii had a regular dispersion pattern on Thomson navel orange. Optimum sample size varied from 30 to 2061 shoots on Satsuma mandarin and from 1 to 1622 shoots on Thomson navel orange, depending on aphid species and desired precision level. Calculated stop lines of the aphid species on Satsuma mandarin and Thomson navel orange ranged from 0.48 to 19 and from 0.19 to 80.4 aphids per 24 shoots, respectively, according to aphid species and desired precision level. The performance of the sampling plan was validated by resampling analysis using the Resampling for Validation of Sampling Plans (RVSP) software. This sampling program is useful for IPM programs targeting these aphids in citrus orchards.
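Green's fixed-precision stop line, used in this abstract, can be computed directly from Taylor's power law coefficients a and b (variance = a · mean^b) and the desired precision D (standard error divided by the mean). The sketch below uses the standard form of Green's (1970) model; the coefficient values in the test are invented for illustration and are not the ones estimated in the study:

```python
import math

def green_stop_line(n, a, b, precision):
    """Cumulative-count stop line T_n of Green's fixed-precision sequential
    sampling plan: stop sampling once the running total of individuals
    counted after n sample units reaches T_n.

    a, b: Taylor's power law coefficients (variance = a * mean**b, b != 2).
    precision: desired ratio of standard error to mean (e.g., 0.25 or 0.1).
    """
    log_tn = (math.log(precision ** 2 / a) / (b - 2)
              + ((b - 1) / (b - 2)) * math.log(n))
    return math.exp(log_tn)
```

For an aggregated population (b > 1) the required cumulative count falls as more sample units are taken, so sampling stops sooner when aphids are abundant.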

  6. Optimization of scat detection methods for a social ungulate, the wild pig, and experimental evaluation of factors affecting detection of scat

    USGS Publications Warehouse

    Keiter, David A.; Cunningham, Fred L.; Rhodes, Olin E.; Irwin, Brian J.; Beasley, James

    2016-01-01

    Collection of scat samples is common in wildlife research, particularly for genetic capture-mark-recapture applications. Due to high degradation rates of genetic material in scat, large numbers of samples must be collected to generate robust estimates. Optimization of sampling approaches to account for taxa-specific patterns of scat deposition is, therefore, necessary to ensure sufficient sample collection. While scat collection methods have been widely studied in carnivores, research to maximize scat collection and noninvasive sampling efficiency for social ungulates is lacking. Further, environmental factors or scat morphology may influence detection of scat by observers. We contrasted performance of novel radial search protocols with existing adaptive cluster sampling protocols to quantify differences in observed amounts of wild pig (Sus scrofa) scat. We also evaluated the effects of environmental (percentage of vegetative ground cover and occurrence of rain immediately prior to sampling) and scat characteristics (fecal pellet size and number) on the detectability of scat by observers. We found that 15- and 20-m radial search protocols resulted in greater numbers of scats encountered than the previously used adaptive cluster sampling approach across habitat types, and that fecal pellet size, number of fecal pellets, percent vegetative ground cover, and recent rain events were significant predictors of scat detection. Our results suggest that use of a fixed-width radial search protocol may increase the number of scats detected for wild pigs, or other social ungulates, allowing more robust estimation of population metrics using noninvasive genetic sampling methods. Further, as fecal pellet size affected scat detection, juvenile or smaller-sized animals may be less detectable than adult or large animals, which could introduce bias into abundance estimates. 
Knowledge of relationships between environmental variables and scat detection may allow researchers to optimize sampling protocols to maximize utility of noninvasive sampling for wild pigs and other social ungulates.

  7. Optimization of scat detection methods for a social ungulate, the wild pig, and experimental evaluation of factors affecting detection of scat

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keiter, David A.; Cunningham, Fred L.; Rhodes, Jr., Olin E.

Collection of scat samples is common in wildlife research, particularly for genetic capture-mark-recapture applications. Due to high degradation rates of genetic material in scat, large numbers of samples must be collected to generate robust estimates. Optimization of sampling approaches to account for taxa-specific patterns of scat deposition is, therefore, necessary to ensure sufficient sample collection. While scat collection methods have been widely studied in carnivores, research to maximize scat collection and noninvasive sampling efficiency for social ungulates is lacking. Further, environmental factors or scat morphology may influence detection of scat by observers. We contrasted performance of novel radial search protocols with existing adaptive cluster sampling protocols to quantify differences in observed amounts of wild pig (Sus scrofa) scat. We also evaluated the effects of environmental (percentage of vegetative ground cover and occurrence of rain immediately prior to sampling) and scat characteristics (fecal pellet size and number) on the detectability of scat by observers. We found that 15- and 20-m radial search protocols resulted in greater numbers of scats encountered than the previously used adaptive cluster sampling approach across habitat types, and that fecal pellet size, number of fecal pellets, percent vegetative ground cover, and recent rain events were significant predictors of scat detection. Our results suggest that use of a fixed-width radial search protocol may increase the number of scats detected for wild pigs, or other social ungulates, allowing more robust estimation of population metrics using noninvasive genetic sampling methods. Further, as fecal pellet size affected scat detection, juvenile or smaller-sized animals may be less detectable than adult or large animals, which could introduce bias into abundance estimates. 
In conclusion, knowledge of relationships between environmental variables and scat detection may allow researchers to optimize sampling protocols to maximize utility of noninvasive sampling for wild pigs and other social ungulates.

  8. Optimization of Scat Detection Methods for a Social Ungulate, the Wild Pig, and Experimental Evaluation of Factors Affecting Detection of Scat.

    PubMed

    Keiter, David A; Cunningham, Fred L; Rhodes, Olin E; Irwin, Brian J; Beasley, James C

    2016-01-01

    Collection of scat samples is common in wildlife research, particularly for genetic capture-mark-recapture applications. Due to high degradation rates of genetic material in scat, large numbers of samples must be collected to generate robust estimates. Optimization of sampling approaches to account for taxa-specific patterns of scat deposition is, therefore, necessary to ensure sufficient sample collection. While scat collection methods have been widely studied in carnivores, research to maximize scat collection and noninvasive sampling efficiency for social ungulates is lacking. Further, environmental factors or scat morphology may influence detection of scat by observers. We contrasted performance of novel radial search protocols with existing adaptive cluster sampling protocols to quantify differences in observed amounts of wild pig (Sus scrofa) scat. We also evaluated the effects of environmental (percentage of vegetative ground cover and occurrence of rain immediately prior to sampling) and scat characteristics (fecal pellet size and number) on the detectability of scat by observers. We found that 15- and 20-m radial search protocols resulted in greater numbers of scats encountered than the previously used adaptive cluster sampling approach across habitat types, and that fecal pellet size, number of fecal pellets, percent vegetative ground cover, and recent rain events were significant predictors of scat detection. Our results suggest that use of a fixed-width radial search protocol may increase the number of scats detected for wild pigs, or other social ungulates, allowing more robust estimation of population metrics using noninvasive genetic sampling methods. Further, as fecal pellet size affected scat detection, juvenile or smaller-sized animals may be less detectable than adult or large animals, which could introduce bias into abundance estimates. 
Knowledge of relationships between environmental variables and scat detection may allow researchers to optimize sampling protocols to maximize utility of noninvasive sampling for wild pigs and other social ungulates.

  9. Optimization of scat detection methods for a social ungulate, the wild pig, and experimental evaluation of factors affecting detection of scat

    DOE PAGES

    Keiter, David A.; Cunningham, Fred L.; Rhodes, Jr., Olin E.; ...

    2016-05-25

Collection of scat samples is common in wildlife research, particularly for genetic capture-mark-recapture applications. Due to high degradation rates of genetic material in scat, large numbers of samples must be collected to generate robust estimates. Optimization of sampling approaches to account for taxa-specific patterns of scat deposition is, therefore, necessary to ensure sufficient sample collection. While scat collection methods have been widely studied in carnivores, research to maximize scat collection and noninvasive sampling efficiency for social ungulates is lacking. Further, environmental factors or scat morphology may influence detection of scat by observers. We contrasted performance of novel radial search protocols with existing adaptive cluster sampling protocols to quantify differences in observed amounts of wild pig (Sus scrofa) scat. We also evaluated the effects of environmental (percentage of vegetative ground cover and occurrence of rain immediately prior to sampling) and scat characteristics (fecal pellet size and number) on the detectability of scat by observers. We found that 15- and 20-m radial search protocols resulted in greater numbers of scats encountered than the previously used adaptive cluster sampling approach across habitat types, and that fecal pellet size, number of fecal pellets, percent vegetative ground cover, and recent rain events were significant predictors of scat detection. Our results suggest that use of a fixed-width radial search protocol may increase the number of scats detected for wild pigs, or other social ungulates, allowing more robust estimation of population metrics using noninvasive genetic sampling methods. Further, as fecal pellet size affected scat detection, juvenile or smaller-sized animals may be less detectable than adult or large animals, which could introduce bias into abundance estimates. 
In conclusion, knowledge of relationships between environmental variables and scat detection may allow researchers to optimize sampling protocols to maximize utility of noninvasive sampling for wild pigs and other social ungulates.

  10. THE EFFECTS OF FIXED VERSUS ESCALATING REINFORCEMENT SCHEDULES ON SMOKING ABSTINENCE

    PubMed Central

    Romanowich, Paul; Lamb, R. J.

    2015-01-01

    Studies indicate that when abstinence is initiated, escalating reinforcement schedules maintain continuous abstinence longer than fixed reinforcement schedules. However, these studies were conducted for shorter durations than most clinical trials and also resulted in larger reinforcer value for escalating participants during the 1st week of the experiment. We tested whether escalating reinforcement schedules maintained abstinence longer than fixed reinforcement schedules in a 12-week clinical trial. Smokers (146) were randomized to an escalating reinforcement schedule, a fixed reinforcement schedule, or a control condition. Escalating reinforcement participants received $5.00 for their first breath carbon monoxide (CO) sample <3 ppm, with a $0.50 increase for each consecutive sample. Fixed reinforcement participants received $19.75 for each breath CO sample <3 ppm. Control participants received payments only for delivering a breath CO sample. Similar proportions of escalating and fixed reinforcement participants met the breath CO criterion at least once. Escalating reinforcement participants maintained criterion breath CO levels longer than fixed reinforcement and control participants. Similar to previous short-term studies, escalating reinforcement schedules maintained longer durations of abstinence than fixed reinforcement schedules during a clinical trial. PMID:25640764
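The two payment schedules in this abstract are simple to compute: escalating pays $5.00 for the first abstinent sample plus $0.50 more for each consecutive one, while fixed pays a flat $19.75 per abstinent sample. The per-sample amounts below are taken from the abstract; the number of visits in the usage example is our assumption:

```python
def escalating_total(n_samples, start=5.00, step=0.50):
    """Total payment for n consecutive abstinent samples on the
    escalating schedule: start, start+step, start+2*step, ..."""
    return sum(start + step * i for i in range(n_samples))

def fixed_total(n_samples, per_sample=19.75):
    """Total payment for n abstinent samples on the fixed schedule."""
    return per_sample * n_samples
```

For short abstinence runs the fixed schedule pays more per sample, while the escalating schedule back-loads value onto sustained abstinence, which is the mechanism the study tests.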

  11. Winter home-range characteristics of American Marten (Martes americana) in Northern Wisconsin

    Treesearch

    Joseph B. Dumyahn; Patrick A. Zollner

    2007-01-01

We estimated home-range size for American marten (Martes americana) in northern Wisconsin during the winter months of 2001-2004, and compared the proportion of cover-type selection categories (highly used, neutral, and avoided) among home ranges (95% fixed-kernel), core areas (50% fixed-kernel) and the study area. Average winter home-range size was 3....

  12. 40 CFR 113.4 - Size classes and associated liability limits for fixed onshore oil storage facilities, 1,000...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 23 2012-07-01 2012-07-01 false Size classes and associated liability limits for fixed onshore oil storage facilities, 1,000 barrels or less capacity. 113.4 Section 113.4 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS LIABILITY LIMITS FOR...

  13. 40 CFR 113.4 - Size classes and associated liability limits for fixed onshore oil storage facilities, 1,000...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 22 2014-07-01 2013-07-01 true Size classes and associated liability limits for fixed onshore oil storage facilities, 1,000 barrels or less capacity. 113.4 Section 113.4 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS LIABILITY LIMITS FOR...

  14. Effect of particle size in preparative reversed-phase high-performance liquid chromatography on the isolation of epigallocatechin gallate from Korean green tea.

    PubMed

    Kim, Jung Il; Hong, Seung Bum; Row, Kyung Ho

    2002-03-08

To isolate epigallocatechin gallate (EGCG) of the catechin compounds from Korean green tea (Bosung, Chonnam), a C18 reversed-phase preparative column (250x22 mm) packed with packings of three different sizes (15, 40-63, and 150 microm) was used. The sample extracted with water was partitioned with chloroform and ethyl acetate to remove the impurities, including caffeine. The mobile phases in this experiment were composed of 0.1% acetic acid in water, acetonitrile, methanol and ethyl acetate. The injection volume was fixed at 400 microl, and the flow rate was increased as the particle size became larger. The isolation of EGCG was compared across particle sizes at a preparative scale, and the feasibility of separating EGCG at larger particle sizes was confirmed. The optimum mobile phase composition for separating EGCG was obtained experimentally at particle sizes of 15 and 40-63 microm in the isocratic mode, but EGCG was not purely separated at the 150 microm particle size.

  15. Fixed-interval matching-to-sample: intermatching time and intermatching error runs1

    PubMed Central

    Nelson, Thomas D.

    1978-01-01

    Four pigeons were trained on a matching-to-sample task in which reinforcers followed either the first matching response (fixed interval) or the fifth matching response (tandem fixed-interval fixed-ratio) that occurred 80 seconds or longer after the last reinforcement. Relative frequency distributions of the matching-to-sample responses that concluded intermatching times and runs of mismatches (intermatching error runs) were computed for the final matching responses directly followed by grain access and also for the three matching responses immediately preceding the final match. Comparison of these two distributions showed that the fixed-interval schedule arranged for the preferential reinforcement of matches concluding relatively extended intermatching times and runs of mismatches. Differences in matching accuracy and rate during the fixed interval, compared to the tandem fixed-interval fixed-ratio, suggested that reinforcers following matches concluding various intermatching times and runs of mismatches influenced the rate and accuracy of the last few matches before grain access, but did not control rate and accuracy throughout the entire fixed-interval period. PMID:16812032

  16. Genetic Mapping of Fixed Phenotypes: Disease Frequency as a Breed Characteristic

    PubMed Central

    Jones, Paul; Martin, Alan; Ostrander, Elaine A.; Lark, Karl G.

    2009-01-01

Traits that have been stringently selected to conform to specific criteria in a closed population are phenotypic stereotypes. In dogs, Canis familiaris, such stereotypes have been produced by breeding for conformation, performance (behaviors), etc. We measured phenotypes on a representative sample to establish breed stereotypes. DNA samples from 147 dog breeds were used to characterize single nucleotide polymorphism allele frequencies for association mapping of breed stereotypes. We identified significant size loci (quantitative trait loci [QTLs]), implicating candidate genes appropriate to regulation of size (e.g., IGF1, IGF2BP2, SMAD2, etc.). Analysis of other morphological stereotypes, also under extreme selection, identified many additional significant loci. Behavioral loci for herding, pointing, and boldness implicated candidate genes appropriate to behavior (e.g., MC2R, DRD1, and PCDH9). Significant loci for longevity, a breed characteristic inversely correlated with breed size, were identified. The power of this approach to identify loci regulating the incidence of specific polygenic diseases is demonstrated by the association of a specific IGF1 haplotype with hip dysplasia, patella luxation, and pancreatitis. PMID:19321632

  17. Genetic mapping of fixed phenotypes: disease frequency as a breed characteristic.

    PubMed

    Chase, Kevin; Jones, Paul; Martin, Alan; Ostrander, Elaine A; Lark, Karl G

    2009-01-01

Traits that have been stringently selected to conform to specific criteria in a closed population are phenotypic stereotypes. In dogs, Canis familiaris, such stereotypes have been produced by breeding for conformation, performance (behaviors), etc. We measured phenotypes on a representative sample to establish breed stereotypes. DNA samples from 147 dog breeds were used to characterize single nucleotide polymorphism allele frequencies for association mapping of breed stereotypes. We identified significant size loci (quantitative trait loci [QTLs]), implicating candidate genes appropriate to regulation of size (e.g., IGF1, IGF2BP2, SMAD2, etc.). Analysis of other morphological stereotypes, also under extreme selection, identified many additional significant loci. Behavioral loci for herding, pointing, and boldness implicated candidate genes appropriate to behavior (e.g., MC2R, DRD1, and PCDH9). Significant loci for longevity, a breed characteristic inversely correlated with breed size, were identified. The power of this approach to identify loci regulating the incidence of specific polygenic diseases is demonstrated by the association of a specific IGF1 haplotype with hip dysplasia, patella luxation, and pancreatitis.

  18. Development of sampling plans for cotton bolls injured by stink bugs (Hemiptera: Pentatomidae).

    PubMed

    Reay-Jones, F P F; Toews, M D; Greene, J K; Reeves, R B

    2010-04-01

    Cotton, Gossypium hirsutum L., bolls were sampled in commercial fields for stink bug (Hemiptera: Pentatomidae) injury during 2007 and 2008 in South Carolina and Georgia. Across both years of this study, boll-injury percentages averaged 14.8 +/- 0.3 (SEM). At average boll injury treatment levels of 10, 20, 30, and 50%, the percentage of samples with at least one injured boll was 82, 97, 100, and 100%, respectively. Percentage of field-sampling date combinations with average injury < 10, 20, 30, and 50% was 35, 80, 95, and 99%, respectively. At the average of 14.8% boll injury or 2.9 injured bolls per 20-boll sample, 112 samples at Dx = 0.1 (within 10% of the mean) were required for population estimation, compared with only 15 samples at Dx = 0.3. Using a sample size of 20 bolls, our study indicated that, at the 10% threshold and alpha = beta = 0.2 (with 80% confidence), control was not needed when <1.03 bolls were injured. The sampling plan required continued sampling for a range of 1.03-3.8 injured bolls per 20-boll sample. Only when injury was > 3.8 injured bolls per 20-boll sample was a control measure needed. Sequential sampling plans were also determined for thresholds of 20, 30, and 50% injured bolls. Sample sizes for sequential sampling plans were significantly reduced when compared with a fixed sampling plan (n=10) for all thresholds and error rates.
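The sequential decision rule reported in this abstract for the 10% threshold reduces to three zones per 20-boll sample: below about 1.03 injured bolls, no control is needed; above about 3.8, control is warranted; in between, sampling continues. The limits below are the published values for alpha = beta = 0.2; the function wrapping them is our illustration:

```python
def stink_bug_decision(injured_bolls, lower=1.03, upper=3.8):
    """Classify a 20-boll sample under the 10% injury threshold
    (decision limits from the published sequential sampling plan)."""
    if injured_bolls < lower:
        return "no control"
    if injured_bolls > upper:
        return "control"
    return "continue sampling"
```

This is why sequential plans need fewer samples on average than the fixed n = 10 plan: sampling stops as soon as the running count leaves the indecision zone.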

  19. Apparatus and method for measuring minority carrier lifetimes in semiconductor materials

    DOEpatents

    Ahrenkiel, Richard K.; Johnston, Steven W.

    2001-01-01

    An apparatus for determining the minority carrier lifetime of a semiconductor sample includes a positioner for moving the sample relative to a coil. The coil is connected to a bridge circuit such that the impedance of one arm of the bridge circuit is varied as sample is positioned relative to the coil. The sample is positioned relative to the coil such that any change in the photoconductance of the sample created by illumination of the sample creates a linearly related change in the input impedance of the bridge circuit. In addition, the apparatus is calibrated to work at a fixed frequency so that the apparatus maintains a consistently high sensitivity and high linearity for samples of different sizes, shapes, and material properties. When a light source illuminates the sample, the impedance of the bridge circuit is altered as excess carriers are generated in the sample, thereby producing a measurable signal indicative of the minority carrier lifetimes or recombination rates of the sample.

  20. Apparatus for measuring minority carrier lifetimes in semiconductor materials

    DOEpatents

    Ahrenkiel, R.K.

    1999-07-27

An apparatus for determining the minority carrier lifetime of a semiconductor sample includes a positioner for moving the sample relative to a coil. The coil is connected to a bridge circuit such that the impedance of one arm of the bridge circuit is varied as the sample is positioned relative to the coil. The sample is positioned relative to the coil such that any change in the photoconductance of the sample created by illumination of the sample creates a linearly related change in the input impedance of the bridge circuit. In addition, the apparatus is calibrated to work at a fixed frequency so that the apparatus maintains a consistently high sensitivity and high linearity for samples of different sizes, shapes, and material properties. When a light source illuminates the sample, the impedance of the bridge circuit is altered as excess carriers are generated in the sample, thereby producing a measurable signal indicative of the minority carrier lifetimes or recombination rates of the sample. 17 figs.

  1. Apparatus for measuring minority carrier lifetimes in semiconductor materials

    DOEpatents

    Ahrenkiel, Richard K.

    1999-01-01

An apparatus for determining the minority carrier lifetime of a semiconductor sample includes a positioner for moving the sample relative to a coil. The coil is connected to a bridge circuit such that the impedance of one arm of the bridge circuit is varied as the sample is positioned relative to the coil. The sample is positioned relative to the coil such that any change in the photoconductance of the sample created by illumination of the sample creates a linearly related change in the input impedance of the bridge circuit. In addition, the apparatus is calibrated to work at a fixed frequency so that the apparatus maintains a consistently high sensitivity and high linearity for samples of different sizes, shapes, and material properties. When a light source illuminates the sample, the impedance of the bridge circuit is altered as excess carriers are generated in the sample, thereby producing a measurable signal indicative of the minority carrier lifetimes or recombination rates of the sample.

  2. Liquid-based cytology and cell block immunocytochemistry in veterinary medicine: comparison with standard cytology for the evaluation of canine lymphoid samples.

    PubMed

    Fernandes, N C C A; Guerra, J M; Réssio, R A; Wasques, D G; Etlinger-Colonelli, D; Lorente, S; Nogueira, E; Dagli, M L Z

    2016-08-01

Liquid-based cytology (LBC) consists of immediate wet cell fixation with automated slide preparation. We applied LBC, cell block (CB) preparation and immunocytochemistry to diagnose canine lymphoma and compared the results with conventional cytology. Samples from enlarged lymph nodes of 18 dogs were collected and fixed in preservative solution for automated slide preparation (LBC), CB inclusion and immunophenotyping. Two CB techniques were tested: the fixed sediment method (FSM) and the agar method (AM). Anti-CD79a, anti-Pax5, anti-CD3 and anti-Ki67 were used in immunocytochemistry. LBC smears showed better nuclear and nucleolar definition, without cell superposition, but presented smaller cell size and worse cytoplasmic definition. FSM CBs showed consistent cellular groups and were employed for immunocytochemistry, while AM CBs presented sparse groups of lymphocytes, which compromised analysis. Anti-Pax-5 allowed B-cell identification, both in reactive and neoplastic lymph nodes. Our preliminary report suggests that LBC and FSM together may be promising tools to improve lymphoma diagnosis through fine-needle aspiration. © 2015 John Wiley & Sons Ltd.

  3. Seabed texture and composition changes offshore of Port Royal Sound, South Carolina before and after the dredging for beach nourishment

    NASA Astrophysics Data System (ADS)

    Xu, Kehui; Sanger, Denise; Riekerk, George; Crowe, Stacie; Van Dolah, Robert F.; Wren, P. Ansley; Ma, Yanxia

    2014-08-01

    Beach nourishment has been a strategy widely used to slow down coastal erosion in many beaches around the world. The dredging of sand at the borrow site, however, can have complicated physical, geological and ecological impacts. Our current knowledge is insufficient to make accurate predictions of sediment infilling in many dredging pits due to lack of detailed sediment data. Two sites in the sandy shoal southeast of Port Royal Sound (PRS) of South Carolina, USA, were sampled 8 times from April 2010 to March 2013; one site (defined as 'borrow site') was 2 km offshore and used as the dredging site for beach nourishment of nearby Hilton Head Island in Beaufort County, South Carolina, and the other site (defined as 'reference site') was 10 km offshore and not directly impacted by the dredging. A total of 184 surficial sediment samples were collected randomly at two sites during 8 sampling periods. Most sediments were fine sand, with an average grain size of 2.3 phi and an organic matter content less than 2%. After the dredging in December 2011-January 2012, sediments at the borrow site became finer, changing from 1.0 phi to 2.3 phi, and carbonate content decreased from 10% to 4%; changes in mud content and organic matter were small. Compared with the reference site, the borrow site experienced larger variations in mud and carbonate content. An additional 228 sub-samples were gathered from small cores collected at 5 fixed stations in the borrow site and 1 fixed station at the reference site 0, 3, 6, 9, and 12 months after the dredging; these down-core sub-samples were divided into 1-cm slices and analyzed using a laser diffraction particle size analyzer. Most cores were uniform vertically and consisted of fine sand with well to moderately well sorting and nearly symmetrical averaged skewness. Based on the analysis of grain size populations, 2 phi- and 3 phi-sized sediments were the most dynamic sand fractions in PRS. 
Mud deposition on shoals offshore of PRS presumably happens when offshore mud transport is prevalent and is followed by rapid sand accumulation that buries the mud. However, at this borrow site there was very little accumulation of mud, which should allow the site to be used in future nourishment projects, presuming mud does not accumulate in the future.

  4. Calibrated Tully-Fisher relations for improved estimates of disc rotation velocities

    NASA Astrophysics Data System (ADS)

    Reyes, R.; Mandelbaum, R.; Gunn, J. E.; Pizagno, J.; Lackner, C. N.

    2011-11-01

    In this paper, we derive scaling relations between photometric observable quantities and disc galaxy rotation velocity Vrot or Tully-Fisher relations (TFRs). Our methodology is dictated by our purpose of obtaining purely photometric, minimal-scatter estimators of Vrot applicable to large galaxy samples from imaging surveys. To achieve this goal, we have constructed a sample of 189 disc galaxies at redshifts z < 0.1 with long-slit Hα spectroscopy from Pizagno et al. and new observations. By construction, this sample is a fair subsample of a large, well-defined parent disc sample of ˜170 000 galaxies selected from the Sloan Digital Sky Survey Data Release 7 (SDSS DR7). The optimal photometric estimator of Vrot we find is stellar mass M★ from Bell et al., based on the linear combination of a luminosity and a colour. Assuming a Kroupa initial mass function (IMF), we find: log [V80/(km s-1)] = (2.142 ± 0.004) + (0.278 ± 0.010)[log (M★/M⊙) - 10.10], where V80 is the rotation velocity measured at the radius R80 containing 80 per cent of the i-band galaxy light. This relation has an intrinsic Gaussian scatter ? dex and a measured scatter σmeas= 0.056 dex in log V80. For a fixed IMF, we find that the dynamical-to-stellar mass ratios within R80, (Mdyn/M★)(R80), decrease from approximately 10 to 3, as stellar mass increases from M★≈ 109 to 1011 M⊙. At a fixed stellar mass, (Mdyn/M★)(R80) increases with disc size, so that it correlates more tightly with stellar surface density than with stellar mass or disc size alone. We interpret the observed variation in (Mdyn/M★)(R80) with disc size as a reflection of the fact that disc size dictates the radius at which Mdyn/M★ is measured, and consequently, the fraction of the dark matter 'seen' by the gas at that radius. For the lowest M★ galaxies, we find a positive correlation between TFR residuals and disc sizes, indicating that the total density profile is dominated by dark matter on these scales. 
For the highest M★ galaxies, we find instead a weak negative correlation, indicating a larger contribution of stars to the total density profile. This change in the sense of the correlation (from positive to negative) is consistent with the decreasing trend in (Mdyn/M★)(R80) with stellar mass. In future work, we will use these results to study disc galaxy formation and evolution and perform a fair, statistical analysis of the dynamics and masses of a photometrically selected sample of disc galaxies.
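The stellar-mass Tully-Fisher relation quoted in this abstract, log10 V80 = 2.142 + 0.278 (log10(M*/Msun) − 10.10), is directly computable. The coefficients below are transcribed from the abstract (Kroupa IMF); the function name is ours:

```python
def v80_from_stellar_mass(log_mstar):
    """Rotation velocity V80 (km/s, measured at the radius enclosing 80%
    of the i-band light) from log10 stellar mass in solar units, using the
    published TFR coefficients (intercept 2.142, slope 0.278, pivot 10.10)."""
    return 10 ** (2.142 + 0.278 * (log_mstar - 10.10))
```

At the pivot mass log10(M*/Msun) = 10.10 this gives V80 = 10^2.142, roughly 139 km/s, and a decade of stellar mass changes V80 by a factor of 10^0.278, about 1.9.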

  5. Using re-randomization to increase the recruitment rate in clinical trials - an assessment of three clinical areas.

    PubMed

    Kahan, Brennan C

    2016-12-13

    Patient recruitment in clinical trials is often challenging, and as a result, many trials are stopped early due to insufficient recruitment. The re-randomization design allows patients to be re-enrolled and re-randomized for each new treatment episode that they experience. Because it allows multiple enrollments for each patient, this design has been proposed as a way to increase the recruitment rate in clinical trials. However, it is unknown to what extent recruitment could be increased in practice. We modelled the expected recruitment rate for parallel-group and re-randomization trials in different settings based on estimates from real trials and datasets. We considered three clinical areas: in vitro fertilization, severe asthma exacerbations, and acute sickle cell pain crises. We compared the two designs in terms of the expected time to complete recruitment, and the sample size recruited over a fixed recruitment period. Across the different scenarios we considered, we estimated that re-randomization could reduce the expected time to complete recruitment by between 4 and 22 months (relative reductions of 19% and 45%), or increase the sample size recruited over a fixed recruitment period by between 29% and 171%. Re-randomization can increase recruitment most for trials with a short follow-up period, a long trial recruitment duration, and patients with high rates of treatment episodes. Re-randomization has the potential to increase the recruitment rate in certain settings, and could lead to quicker and more efficient trials in these scenarios.
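The recruitment advantage of re-randomization described above comes from counting enrollments per treatment episode rather than per patient. A deliberately minimal sketch (the rates in the test are invented, not the trial estimates from the study):

```python
def expected_enrollments(n_patients, episodes_per_patient, re_randomize):
    """Expected number of randomized enrollments over a fixed recruitment
    period: under re-randomization each eligible treatment episode can be
    enrolled; under a parallel-group design each patient counts once."""
    return n_patients * (episodes_per_patient if re_randomize else 1)
```

This makes explicit why the gain is largest for conditions with frequent episodes (e.g., asthma exacerbations or sickle cell pain crises) and short follow-up, as the abstract concludes.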

  6. System-size convergence of point defect properties: The case of the silicon vacancy

    NASA Astrophysics Data System (ADS)

    Corsetti, Fabiano; Mostofi, Arash A.

    2011-07-01

We present a comprehensive study of the vacancy in bulk silicon in all its charge states from 2+ to 2-, using a supercell approach within plane-wave density-functional theory, and systematically quantify the various contributions to the well-known finite-size errors associated with calculating formation energies and stable charge state transition levels of isolated defects with periodic boundary conditions. Furthermore, we find that transition levels converge faster with respect to supercell size when only the Γ-point is sampled in the Brillouin zone, as opposed to a dense k-point sampling. This arises from the fact that the defect level at the Γ-point quickly converges to a fixed value which correctly describes the bonding at the defect center. Our calculated transition levels with 1000-atom supercells and Γ-point only sampling are in good agreement with available experimental results. We also demonstrate two simple and accurate approaches for calculating the valence band offsets that are required for computing formation energies of charged defects, one based on a potential averaging scheme and the other using maximally-localized Wannier functions (MLWFs). Finally, we show that MLWFs provide a clear description of the nature of the electronic bonding at the defect center that verifies the canonical Watkins model.

  7. Survival distributions impact the power of randomized placebo-phase design and parallel groups randomized clinical trials.

    PubMed

    Abrahamyan, Lusine; Li, Chuan Silvia; Beyene, Joseph; Willan, Andrew R; Feldman, Brian M

    2011-03-01

The study evaluated the power of the randomized placebo-phase design (RPPD), a new design of randomized clinical trials (RCTs), compared with the traditional parallel groups design, assuming various response time distributions. In the RPPD, at some point, all subjects receive the experimental therapy, and the exposure to placebo is for only a short fixed period of time. For the study, an object-oriented simulation program was written in R. The power of the simulated trials was evaluated using six scenarios, where the treatment response times followed the exponential, Weibull, or lognormal distributions. The median response time was assumed to be 355 days for the placebo and 42 days for the experimental drug. Based on the simulation results, the sample size requirements to achieve the same level of power differed across the treatment response time distributions. The scenario where the response times followed the exponential distribution had the highest sample size requirement. In most scenarios, the parallel groups RCT had higher power compared with the RPPD. The sample size requirement varies depending on the underlying hazard distribution. The RPPD requires more subjects to achieve a similar power to the parallel groups design. Copyright © 2011 Elsevier Inc. All rights reserved.

  8. The Impact of Environment on the Stellar Mass–Halo Mass Relation

    NASA Astrophysics Data System (ADS)

    Golden-Marx, Jesse B.; Miller, Christopher J.

    2018-06-01

A large variance exists in the amplitude of the stellar mass–halo mass (SMHM) relation for group- and cluster-size halos. Using a sample of 254 clusters, we show that the magnitude gap between the brightest central galaxy (BCG) and its second or fourth brightest neighbor accounts for a significant portion of this variance. We find that at fixed halo mass, galaxy clusters with a larger magnitude gap have a higher BCG stellar mass. This relationship is also observed in semi-analytic representations of low-redshift galaxy clusters in simulations. This SMHM–magnitude gap stratification likely results from BCG growth via hierarchical mergers and may link the assembly of the halo with the growth of the BCG. Using a Bayesian model, we quantify the importance of the magnitude gap in the SMHM relation via a multiplicative stretch factor, which we find to be significantly non-zero. The inclusion of the magnitude gap in the SMHM relation results in a large reduction in the inferred intrinsic scatter in the BCG stellar mass at fixed halo mass. We discuss the ramifications of this result in the context of galaxy formation models of centrals in group- and cluster-size halos.

  9. Terrestrial-passage theory: failing a test.

    PubMed

    Reed, Charles F; Krupinski, Elizabeth A

    2009-01-01

Terrestrial-passage theory proposes that the 'moon' and 'sky' illusions occur because observers learn to expect an elevation-dependent transformation of visual angle. The transformation accompanies daily movement through ordinary environments of fixed-altitude objects. Celestial objects display the same visual angle at all elevations, and hence are necessarily non-conforming with the ordinary transformation. On this hypothesis, observers should expect angular sizes to appear greater at elevation than at the horizon. However, in a sample of forty-eight observers there was no significant difference between the perceived angular size of a constellation of stars at the horizon and that predicted for a specific elevation. Occurrence of the illusion was not restricted to those observers who expected angular expansion. These findings fail to support the terrestrial-passage theory of the illusion.

  10. Divergent estimation error in portfolio optimization and in linear regression

    NASA Astrophysics Data System (ADS)

    Kondor, I.; Varga-Haszonits, I.

    2008-08-01

The problem of estimation error in portfolio optimization is discussed, in the limit where the portfolio size N and the sample size T go to infinity such that their ratio is fixed. The estimation error strongly depends on the ratio N/T and diverges at a critical value of this parameter. This divergence is the manifestation of an algorithmic phase transition; it is accompanied by a number of critical phenomena and displays universality. As the structure of a large number of multidimensional regression and modelling problems is very similar to portfolio optimization, the scope of these observations extends far beyond finance, and covers a large number of problems in operations research, machine learning, bioinformatics, medical science, economics, and technology.
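For the simplest instance of this setting, minimum-variance optimization with Gaussian returns, related work by Pafka and Kondor gives a closed-form divergence factor, 1/sqrt(1 - N/T). A minimal sketch under that assumption (this formula applies to the simplest case, not necessarily to every problem treated in the paper; names are illustrative):

```python
def risk_inflation_factor(n, t):
    """Asymptotic factor by which the true risk of the estimated
    minimum-variance portfolio exceeds the optimal risk, in the limit
    N, T -> infinity with r = N/T fixed: 1 / sqrt(1 - r).
    Diverges as r -> 1, the critical value noted in the abstract."""
    r = n / t
    if r >= 1.0:
        raise ValueError("N/T >= 1: sample covariance matrix is singular")
    return 1.0 / (1.0 - r) ** 0.5

# The error grows slowly at small N/T and blows up near N/T = 1:
for ratio in (0.1, 0.5, 0.9, 0.99):
    print(ratio, risk_inflation_factor(int(ratio * 1000), 1000))
```

The divergence at N/T = 1 corresponds to the sample covariance matrix becoming singular, which is the algorithmic phase transition the abstract describes.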

  11. Autonomous reinforcement learning with experience replay.

    PubMed

    Wawrzyński, Paweł; Tanwani, Ajay Kumar

    2013-05-01

This paper considers the issues of efficiency and autonomy that are required to make reinforcement learning suitable for real-life control tasks. A real-time reinforcement learning algorithm is presented that repeatedly adjusts the control policy with the use of previously collected samples, and autonomously estimates the appropriate step-sizes for the learning updates. The algorithm is based on the actor-critic with experience replay whose step-sizes are determined on-line by an enhanced fixed point algorithm for on-line neural network training. An experimental study with a simulated octopus arm and half-cheetah demonstrates the feasibility of the proposed algorithm for solving difficult learning control problems in an autonomous way within a reasonably short time. Copyright © 2012 Elsevier Ltd. All rights reserved.

  12. Scalable problems and memory bounded speedup

    NASA Technical Reports Server (NTRS)

    Sun, Xian-He; Ni, Lionel M.

    1992-01-01

In this paper three models of parallel speedup are studied. They are fixed-size speedup, fixed-time speedup and memory-bounded speedup. The latter two consider the relationship between speedup and problem scalability. Two sets of speedup formulations are derived for these three models. One set considers uneven workload allocation and communication overhead and gives more accurate estimation. The other set considers a simplified case and provides a clear picture of the impact of the sequential portion of an application on the possible performance gain from parallel processing. The simplified fixed-size speedup is Amdahl's law. The simplified fixed-time speedup is Gustafson's scaled speedup. The simplified memory-bounded speedup contains both Amdahl's law and Gustafson's scaled speedup as special cases. This study leads to a better understanding of parallel processing.
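The simplified forms of the three speedup models summarized in this abstract have simple closed forms. A minimal sketch (parameter names are illustrative; f is the sequential fraction, n the number of processors, and g stands for the Sun-Ni memory-bounded scaling factor G(n)):

```python
def amdahl_speedup(f, n):
    """Simplified fixed-size speedup (Amdahl's law):
    sequential fraction f, n processors."""
    return 1.0 / (f + (1.0 - f) / n)

def gustafson_speedup(f, n):
    """Simplified fixed-time speedup (Gustafson's scaled speedup)."""
    return f + (1.0 - f) * n

def memory_bounded_speedup(f, n, g):
    """Simplified Sun-Ni memory-bounded speedup with scaling factor g = G(n).
    g = 1 recovers Amdahl's law; g = n recovers Gustafson's speedup."""
    return (f + (1.0 - f) * g) / (f + (1.0 - f) * g / n)

# With a 10% sequential fraction on 64 processors:
print(amdahl_speedup(0.1, 64))     # ≈ 8.77
print(gustafson_speedup(0.1, 64))  # ≈ 57.7
```

Setting g = 1 or g = n in the memory-bounded formula reproduces the two special cases noted in the abstract.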

  13. Understanding Aggregation and Estimating Seasonal Abundance of Chrysaora quinquecirrha Medusae from a Fixed-station Time Series in the Choptank River, Chesapeake Bay

    NASA Astrophysics Data System (ADS)

    Tay, J.; Hood, R. R.

    2016-02-01

Although jellyfish exert strong control over marine plankton dynamics (Richardson et al. 2009, Robison et al. 2014) and negatively impact human commercial and recreational activities (Purcell et al. 2007, Purcell 2012), jellyfish biomass is not well quantified, due primarily to sampling difficulties with plankton nets or fisheries trawls (Haddock 2004). As a result, some of the longest records of jellyfish are visual shore-based surveys, such as the fixed-station time series of Chrysaora quinquecirrha that began in 1960 in the Patuxent River in Chesapeake Bay, USA (Cargo and King 1990). Time series counts from fixed-station surveys capture two signals: 1) demographic change at timescales on the order of reproductive processes and 2) spatial patchiness at shorter timescales as different parcels of water move in and out of the survey area by tidal and estuarine advection and turbulent mixing (Lee and McAlice 1979). In this study, our goal was to separate these two signals using a 4-year time series of C. quinquecirrha medusa counts from a fixed station in the Choptank River, Chesapeake Bay. Idealized modeling of tidal and estuarine advection was used to conceptualize the sampling scheme. Change point and time series analyses were used to detect demographic changes. Indices of aggregation (Negative Binomial coefficient, Taylor's Power Law coefficient, and Morisita's Index) were calculated to describe the spatial patchiness of the medusae. Abundance estimates revealed a bloom cycle that differed in duration and magnitude for each of the study years. Indices of aggregation indicated that medusae were aggregated and that patches grew in the number of individuals, and likely in size, as abundance increased. Further inference from the conceptual modeling suggested that medusae patch structure was generally homogeneous over the tidal extent. This study highlights the benefits of using fixed-station shore-based surveys for understanding the biology and ecology of jellyfish.

  14. A comparison of two sampling designs for fish assemblage assessment in a large river

    USGS Publications Warehouse

    Kiraly, Ian A.; Coghlan, Stephen M.; Zydlewski, Joseph D.; Hayes, Daniel

    2014-01-01

We compared the efficiency of stratified random and fixed-station sampling designs to characterize fish assemblages in anticipation of dam removal on the Penobscot River, the largest river in Maine. We used boat electrofishing methods in both sampling designs. Multiple 500-m transects were selected randomly and electrofished in each of nine strata within the stratified random sampling design. Within the fixed-station design, up to 11 transects (1,000 m) were electrofished, all of which had been sampled previously. In total, 88 km of shoreline were electrofished during summer and fall in 2010 and 2011, and 45,874 individuals of 34 fish species were captured. Species-accumulation and dissimilarity curve analyses indicated that all sampling effort, other than fall 2011 under the fixed-station design, provided repeatable estimates of total species richness and proportional abundances. Overall, our sampling designs were similar in precision and efficiency for sampling fish assemblages. The fixed-station design was negatively biased for estimating the abundance of species such as Common Shiner Luxilus cornutus and Fallfish Semotilus corporalis and was positively biased for estimating biomass for species such as White Sucker Catostomus commersonii and Atlantic Salmon Salmo salar. However, we found no significant differences between the designs for proportional catch and biomass per unit effort, except in fall 2011. The difference observed in fall 2011 was due to limitations on the number and location of fixed sites that could be sampled, rather than an inherent bias within the design. Given the results from sampling in the Penobscot River, application of the stratified random design is preferable to the fixed-station design because it is less prone to bias from varying sampling effort, such as occurred in the fall 2011 fixed-station sample, or from purposeful site selection.

  15. A Real Options Approach to Quantity and Cost Optimization for Lifetime and Bridge Buys of Parts

    DTIC Science & Technology

    2015-04-30

fixed EOS of 40 years and a fixed WACC of 3%, decreases to a minimum and then increases. The minimum of this curve gives the optimum buy size for...considered in both analyses. For a 3% WACC, as illustrated in Figure 9(a), the DES method gives an optimum buy size range of 2,923–3,191 with an average...Hence, both methods are consistent in determining the optimum lifetime/bridge buy size. To further verify this consistency, other WACC values

  16. Simulation of design-unbiased point-to-particle sampling compared to alternatives on plantation rows

    Treesearch

    Thomas B. Lynch; David Hamlin; Mark J. Ducey

    2016-01-01

    Total quantities of tree attributes can be estimated in plantations by sampling on plantation rows using several methods. At random sample points on a row, either fixed row lengths or variable row lengths with a fixed number of sample trees can be assessed. Ratio of means or mean of ratios estimators can be developed for the fixed number of trees option but are not...

  17. An effective detection algorithm for region duplication forgery in digital images

    NASA Astrophysics Data System (ADS)

    Yavuz, Fatih; Bal, Abdullah; Cukur, Huseyin

    2016-04-01

Powerful image editing tools are now common and easy to use, making it easy to forge digital images by adding or removing content. In order to detect such forgeries, in particular region duplication, we present an effective algorithm based on fixed-size block computation and the discrete wavelet transform (DWT). In this approach, the original image is divided into fixed-size blocks, and then the wavelet transform is applied for dimension reduction. Each block is processed by the Fourier transform and represented by circular regions. Four features are extracted from each block. Finally, the feature vectors are sorted lexicographically, and duplicated image blocks are detected according to comparison-metric results. The experimental results show that the proposed algorithm is computationally efficient due to its fixed-size circular block architecture.
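A toy sketch of the general pipeline this abstract describes: fixed-size blocks, a per-block feature vector, lexicographic sorting, and comparison of adjacent entries. The quadrant-mean feature below is a simplified stand-in for the paper's DWT/Fourier circle features, and all names are illustrative:

```python
def block_features(img, block=4):
    """Slide a fixed-size block (even block size assumed) over a grayscale
    image given as a list of rows, and record one feature vector per block:
    the mean of each of its four quadrants. This is a simplified stand-in
    for the paper's DWT + Fourier circle-region features."""
    h, w = len(img), len(img[0])
    half = block // 2
    feats = []
    for y in range(h - block + 1):
        for x in range(w - block + 1):
            quads = []
            for qy in (0, half):
                for qx in (0, half):
                    s = sum(img[y + qy + i][x + qx + j]
                            for i in range(half) for j in range(half))
                    quads.append(s / (half * half))
            feats.append((tuple(quads), (y, x)))
    return feats

def find_duplicates(img, block=4, min_shift=2):
    """Lexicographically sort the feature vectors and report block pairs
    with identical features whose positions are at least min_shift apart
    (to ignore trivially overlapping matches)."""
    feats = sorted(block_features(img, block))
    pairs = []
    for (f1, p1), (f2, p2) in zip(feats, feats[1:]):
        if f1 == f2 and max(abs(p1[0] - p2[0]), abs(p1[1] - p2[1])) >= min_shift:
            pairs.append((p1, p2))
    return pairs
```

Copy-pasting a region within a synthetic image and calling find_duplicates recovers the pair of source and destination block positions; a real detector would use the paper's richer features to tolerate noise and compression.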

  18. Effect of distance-related heterogeneity on population size estimates from point counts

    USGS Publications Warehouse

    Efford, Murray G.; Dawson, Deanna K.

    2009-01-01

Point counts are used widely to index bird populations. Variation in the proportion of birds counted is a known source of error, and for robust inference it has been advocated that counts be converted to estimates of absolute population size. We used simulation to assess nine methods for the conduct and analysis of point counts when the data included distance-related heterogeneity of individual detection probability. Distance from the observer is a ubiquitous source of heterogeneity, because nearby birds are more easily detected than distant ones. Several recent methods (dependent double-observer, time of first detection, time of detection, independent multiple-observer, and repeated counts) do not account for distance-related heterogeneity, at least in their simpler forms. We assessed bias in estimates of population size by simulating counts with fixed radius w over four time intervals (occasions). Detection probability per occasion was modeled as a half-normal function of distance with scale parameter sigma and intercept g(0) = 1.0. Bias varied with sigma/w; for values of sigma inferred from published studies, bias often exceeded 50% for a 100-m fixed-radius count. More critically, the bias of adjusted counts sometimes varied more than that of unadjusted counts, and inference from adjusted counts would be less robust. The problem was not solved by using mixture models or including distance as a covariate. Conventional distance sampling performed well in simulations, but its assumptions are difficult to meet in the field. We conclude that no existing method allows effective estimation of population size from point counts.
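As an illustration of the simulation setup described in this abstract, here is a minimal sketch of the half-normal detection model and the resulting proportion of birds within a fixed radius w detected at least once over repeated occasions (function names and the simple numerical integration are illustrative, not the authors' code):

```python
import math

def halfnormal_g(r, sigma, g0=1.0):
    """Per-occasion detection probability at distance r:
    half-normal with scale sigma and intercept g0 (the abstract uses g0 = 1)."""
    return g0 * math.exp(-r * r / (2.0 * sigma * sigma))

def prop_detected(sigma, w, occasions=4, steps=10000):
    """Expected proportion of birds within radius w detected on at least one
    of the given occasions. Distances are distributed with density 2r/w^2
    (uniform over the circle's area); integrated by a midpoint rule."""
    total = 0.0
    dr = w / steps
    for i in range(steps):
        r = (i + 0.5) * dr
        p_once = 1.0 - (1.0 - halfnormal_g(r, sigma)) ** occasions
        total += p_once * (2.0 * r / (w * w)) * dr
    return total

# Detection drops sharply as sigma shrinks relative to w,
# which is the source of the distance-related bias discussed above:
print(prop_detected(sigma=100.0, w=100.0))
print(prop_detected(sigma=25.0, w=100.0))
```

The gap between these two proportions illustrates why bias in unadjusted counts depends on sigma/w.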

  19. The relationship of motor unit size, firing rate and force.

    PubMed

    Conwit, R A; Stashuk, D; Tracy, B; McHugh, M; Brown, W F; Metter, E J

    1999-07-01

Using a clinical electromyographic (EMG) protocol, motor units were sampled from the quadriceps femoris during isometric contractions at fixed force levels to examine how average motor unit size and firing rate relate to force generation. Mean firing rates (mFRs) and sizes (mean surface-detected motor unit action potential (mS-MUAP) area) of samples of active motor units were assessed at various force levels in 79 subjects. MS-MUAP size increased linearly with increased force generation, while mFR remained relatively constant up to 30% of a maximal force and increased appreciably only at higher force levels. A relationship was found between muscle force and mS-MUAP area (r2 = 0.67), mFR (r2 = 0.38), and the product of mS-MUAP area and mFR (mS-MUAP x mFR) (r2 = 0.70). The results support the hypothesis that motor units are recruited in an orderly manner during forceful contractions, and that in large muscles only at higher levels of contraction (>30% MVC) do mFRs increase appreciably. MS-MUAP and mFR can be assessed using clinical EMG techniques and they may provide a physiological basis for analyzing the role of motor units during muscle force generation.

  20. Effect of geometric size on mechanical properties of dielectric elastomers based on an improved visco-hyperelastic film model

    NASA Astrophysics Data System (ADS)

    Chang, Mengzhou; Wang, Zhenqing; Tong, Liyong; Liang, Wenyan

    2017-03-01

Dielectric polymers show complex mechanical behaviors under different boundary conditions, geometric sizes and pre-stresses. In this work, a viscoelastic model suitable for inhomogeneous deformation is presented by integrating the Kelvin-Voigt model in a new form. For different types of uniaxial tensile tests loaded along the length direction of the sample (single-step-relaxation tests, loading-unloading tests and tensile-creep-relaxation tests), the improved model compares quite favorably with the experimental results. Moreover, the mechanical properties of test samples with several length-width ratios under different boundary conditions are also investigated. The influences of the different boundary conditions are calculated with a stress applied at the boundary point, and the results show that a fixed boundary increases the stress compared with homogeneous deformation. In modeling the effect of pre-stress in the shear test, three pre-stress modes are discussed. The model validation on the general mechanical behavior shows excellent predictive capability.

  1. Experimental light scattering by small particles: system design and calibration

    NASA Astrophysics Data System (ADS)

    Maconi, Göran; Kassamakov, Ivan; Penttilä, Antti; Gritsevich, Maria; Hæggström, Edward; Muinonen, Karri

    2017-06-01

We describe a setup for precise multi-angular measurements of light scattered by mm- to μm-sized samples. We present a calibration procedure that ensures accurate measurements. Calibration is done using a spherical sample (d = 5 mm, n = 1.517) fixed on a static holder. The ultimate goal of the project is to allow accurate multi-wavelength measurements (the full Mueller matrix) of single-particle samples which are levitated ultrasonically. The system comprises a tunable multimode Argon-krypton laser, with 12 wavelengths ranging from 465 to 676 nm, a linear polarizer, a reference photomultiplier tube (PMT) monitoring beam intensity, and several PMTs mounted radially towards the sample at an adjustable radius. The current 150 mm radius allows measuring all azimuthal angles except for ±4° around the backward scattering direction. The measurement angle is controlled by a motor-driven rotational stage with an accuracy of 15'.

  2. Resonant frequency analysis of Timoshenko nanowires with surface stress for different boundary conditions

    NASA Astrophysics Data System (ADS)

    He, Qilu; Lilley, Carmen M.

    2012-10-01

The influence of both surface and shear effects on the resonant frequency of nanowires (NWs) was studied by incorporating the Young-Laplace equation with the Timoshenko beam theory. Face-centered-cubic metal NWs were studied. A dimensional analysis of the resonant frequencies for fixed-fixed gold (100) NWs was compared to molecular dynamics simulations. Silver NWs with diameters from 10 nm to 500 nm were modeled as cantilever, simply supported, and fixed-fixed systems for aspect ratios from 2.5 to 20 to identify the shear, surface, and size effects on the resonant frequencies. The shear effect was found to be more significant than surface effects when the aspect ratios were small (i.e., <5), regardless of size, for the diameters modeled. Finally, as the aspect ratio grows, the surface effect becomes significant for the smaller-diameter NWs.

  3. Protocol for Microplastics Sampling on the Sea Surface and Sample Analysis

    PubMed Central

    Kovač Viršek, Manca; Palatinus, Andreja; Koren, Špela; Peterlin, Monika; Horvat, Petra; Kržan, Andrej

    2016-01-01

    Microplastic pollution in the marine environment is a scientific topic that has received increasing attention over the last decade. The majority of scientific publications address microplastic pollution of the sea surface. The protocol below describes the methodology for sampling, sample preparation, separation and chemical identification of microplastic particles. A manta net fixed on an »A frame« attached to the side of the vessel was used for sampling. Microplastic particles caught in the cod end of the net were separated from samples by visual identification and use of stereomicroscopes. Particles were analyzed for their size using an image analysis program and for their chemical structure using ATR-FTIR and micro FTIR spectroscopy. The described protocol is in line with recommendations for microplastics monitoring published by the Marine Strategy Framework Directive (MSFD) Technical Subgroup on Marine Litter. This written protocol with video guide will support the work of researchers that deal with microplastics monitoring all over the world. PMID:28060297

  4. Protocol for Microplastics Sampling on the Sea Surface and Sample Analysis.

    PubMed

    Kovač Viršek, Manca; Palatinus, Andreja; Koren, Špela; Peterlin, Monika; Horvat, Petra; Kržan, Andrej

    2016-12-16

    Microplastic pollution in the marine environment is a scientific topic that has received increasing attention over the last decade. The majority of scientific publications address microplastic pollution of the sea surface. The protocol below describes the methodology for sampling, sample preparation, separation and chemical identification of microplastic particles. A manta net fixed on an »A frame« attached to the side of the vessel was used for sampling. Microplastic particles caught in the cod end of the net were separated from samples by visual identification and use of stereomicroscopes. Particles were analyzed for their size using an image analysis program and for their chemical structure using ATR-FTIR and micro FTIR spectroscopy. The described protocol is in line with recommendations for microplastics monitoring published by the Marine Strategy Framework Directive (MSFD) Technical Subgroup on Marine Litter. This written protocol with video guide will support the work of researchers that deal with microplastics monitoring all over the world.

  5. Crystal growth in zinc borosilicate glasses

    NASA Astrophysics Data System (ADS)

    Kullberg, Ana T. G.; Lopes, Andreia A. S.; Veiga, João P. B.; Monteiro, Regina C. C.

    2017-01-01

Glass samples with a molar composition (64+x)ZnO-(16-x)B2O3-20SiO2, where x=0 or 1, were successfully synthesized using a melt-quenching technique. Based on differential thermal analysis data, the produced glass samples were submitted to controlled heat-treatments at selected temperatures (610, 615 and 620 °C) for various times ranging from 8 to 30 h. The crystallization of willemite (Zn2SiO4) within the glass matrix was confirmed by means of X-ray diffraction (XRD) and scanning electron microscopy (SEM). Under specific heat-treatment conditions, transparent nanocomposite glass-ceramics were obtained, as confirmed by UV-vis spectroscopy. The influence of temperature, holding time and glass composition on crystal growth was investigated. The mean crystallite size was determined by image analysis on SEM micrographs. The results indicated an increase in crystallite size and density with time and temperature. The change of crystallite size with time for the heat-treatments at 615 and 620 °C depended on the glass composition. Under fixed heat-treatment conditions, the crystallite density was comparatively higher for the glass composition with higher ZnO content.

  6. Physical and environmental factors affecting the persistence of explosives particles (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Papantonakis, Michael R.; Nguyen, Viet K.; Furstenberg, Robert; White, Caitlyn; Shuey, Melissa; Kendziora, Christopher A.; McGill, R. Andrew

    2017-05-01

Knowledge of the persistence of trace explosives materials is critical to aid the security community in designing detection methods and equipment. The physical and environmental factors affecting the lifetimes of particles include temperature, airflow, interparticle distance, adlayers, humidity, particle field size and vapor pressure. We are working towards a complete particle persistence model that captures the relative importance of these effects to allow the user, with known environmental conditions, to predict particle lifetimes for explosives or other chemicals. In this work, particles of explosives are sieved onto smooth glass substrates using particle sizes and loadings relevant to those deposited by fingerprint deposition. The coupon is introduced into a custom flow cell and monitored under controlled airflow, humidity and temperature. Photomicroscopy images of the sample taken at fixed time intervals are analyzed to monitor particle sublimation and characterized as a size-independent radial sublimation velocity for each particle in the ensemble. In this paper we build on previous work by comparing the relationship between sublimation of different materials and their vapor pressures. We also describe the influence of a sebum adlayer on particle sublimation, allowing us to better model 'real world' samples.

  7. Differential establishment and maintenance of oral ethanol reinforced behavior in Lewis and Fischer 344 inbred rat strains.

    PubMed

    Suzuki, T; George, F R; Meisch, R A

    1988-04-01

    Oral ethanol self-administration was investigated systematically in two inbred strains of rats, Fischer 344 CDF (F-344)/CRLBR (F344) and Lewis LEW/CRLBR (LEW). For both strains ethanol maintained higher response rates and was consumed in larger volumes than the water vehicle. In addition, blood ethanol levels increased with increases in ethanol concentration. However, LEW rats drank substantially more ethanol than F344 rats. The typical inverted U-shaped function between ethanol concentration and number of deliveries was observed for the LEW rats, whereas for the F344 rats much smaller differences were seen between ethanol and water maintained responding. For the LEW strain, as the fixed-ratio size was increased, the number of responses increased almost in direct proportion to the fixed-ratio size increase, so that at least at the lower fixed-ratio values the rats were obtaining similar numbers of deliveries at different fixed-ratio sizes. However, a decrease in ethanol deliveries and blood ethanol levels was observed at higher fixed-ratio sizes. Similar results were obtained in F344 rats, but the amount of responding was lower and less consistent. LEW rats showed significantly higher response rates, numbers of ethanol deliveries and blood ethanol levels. Ethanol-induced behavioral activation also was observed in LEW rats, but not in F344 rats. These results support the conclusion that ethanol serves as a strong positive reinforcer for LEW rats and as a weak positive reinforcer for F344 rats, and that genotype is a determinant of the degree to which ethanol functions as a reinforcer.

  8. Little Evidence That Time in Child Care Causes Externalizing Problems During Early Childhood in Norway

    PubMed Central

    Zachrisson, Henrik Daae; Dearing, Eric; Lekhal, Ratib; Toppelberg, Claudio O.

    2012-01-01

Associations between maternal reports of hours in child care and children’s externalizing problems at 18 and 36 months of age were examined in a population-based Norwegian sample (n = 75,271). Within a sociopolitical context of homogeneously high-quality child care, there was little evidence that high quantity of care causes externalizing problems. Using conventional approaches to handling selection bias and listwise deletion for substantial attrition in this sample, more hours in care predicted higher problem levels, yet with small effect sizes. The finding, however, was not robust to using multiple imputation for missing values. Moreover, when sibling and individual fixed-effects models for handling selection bias were used, no relation between hours and problems was evident. PMID:23311645

  9. Hydrocarbon pollution fixed to combined sewer sediment: a case study in Paris.

    PubMed

    Rocher, Vincent; Garnaud, Stéphane; Moilleron, Régis; Chebbo, Ghassan

    2004-02-01

Over a period of two years (2000-2001), sediment samples were extracted from 40 silt traps (STs) spread through the combined sewer system of Paris. All sediment samples were analysed for physico-chemical parameters (pH, organic matter content, grain size distribution), along with total hydrocarbons (THs) and 16 polycyclic aromatic hydrocarbons (PAHs) selected from the priority list of the US-EPA. The two main objectives of the study were (1) to determine the hydrocarbon contamination levels in the sediments of the Paris combined sewer system and (2) to investigate the PAH fingerprints in order to assess their spatial variability and to elucidate the PAH origins. The results show that there are important inter-site and intra-site variations in hydrocarbon content. Despite this variability, TH and PAH contamination levels (50th percentile) in the Parisian sewer sediment are estimated at 530 and 18 microg g(-1), respectively. The investigation of the aromatic compound distributions in all of the 40 STs has shown that there is a homogeneous PAH background pollution at the scale of the Paris sewer system. Moreover, the study of the PAH fingerprints, using specific ratios, suggests the predominance of a pyrolytic origin for those PAHs fixed to the sewer sediment.

  10. Study of density distribution in a near-critical simple fluid (19-IML-1)

    NASA Technical Reports Server (NTRS)

    Michels, Teun

    1992-01-01

This experiment uses visual observation, interferometry, and light scattering techniques to observe and analyze the density distribution in SF6 above and below the critical temperature. Below the critical temperature, the fluid system is split up into two coexisting phases, liquid and vapor. The spatial separation of these phases on earth, liquid below and vapor above, is not an intrinsic property of the fluid system; it is merely an effect of the action of the gravity field. At a fixed temperature, the density of each of the coexisting phases is in principle fixed. However, near Tc, where the fluid is strongly compressible, gravity-induced hydrostatic forces will result in a gradual decrease in density with increasing height in the sample container. This hydrostatic density profile is even more pronounced in the one-phase fluid at temperatures slightly above Tc. The experiment is set up to study the intrinsic density distributions and equilibration rates of a critical sample in a small container. Interferometry will be used to determine local density and the thickness of surface and interface layers. The light scattering data will reveal the size of the density fluctuations on a microscopic scale.

  11. Low-Z polymer sample supports for fixed-target serial femtosecond X-ray crystallography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feld, Geoffrey K.; Heymann, Michael; Benner, W. Henry

    X-ray free-electron lasers (XFELs) offer a new avenue to the structural probing of complex materials, including biomolecules. Delivery of precious sample to the XFEL beam is a key consideration, as the sample of interest must be serially replaced after each destructive pulse. The fixed-target approach to sample delivery involves depositing samples on a thin-film support and subsequent serial introduction via a translating stage. Some classes of biological materials, including two-dimensional protein crystals, must be introduced on fixed-target supports, as they require a flat surface to prevent sample wrinkling. A series of wafer and transmission electron microscopy (TEM)-style grid supports constructed of low-Z plastic have been custom-designed and produced. Aluminium TEM grid holders were engineered, capable of delivering up to 20 different conventional or plastic TEM grids using fixed-target stages available at the Linac Coherent Light Source (LCLS). As proof of principle, X-ray diffraction has been demonstrated from two-dimensional crystals of bacteriorhodopsin and three-dimensional crystals of anthrax toxin protective antigen mounted on these supports at the LCLS. In conclusion, the benefits and limitations of these low-Z fixed-target supports are discussed; it is the authors' belief that they represent a viable and efficient alternative to previously reported fixed-target supports for conducting diffraction studies with XFELs.

  12. A PDA-based flexible telecommunication system for telemedicine applications.

    PubMed

    Nazeran, Homer; Setty, Sunil; Haltiwanger, Emily; Gonzalez, Virgilio

    2004-01-01

    Technology has been used to deliver health care at a distance for many years. Telemedicine is a rapidly growing area, and recent studies have been devoted to the prehospital care of patients in emergencies. In this work we have developed a compact, reliable, and low-cost PDA-based telecommunication device for telemedicine applications to transmit audio, still images, and vital signs from a remote site to a fixed station, such as a clinic or a hospital, in real time. This was achieved with a client-server architecture. A Pocket PC, a miniature camera, and a hands-free microphone were used at the client site, and a desktop computer running the Windows XP operating system was used as the server, located at a fixed station. The system was implemented on the TCP/IP and HTTP protocols. Field tests have shown that the system can reliably transmit still images, audio, and sample vital signs from a simulated remote site to a fixed station, via either a wired or a wireless network, in real time. The Pocket PC was chosen for the client site because of its compact size, low cost, and processing capabilities.
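The client-server transfer described above can be illustrated with a minimal sketch using Python's standard library; the length-prefixed payload framing, the ephemeral port handling, and the "ACK" reply are illustrative assumptions, not details taken from the paper.

```python
import socket
import threading

def serve_once(host="127.0.0.1", port=0):
    """Fixed-station server: accept one client, receive a length-prefixed
    payload (e.g. a vital-signs packet), and acknowledge it."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))          # port 0 -> OS picks a free port
    srv.listen(1)
    port = srv.getsockname()[1]

    def _run():
        conn, _ = srv.accept()
        with conn:
            size = int.from_bytes(conn.recv(4), "big")   # 4-byte length prefix
            data = b""
            while len(data) < size:                      # read full payload
                data += conn.recv(4096)
            conn.sendall(b"ACK")
        srv.close()

    threading.Thread(target=_run, daemon=True).start()
    return port

def send_payload(port, payload, host="127.0.0.1"):
    """Remote (PDA-side) client: send the payload, wait for the ACK."""
    with socket.create_connection((host, port)) as c:
        c.sendall(len(payload).to_bytes(4, "big") + payload)
        return c.recv(3)
```

In a real deployment the payload would be the encoded image, audio, or vital-signs data; here any bytes stand in for it.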

  13. After site selection and before data analysis: sampling, sorting, and laboratory procedures used in stream benthic macroinvertebrate monitoring programs by USA state agencies

    USGS Publications Warehouse

    Carter, James L.; Resh, Vincent H.

    2001-01-01

    A survey of methods used by US state agencies for collecting and processing benthic macroinvertebrate samples from streams was conducted by questionnaire; 90 responses were received and used to describe trends in methods. The responses represented an estimated 13,000-15,000 samples collected and processed per year. Kicknet devices were used in 64.5% of the methods; other sampling devices included fixed-area samplers (Surber and Hess), artificial substrates (Hester-Dendy and rock baskets), grabs, and dipnets. Regional differences existed, e.g., the 1-m kicknet was used more often in the eastern US than in the western US. Mesh sizes varied among programs, but 80.2% of the methods used a mesh size between 500 and 600 μm. Mesh size variations within US Environmental Protection Agency regions were large, with size differences ranging from 100 to 700 μm. Most samples collected were composites; the mean area sampled was 1.7 m². Samples rarely were collected using a random method (4.7%); most samples (70.6%) were collected using "expert opinion", which may make data obtained operator-specific. Only 26.3% of the methods sorted all the organisms from a sample; the remainder subsampled in the laboratory. The most common method of subsampling was to remove 100 organisms (range = 100-550). The magnification used for sorting ranged from 1 (sorting by eye) to 30x, which results in inconsistent separation of macroinvertebrates from detritus. In addition to subsampling, 53% of the methods sorted large/rare organisms from a sample. The taxonomic level used for identifying organisms varied among taxa; Ephemeroptera, Plecoptera, and Trichoptera were generally identified to a finer taxonomic resolution (genus and species) than other taxa. Because there currently exists a large range of field and laboratory methods used by state programs, calibration among all programs to increase data comparability would be exceptionally challenging. However, because many techniques are shared among methods, limited testing could be designed to evaluate whether procedural differences affect the ability to determine levels of environmental impairment using benthic macroinvertebrate communities.

  14. New microfluidic-based sampling procedure for overcoming the hematocrit problem associated with dried blood spot analysis.

    PubMed

    Leuthold, Luc Alexis; Heudi, Olivier; Déglon, Julien; Raccuglia, Marc; Augsburger, Marc; Picard, Franck; Kretz, Olivier; Thomas, Aurélien

    2015-02-17

    Hematocrit (Hct) is one of the most critical issues associated with the bioanalytical methods used for dried blood spot (DBS) sample analysis. Because Hct determines the viscosity of blood, it may affect the spreading of blood onto the filter paper. Hence, accurate quantitative data can only be obtained if the size of the paper filter extracted contains a fixed blood volume. We describe for the first time a microfluidic-based sampling procedure to enable accurate blood volume collection on commercially available DBS cards. The system allows the collection of a controlled volume of blood (e.g., 5 or 10 μL) within several seconds. Reproducibility of the sampling volume was examined in vivo on capillary blood by quantifying caffeine and paraxanthine on 5 different extracted DBS spots at two different time points and in vitro with a test compound, Mavoglurant, on 10 different spots at two Hct levels. Entire spots were extracted. In addition, the accuracy and precision (n = 3) data for the Mavoglurant quantitation in blood with Hct levels between 26% and 62% were evaluated. The interspot precision data were below 9.0%, which was equivalent to that of a manually spotted volume with a pipet. No Hct effect was observed in the quantitative results obtained for Hct levels from 26% to 62%. These data indicate that our microfluidic-based sampling procedure is accurate and precise and that the analysis of Mavoglurant is not affected by the Hct values. This provides a simple procedure for DBS sampling with a fixed volume of capillary blood, which could eliminate the recurrent Hct issue linked to DBS sample analysis.
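The "interspot precision" reported above is a coefficient of variation across replicate spots. A minimal sketch of that calculation follows; the concentration values are hypothetical, chosen only to illustrate the arithmetic, not data from the study.

```python
from statistics import mean, stdev

def interspot_cv_percent(concentrations):
    """Interspot precision as %CV: sample standard deviation divided by
    the mean, times 100, across replicate DBS spots."""
    return stdev(concentrations) / mean(concentrations) * 100.0

# Hypothetical analyte concentrations (ng/mL) from five replicate spots.
spots = [101.0, 98.5, 103.2, 99.1, 100.7]
```

A %CV below the study's 9.0% threshold would indicate precision comparable to manual pipetting.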

  15. Miniature intermittent contact switch

    NASA Technical Reports Server (NTRS)

    Sword, A.

    1972-01-01

    Design of electric switch for providing intermittent contact is presented. Switch consists of flexible conductor surrounding, but separated from, fixed conductor. Flexing of outside conductor to contact fixed conductor completes circuit. Advantage is small size of switch compared to standard switches.

  16. COMPARISON OF LABORATORY SUBSAMPLING METHODS OF BENTHIC SAMPLES FROM BOATABLE RIVERS USING ACTUAL AND SIMULATED COUNT DATA

    EPA Science Inventory

    We examined the effects of using a fixed-count subsample of 300 organisms on metric values using macroinvertebrate samples collected with 3 field sampling methods at 12 boatable river sites. For each sample, we used metrics to compare an initial fixed-count subsample of approxima...
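A fixed-count subsample like the 300-organism one described above can be sketched as a simple random draw without replacement; the function name and seed handling are illustrative, and real protocols often subsample by gridded trays rather than per-organism draws.

```python
import random

def fixed_count_subsample(organisms, n=300, seed=None):
    """Draw a fixed-count subsample of n organisms. If the sample holds
    n or fewer organisms, the whole sample is processed (no subsampling)."""
    if len(organisms) <= n:
        return list(organisms)
    rng = random.Random(seed)           # seed for reproducibility
    return rng.sample(organisms, n)     # draw without replacement
```

Metrics computed on the subsample can then be compared against the full-count values, as the study does.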

  17. Agreement of the Kato-Katz test established by the WHO with samples fixed with sodium acetate analyzed at 6 months to diagnose intestinal geohelminthes.

    PubMed

    Alfredo Fernández-Niño, Julián; David Ramírez, Juan; Consuelo López, Myriam; Inés Moncada, Ligia; Reyes, Patricia; Darío Heredia, Rubén

    2015-06-01

    The aim of this study was to evaluate the performance of the Kato-Katz test (WHO version) with stool samples from a rural area, fixed with sodium acetate (SAF). The Kato-Katz test was used to compare unfixed samples (conventional test) with the same samples containing SAF fixative at time 0 and at 6 months. The study included stools from 154 subjects. A marginally statistically significant decrease in prevalence was estimated only for hookworm, when comparing unfixed samples versus the SAF fixed samples read at 6 months (p=0.06). A significant reduction in parasite load was found for hookworm (p<0.01) and Trichuris trichiura (p<0.01) between the unfixed and the fixed sample read at 6 months, but not for Ascaris lumbricoides (p=0.10). This research suggests that the SAF fixative solution is a good option for transporting samples for diagnosis, especially in rural areas in developing countries. Copyright © 2015 Elsevier B.V. All rights reserved.

  18. Experimentally reducing clutch size reveals a fixed upper limit to egg size in snakes, evidence from the king ratsnake, Elaphe carinata.

    PubMed

    Ji, Xiang; Du, Wei-Guo; Li, Hong; Lin, Long-Hui

    2006-08-01

    Snakes are free of the pelvic girdle's constraint on maximum offspring size, and therefore present an opportunity to investigate the upper limit to offspring size without the limit imposed by the pelvic girdle dimension. We used the king ratsnake (Elaphe carinata) as a model animal to examine whether follicle ablation may result in enlargement of egg size in snakes and, if so, whether there is a fixed upper limit to egg size. Females with small-sized yolking follicles were assigned to three manipulated treatments, one sham-manipulated treatment and one control treatment in mid-May, and two, four or six yolking follicles in the manipulated females were then ablated. Females undergoing follicle ablation produced fewer, but larger and more elongated, eggs than control females, primarily by increasing egg length. This finding suggests that follicle ablation may result in enlargement of egg size in E. carinata. Mean values for egg width remained almost unchanged across the five treatments, suggesting that egg width is more likely to be shaped by the morphology of the oviduct. Clutch mass dropped dramatically in four- and six-follicle-ablated females. The function describing the relationship between size and number of eggs reveals that egg size increases with decreasing clutch size at an ever-decreasing rate, with the tangent slope of the function for the six-follicle ablation treatment being -0.04. According to the function describing instantaneous variation in tangent slope, the maximum value of tangent slope should converge towards zero. This result provides evidence that there is a fixed upper limit to egg size in E. carinata.
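The tangent-slope argument above can be made concrete numerically: if egg size approaches an upper limit as clutch size shrinks, the slope of the size-number curve must flatten towards zero. The curve below is purely hypothetical (the paper's fitted function and constants are not reproduced here); only the qualitative behavior matters.

```python
def tangent_slope(f, n, h=1e-6):
    """Central-difference estimate of the tangent slope ds/dn at n."""
    return (f(n + h) - f(n - h)) / (2 * h)

def egg_size(n, s_max=12.0, a=0.08):
    """Hypothetical size-number curve with a fixed upper limit s_max:
    s(n) = s_max - a * n**2, so the slope -2*a*n tends to zero as the
    clutch size n decreases. Constants are illustrative only."""
    return s_max - a * n ** 2
```

For such a curve the slope is negative everywhere but its magnitude shrinks with the clutch, mirroring the study's observation that the tangent slope converges towards zero.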

  19. Low-dose fixed-target serial synchrotron crystallography.

    PubMed

    Owen, Robin L; Axford, Danny; Sherrell, Darren A; Kuo, Anling; Ernst, Oliver P; Schulz, Eike C; Miller, R J Dwayne; Mueller-Werkmeister, Henrike M

    2017-04-01

    The development of serial crystallography has been driven by the sample requirements imposed by X-ray free-electron lasers. Serial techniques are now being exploited at synchrotrons. Using a fixed-target approach to high-throughput serial sampling, it is demonstrated that high-quality data can be collected from myoglobin crystals, allowing room-temperature, low-dose structure determination. The combination of fixed-target arrays and a fast, accurate translation system allows high-throughput serial data collection at high hit rates and with low sample consumption.

  20. A laser-deposition approach to compositional-spread discovery of materials on conventional sample sizes

    NASA Astrophysics Data System (ADS)

    Christen, Hans M.; Ohkubo, Isao; Rouleau, Christopher M.; Jellison, Gerald E., Jr.; Puretzky, Alex A.; Geohegan, David B.; Lowndes, Douglas H.

    2005-01-01

    Parallel (multi-sample) approaches, such as discrete combinatorial synthesis or continuous compositional-spread (CCS), can significantly increase the rate of materials discovery and process optimization. Here we review our generalized CCS method, based on pulsed-laser deposition, in which the synchronization between laser firing and substrate translation (behind a fixed slit aperture) yields the desired variations of composition and thickness. In situ alloying makes this approach applicable to the non-equilibrium synthesis of metastable phases. Deposition on a heater plate with a controlled spatial temperature variation can additionally be used for growth-temperature-dependence studies. Composition and temperature variations are controlled on length scales large enough to yield sample sizes sufficient for conventional characterization techniques (such as temperature-dependent measurements of resistivity or magnetic properties). This technique has been applied to various experimental studies, and we present here the results for the growth of electro-optic materials (SrxBa1-xNb2O6) and magnetic perovskites (Sr1-xCaxRuO3), and discuss the application to the understanding and optimization of catalysts used in the synthesis of dense forests of carbon nanotubes.

  1. Molecular evidence for species-level distinctions in clouded leopards.

    PubMed

    Buckley-Beason, Valerie A; Johnson, Warren E; Nash, William G; Stanyon, Roscoe; Menninger, Joan C; Driscoll, Carlos A; Howard, JoGayle; Bush, Mitch; Page, John E; Roelke, Melody E; Stone, Gary; Martelli, Paolo P; Wen, Ci; Ling, Lin; Duraisingam, Ratna K; Lam, Phan V; O'Brien, Stephen J

    2006-12-05

    Among the 37 living species of Felidae, the clouded leopard (Neofelis nebulosa) is generally classified as a monotypic genus basal to the Panthera lineage of great cats. This secretive, mid-sized (16-23 kg) carnivore, now severely endangered, is traditionally subdivided into four southeast Asian subspecies (Figure 1A). We used molecular genetic methods to re-evaluate subspecies partitions and to quantify patterns of population genetic variation among 109 clouded leopards of known geographic origin (Figure 1A, Tables S1 and S2 in the Supplemental Data available online). We found strong phylogeographic monophyly and large genetic distances between N. n. nebulosa (mainland) and N. n. diardi (Borneo; n = 3 individuals) with mtDNA (771 bp), nuclear DNA (3100 bp), and 51 microsatellite loci. Thirty-six fixed mitochondrial and nuclear nucleotide differences and 20 microsatellite loci with nonoverlapping allele-size ranges distinguished N. n. nebulosa from N. n. diardi. Along with fixed subspecies-specific chromosomal differences, this degree of differentiation is equivalent to, or greater than, comparable measures among five recognized Panthera species (lion, tiger, leopard, jaguar, and snow leopard). These distinctions increase the urgency of clouded leopard conservation efforts, and if affirmed by morphological analysis and wider sampling of N. n. diardi in Borneo and Sumatra, would support reclassification of N. n. diardi as a new species (Neofelis diardi).

  2. Detection in fixed and random noise in foveal and parafoveal vision explained by template learning

    NASA Technical Reports Server (NTRS)

    Beard, B. L.; Ahumada, A. J. Jr; Watson, A. B. (Principal Investigator)

    1999-01-01

    Foveal and parafoveal contrast detection thresholds for Gabor and checkerboard targets were measured in white noise by means of a two-interval forced-choice paradigm. Two white-noise conditions were used: fixed and twin. In the fixed noise condition a single noise sample was presented in both intervals of all the trials. In the twin noise condition the same noise sample was used in the two intervals of a trial, but a new sample was generated for each trial. Fixed noise conditions usually resulted in lower thresholds than twin noise. Template learning models are presented that attribute this advantage of fixed over twin noise either to fixed memory templates reducing uncertainty by incorporating the noise, or to the learning process itself introducing more variability in the twin noise condition. Quantitative predictions of the template learning process show that it contributes to the accelerating nonlinear increase in performance with signal amplitude at low signal-to-noise ratios.
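The distinction between the fixed and twin noise conditions can be made concrete with a small sketch. The Gaussian white-noise model and the function names are assumptions for illustration, not the authors' stimulus-generation code.

```python
import random

def noise_sample(rng, n):
    """One white-noise sample: n independent Gaussian values."""
    return [rng.gauss(0.0, 1.0) for _ in range(n)]

def make_trial_noises(condition, n_trials, n_pixels, seed=0):
    """Return (interval_1, interval_2) noise for each 2IFC trial.
    'fixed': one noise sample reused in both intervals of every trial.
    'twin' : identical noise within a trial, fresh noise each trial."""
    rng = random.Random(seed)
    if condition == "fixed":
        s = noise_sample(rng, n_pixels)
        return [(s, s) for _ in range(n_trials)]
    if condition == "twin":
        trials = []
        for _ in range(n_trials):
            s = noise_sample(rng, n_pixels)
            trials.append((s, s))
        return trials
    raise ValueError(condition)
```

The template-learning account then asks whether an observer's internal template can incorporate the fixed sample, which is impossible in the twin condition where the noise changes trial to trial.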

  3. Radiofrequency energy deposition and radiofrequency power requirements in parallel transmission with increasing distance from the coil to the sample.

    PubMed

    Deniz, Cem M; Vaidya, Manushka V; Sodickson, Daniel K; Lattanzi, Riccardo

    2016-01-01

    We investigated global specific absorption rate (SAR) and radiofrequency (RF) power requirements in parallel transmission as the distance between the transmit coils and the sample was increased. We calculated ultimate intrinsic SAR (UISAR), which depends on object geometry and electrical properties but not on coil design, and we used it as the reference to compare the performance of various transmit arrays. We investigated the case of fixing coil size and increasing the number of coils while moving the array away from the sample, as well as the case of fixing coil number and scaling coil dimensions. We also investigated RF power requirements as a function of lift-off, and tracked local SAR distributions associated with global SAR optima. In all cases, the target excitation profile was achieved and global SAR (as well as associated maximum local SAR) decreased with lift-off, approaching UISAR, which was constant for all lift-offs. We observed a lift-off value that optimizes the balance between global SAR and power losses in coil conductors. We showed that, using parallel transmission, global SAR can decrease at ultra high fields for finite arrays with a sufficient number of transmit elements. For parallel transmission, the distance between coils and object can be optimized to reduce SAR and minimize RF power requirements associated with homogeneous excitation. © 2015 Wiley Periodicals, Inc.

  4. Water-quality response to a high-elevation wildfire in the Colorado Front Range

    USGS Publications Warehouse

    Mast, M. Alisa; Murphy, Sheila F.; Clow, David W.; Penn, Colin A.; Sexstone, Graham A.

    2016-01-01

    Water quality of the Big Thompson River in the Front Range of Colorado was studied for 2 years following a high-elevation wildfire that started in October 2012 and burned 15% of the watershed. A combination of fixed-interval sampling and continuous water-quality monitors was used to examine the timing and magnitude of water-quality changes caused by the wildfire. Prefire water quality was well characterized because the site has been monitored at least monthly since the early 2000s. Major ions and nitrate showed the largest changes in concentrations; major ion increases were greatest in the first postfire snowmelt period, but nitrate increases were greatest in the second snowmelt period. The delay in nitrate release until the second snowmelt season likely reflected a combination of factors including fire timing, hydrologic regime, and rates of nitrogen transformations. Despite the small size of the fire, annual yields of dissolved constituents from the watershed increased 20–52% in the first 2 years following the fire. Turbidity data from the continuous sensor indicated high-intensity summer rain storms had a much greater effect on sediment transport compared to snowmelt. High-frequency sensor data also revealed that weekly sampling missed the concentration peak during snowmelt and short-duration spikes during rain events, underscoring the challenge of characterizing postfire water-quality response with fixed-interval sampling.

  5. An Efficient MCMC Algorithm to Sample Binary Matrices with Fixed Marginals

    ERIC Educational Resources Information Center

    Verhelst, Norman D.

    2008-01-01

    Uniform sampling of binary matrices with fixed margins is known as a difficult problem. Two classes of algorithms to sample from a distribution not too different from the uniform are studied in the literature: importance sampling and Markov chain Monte Carlo (MCMC). Existing MCMC algorithms converge slowly, require a long burn-in period and yield…
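The slow-converging MCMC chains referred to above are typically built on 2x2 "checkerboard" swaps, which leave every row and column sum invariant. The sketch below shows that basic chain; the burn-in length and helper names are illustrative, and this is the classic chain the abstract critiques, not Verhelst's improved algorithm.

```python
import random

def checkerboard_swap_step(m, rng=random):
    """One MCMC step: pick two rows and two columns; if the induced 2x2
    submatrix is a checkerboard ([[1,0],[0,1]] or [[0,1],[1,0]]), flip it.
    Row and column sums (the margins) are invariant under this move."""
    i, j = rng.sample(range(len(m)), 2)
    a, b = rng.sample(range(len(m[0])), 2)
    if m[i][a] == m[j][b] == 1 and m[i][b] == m[j][a] == 0:
        m[i][a], m[j][b], m[i][b], m[j][a] = 0, 0, 1, 1
    elif m[i][b] == m[j][a] == 1 and m[i][a] == m[j][b] == 0:
        m[i][b], m[j][a], m[i][a], m[j][b] = 0, 0, 1, 1
    return m

def sample_fixed_margins(m, burn_in=1000, rng=random):
    """Run the chain from matrix m for burn_in steps and return the state;
    in practice the required burn-in is the hard part, which is exactly
    the convergence problem the abstract raises."""
    for _ in range(burn_in):
        checkerboard_swap_step(m, rng)
    return m
```

Starting from any observed 0/1 matrix, repeated swaps wander over matrices sharing its margins; the open question treated in the literature is how close the resulting distribution is to uniform.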

  6. Enhanced removal of sulfonamide antibiotics by KOH-activated anthracite coal: Batch and fixed-bed studies.

    PubMed

    Zuo, Linzi; Ai, Jing; Fu, Heyun; Chen, Wei; Zheng, Shourong; Xu, Zhaoyi; Zhu, Dongqiang

    2016-04-01

    The presence of sulfonamide antibiotics in aquatic environments poses potential risks to human health and ecosystems. In the present study, a highly porous activated carbon was prepared by KOH activation of an anthracite coal (Anth-KOH), and its adsorption properties toward two sulfonamides (sulfamethoxazole and sulfapyridine) and three smaller-sized monoaromatics (phenol, 4-nitrophenol and 1,3-dinitrobenzene) were examined in both batch and fixed-bed adsorption experiments to probe the interplay between adsorbate molecular size and adsorbent pore structure. A commercial powder microporous activated carbon (PAC) and a commercial mesoporous carbon (CMK-3) possessing distinct pore properties were included as comparative adsorbents. Among the three adsorbents Anth-KOH exhibited the largest adsorption capacities for all test adsorbates (especially the two sulfonamides) in both batch mode and fixed-bed mode. After being normalized by the adsorbent surface area, the batch adsorption isotherms of sulfonamides on PAC and Anth-KOH were displaced upward relative to the isotherms on CMK-3, likely due to the micropore-filling effect facilitated by the microporosity of adsorbents. In the fixed-bed mode, the surface area-normalized adsorption capacities of Anth-KOH for sulfonamides were close to that of CMK-3, and higher than that of PAC. The irregular, closed micropores of PAC might impede the diffusion of the relatively large-sized sulfonamide molecules and in turn led to lowered fixed-bed adsorption capacities. The overall superior adsorption of sulfonamides on Anth-KOH can be attributed to its large specific surface area (2514 m²/g), high pore volume (1.23 cm³/g) and large micropore sizes (centered at 2.0 nm). These findings imply that KOH-activated anthracite coal is a promising adsorbent for the removal of sulfonamide antibiotics from aqueous solution. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Optimizing adaptive design for Phase 2 dose-finding trials incorporating long-term success and financial considerations: A case study for neuropathic pain.

    PubMed

    Gao, Jingjing; Nangia, Narinder; Jia, Jia; Bolognese, James; Bhattacharyya, Jaydeep; Patel, Nitin

    2017-06-01

    In this paper, we propose an adaptive randomization design for Phase 2 dose-finding trials to optimize Net Present Value (NPV) for an experimental drug. We replace the traditional fixed sample size design (Patel, et al., 2012) by this new design to see if NPV from the original paper can be improved. Comparison of the proposed design to the previous design is made via simulations using a hypothetical example based on a Diabetic Neuropathic Pain Study. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. SYSTEM OPTIMIZATION FOR THE AUTOMATIC SIMULTANEOUS DETERMINATION OF ARSENIC, SELENIUM, AND ANTIMONY, USING HYDRIDE GENERATION INTRODUCTION TO AN INDUCTIVELY COUPLED PLASMA.

    USGS Publications Warehouse

    Pyen, Grace S.; Browner, Richard F.; Long, Stephen

    1986-01-01

    A fixed-size simplex has been used to determine the optimum conditions for the simultaneous determination of arsenic, selenium, and antimony by hydride generation and inductively coupled plasma emission spectrometry. The variables selected for the simplex were carrier gas flow rate, rf power, viewing height, and reagent conditions. The detection limit for selenium was comparable to the preoptimized case, but there were twofold and fourfold improvements in the detection limits for arsenic and antimony, respectively. Precision of the technique was assessed with the use of artificially prepared water samples.

  9. Multi-passes warm rolling of AZ31 magnesium alloy, effect on evaluation of texture, microstructure, grain size and hardness

    NASA Astrophysics Data System (ADS)

    Kamran, J.; Hasan, B. A.; Tariq, N. H.; Izhar, S.; Sarwar, M.

    2014-06-01

    In this study the effect of multi-pass warm rolling of AZ31 magnesium alloy on the texture, microstructure, grain size variation and hardness of an as-cast sample (A) and two rolled samples (B & C), taken from different locations of the as-cast ingot, was investigated. The purpose was to enhance the formability of AZ31 alloy in order to help manufacturability. Multi-pass warm rolling (250°C to 350°C) of samples B & C, with initial thicknesses of 7.76 mm and 7.73 mm, was successfully achieved up to 85% reduction without any edge or surface cracks in ten steps with a total of 26 passes. Steps 1 to 4 consisted of 5, 2, 11 and 3 passes, respectively; the remaining steps 5 to 10 were single-pass rolls. In each discrete step a fixed roll gap was used, such that the true strain increased very slowly from 0.0067 in the first pass to 0.7118 in the 26th pass. Both samples B & C showed very similar behavior after the 26th pass and were successfully rolled up to 85% thickness reduction. However, during the 10th step (27th pass), at a true strain of 0.772, sample B experienced severe surface as well as edge cracks. Sample C was therefore not rolled for the 10th step and was retained after 26 passes. Both samples were studied in terms of their basal texture, microstructure, grain size and hardness. Sample C showed an equiaxed grain structure after 85% total reduction, which may be due to the effective involvement of dynamic recrystallization (DRX), leading to grains with relatively low misorientations with respect to the parent as-cast grains. Sample B, on the other hand, showed a microstructure in which all the grains were elongated along the rolling direction (RD) after 90% total reduction; DRX could not effectively play its role owing to the heavy strain and the lack of plastic deformation systems. The as-cast sample showed a near-random texture (mrd 4.3), an average grain size of 44 μm and a micro-hardness of 52 Hv. The grain sizes of samples B and C were 14 μm and 27 μm, respectively, and the mrd intensities of the basal texture were 5.34 and 5.46, respectively. The hardness values of samples B and C came out to be 91 and 66 Hv, respectively, owing to the reduction in grain size, following the well-known Hall-Petch relationship.
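The Hall-Petch relationship invoked above, H = H0 + k·d^(-1/2), can be checked against the reported numbers. The constants below are merely a rough fit to the two rolled samples (14 μm at 91 Hv, 27 μm at 66 Hv), not values from the paper; as it happens, this fit also reproduces the as-cast hardness reasonably well.

```python
def hall_petch_hv(d_um, h0=1.7, k=334.0):
    """Hall-Petch relation H = h0 + k / sqrt(d), with grain size d in
    micrometres and hardness in Hv. h0 and k are illustrative constants
    fitted to the two rolled samples reported in the abstract, not
    material constants taken from the paper."""
    return h0 + k / d_um ** 0.5
```

With these constants, hardness falls as grain size grows, the trend the abstract attributes to grain refinement during rolling.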

  10. Surface-water-quality assessment of the Kentucky River Basin, Kentucky; fixed-station network and selected water-quality data, April 1987 through August 1991

    USGS Publications Warehouse

    Griffin, M.S.; Martin, G.R.; White, K.D.

    1994-01-01

    This report describes selected data-collection activities and the associated data collected during the Kentucky River Basin pilot study of the U.S. Geological Survey's National Water-Quality Assessment Program. The data are intended to provide a nationally consistent description and improved understanding of current water quality in the basin. The data were collected at seven fixed stations that represent stream cross sections where constituent transport and water-quality trends can be evaluated. The report includes descriptions of (1) the basin; (2) the design of the fixed-station network; (3) the fixed-station sites; (4) the physical and chemical measurements; (5) the methods of sample collection, processing, and analysis; and (6) the quality-assurance and quality-control procedures. Water-quality data collected at the fixed stations during routine periodic sampling and supplemental high-flow sampling from April 1987 to August 1991 are presented.

  11. Evaluating Quality of Aged Archival Formalin-Fixed Paraffin-Embedded Samples for RNA-Sequencing

    EPA Science Inventory

    Archival formalin-fixed paraffin-embedded (FFPE) samples offer a vast, untapped source of genomic data for biomarker discovery. However, the quality of FFPE samples is often highly variable, and conventional methods to assess RNA quality for RNA-sequencing (RNA-seq) are not infor...

  12. The effects of age-in-block on RNA-seq analysis of archival formalin-fixed paraffin-embedded (FFPE) samples

    EPA Science Inventory

    Archival samples represent a vast resource for identification of chemical and pharmaceutical targets. Previous use of formalin-fixed paraffin-embedded (FFPE) samples has been limited due to changes in RNA introduced by fixation and embedding procedures. Recent advances in RNA-seq...

  13. Sampling Using a Fixed Number of Trees Per Plot

    Treesearch

    Hans T. Schreuder

    2004-01-01

    The fixed number of trees sample design proposed by Jonsson and others (1992) may be dangerous in applications if a probabilistic framework of sampling is desired. The procedure can be seriously biased. Examples are given here. Publication Web Site: http://www.fs.fed.us/rm/pubs/rmrs_rn017.html

  14. Dose-Response Analysis of RNA-Seq Profiles in Archival Formalin-fixed paraffin-embedded (FFPE) Samples

    EPA Science Inventory

    Formalin-fixed paraffin-embedded (FFPE) samples provide a vast untapped resource for chemical safety and translational science. To date, genomic profiling of FFPE samples has been limited by poor RNA quality and inconsistent results with limited utility in dose-response assessmen...

  15. Characterizing the distribution of particles in urban stormwater: advancements through improved sampling technology

    USGS Publications Warehouse

    Selbig, William R.

    2014-01-01

    A new sample collection system was developed to improve the representation of sediment in stormwater by integrating the entire water column. The depth-integrated sampler arm (DISA) was able to mitigate sediment stratification bias in stormwater, thereby improving the characterization of particle size distribution from urban source areas. Collector streets had the lowest median particle diameter of 8 μm, followed by parking lots, arterial streets, feeder streets, and residential and mixed land use (32, 43, 50, 80 and 95 μm, respectively). Results from this study suggest there is no single distribution of particles that can be applied uniformly to runoff in urban environments; however, integrating more of the water column during sample collection can address some of the shortcomings of a fixed-point sampler by reducing the variability and bias caused by the stratification of solids in a water column.

  16. Testing models of parental investment strategy and offspring size in ants.

    PubMed

    Gilboa, Smadar; Nonacs, Peter

    2006-01-01

    Parental investment strategies can be fixed or flexible. A fixed strategy predicts making all offspring a single 'optimal' size. Dynamic models predict flexible strategies with more than one optimal size of offspring. Patterns in the distribution of offspring sizes may thus reveal the investment strategy. Static strategies should produce normal distributions. Dynamic strategies should often result in non-normal distributions. Furthermore, variance in morphological traits should be positively correlated with the length of developmental time the traits are exposed to environmental influences. Finally, the type of deviation from normality (i.e., skewed left or right, or platykurtic) should be correlated with the average offspring size. To test the latter prediction, we used simulations to detect significant departures from normality and categorize distribution types. Data from three species of ants strongly support the predicted patterns for dynamic parental investment. Offspring size distributions are often significantly non-normal. Traits fixed earlier in development, such as head width, are less variable than final body weight. The type of distribution observed correlates with mean female dry weight. The overall support for a dynamic parental investment model has implications for life history theory. Predicted conflicts over parental effort, sex investment ratios, and reproductive skew in cooperative breeders follow from assumptions of static parental investment strategies and omnipresent resource limitations. By contrast, with flexible investment strategies such conflicts can be either absent or maladaptive.

  17. Schedule-induced drinking as functions of interpellet interval and draught size in the Java macaque.

    PubMed Central

    Allen, Joseph D.; Kenshalo, Dan R.

    1978-01-01

    Three Java monkeys received food pellets that were assigned by both ascending and descending series of fixed-time schedules whose values varied between 8 and 256 seconds. The draught size dispensed by a concurrently available water-delivery tube was systematically varied between 1.0 and 0.3 milliliter per lick at various fixed-time values during the second and third series determinations. Session water intake was bitonically related to the interpellet interval and was determined by the interaction of (1) the probability of initiating a drinking bout, which fell off at the highest interpellet intervals, and (2) the size of the bout, which increased directly with increases in interpellet interval. Variations in draught size had little effect on total session intakes, but reduced bout size at draught sizes of 0.5 milliliter and below. Thus, a volume-regulation process of schedule-induced drinking operated generally at the session-intake level, but was limited to higher draught sizes at the bout level. PMID:16812093

  18. Schedule-induced drinking as functions of interpellet interval and draught size in the Java macaque.

    PubMed

    Allen, J D; Kenshalo, D R

    1978-09-01

    Three Java monkeys received food pellets that were assigned by both ascending and descending series of fixed-time schedules whose values varied between 8 and 256 seconds. The draught size dispensed by a concurrently available water-delivery tube was systematically varied between 1.0 and 0.3 milliliter per lick at various fixed-time values during the second and third series determinations. Session water intake was bitonically related to the interpellet interval and was determined by the interaction of (1) the probability of initiating a drinking bout, which fell off at the highest interpellet intervals, and (2) the size of the bout, which increased directly with increases in interpellet interval. Variations in draught size had little effect on total session intakes, but reduced bout size at draught sizes of 0.5 milliliter and below. Thus, a volume-regulation process of schedule-induced drinking operated generally at the session-intake level, but was limited to higher draught sizes at the bout level.

  19. UNIFORMLY MOST POWERFUL BAYESIAN TESTS

    PubMed Central

    Johnson, Valen E.

    2014-01-01

    Uniformly most powerful tests are statistical hypothesis tests that provide the greatest power against a fixed null hypothesis among all tests of a given size. In this article, the notion of uniformly most powerful tests is extended to the Bayesian setting by defining uniformly most powerful Bayesian tests to be tests that maximize the probability that the Bayes factor, in favor of the alternative hypothesis, exceeds a specified threshold. Like their classical counterpart, uniformly most powerful Bayesian tests are most easily defined in one-parameter exponential family models, although extensions outside of this class are possible. The connection between uniformly most powerful tests and uniformly most powerful Bayesian tests can be used to provide an approximate calibration between p-values and Bayes factors. Finally, issues regarding the strong dependence of resulting Bayes factors and p-values on sample size are discussed. PMID:24659829
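
    For the known-variance normal model, the one-parameter exponential-family case the abstract highlights, the UMPBT alternative and its calibration to the classical z-test have simple closed forms. A minimal sketch under that assumption (the evidence threshold γ = 10, n = 25 and σ = 1 are illustrative values, not from the paper):

```python
import math

def bf_normal_mean(xbar, n, mu1, sigma=1.0):
    """Bayes factor for H1: mu = mu1 vs H0: mu = 0 (normal data, known sigma)."""
    return math.exp(n * xbar * mu1 / sigma**2 - n * mu1**2 / (2 * sigma**2))

def umpbt_alternative(gamma, n, sigma=1.0):
    """Alternative that maximizes P(BF > gamma) in the known-variance normal case."""
    return sigma * math.sqrt(2 * math.log(gamma) / n)

gamma, n, sigma = 10.0, 25, 1.0
mu1 = umpbt_alternative(gamma, n, sigma)

# Calibration to classical tests: BF > gamma exactly when the z-statistic
# sqrt(n) * xbar / sigma exceeds sqrt(2 * ln(gamma)).
xbar_crit = math.sqrt(2 * math.log(gamma)) * sigma / math.sqrt(n)
```

    At xbar_crit the Bayes factor equals γ exactly, which is the kind of approximate p-value/Bayes-factor calibration the abstract refers to.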

  20. MapX: 2D XRF for Planetary Exploration - Image Formation and Optic Characterization

    DOE PAGES

    Sarrazin, P.; Blake, D.; Gailhanou, M.; ...

    2018-04-01

    Map-X is a planetary instrument concept for 2D X-Ray Fluorescence (XRF) spectroscopy. The instrument is placed directly on the surface of an object and held in a fixed position during the measurement. The formation of XRF images on the CCD detector relies on a multichannel optic configured for 1:1 imaging and can be analyzed through the point spread function (PSF) of the optic. The PSF can be directly measured using a micron-sized monochromatic X-ray source in place of the sample. Such PSF measurements were carried out at the Stanford Synchrotron and are compared with ray tracing simulations. It is shown that artifacts are introduced by the periodicity of the PSF at the channel scale and the proximity of the CCD pixel size and the optic channel size. A strategy of sub-channel random moves was used to cancel out these artifacts and provide a clean experimental PSF directly usable for XRF image deconvolution.

  1. MapX: 2D XRF for Planetary Exploration - Image Formation and Optic Characterization

    NASA Astrophysics Data System (ADS)

    Sarrazin, P.; Blake, D.; Gailhanou, M.; Marchis, F.; Chalumeau, C.; Webb, S.; Walter, P.; Schyns, E.; Thompson, K.; Bristow, T.

    2018-04-01

    Map-X is a planetary instrument concept for 2D X-Ray Fluorescence (XRF) spectroscopy. The instrument is placed directly on the surface of an object and held in a fixed position during the measurement. The formation of XRF images on the CCD detector relies on a multichannel optic configured for 1:1 imaging and can be analyzed through the point spread function (PSF) of the optic. The PSF can be directly measured using a micron-sized monochromatic X-ray source in place of the sample. Such PSF measurements were carried out at the Stanford Synchrotron and are compared with ray tracing simulations. It is shown that artifacts are introduced by the periodicity of the PSF at the channel scale and the proximity of the CCD pixel size and the optic channel size. A strategy of sub-channel random moves was used to cancel out these artifacts and provide a clean experimental PSF directly usable for XRF image deconvolution.

  2. MapX: 2D XRF for Planetary Exploration - Image Formation and Optic Characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarrazin, P.; Blake, D.; Gailhanou, M.

    Map-X is a planetary instrument concept for 2D X-Ray Fluorescence (XRF) spectroscopy. The instrument is placed directly on the surface of an object and held in a fixed position during the measurement. The formation of XRF images on the CCD detector relies on a multichannel optic configured for 1:1 imaging and can be analyzed through the point spread function (PSF) of the optic. The PSF can be directly measured using a micron-sized monochromatic X-ray source in place of the sample. Such PSF measurements were carried out at the Stanford Synchrotron and are compared with ray tracing simulations. It is shown that artifacts are introduced by the periodicity of the PSF at the channel scale and the proximity of the CCD pixel size and the optic channel size. A strategy of sub-channel random moves was used to cancel out these artifacts and provide a clean experimental PSF directly usable for XRF image deconvolution.

  3. Transforce lingual appliances pre-adjusted invisible appliances simplify treatment.

    PubMed

    Clark, William John

    2011-01-01

    Transforce lingual appliances are designed to be used in conjunction with conventional fixed appliances. Lingual arch development is normally followed by bonded fixed appliances to detail the occlusion. Alternatively, Transforce appliance treatment is an efficient method of preparing complex malocclusions prior to a finishing stage with invisible appliances. This approach is ideal for adult treatment, using light continuous forces for arch development with appliances that are comfortable to wear. Sagittal and Transverse appliances are designed for arch development in a range of sizes for contracted arches. They can be used to treat all classes of malocclusion and are pre-adjusted fixed/removable devices for non-compliance treatment. Force modules with nickel titanium coil springs enclosed in a tube deliver a gentle, biocompatible continuous force with a long range of action. They are excellent for mixed dentition and ideal for adult arch development. There are multiple sizes for upper and lower arch development, and a sizing chart may be placed over a study model for correct selection, eliminating the need for laboratory work.

  4. In-situ X-ray diffraction system using sources and detectors at fixed angular positions

    DOEpatents

    Gibson, David M [Voorheesville, NY; Gibson, Walter M [Voorheesville, NY; Huang, Huapeng [Latham, NY

    2007-06-26

    An x-ray diffraction technique for measuring a known characteristic of a sample of a material in an in-situ state. The technique includes using an x-ray source for emitting substantially divergent x-ray radiation--with a collimating optic disposed with respect to the fixed source for producing a substantially parallel beam of x-ray radiation by receiving and redirecting the divergent paths of the divergent x-ray radiation. A first x-ray detector collects radiation diffracted from the sample; wherein the source and detector are fixed, during operation thereof, in position relative to each other and in at least one dimension relative to the sample according to a-priori knowledge about the known characteristic of the sample. A second x-ray detector may be fixed relative to the first x-ray detector according to the a-priori knowledge about the known characteristic of the sample, especially in a phase monitoring embodiment of the present invention.

  5. Analysis of midpalatal miniscrew-assisted maxillary molar distalization patterns with simultaneous use of fixed appliances: A preliminary study

    PubMed Central

    Mah, Su-Jung; Kim, Ji-Eun; Ahn, Eun Jin; Nam, Jong-Hyun; Kim, Ji-Young

    2016-01-01

    Skeletal anchorage-assisted upper molar distalization has become one of the standard treatment modalities for the correction of Class II malocclusion. The purpose of this study was to analyze maxillary molar movement patterns according to appliance design, with the simultaneous use of buccal fixed orthodontic appliances. The authors devised two distinct types of midpalatal miniscrew-assisted maxillary molar distalizers, a lingual arch type and a pendulum type. Fourteen patients treated with one of the two types of distalizers were enrolled in the study, and the patterns of tooth movement associated with each type were compared. Pre- and post-treatment lateral cephalograms were analyzed. The lingual arch type was associated with relatively bodily upper molar distalization, while the pendulum type was associated with distal tipping with intrusion of the upper molar. Clinicians should be aware of the expected tooth movement associated with each appliance design. Further well designed studies with larger sample sizes are required. PMID:26877983

  6. Fixed dynamometry is more sensitive than vital capacity or ALS rating scale.

    PubMed

    Andres, Patricia L; Allred, Margaret Peggy; Stephens, Helen E; Proffitt Bunnell, Mary; Siener, Catherine; Macklin, Eric A; Haines, Travis; English, Robert A; Fetterman, Katherine A; Kasarskis, Edward J; Florence, Julaine; Simmons, Zachary; Cudkowicz, Merit E

    2017-10-01

    Improved outcome measures are essential to efficiently screen the growing number of potential amyotrophic lateral sclerosis (ALS) therapies. This longitudinal study of 100 (70 male) participants with ALS compared Accurate Test of Limb Isometric Strength (ATLIS), using a fixed, wireless load cell, with ALS Functional Rating Scale-Revised (ALSFRS-R) and vital capacity (VC). Participants enrolled at 5 U.S. sites. Data were analyzed from 66 participants with complete ATLIS, ALSFRS-R, and VC data over at least 3 visits. Change in ATLIS was less variable both within- and among-person than change in ALSFRS-R or VC. Additionally, participants who had normal ALSFRS-R arm and leg function averaged 12 to 32% below expected strength values measured by ATLIS. ATLIS was more sensitive to change than ALSFRS-R or VC and could decrease sample size requirements by approximately one-third. The ability of ATLIS to detect prefunctional change has potential value in early trials. Muscle Nerve 56: 710-715, 2017. © 2017 Wiley Periodicals, Inc.

  7. Bayes factor design analysis: Planning for compelling evidence.

    PubMed

    Schönbrodt, Felix D; Wagenmakers, Eric-Jan

    2018-02-01

    A sizeable literature exists on the use of frequentist power analysis in the null-hypothesis significance testing (NHST) paradigm to facilitate the design of informative experiments. In contrast, there is almost no literature that discusses the design of experiments when Bayes factors (BFs) are used as a measure of evidence. Here we explore Bayes Factor Design Analysis (BFDA) as a useful tool to design studies for maximum efficiency and informativeness. We elaborate on three possible BF designs, (a) a fixed-n design, (b) an open-ended Sequential Bayes Factor (SBF) design, where researchers can test after each participant and can stop data collection whenever there is strong evidence for either H1 or H0, and (c) a modified SBF design that defines a maximal sample size where data collection is stopped regardless of the current state of evidence. We demonstrate how the properties of each design (i.e., expected strength of evidence, expected sample size, expected probability of misleading evidence, expected probability of weak evidence) can be evaluated using Monte Carlo simulations and equip researchers with the necessary information to compute their own Bayesian design analyses.
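
    The design properties the authors list (expected strength of evidence, probability of misleading evidence) can be estimated for a fixed-n design with a short Monte Carlo loop, as the abstract describes. A toy sketch using a point-mass alternative prior for the Bayes factor (a simplification; the paper works with default priors, so all numbers here are illustrative only):

```python
import math
import random

def bf_point(xbar, n, mu1, sigma=1.0):
    """Bayes factor for the point alternative mu = mu1 against mu = 0."""
    return math.exp(n * xbar * mu1 / sigma**2 - n * mu1**2 / (2 * sigma**2))

def fixed_n_bfda(n, mu_true, mu1, threshold=10.0, sigma=1.0, reps=20000, seed=1):
    """Monte Carlo estimate of P(BF > threshold) and P(BF < 1/threshold)
    under a true effect mu_true, for a fixed-n design."""
    rng = random.Random(seed)
    strong_h1 = strong_h0 = 0
    for _ in range(reps):
        xbar = rng.gauss(mu_true, sigma / math.sqrt(n))
        bf = bf_point(xbar, n, mu1, sigma)
        if bf > threshold:
            strong_h1 += 1
        elif bf < 1 / threshold:
            strong_h0 += 1
    return strong_h1 / reps, strong_h0 / reps

# Probability of compelling (BF > 10) and misleading (BF < 1/10) evidence
# when the assumed effect mu1 = 0.5 is in fact true:
p_strong, p_misleading = fixed_n_bfda(n=50, mu_true=0.5, mu1=0.5)
```

    Repeating the same loop under mu_true = 0 gives the corresponding error rates when the null is true, which is how a full BFDA table is assembled.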

  8. Short-term memory for responses: the "choose-small" effect.

    PubMed Central

    Fetterman, J G; MacEwen, D

    1989-01-01

    Pigeons' short-term memory for fixed-ratio requirements was assessed using a delayed symbolic matching-to-sample procedure. Different choices were reinforced after fixed-ratio 10 and fixed-ratio 40 requirements, and delays of 0, 5, or 20 s were sometimes placed between sample ratios and choice. All birds made disproportionate numbers of responses to the small-ratio choice alternative when delays were interposed between ratios and choice, and this bias increased as a function of delay. Preference for the small fixed-ratio alternative was also observed on "no-sample" trials, during which the choice alternatives were presented without a prior sample ratio. This "choose-small" bias is analogous to results obtained by Spetch and Wilkie (1983) with event duration as the discriminative stimulus. The choose-small bias was attenuated when the houselight was turned on during delays, but overall accuracy was not influenced systematically by the houselight manipulation. PMID:2584917

  9. Morphological peculiarities of respiratory compartments of arctic animal lungs.

    PubMed

    Shishkin, G S; Ustyuzhaninova, N V

    1997-04-01

    Morphological and ultrastructural peculiarities of interalveolar septa in endemic arctic animals (reindeer, polar fox, lemming) are compared with laboratory animals (rat, dog). For light microscopy, tissue samples were taken from the central and peripheral sections of all lobes of the right lung. They were fixed in 10% neutral formalin and embedded in paraffin. For electron microscopy, samples were taken from subpleural sections of the caudal lobe of the right lung, fixed in 4% paraformaldehyde for 24 hours, and subsequently postfixed in 2% OsO4 for 2.0 hours. Samples were dehydrated in acetone and embedded in a mixture of Epon 812 and Araldite. Ultrathin sections were photographed at a magnification of x4,000. For each interalveolar septum, lengths and diameters were recorded, and the areas of the septal surface and the air-blood barrier surface and the number of these structures were determined. The topography of capillaries and the ultrastructure of interstitium were described. Acini in the arctic animals (reindeer, polar fox, lemming) are compact. In all lobes they are fully expanded and uniformly filled with air. There is no physiological atelectasis. Alveoli appear straight and homogeneous in form and size. In the polar fox, the quantity of interalveolar pores of Kohn is twice that in the dog. The number of pores in the lemming is similar to that in the rat but their size is 1.6 times greater in diameter. In arctic animals more capillaries connect with both alveolar surfaces by an air-blood barrier and simultaneously participate in the gas exchange of two adjoining alveoli. In the polar fox and lemming the thickness of the air-blood barrier is 1.3-1.4 times less than that in the dog and rat. The set of morpho-functional peculiarities of the acini of arctic animals allows for an increase in gas exchange in the respiratory compartments of the lungs and provides necessary oxygenation of arterial blood at a low partial pressure of oxygen in the alveolar gas.

  10. Clinical decision making and the expected value of information.

    PubMed

    Willan, Andrew R

    2007-01-01

    The results of the HOPE study, a randomized clinical trial, provide strong evidence that 1) ramipril prevents the composite outcome of cardiovascular death, myocardial infarction or stroke in patients who are at high risk of a cardiovascular event and 2) ramipril is cost-effective at a threshold willingness-to-pay of $10,000 to prevent an event of the composite outcome. In this report the concept of the expected value of information is used to determine if the information provided by the HOPE study is sufficient for decision making in the US and Canada. Using the cost-effectiveness data from a clinical trial, or from a meta-analysis of several trials, one can determine, based on the number of future patients that would benefit from the health technology under investigation, the expected value of sample information (EVSI) of a future trial as a function of proposed sample size. If the EVSI exceeds the cost for any particular sample size then the current information is insufficient for decision making and a future trial is indicated. If, on the other hand, there is no sample size for which the EVSI exceeds the cost, then there is sufficient information for decision making and no future trial is required. Using the data from the HOPE study these concepts are applied for various assumptions regarding the fixed and variable cost of a future trial and the number of patients who would benefit from ramipril. Expected value of information methods provide a decision-analytic alternative to the standard likelihood methods for assessing the evidence provided by cost-effectiveness data from randomized clinical trials.
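
    The decision rule described above (run a further trial only if EVSI exceeds cost for some sample size) can be sketched for a normal prior on the per-patient incremental net benefit. This conjugate formulation is a common textbook simplification, not necessarily the exact model of the paper, and all parameter values below are hypothetical:

```python
import math

def _phi(x):
    """Standard normal pdf."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def _Phi(x):
    """Standard normal cdf."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def evsi_normal(m0, s0, sigma, n, n_future):
    """EVSI of a future trial of n patients when the per-patient incremental
    net benefit has a normal prior (mean m0, sd s0) and per-patient sampling
    sd sigma; the technology is adopted iff the posterior mean is positive."""
    var_pre = s0**2 * (n * s0**2) / (sigma**2 + n * s0**2)
    sp = math.sqrt(var_pre)          # sd of the preposterior mean
    z = abs(m0) / sp
    return n_future * sp * (_phi(z) - z * (1 - _Phi(z)))

def optimal_trial_size(m0, s0, sigma, n_future, c_fixed, c_per_pt, n_max=5000):
    """Sample size with the largest EVSI minus cost; None if no sample size
    makes a future trial worthwhile (current information is sufficient)."""
    best = None
    for n in range(10, n_max + 1, 10):
        gain = evsi_normal(m0, s0, sigma, n, n_future) - (c_fixed + c_per_pt * n)
        if best is None or gain > best[1]:
            best = (n, gain)
    return best if best is not None and best[1] > 0 else None
```

    EVSI grows with n but with diminishing returns, while cost grows linearly, so the net gain either peaks at a finite n or is never positive.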

  11. Improved variance estimation of classification performance via reduction of bias caused by small sample size.

    PubMed

    Wickenberg-Bolin, Ulrika; Göransson, Hanna; Fryknäs, Mårten; Gustafsson, Mats G; Isaksson, Anders

    2006-03-13

    Supervised learning for classification of cancer employs a set of design examples to learn how to discriminate between tumors. In practice it is crucial to confirm that the classifier is robust with good generalization performance to new examples, or at least that it performs better than random guessing. A suggested alternative is to obtain a confidence interval of the error rate using repeated design and test sets selected from available examples. However, it is known that even in the ideal situation of repeated designs and tests with completely novel samples in each cycle, a small test set size leads to a large bias in the estimate of the true variance between design sets. Therefore, different methods for small sample performance estimation, such as a recently proposed procedure called Repeated Random Sampling (RSS), are also expected to result in heavily biased estimates, which in turn translates into biased confidence intervals. Here we explore such biases and develop a refined algorithm called Repeated Independent Design and Test (RIDT). Our simulations reveal that repeated designs and tests based on resampling in a fixed bag of samples yield a biased variance estimate. We also demonstrate that it is possible to obtain an improved variance estimate by means of a procedure that explicitly models how this bias depends on the number of samples used for testing. For the special case of repeated designs and tests using new samples for each design and test, we present an exact analytical expression for how the expected value of the bias decreases with the size of the test set. We show that via modeling and subsequent reduction of the small sample bias, it is possible to obtain an improved estimate of the variance of classifier performance between design sets. However, the uncertainty of the variance estimate is large in the simulations performed, indicating that the method in its present form cannot be directly applied to small data sets.
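
    The bias mechanism is easy to reproduce: the sample variance of observed test-set accuracies equals the true between-design variance plus a binomial term of order p(1-p)/m that shrinks as the test-set size m grows. A toy simulation (accuracy rather than error rate; the distributions are hypothetical, not from the paper):

```python
import random
import statistics

def var_of_observed_accuracy(test_size, n_designs=200, reps=30, seed=0):
    """Mean (over reps) sample variance of test-set accuracies across designs.
    The true between-design variance is 0.05**2 = 0.0025; the excess is the
    small-test-set bias term ~E[p*(1-p)]/test_size discussed in the abstract."""
    rng = random.Random(seed)
    vars_ = []
    for _ in range(reps):
        accs = []
        for _ in range(n_designs):
            p = min(max(rng.gauss(0.8, 0.05), 0.0), 1.0)  # true accuracy of this design
            hits = sum(rng.random() < p for _ in range(test_size))
            accs.append(hits / test_size)
        vars_.append(statistics.variance(accs))
    return sum(vars_) / len(vars_)
```

    With a test set of 10 the apparent variance is several times the true 0.0025, while with 200 test samples it is close to it, which is the dependence on test-set size that the RIDT procedure models and subtracts.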

  12. Luminescence study on Eu3+ doped Y2O3 nanoparticles: particle size, concentration and core-shell formation effects

    NASA Astrophysics Data System (ADS)

    Robindro Singh, L.; Ningthoujam, R. S.; Sudarsan, V.; Srivastava, Iti; Dorendrajit Singh, S.; Dey, G. K.; Kulshreshtha, S. K.

    2008-02-01

    Nanoparticles of Eu3+ doped Y2O3 (core) and Eu3+ doped Y2O3 covered with Y2O3 shell (core-shell) are prepared by urea hydrolysis for 3 h in ethylene glycol medium at a relatively low temperature of 140 °C, followed by heating at 500 and 900 °C. Particle sizes determined from x-ray diffraction and transmission electron microscopic studies are 11 and 18 nm for 500 and 900 °C heated samples respectively. Based on the luminescence studies of 500 and 900 °C heated samples, it is confirmed that there is no particle size effect on the peak positions of Eu3+ emission, and optimum luminescence intensity is observed from the nanoparticles with a Eu3+ concentration of 4-5 at.%. A luminescence study establishes that the Eu3+ environment in amorphous Y(OH)3 is different from that in crystalline Y2O3. For a fixed concentration of Eu3+ doping, there is a reduction in Eu3+ emission intensity for core-shell nanoparticles compared to that of core nanoparticles, and this has been attributed to the concentration dilution effect. Energy transfer from the host to Eu3+ increases with increasing crystallinity.

  13. D-optimal experimental designs to test for departure from additivity in a fixed-ratio mixture ray.

    PubMed

    Coffey, Todd; Gennings, Chris; Simmons, Jane Ellen; Herr, David W

    2005-12-01

    Traditional factorial designs for evaluating interactions among chemicals in a mixture may be prohibitive when the number of chemicals is large. Using a mixture of chemicals with a fixed ratio (mixture ray) results in an economical design that allows estimation of additivity or nonadditive interaction for a mixture of interest. This methodology is extended easily to a mixture with a large number of chemicals. Optimal experimental conditions can be chosen that result in increased power to detect departures from additivity. Although these designs are used widely for linear models, optimal designs for nonlinear threshold models are less well known. In the present work, the use of D-optimal designs is demonstrated for nonlinear threshold models applied to a fixed-ratio mixture ray. For a fixed sample size, this design criterion selects the experimental doses and number of subjects per dose level that result in minimum variance of the model parameters and thus increased power to detect departures from additivity. An optimal design is illustrated for a 2:1 ratio (chlorpyrifos:carbaryl) mixture experiment. For this example, and in general, the optimal designs for the nonlinear threshold model depend on prior specification of the slope and dose threshold parameters. Use of a D-optimal criterion produces experimental designs with increased power, whereas standard nonoptimal designs with equally spaced dose groups may result in low power if the active range or threshold is missed.
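
    For the nonlinear threshold model the optimal doses depend on prior guesses of the slope and threshold parameters, as the abstract notes; the criterion itself is easiest to see for a straight-line dose-response along the mixture ray, where D-optimality maximizes det(X'X). A simplified sketch (the total-dose units on the ray are hypothetical):

```python
def d_criterion(doses):
    """det(X'X) for a straight-line model y = b0 + b1 * dose along the ray."""
    n = len(doses)
    s = sum(doses)
    ss = sum(d * d for d in doses)
    return n * ss - s * s  # determinant of the 2x2 information matrix

# Two candidate allocations of 12 subjects to total-dose levels (arbitrary
# units) along a hypothetical 2:1 chlorpyrifos:carbaryl ray:
equally_spaced = [i / 11 for i in range(12)]
extremes = [0.0] * 6 + [1.0] * 6
```

    Concentrating subjects at the design's support points raises det(X'X) and hence lowers parameter variance; for the threshold model the support points themselves shift with the assumed slope and threshold, which is why prior specification matters.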

  14. Reproducibility of subjective appetite ratings and ad libitum test meal energy intake in overweight and obese males.

    PubMed

    Horner, Katy M; Byrne, Nuala M; King, Neil A

    2014-10-01

    To determine whether changes in appetite and energy intake (EI) can be detected and play a role in the effectiveness of interventions, it is necessary to identify their variability under normal conditions. We assessed the reproducibility of subjective appetite ratings and ad libitum test meal EI after a standardised pre-load in overweight and obese males. Fifteen overweight and obese males (BMI 30.3 ± 4.9 kg/m(2), aged 34.9 ± 10.6 years) completed two identical test days, 7 days apart. Participants were provided with a standardised fixed breakfast (1676 kJ) and 5 h later an ad libitum pasta lunch. An electronic appetite rating system was used to assess subjective ratings before and after the fixed breakfast, and periodically during the postprandial period. EI was assessed at the ad libitum lunch meal. Sample size estimates for paired design studies were calculated. Appetite ratings demonstrated a consistent oscillating pattern between test days, and were more reproducible for mean postprandial than fasting ratings. The correlation between ad libitum EI on the two test days was r = 0.78 (P < 0.01). Using a paired design and a power of 0.8, a minimum of 12 participants would be needed to detect a 10 mm change in 5 h postprandial mean ratings and 17 to detect a 500 kJ difference in ad libitum EI. Intra-individual variability of appetite and ad libitum test meal EI in overweight and obese males is comparable to previous reports in normal weight adults. Sample size requirements for studies vary depending on the parameter of interest and sensitivity needed. Copyright © 2014 Elsevier Ltd. All rights reserved.

  15. Genetic variation and dopamine D2 receptor availability: a systematic review and meta-analysis of human in vivo molecular imaging studies.

    PubMed

    Gluskin, B S; Mickey, B J

    2016-03-01

    The D2 dopamine receptor mediates neuropsychiatric symptoms and is a target of pharmacotherapy. Inter-individual variation of D2 receptor density is thought to influence disease risk and pharmacological response. Numerous molecular imaging studies have tested whether common genetic variants influence D2 receptor binding potential (BP) in humans, but demonstration of robust effects has been limited by small sample sizes. We performed a systematic search of published human in vivo molecular imaging studies to estimate effect sizes of common genetic variants on striatal D2 receptor BP. We identified 21 studies examining 19 variants in 11 genes. The most commonly studied variant was a single-nucleotide polymorphism in ANKK1 (rs1800497, Glu713Lys, also called 'Taq1A'). Fixed- and random-effects meta-analyses of this variant (5 studies, 194 subjects total) revealed that striatal BP was significantly and robustly lower among carriers of the minor allele (Lys713) relative to major allele homozygotes. The weighted standardized mean difference was -0.57 under the fixed-effect model (95% confidence interval=(-0.87, -0.27), P=0.0002). The normal relationship between rs1800497 and BP was not apparent among subjects with neuropsychiatric diseases. Significant associations with baseline striatal D2 receptor BP have been reported for four DRD2 variants (rs1079597, rs1076560, rs6277 and rs1799732) and a PER2 repeat polymorphism, but none have yet been tested in more than two independent samples. Our findings resolve apparent discrepancies in the literature and establish that rs1800497 robustly influences striatal D2 receptor availability. This genetic variant is likely to contribute to important individual differences in human striatal function, neuropsychiatric disease risk and pharmacological response.
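
    The fixed-effect estimate quoted (weighted SMD -0.57, 95% CI (-0.87, -0.27)) is produced by standard inverse-variance pooling. A sketch with hypothetical per-study SMDs and standard errors (illustration only, not the five rs1800497 studies analyzed in the paper):

```python
import math

def fixed_effect_pool(effects, ses):
    """Inverse-variance (fixed-effect) pooling of standardized mean differences."""
    weights = [1.0 / se**2 for se in ses]
    wsum = sum(weights)
    pooled = sum(w * e for w, e in zip(weights, effects)) / wsum
    se_pooled = math.sqrt(1.0 / wsum)
    ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
    return pooled, ci

# Hypothetical study-level SMDs (minor-allele carriers vs homozygotes)
# and their standard errors:
d, ci = fixed_effect_pool([-0.4, -0.7, -0.5], [0.25, 0.30, 0.20])
```

    Studies with smaller standard errors dominate the pooled estimate; a random-effects version would additionally widen the weights by the between-study variance.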

  16. Characterization of Briquette from the Corncob Charcoal and Sago Stem Alloys

    NASA Astrophysics Data System (ADS)

    Lestari, Lina; Inda Variani, Viska; Nyoman Sudiana, I.; Purnama Sari, Dewi; Ode Sitti Ilmawati, Wa; Sahaluddin Hasan, Erzam

    2017-05-01

    Briquettes fabricated from charcoal of corncob (Zea mays L.) and sago stem (Metroxilon sago Rottb.) were produced and characterized. The charcoal powder was filtered through a strainer of mesh size 70-80 to obtain a homogeneous particle size. Briquettes were made by mixing corncob charcoal powder, sago stem charcoal and sago adhesive in mass ratios of 4:5:1, 4.5:4.5:1 and 5:4:1. The materials were mixed with hot water and stirred to obtain a homogeneous blend, then compacted at pressures of 34.66 kg/cm2, 69.32 kg/cm2 and 103.98 kg/cm2 into a cylindrical shape with a diameter of 4 cm. The cylindrical briquettes were then dried at a temperature of 60°C for 48 hours. After drying, the samples were characterized for density and for water, ash, volatile matter and fixed carbon contents. The burning rate, combustion temperature and ignition time were also determined. The experimental results show that the briquettes have average densities from 0.602 to 0.717 gr/cm3. The density increases with increasing forming pressure. The increase in pressure also results in a decrease of moisture content from 2.669% to 0.842%. The ash content is found to range from 3.459% to 8.766%. Volatile matter and fixed carbon vary from 13.658% to 21.168% and from 67.667% to 80.758%, respectively. The lowest burning rate is 0.0898 gr/s and the optimum burning temperature is 499.2°C, with the lowest ignition time of 1.58 minutes. These briquette parameters meet the quality standard for industrial briquettes.
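
    The reported densities are mass over cylinder volume, and the burning rate is mass consumed per unit time. A small sketch (the 4 cm diameter is from the study; the 5 cm height, 45 g mass and burn duration are assumed for illustration):

```python
import math

def cylinder_density(mass_g, diameter_cm, height_cm):
    """Bulk density of a cylindrical briquette, g/cm^3."""
    volume = math.pi * (diameter_cm / 2) ** 2 * height_cm
    return mass_g / volume

def burning_rate(mass_burned_g, duration_s):
    """Mass burned per unit time, g/s."""
    return mass_burned_g / duration_s

# Hypothetical 4 cm diameter briquette, 5 cm tall, 45 g:
rho = cylinder_density(45.0, 4.0, 5.0)
rate = burning_rate(45.0, 500.0)
```

    The assumed dimensions give a density of about 0.716 g/cm3, which falls inside the study's reported range of 0.602 to 0.717 g/cm3.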

  17. Probability of identity by descent in metapopulations.

    PubMed Central

    Kaj, I; Lascoux, M

    1999-01-01

    Equilibrium probabilities of identity by descent (IBD), for pairs of genes within individuals, for genes between individuals within subpopulations, and for genes between subpopulations are calculated in metapopulation models with fixed or varying colony sizes. A continuous-time analog to the Moran model was used in either case. For fixed-colony size both propagule and migrant pool models were considered. The varying population size model is based on a birth-death-immigration (BDI) process, to which migration between colonies is added. Wright's F statistics are calculated and compared to previous results. Adding between-island migration to the BDI model can have an important effect on the equilibrium probabilities of IBD and on Wright's index. PMID:10388835

  18. An exploratory drilling exhaustion sequence plot program

    USGS Publications Warehouse

    Schuenemeyer, J.H.; Drew, L.J.

    1977-01-01

    The exhaustion sequence plot program computes the conditional area of influence for wells in a specified rectangular region with respect to a fixed-size deposit. The deposit is represented by an ellipse whose size is chosen by the user. The area of influence may be displayed on computer printer plots consisting of a maximum of 10,000 grid points. At each point, a symbol is presented that indicates the probability of that point being exhausted by nearby wells with respect to a fixed-size ellipse. This output gives a pictorial view of the manner in which oil fields are exhausted. In addition, the exhaustion data may be used to estimate the number of deposits remaining in a basin. © 1977.
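
    The area of influence can be read as a point-in-ellipse test: a grid point is exhausted by nearby wells if a deposit of the fixed ellipse size centred there would already have been hit by drilling. A hedged sketch of that geometric reading (axis-aligned ellipse; the program's actual orientation handling is not described in the abstract):

```python
def exhausted(point, wells, a, b):
    """True if any existing well lies inside the fixed-size ellipse
    (semi-axes a, b, axis-aligned) centred on the grid point, i.e. a
    deposit of that size at the point would already have been found."""
    px, py = point
    return any(((wx - px) / a) ** 2 + ((wy - py) / b) ** 2 <= 1.0
               for wx, wy in wells)

# A well half a semi-axis away exhausts the point; one two semi-axes
# away does not:
hit = exhausted((0.0, 0.0), [(0.5, 0.0)], 1.0, 1.0)
miss = exhausted((0.0, 0.0), [(2.0, 0.0)], 1.0, 1.0)
```

    Evaluating this test over the printer-plot grid and summarizing per-point probabilities yields the exhaustion picture the abstract describes.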

  19. Size-confined fixed-composition and composition-dependent engineered band gap alloying induces different internal structures in L-cysteine-capped alloyed quaternary CdZnTeS quantum dots

    NASA Astrophysics Data System (ADS)

    Adegoke, Oluwasesan; Park, Enoch Y.

    2016-06-01

    The development of alloyed quantum dot (QD) nanocrystals with attractive optical properties for a wide array of chemical and biological applications is a growing research field. In this work, size-tunable engineered band gap composition-dependent alloying and fixed-composition alloying were employed to fabricate new L-cysteine-capped alloyed quaternary CdZnTeS QDs exhibiting different internal structures. Lattice parameters simulated based on powder X-ray diffraction (PXRD) revealed the internal structure of the composition-dependent alloyed CdxZnyTeS QDs to have a gradient nature, whereas the fixed-composition alloyed QDs exhibited a homogenous internal structure. Transmission electron microscopy (TEM) and dynamic light scattering (DLS) analysis confirmed the size-confined nature and monodispersity of the alloyed nanocrystals. The zeta potential values were within the accepted range of colloidal stability. Circular dichroism (CD) analysis showed that the surface-capped L-cysteine ligand induced electronic and conformational chiroptical changes in the alloyed nanocrystals. The photoluminescence (PL) quantum yield (QY) values of the gradient alloyed QDs were 27-61%, whereas for the homogenous alloyed QDs, the PL QY values were spectacularly high (72-93%). Our work demonstrates that engineered fixed alloying produces homogenous QD nanocrystals with higher PL QY than composition-dependent alloying.

  20. Particle size and morphology of UHMWPE wear debris in failed total knee arthroplasties--a comparison between mobile bearing and fixed bearing knees.

    PubMed

    Huang, Chun-Hsiung; Ho, Fang-Yuan; Ma, Hon-Ming; Yang, Chan-Tsung; Liau, Jiann-Jong; Kao, Hung-Chan; Young, Tai-Horng; Cheng, Cheng-Kung

    2002-09-01

    Osteolysis induced by ultrahigh molecular weight polyethylene wear debris has been recognized as the major cause of long-term failure in total joint arthroplasties. In a previous study, the prevalence of intraoperatively identified osteolysis during primary revision surgery was much higher in mobile bearing knee replacements (47%) than in fixed bearing knee replacements (13%). We postulated that mobile bearing knee implants tend to produce smaller sized particles. In our current study, we compared the particle size and morphology of polyethylene wear debris between failed mobile bearing and fixed bearing knees. Tissue specimens from interfacial and lytic regions were extracted during revision surgery of 10 mobile bearing knees (all of the low contact stress (LCS) design) and 17 fixed bearing knees (10 of the porous-coated anatomic (PCA) and 7 of the Miller/Galante design). Polyethylene particles were isolated from the tissue specimens and examined using both scanning electron microscopy and light-scattering analyses. The LCS mobile bearing knees produced smaller particulate debris (mean equivalent spherical diameter: 0.58 microm in LCS, 1.17 microm in PCA and 5.23 microm in M/G) and more granular debris (mean value: 93% in LCS, 77% in PCA and 15% in M/G).

  1. Anaerobic treatment of winery wastewater in fixed bed reactors.

    PubMed

    Ganesh, Rangaraj; Rajinikanth, Rajagopal; Thanikal, Joseph V; Ramanujam, Ramamoorty Alwar; Torrijos, Michel

    2010-06-01

    The treatment of winery wastewater in three upflow anaerobic fixed-bed reactors (S9, S30 and S40) with low density floating supports of varying size and specific surface area was investigated. A maximum OLR of 42 g/l day with 80 +/- 0.5% removal efficiency was attained in S9, which had supports with the highest specific surface area. It was found that the efficiency of the reactors increased with decrease in size and increase in specific surface area of the support media. Total biomass accumulation in the reactors was also found to vary as a function of specific surface area and size of the support medium. The Stover-Kincannon kinetic model predicted satisfactorily the performance of the reactors. The maximum removal rate constant (U(max)) was 161.3, 99.0 and 77.5 g/l day and the saturation value constant (K(B)) was 162.0, 99.5 and 78.0 g/l day for S9, S30 and S40, respectively. Due to their higher biomass retention potential, the supports used in this study offer great promise as media in anaerobic fixed bed reactors. Anaerobic fixed-bed reactors with these supports can be applied as high-rate systems for the treatment of large volumes of wastewaters typically containing readily biodegradable organics, such as the winery wastewater.
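
    The reported kinetic constants make the Stover-Kincannon prediction easy to check: the substrate removal rate is U(max)·OLR/(K(B) + OLR), so the removal efficiency is simply U(max)/(K(B) + OLR). A minimal sketch in Python (the function name is ours):

```python
def stover_kincannon_efficiency(olr, u_max, k_b):
    """Predicted substrate removal efficiency (fraction) of a fixed-bed
    reactor under the Stover-Kincannon model: the removal rate is
    u_max * olr / (k_b + olr), so efficiency = u_max / (k_b + olr).
    All rates in g/l day."""
    removal_rate = u_max * olr / (k_b + olr)  # substrate removed, g/l day
    return removal_rate / olr                 # fraction of the load removed

# Constants reported for reactor S9, at its maximum loading of 42 g/l day
eff = stover_kincannon_efficiency(olr=42.0, u_max=161.3, k_b=162.0)
print(f"{eff:.1%}")  # about 79%, consistent with the observed 80 +/- 0.5%
```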

  2. Stresses in Implant-Supported Fixed Complete Dentures with Different Screw-Tightening Sequences and Torque Application Modes.

    PubMed

    Barcellos, Leonardo H; Palmeiro, Marina Lobato; Naconecy, Marcos M; Geremia, Tomás; Cervieri, André; Shinkai, Rosemary S

    2018-05-17

    To compare the effects of different screw-tightening sequences and torque applications on stresses in implant-supported fixed complete dentures supported by five abutments. Strain gauges fixed to the abutments were used to test the sequences 2-4-3-1-5; 1-2-3-4-5; 3-2-4-1-5; and 2-5-4-1-3 with direct 10-Ncm torque or progressive torque (5 + 10 Ncm). Data were analyzed using analysis of variance and standardized effect size. No effects of tightening sequence or torque application were found except for the sequence 3-2-4-1-5 and some small to moderate effect sizes. Screw-tightening sequences and torque application modes have only a marginal effect on residual stresses.

  3. Buyer-vendor coordination for fixed lifetime product with quantity discount under finite production rate

    NASA Astrophysics Data System (ADS)

    Zhang, Qinghong; Luo, Jianwen; Duan, Yongrui

    2016-03-01

    Buyer-vendor coordination has been widely addressed; however, the fixed lifetime of the product is seldom considered. In this paper, we study the coordination of an integrated production-inventory system with quantity discount for a fixed lifetime product under finite production rate and deterministic demand. We first derive the buyer's ordering policy and the vendor's production batch size in decentralised and centralised systems. We then compare the two systems and show that the ordering policies and production batch sizes are not coordinated. To improve the supply chain efficiency, we propose a quantity discount contract and prove that the contract can coordinate the buyer-vendor supply chain. Finally, we present analytically tractable solutions and give a numerical example to illustrate the benefits of the proposed quantity discount strategy.

  4. Financial Management of a Large Multi-site Randomized Clinical Trial

    PubMed Central

    Sheffet, Alice J.; Flaxman, Linda; Tom, MeeLee; Hughes, Susan E.; Longbottom, Mary E.; Howard, Virginia J.; Marler, John R.; Brott, Thomas G.

    2014-01-01

    Background The Carotid Revascularization Endarterectomy versus Stenting Trial (CREST) received five years’ funding ($21,112,866) from the National Institutes of Health to compare carotid stenting to surgery for stroke prevention in 2,500 randomized participants at 40 sites. Aims Herein we evaluate the change in the CREST budget from a fixed to variable-cost model and recommend strategies for the financial management of large-scale clinical trials. Methods Projections of the original grant’s fixed-cost model were compared to the actual costs of the revised variable-cost model. The original grant’s fixed-cost budget included salaries, fringe benefits, and other direct and indirect costs. For the variable-cost model, the costs were actual payments to the clinical sites and core centers based upon actual trial enrollment. We compared annual direct and indirect costs and per-patient cost for both the fixed and variable models. Differences between clinical site and core center expenditures were also calculated. Results Using a variable-cost budget for clinical sites, funding was extended by no-cost extension from five to eight years. Randomizing sites tripled from 34 to 109. Of the 2,500 targeted sample size, 138 (5.5%) were randomized during the first five years and 1,387 (55.5%) during the no-cost extension. The actual per-patient costs of the variable model were 9% ($13,845) of the projected per-patient costs ($152,992) of the fixed model. Conclusions Performance-based budgets conserve funding, promote compliance, and allow for additional sites at modest additional cost. Costs of large-scale clinical trials can thus be reduced through effective management without compromising scientific integrity. PMID:24661748

  6. JPRS Report, China.

    DTIC Science & Technology

    1991-11-19

    ...grew 253 percent, net assets grew 87 percent, fixed assets grew 155 percent, and average...vigorous debates among economists a few years ago, has been...Although they only account for 2.7 percent of all industrial enterprises, they possess two-thirds of all fixed assets...If we are to...further fiscal problems are handled on an ad-hoc basis...large- and medium-sized enterprises do not appear strong...A fixed base number in contracts sets taxes

  7. Complement-fixing Activity of Fulvic Acid from Shilajit and Other Natural Sources

    PubMed Central

    Schepetkin, Igor A.; Xie, Gang; Jutila, Mark A.; Quinn, Mark T.

    2008-01-01

    Shilajit has been used traditionally in folk medicine for treatment of a variety of disorders, including syndromes involving excessive complement activation. Extracts of Shilajit contain significant amounts of fulvic acid (FA), and it has been suggested that FA is responsible for many therapeutic properties of Shilajit. However, little is known regarding physical and chemical properties of Shilajit extracts, and nothing is known about their effects on the complement system. To address this issue, we fractionated extracts of commercial Shilajit using anion exchange and size-exclusion chromatography. One neutral (S-I) and two acidic (S-II and S-III) fractions were isolated, characterized, and compared with standardized FA samples. The most abundant fraction (S-II) was further fractionated into three sub-fractions (S-II-1 to S-II-3). The van Krevelen diagram showed that the Shilajit fractions are products of polysaccharide degradation, and all fractions, except S-II-3, contained type II arabinogalactan. All Shilajit fractions exhibited dose-dependent complement-fixing activity in vitro with high potency. Furthermore, we found a strong correlation between complement-fixing activity and carboxylic group content in the Shilajit fractions and other FA sources. These data provide a molecular basis to explain at least part of the beneficial therapeutic properties of Shilajit and other humic extracts. PMID:19107845

  8. A dataset describing brooding in three species of South African brittle stars, comprising seven high-resolution, micro X-ray computed tomography scans.

    PubMed

    Landschoff, Jannes; Du Plessis, Anton; Griffiths, Charles L

    2015-01-01

    Brooding brittle stars have a special mode of reproduction whereby they retain their eggs and juveniles inside respiratory body sacs called bursae. In the past, studying this phenomenon required disturbance of the sample by dissecting the adult. This caused irreversible damage and made the sample unsuitable for future studies. Micro X-ray computed tomography (μCT) is a promising technique, not only to visualise juveniles inside the bursae, but also to keep the sample intact and make the dataset of the scan available for future reference. Seven μCT scans of five freshly fixed (70 % ethanol) individuals, representing three differently sized brittle star species, provided adequate image quality to determine the numbers, sizes and postures of internally brooded young, as well as anatomy and morphology of adults. No staining agents were necessary to achieve high-resolution, high-contrast images, which permitted visualisations of both calcified and soft tissue. The raw data (projection and reconstruction images) are publicly available for download from GigaDB. Brittle stars of all sizes are suitable candidates for μCT imaging. This explicitly adds a new technique to the suite of tools available for studying the development of internally brooded young. The purpose of applying the technique was to visualise juveniles inside the adult, but because of the universally good quality of the dataset, the images can also be used for anatomical or comparative morphology-related studies of adult structures.

  9. State-space modeling of population sizes and trends in Nihoa Finch and Millerbird

    USGS Publications Warehouse

    Gorresen, P. Marcos; Brinck, Kevin W.; Camp, Richard J.; Farmer, Chris; Plentovich, Sheldon M.; Banko, Paul C.

    2016-01-01

    Both of the 2 passerines endemic to Nihoa Island, Hawai‘i, USA—the Nihoa Millerbird (Acrocephalus familiaris kingi) and Nihoa Finch (Telespiza ultima)—are listed as endangered by federal and state agencies. Their abundances have been estimated by irregularly implemented fixed-width strip-transect sampling from 1967 to 2012, from which area-based extrapolation of the raw counts produced highly variable abundance estimates for both species. To evaluate an alternative survey method and improve abundance estimates, we conducted variable-distance point-transect sampling between 2010 and 2014. We compared our results to those obtained from strip-transect samples. In addition, we applied state-space models to derive improved estimates of population size and trends from the legacy time series of strip-transect counts. Both species were fairly evenly distributed across Nihoa and occurred in all or nearly all available habitat. Population trends for Nihoa Millerbird were inconclusive because of high within-year variance. Trends for Nihoa Finch were positive, particularly since the early 1990s. Distance-based analysis of point-transect counts produced mean estimates of abundance similar to those from strip-transects but was generally more precise. However, both survey methods produced biologically unrealistic variability between years. State-space modeling of the long-term time series of abundances obtained from strip-transect counts effectively reduced uncertainty in both within- and between-year estimates of population size, and allowed short-term changes in abundance trajectories to be smoothed into a long-term trend.

  10. Simulating recurrent event data with hazard functions defined on a total time scale.

    PubMed

    Jahn-Eimermacher, Antje; Ingel, Katharina; Ozga, Ann-Kathrin; Preussler, Stella; Binder, Harald

    2015-03-08

    In medical studies with recurrent event data a total time scale perspective is often needed to adequately reflect disease mechanisms. This means that the hazard process is defined on the time since some starting point, e.g. the beginning of some disease, in contrast to a gap time scale where the hazard process restarts after each event. While techniques such as the Andersen-Gill model have been developed for analyzing data from a total time perspective, techniques for the simulation of such data, e.g. for sample size planning, have not been investigated so far. We have derived a simulation algorithm covering the Andersen-Gill model that can be used for sample size planning in clinical trials as well as the investigation of modeling techniques. Specifically, we allow for fixed and/or random covariates and an arbitrary hazard function defined on a total time scale. Furthermore we take into account that individuals may be temporarily insusceptible to a recurrent incidence of the event. The methods are based on conditional distributions of the inter-event times conditional on the total time of the preceding event or study start. Closed form solutions are provided for common distributions. The derived methods have been implemented in a readily accessible R script. The proposed techniques are illustrated by planning the sample size for a clinical trial with complex recurrent event data. The required sample size is shown to be affected not only by censoring and intra-patient correlation, but also by the presence of risk-free intervals. This demonstrates the need for a simulation algorithm that particularly allows for complex study designs where no analytical sample size formulas might exist. The derived simulation algorithm is seen to be useful for the simulation of recurrent event data that follow an Andersen-Gill model. Next to the use of a total time scale, it allows for intra-patient correlation and risk-free intervals as are often observed in clinical trial data. Its application therefore allows the simulation of data that closely resemble real settings and thus can improve the use of simulation studies for designing and analysing studies.
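
    The conditional-distribution idea can be sketched in a few lines. This is not the authors' R script; it is a minimal Python illustration, assuming a Weibull-type cumulative hazard Λ(t) = z·λ·t^p on the total time scale (the parameter names and the Weibull choice are ours), with each event time obtained by inverting the conditional survival function given the total time of the preceding event:

```python
import numpy as np

def simulate_total_time_events(lam, p, frailty, t_max, rng):
    """Simulate one subject's recurrent event times when the hazard is
    defined on a *total* time scale, with cumulative baseline hazard
    Lambda(t) = frailty * lam * t**p (Weibull-type; p > 1 means the
    hazard keeps rising with time since study start, not since the last
    event).  Each next event time is drawn conditional on the total time
    of the preceding event by adding a unit-exponential increment to the
    cumulative hazard and inverting."""
    events, t_prev = [], 0.0
    while True:
        e = rng.exponential(1.0)
        # Solve Lambda(t_next) = Lambda(t_prev) + e for t_next
        t_next = ((frailty * lam * t_prev ** p + e) / (frailty * lam)) ** (1.0 / p)
        if t_next > t_max:          # administrative censoring
            return events
        events.append(t_next)
        t_prev = t_next

rng = np.random.default_rng(1)
times = simulate_total_time_events(lam=0.5, p=1.5, frailty=1.0, t_max=10.0, rng=rng)
```

    With p = 1 the construction reduces to i.i.d. exponential gap times; the `frailty` factor is one simple way to induce intra-patient correlation across simulated subjects.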

  11. A prospective randomised comparative parallel study of amniotic membrane wound graft in the management of diabetic foot ulcers.

    PubMed

    Zelen, Charles M; Serena, Thomas E; Denoziere, Guilhem; Fetterolf, Donald E

    2013-10-01

    Our purpose was to compare healing characteristics of diabetic foot ulcers treated with dehydrated human amniotic membrane allografts (EpiFix®, MiMedx, Kennesaw, GA) versus standard of care. An IRB-approved, prospective, randomised, single-centre clinical trial was performed. Included were patients with a diabetic foot ulcer of at least 4-week duration without infection having adequate arterial perfusion. Patients were randomised to receive standard care alone or standard care with the addition of EpiFix. Wound size reduction and rates of complete healing after 4 and 6 weeks were evaluated. In the standard care group (n = 12) and the EpiFix group (n = 13) wounds reduced in size by a mean of 32.0% ± 47.3% versus 97.1% ± 7.0% (P < 0.001) after 4 weeks, whereas at 6 weeks wounds were reduced by -1.8% ± 70.3% versus 98.4% ± 5.8% (P < 0.001), standard care versus EpiFix, respectively. After 4 and 6 weeks of treatment the overall healing rate with application of EpiFix was shown to be 77% and 92%, respectively, whereas standard care healed 0% and 8% of the wounds (P < 0.001), respectively. Patients treated with EpiFix achieved superior healing rates over standard treatment alone. These results show that using EpiFix in addition to standard care is efficacious for wound healing. ©2013 The Authors. International Wound Journal published by John Wiley & Sons Ltd and Medicalhelplines.com Inc.

  12. Inverse size scaling of the nucleolus by a concentration-dependent phase transition.

    PubMed

    Weber, Stephanie C; Brangwynne, Clifford P

    2015-03-02

    Just as organ size typically increases with body size, the size of intracellular structures changes as cells grow and divide. Indeed, many organelles, such as the nucleus [1, 2], mitochondria [3], mitotic spindle [4, 5], and centrosome [6], exhibit size scaling, a phenomenon in which organelle size depends linearly on cell size. However, the mechanisms of organelle size scaling remain unclear. Here, we show that the size of the nucleolus, a membraneless organelle important for cell-size homeostasis [7], is coupled to cell size by an intracellular phase transition. We find that nucleolar size directly scales with cell size in early C. elegans embryos. Surprisingly, however, when embryo size is altered, we observe inverse scaling: nucleolar size increases in small cells and decreases in large cells. We demonstrate that this seemingly contradictory result arises from maternal loading of a fixed number rather than a fixed concentration of nucleolar components, which condense into nucleoli only above a threshold concentration. Our results suggest that the physics of phase transitions can dictate whether an organelle assembles, and, if so, its size, providing a mechanistic link between organelle assembly and cell size. Since the nucleolus is known to play a key role in cell growth, this biophysical readout of cell size could provide a novel feedback mechanism for growth control. Copyright © 2015 Elsevier Ltd. All rights reserved.
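
    The fixed-number argument can be made concrete with a toy calculation (the numbers and names below are hypothetical, for illustration only): if a cell inherits a fixed number of nucleolar components and everything above the saturation concentration condenses, smaller cells condense more material.

```python
def condensed_amount(n_total, c_sat, v_cell):
    """Material condensed into the nucleolus when the cell inherits a
    fixed *number* n_total of nucleolar components: only the excess over
    the saturation concentration c_sat condenses."""
    return max(0.0, n_total - c_sat * v_cell)

# Hypothetical values: 100 components, saturation at 8 per unit volume
for v_cell in (5.0, 10.0, 12.0):  # smaller embryos -> smaller cells
    print(v_cell, condensed_amount(100.0, 8.0, v_cell))
# Less material stays dissolved in a small cell, so more condenses:
# the nucleolus is larger in smaller cells (inverse scaling).  Loading
# a fixed *concentration* instead (n_total proportional to v_cell)
# would give ordinary size scaling.
```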

  13. Reprint of: Effects of cold deformation, electron irradiation and extrusion on deuterium desorption behavior in Zr-1%Nb alloy

    NASA Astrophysics Data System (ADS)

    Morozov, O.; Mats, O.; Mats, V.; Zhurba, V.; Khaimovich, P.

    2018-01-01

    This article presents an analysis of the thermal desorption spectra of ion-implanted deuterium from Zr-1% Nb alloy. The samples studied underwent plastic deformation, low-temperature extrusion, and electron irradiation. Plastic rolling of the samples at ∼300 K produced plastic deformation to a degree of ε = 3.9 and formed a nanostructural state with an average grain size of d = 61 nm. This high defect density appears in the thermodesorption spectrum as an additional region of deuterium desorption in the temperature range 650-850 K. Further processing of the plastically rolled samples by electron irradiation reduced the average grain size (58 nm) and increased the grain-boundary concentration; as a result, the amount of deuterium desorbed in the temperature range 650-900 K increased. For Zr-1% Nb samples deformed by extrusion, the desorption region extends toward lower temperatures, down to 420 K. No formation of a deuterium solid-solution phase in zirconium was observed. The structural state is the controlling factor in the formation of the deuterium thermodesorption spectrum at a fixed implanted deuterium dose (hydrogen diagnostics): it appears as additional temperature ranges of deuterium desorption depending on the type, character, and content of defects.

  14. Molecular Evidence for Species-Level Distinctions in Clouded Leopards

    PubMed Central

    Buckley-Beason, Valerie A.; Johnson, Warren E.; Nash, William G.; Stanyon, Roscoe; Menninger, Joan C.; Driscoll, Carlos A.; Howard, JoGayle; Bush, Mitch; Page, John E.; Roelke, Melody E.; Stone, Gary; Martelli, Paolo P.; Wen, Ci; Ling, Lin; Duraisingam, Ratna K.; Lam, Phan V.

    2017-01-01

    Among the 37 living species of Felidae, the clouded leopard (Neofelis nebulosa) is generally classified as a monotypic genus basal to the Panthera lineage of great cats [1–5]. This secretive, mid-sized (16–23 kg) carnivore, now severely endangered, is traditionally subdivided into four southeast Asian subspecies (Figure 1A) [4–8]. We used molecular genetic methods to re-evaluate subspecies partitions and to quantify patterns of population genetic variation among 109 clouded leopards of known geographic origin (Figure 1A, Tables S1 and S2 in the Supplemental Data available online). We found strong phylogeographic monophyly and large genetic distances between N. n. nebulosa (mainland) and N. n. diardi (Borneo; n = 3 individuals) with mtDNA (771 bp), nuclear DNA (3100 bp), and 51 microsatellite loci. Thirty-six fixed mitochondrial and nuclear nucleotide differences and 20 microsatellite loci with nonoverlapping allele-size ranges distinguished N. n. nebulosa from N. n. diardi. Along with fixed subspecies-specific chromosomal differences, this degree of differentiation is equivalent to, or greater than, comparable measures among five recognized Panthera species (lion, tiger, leopard, jaguar, and snow leopard). These distinctions increase the urgency of clouded leopard conservation efforts, and if affirmed by morphological analysis and wider sampling of N. n. diardi in Borneo and Sumatra, would support reclassification of N. n. diardi as a new species (Neofelis diardi). PMID:17141620

  15. Crosslinking effect of dialdehyde starch (DAS) on decellularized porcine aortas for tissue engineering.

    PubMed

    Wang, Xu; Gu, Zhipeng; Qin, Huanhuan; Li, Li; Yang, Xu; Yu, Xixun

    2015-08-01

    Biological tissue-derived biomaterials must be chemically modified to avoid immediate degradation and immune response before being implanted in the human body to replace malfunctioning organs. DAS, with its active aldehyde groups, was employed to replace glutaraldehyde (GA), the most common synthetic crosslinking reagent in clinical practice, for fixing bioprostheses with lower cytotoxicity. The aim of this research was to evaluate the fixation effect of DAS. The tensile strength, crosslinking stability, and cytotoxicity, and especially the anti-calcification capability, of DAS-fixed tissues were investigated. The tensile strength and resistance to enzymatic degradation of the samples increased after DAS fixation, and these values remained stable in D-Hanks solution for several days. Meanwhile, the ultrastructure of the samples was well preserved and their anti-calcification capability was improved: the number of positively stained points in the whole visual field of 15% DAS-fixed samples was only 0.576 times that of GA-fixed ones. Moreover, both unreacted DAS and its hydrolytic products were nontoxic in the cytotoxicity study. The results demonstrate that DAS may be an effective crosslinking reagent for fixing biological tissue-derived biomaterials in tissue engineering. Copyright © 2015 Elsevier B.V. All rights reserved.

  16. The effect of sampling rate on observed statistics in a correlated random walk

    PubMed Central

    Rosser, G.; Fletcher, A. G.; Maini, P. K.; Baker, R. E.

    2013-01-01

    Tracking the movement of individual cells or animals can provide important information about their motile behaviour, with key examples including migrating birds, foraging mammals and bacterial chemotaxis. In many experimental protocols, observations are recorded with a fixed sampling interval and the continuous underlying motion is approximated as a series of discrete steps. The size of the sampling interval significantly affects the tracking measurements, the statistics computed from observed trajectories, and the inferences drawn. Despite the widespread use of tracking data to investigate motile behaviour, many open questions remain about these effects. We use a correlated random walk model to study the variation with sampling interval of two key quantities of interest: apparent speed and angle change. Two variants of the model are considered, in which reorientations occur instantaneously and with a stationary pause, respectively. We employ stochastic simulations to study the effect of sampling on the distributions of apparent speeds and angle changes, and present novel mathematical analysis in the case of rapid sampling. Our investigation elucidates the complex nature of sampling effects for sampling intervals ranging over many orders of magnitude. Results show that inclusion of a stationary phase significantly alters the observed distributions of both quantities. PMID:23740484
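
    A minimal simulation of the instantaneous-reorientation variant illustrates the effect (the parameter values below are ours, not the paper's): subsampling a correlated random walk at longer intervals averages out turns, so the apparent speed falls below the true speed.

```python
import numpy as np

def apparent_speed(positions, dt, k):
    """Mean apparent speed when a trajectory recorded every dt is only
    observed at every k-th point (sampling interval k * dt)."""
    observed = positions[::k]
    step_lengths = np.linalg.norm(np.diff(observed, axis=0), axis=1)
    return step_lengths.mean() / (k * dt)

rng = np.random.default_rng(0)
n_steps, dt, true_speed = 20000, 0.1, 1.0
# Correlated random walk: the heading performs a random walk, so
# successive steps are positively correlated (persistent motion).
headings = np.cumsum(rng.normal(0.0, 0.4, n_steps))
steps = true_speed * dt * np.column_stack([np.cos(headings), np.sin(headings)])
positions = np.cumsum(steps, axis=0)

for k in (1, 5, 20):
    print(k * dt, apparent_speed(positions, dt, k))
# Coarser sampling cuts the corners of the path, so the apparent speed
# drops below the true speed of 1.0 as the interval grows.
```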

  17. Reducing the extinction risk of stochastic populations via nondemographic noise

    NASA Astrophysics Data System (ADS)

    Be'er, Shay; Assaf, Michael

    2018-02-01

    We consider nondemographic noise in the form of uncertainty in the reaction step size and reveal a dramatic effect this noise may have on the stability of self-regulating populations. Employing the reaction scheme mA → kA but allowing, e.g., the product number k to be a priori unknown and sampled from a given distribution, we show that such nondemographic noise can greatly reduce the population's extinction risk compared to the fixed-k case. Our analysis is tested against numerical simulations, and by using empirical data of different species, we argue that certain distributions may be more evolutionarily beneficial than others.
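
    The role of the reproduction-step distribution can be illustrated with a much simpler branching-process calculation (this is not the paper's mA → kA scheme, just a standard Galton-Watson sketch): the extinction probability is the smallest fixed point of the offspring probability generating function, and offspring laws with the same mean but different spread give different extinction risks.

```python
def extinction_probability(offspring_pmf, tol=1e-12):
    """Extinction probability of a Galton-Watson branching process:
    the smallest fixed point of the offspring probability generating
    function f, found by iterating s <- f(s) upward from s = 0."""
    def f(s):
        return sum(p * s ** k for k, p in offspring_pmf.items())
    s = 0.0
    while True:
        s_next = f(s)
        if abs(s_next - s) < tol:
            return s_next
        s = s_next

# Two offspring laws with the same mean (1.2) but different spread
narrow = {0: 0.4, 2: 0.6}
broad = {0: 0.4, 1: 0.3, 3: 0.3}
print(extinction_probability(narrow))  # exactly 2/3 for this law
print(extinction_probability(broad))
```

    Here the broader law happens to raise the extinction risk; in the self-regulating setting of the paper the opposite can hold, which is exactly why the shape of the step-size distribution matters.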

  18. 46 CFR 108.437 - Pipe sizes and discharge rates for enclosed ventilation systems for rotating electrical equipment.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 4 2011-10-01 2011-10-01 false Pipe sizes and discharge rates for enclosed ventilation... Systems Fixed Carbon Dioxide Fire Extinguishing Systems § 108.437 Pipe sizes and discharge rates for enclosed ventilation systems for rotating electrical equipment. (a) The minimum pipe size for the initial...

  19. Modeling chain folding in protein-constrained circular DNA.

    PubMed Central

    Martino, J A; Olson, W K

    1998-01-01

    An efficient method for sampling equilibrium configurations of DNA chains binding one or more DNA-bending proteins is presented. The technique is applied to obtain the tertiary structures of minimal bending energy for a selection of dinucleosomal minichromosomes that differ in degree of protein-DNA interaction, protein spacing along the DNA chain contour, and ring size. The protein-bound portions of the DNA chains are represented by tight, left-handed supercoils of fixed geometry. The protein-free regions are modeled individually as elastic rods. For each random spatial arrangement of the two nucleosomes assumed during a stochastic search for the global minimum, the paths of the flexible connecting DNA segments are determined through a numerical solution of the equations of equilibrium for torsionally relaxed elastic rods. The minimal energy forms reveal how protein binding and spacing and plasmid size differentially affect folding and offer new insights into experimental minichromosome systems. PMID:9591675

  20. Extreme Quantum Memory Advantage for Rare-Event Sampling

    NASA Astrophysics Data System (ADS)

    Aghamohammadi, Cina; Loomis, Samuel P.; Mahoney, John R.; Crutchfield, James P.

    2018-02-01

    We introduce a quantum algorithm for memory-efficient biased sampling of rare events generated by classical memoryful stochastic processes. Two efficiency metrics are used to compare quantum and classical resources for rare-event sampling. For a fixed stochastic process, the first is the classical-to-quantum ratio of required memory. We show for two example processes that there exists an infinite number of rare-event classes for which the memory ratio for sampling is larger than r, for any large real number r. Then, for a sequence of processes each labeled by an integer size N, we compare how the classical and quantum required memories scale with N. In this setting, since both memories can diverge as N → ∞, the efficiency metric tracks how fast they diverge. An extreme quantum memory advantage exists when the classical memory diverges in the limit N → ∞, but the quantum memory has a finite bound. We then show that finite-state Markov processes and spin chains exhibit memory advantage for sampling of almost all of their rare-event classes.

  1. Complex Population Dynamics and the Coalescent Under Neutrality

    PubMed Central

    Volz, Erik M.

    2012-01-01

    Estimates of the coalescent effective population size Ne can be poorly correlated with the true population size. The relationship between Ne and the population size is sensitive to the way in which birth and death rates vary over time. The problem of inference is exacerbated when the mechanisms underlying population dynamics are complex and depend on many parameters. In instances where nonparametric estimators of Ne such as the skyline struggle to reproduce the correct demographic history, model-based estimators that can draw on prior information about population size and growth rates may be more efficient. A coalescent model is developed for a large class of populations such that the demographic history is described by a deterministic nonlinear dynamical system of arbitrary dimension. This class of demographic model differs from those typically used in population genetics. Birth and death rates are not fixed, and no assumptions are made regarding the fraction of the population sampled. Furthermore, the population may be structured in such a way that gene copies reproduce both within and across demes. For this large class of models, it is shown how to derive the rate of coalescence, as well as the likelihood of a gene genealogy with heterochronous sampling and labeled taxa, and how to simulate a coalescent tree conditional on a complex demographic history. This theoretical framework encapsulates many of the models used by ecologists and epidemiologists and should facilitate the integration of population genetics with the study of mathematical population dynamics. PMID:22042576

  2. Matrix Structure Evolution and Nanoreinforcement Distribution in Mechanically Milled and Spark Plasma Sintered Al-SiC Nanocomposites.

    PubMed

    Saheb, Nouari; Aliyu, Ismaila Kayode; Hassan, Syed Fida; Al-Aqeeli, Nasser

    2014-09-19

    The development of homogeneous metal matrix nanocomposites with a uniform distribution of nanoreinforcement, preserved matrix nanostructure features, and improved properties has been made possible by innovative processing techniques. In this work, Al-SiC nanocomposites were synthesized by mechanical milling and consolidated through spark plasma sintering. A Field Emission Scanning Electron Microscope (FE-SEM) with an Energy Dispersive X-ray Spectroscopy (EDS) facility was used to characterize the extent of SiC particle distribution in the mechanically milled powders and spark plasma sintered samples. Changes in the matrix crystallite size and lattice strain during milling and sintering were followed through X-ray diffraction (XRD). The density and hardness of the developed materials were evaluated as a function of SiC content at fixed sintering conditions using a densimeter and a digital microhardness tester, respectively. It was found that milling for 24 h led to a uniform distribution of the SiC nanoreinforcement, reduced particle size and crystallite size of the aluminum matrix, and increased lattice strain. The presence and amount of SiC reinforcement enhanced the milling effect. The uniform distribution of SiC achieved by mechanical milling was maintained in the sintered samples. Sintering led to an increase in the crystallite size of the aluminum matrix; however, it remained less than 100 nm in the composite containing 10 wt.% SiC. The density and hardness of the sintered nanocomposites are reported and compared with values published in the literature.

  3. Theoretical size distribution of fossil taxa: analysis of a null model.

    PubMed

    Reed, William J; Hughes, Barry D

    2007-03-22

    This article deals with the theoretical size distribution (in number of sub-taxa) of a fossil taxon arising from a simple null model of macroevolution. New species arise through speciations occurring independently and at random at a fixed probability rate, while extinctions either occur independently and at random (background extinctions) or cataclysmically. In addition, new genera are assumed to arise through speciations of a very radical nature, again occurring independently and at random at a fixed probability rate. The size distributions of the pioneering genus (following a cataclysm) and of derived genera are determined. The distribution of the number of genera is also considered, along with a comparison of the probability of a monospecific genus with that of a monogeneric family.
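
    Under this null model, the species count within a genus is a linear birth-death process. A small simulation sketch (C++; the rates and observation time are illustrative assumptions, not values from the article) that draws the size of one genus observed a fixed time after its founding:

```cpp
#include <random>

// Simulate one genus under the null model: each extant species speciates
// within the genus at rate lambda and goes extinct at rate mu, both
// independently and at random. Returns the species count at time t_obs.
int genus_size(double lambda, double mu, double t_obs, std::mt19937& rng) {
    std::uniform_real_distribution<double> u(0.0, 1.0);
    int n = 1;          // genus founded by a single species
    double t = 0.0;
    while (n > 0) {
        double total = n * (lambda + mu);          // total event rate
        std::exponential_distribution<double> wait(total);
        t += wait(rng);
        if (t >= t_obs) break;                     // observed before next event
        n += (u(rng) < lambda / (lambda + mu)) ? 1 : -1;
    }
    return n;           // 0 means the genus went extinct before t_obs
}
```

Replicating this over many genera gives an empirical version of the size distribution whose analytical form the article derives.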

  4. Lossless Compression of Data into Fixed-Length Packets

    NASA Technical Reports Server (NTRS)

    Kiely, Aaron B.; Klimesh, Matthew A.

    2009-01-01

    A computer program effects lossless compression of data samples from a one-dimensional source into fixed-length data packets. The software makes use of adaptive prediction: it exploits the data structure in such a way as to increase the efficiency of compression beyond that otherwise achievable. Adaptive linear filtering is used to predict each sample value based on past sample values. The difference between predicted and actual sample values is encoded using a Golomb code.
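
    The encoding step named above can be sketched as follows (C++; a generic Golomb-Rice coder with a power-of-two parameter, plus the usual zigzag map from signed residuals to nonnegative integers; the program's adaptive linear predictor and its parameter-selection rule are not reproduced here):

```cpp
#include <cstdint>
#include <string>

// Map a signed prediction residual to a nonnegative integer (zigzag):
// 0, -1, 1, -2, 2, ... -> 0, 1, 2, 3, 4, ...
uint32_t zigzag(int32_t v) {
    return (v >= 0) ? 2u * v : 2u * uint32_t(-(int64_t)v) - 1u;
}

// Golomb-Rice code with parameter m = 2^k: the quotient in unary (q ones
// and a terminating zero) followed by the k-bit remainder, MSB first.
std::string rice_encode(uint32_t n, unsigned k) {
    uint32_t q = n >> k;            // quotient n / 2^k
    std::string bits(q, '1');
    bits += '0';
    for (int b = int(k) - 1; b >= 0; --b)
        bits += ((n >> b) & 1u) ? '1' : '0';
    return bits;
}
```

Small residuals (good predictions) yield short codewords, which is why pairing a Golomb coder with an adaptive predictor compresses well; fitting the variable-length output into fixed-length packets is the additional problem the software addresses.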

  5. 7 CFR 993.503 - Size category.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 Size category. 993.503 Section 993.503 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... categories listed in § 993.515 and fixes the range or the limits of the various size counts. Effective Date...

  6. Recent Structural Evolution of Early-Type Galaxies: Size Growth from z = 1 to z = 0

    NASA Astrophysics Data System (ADS)

    van der Wel, Arjen; Holden, Bradford P.; Zirm, Andrew W.; Franx, Marijn; Rettura, Alessandro; Illingworth, Garth D.; Ford, Holland C.

    2008-11-01

    Strong size and internal density evolution of early-type galaxies between z ~ 2 and the present has been reported by several authors. Here we analyze samples of nearby and distant (z ~ 1) galaxies with dynamically measured masses in order to confirm the previous, model-dependent results and constrain the uncertainties that may play a role. Velocity dispersion (σ) measurements are taken from the literature for 50 morphologically selected 0.8 < z < 1.2 field and cluster early-type galaxies with typical masses Mdyn = 2 × 10^11 M⊙. Sizes (Reff) are determined with Advanced Camera for Surveys imaging. We compare the distant sample with a large sample of nearby (0.04 < z < 0.08) early-type galaxies extracted from the Sloan Digital Sky Survey for which we determine sizes, masses, and densities in a consistent manner, using simulations to quantify systematic differences between the size measurements of nearby and distant galaxies. We find a highly significant difference between the σ - Reff distributions of the nearby and distant samples, regardless of sample selection effects. The implied evolution in Reff at fixed mass between z = 1 and the present is a factor of 1.97 ± 0.15. This is in qualitative agreement with semianalytic models; however, the observed evolution is much faster than the predicted evolution. Our results reinforce and are quantitatively consistent with previous, photometric studies that found size evolution of up to a factor of 5 since z ~ 2. A combination of structural evolution of individual galaxies through the accretion of companions and the continuous formation of early-type galaxies through increasingly gas-poor mergers is one plausible explanation of the observations.
Based on observations with the Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA contract NAS5-26555, and observations made with the Spitzer Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under NASA contract 1407. Based on observations collected at the European Southern Observatory, Chile (169.A-0458). Some of the data presented herein were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W.M. Keck Foundation.

  7. The Design of a Templated C++ Small Vector Class for Numerical Computing

    NASA Technical Reports Server (NTRS)

    Moran, Patrick J.

    2000-01-01

    We describe the design and implementation of a templated C++ class for vectors. The vector class is templated both for vector length and vector component type; the vector length is fixed at template instantiation time. The vector implementation is such that for a vector of N components of type T, the total number of bytes required by the vector is equal to N * sizeof(T), where sizeof is the built-in C operator. The property of having a size no bigger than that required by the components themselves is key in many numerical computing applications, where one may allocate very large arrays of small, fixed-length vectors. In addition to the design trade-offs motivating our fixed-length vector design choice, we review some of the C++ template features essential to an efficient, succinct implementation. In particular, we highlight some of the standard C++ features, such as partial template specialization, that are not currently supported by all compilers. This report provides an inventory listing the relevant support currently provided by some key compilers, as well as test code one can use to verify compiler capabilities.
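
    The size guarantee described above follows from making the component array the class's only data member. A minimal sketch of the idea (the class name and operations are illustrative, not the report's actual interface):

```cpp
#include <cstddef>

// A vector templated on component type T and length N, fixed at template
// instantiation time. Its only data member is the component array, so
// sizeof(Vec<T, N>) == N * sizeof(T): no heap pointer, no stored length.
template <typename T, std::size_t N>
class Vec {
public:
    T&       operator[](std::size_t i)       { return c_[i]; }
    const T& operator[](std::size_t i) const { return c_[i]; }

    Vec operator+(const Vec& o) const {      // componentwise addition
        Vec r;
        for (std::size_t i = 0; i < N; ++i) r.c_[i] = c_[i] + o.c_[i];
        return r;
    }
private:
    T c_[N];
};

// The no-overhead property the report relies on, checked at compile time.
static_assert(sizeof(Vec<double, 3>) == 3 * sizeof(double),
              "Vec must cost exactly its components");
```

Because the length is a template parameter, a large array of such vectors is one contiguous block of components, with loop bounds known at compile time for the optimizer.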

  8. Products of random matrices from fixed trace and induced Ginibre ensembles

    NASA Astrophysics Data System (ADS)

    Akemann, Gernot; Cikovic, Milan

    2018-05-01

    We investigate the microcanonical version of the complex induced Ginibre ensemble, by introducing a fixed trace constraint for its second moment. Like for the canonical Ginibre ensemble, its complex eigenvalues can be interpreted as a two-dimensional Coulomb gas, which are now subject to a constraint and a modified, collective confining potential. Despite the lack of determinantal structure in this fixed trace ensemble, we compute all its density correlation functions at finite matrix size and compare to a fixed trace ensemble of normal matrices, representing a different Coulomb gas. Our main tool of investigation is the Laplace transform, which maps the fixed trace ensemble back to the induced Ginibre ensemble. Products of random matrices have been used to study the Lyapunov and stability exponents for chaotic dynamical systems, where the latter are based on the complex eigenvalues of the product matrix. Because little is known about the universality of the eigenvalue distribution of such product matrices, we then study the product of m induced Ginibre matrices with a fixed trace constraint (which are clearly non-Gaussian) and M − m such Ginibre matrices without constraint. Using an m-fold inverse Laplace transform, we obtain a concise result for the spectral density of such a mixed product matrix at finite matrix size, for arbitrary fixed m and M. Very recently, local and global universality was proven by the authors and their coworker for a more general, single elliptic fixed trace ensemble in the bulk of the spectrum. Here, we argue that the spectral density of mixed products is in the same universality class as the product of M independent induced Ginibre ensembles.

  9. Comparative Evaluation of Marginal Accuracy of a Cast Fixed Partial Denture Compared to Soldered Fixed Partial Denture Made of Two Different Base Metal Alloys and Casting Techniques: An In vitro Study.

    PubMed

    Jei, J Brintha; Mohan, Jayashree

    2014-03-01

    The periodontal health of abutment teeth and the durability of a fixed partial denture depend on the marginal adaptation of the prosthesis. Any discrepancy in the marginal area leads to dissolution of the luting agent and plaque accumulation. This study was done with the aim of evaluating the accuracy of marginal fit of four-unit crowns and bridges made of Ni-Cr and Cr-Co alloys under induction and centrifugal casting, comparing cast fixed partial dentures (FPDs) with soldered FPDs. For the purpose of this study a metal model was fabricated. A total of 40 samples (4-unit crown and bridge) were prepared: 20 Cr-Co samples and 20 Ni-Cr samples. Within the 20 samples of each group, 10 samples were prepared by the induction casting technique and the other 10 by the centrifugal casting technique. The cast FPD samples were seated on the model and measured with a travelling microscope having a precision of 0.001 cm. The samples were sectioned between the two pontics, measurements were made, and soldering was then performed with a torch soldering unit. The marginal discrepancy of the soldered samples was measured and all findings were statistically analysed. The results revealed minimal marginal discrepancy with Cr-Co samples when compared to Ni-Cr samples under the induction casting technique. When compared to the cast FPD samples, the soldered group showed reduced marginal discrepancy.

  10. The Impact of Policies Influencing the Demography of Age-Structured Populations: Lessons from Academies of Sciences

    PubMed Central

    Riosmena, Fernando; Winkler-Dworak, Maria; Prskawetz, Alexia; Feichtinger, Gustav

    2013-01-01

    In this paper, we assess the role of policies aimed at regulating the number and age structure of elections on the size and age structure of five European Academies of Sciences. We show the recent pace of ageing and the degree of variation in policies across them and discuss the implications of different policies on the size and age structure of academies. We also illustrate the potential effect of different election regimes (fixed vs. linked) and age structures of election (younger vs. older) by contrasting the steady-state dynamics of different projections of Full Members in each academy into 2070 and measuring the size and age-compositional effect of changing a given policy relative to a status quo policy scenario. Our findings suggest that academies with linked intake (i.e., where the size of the academy below a certain age is fixed and the number of elections is set to the number of members becoming that age) may be a more efficient approach to curb growth without suffering any ageing trade-offs relative to the faster growth of academies electing a fixed number of members per year. We further discuss the implications of our results in the context of stable populations open to migration. PMID:23843677
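
    The contrast between the two regimes can be made concrete with a toy cohort-component projection. The sketch below (C++; the flat mortality hazard, single entry age, and age cap are illustrative assumptions, not the paper's calibrated model) advances an age-indexed membership vector one year at a time; under the linked regime the yearly intake is exactly the shortfall below a quota for members under the cap, while the fixed regime elects a constant number:

```cpp
#include <algorithm>
#include <numeric>
#include <vector>

// Expected members at each age 0..(by_age.size()-1); all elections are
// assumed to occur at a single entry age.
struct Academy {
    std::vector<double> by_age;
    int entry_age;
};

// Expected membership below age `cap`.
double members_below(const Academy& a, int cap) {
    return std::accumulate(a.by_age.begin(), a.by_age.begin() + cap, 0.0);
}

// One year of dynamics: members age by one year with an assumed flat
// death hazard (ages past the top of the vector are dropped), then new
// members are elected. linked=true restores the under-cap membership to
// `quota`; linked=false elects `fixed_intake` members regardless.
void step(Academy& a, double death_rate, bool linked, double fixed_intake,
          double quota, int cap) {
    for (int age = (int)a.by_age.size() - 1; age > 0; --age)
        a.by_age[age] = a.by_age[age - 1] * (1.0 - death_rate);
    a.by_age[0] = 0.0;
    double intake = linked ? std::max(0.0, quota - members_below(a, cap))
                           : fixed_intake;
    a.by_age[a.entry_age] += intake;
}
```

By construction the linked regime pins the under-cap headcount at the quota, so total size is bounded by survivorship above the cap, whereas the fixed regime keeps adding a constant stream and so grows until mortality balances intake.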

  11. Influences on Cocaine Tolerance Assessed under a Multiple Conjunctive Schedule of Reinforcement

    ERIC Educational Resources Information Center

    Yoon, Jin Ho; Branch, Marc N.

    2009-01-01

    Under multiple schedules of reinforcement, previous research has generally observed tolerance to the rate-decreasing effects of cocaine that has been dependent on schedule-parameter size in the context of fixed-ratio (FR) schedules, but not under the context of fixed-interval (FI) schedules of reinforcement. The current experiment examined the…

  12. A prospective randomised comparative parallel study of amniotic membrane wound graft in the management of diabetic foot ulcers

    PubMed Central

    Zelen, Charles M; Serena, Thomas E; Denoziere, Guilhem; Fetterolf, Donald E

    2013-01-01

    Our purpose was to compare healing characteristics of diabetic foot ulcers treated with dehydrated human amniotic membrane allografts (EpiFix®, MiMedx, Kennesaw, GA) versus standard of care. An IRB-approved, prospective, randomised, single-centre clinical trial was performed. Included were patients with a diabetic foot ulcer of at least 4-week duration without infection having adequate arterial perfusion. Patients were randomised to receive standard care alone or standard care with the addition of EpiFix. Wound size reduction and rates of complete healing after 4 and 6 weeks were evaluated. In the standard care group (n = 12) and the EpiFix group (n = 13) wounds reduced in size by a mean of 32·0% ± 47·3% versus 97·1% ± 7·0% (P < 0·001) after 4 weeks, whereas at 6 weeks wounds were reduced by −1·8% ± 70·3% versus 98·4% ± 5·8% (P < 0·001), standard care versus EpiFix, respectively. After 4 and 6 weeks of treatment the overall healing rate with application of EpiFix was shown to be 77% and 92%, respectively, whereas standard care healed 0% and 8% of the wounds (P < 0·001), respectively. Patients treated with EpiFix achieved superior healing rates over standard treatment alone. These results show that using EpiFix in addition to standard care is efficacious for wound healing. PMID:23742102

  13. Constraining response output on conjunctive fixed-ratio 1 fixed-time reinforcement schedules: Effects on the postreinforcement pause.

    PubMed

    Lopez, F; Pereira, C

    1985-03-01

    Two experiments used response-restriction procedures in order to test the independence of the factors determining response rate and the factors determining the size of the postreinforcement pause on interval schedules. Responding was restricted by response-produced blackout or by retracting the lever. In Experiment 1 with a Conjunctive FR 1 FT schedule, the blackout procedure reduced the postreinforcement pause more than the lever-retraction procedure did, and both procedures produced shorter pauses than did the schedule without response restriction. In Experiment 2 the interreinforcement interval was also manipulated, and the size of the pause was an increasing function of the interreinforcement interval, but the rate of increase was lower than that produced by fixed interval schedules of comparable interval durations. The assumption of functional independence of the postreinforcement pause and terminal rate in fixed interval schedules is questioned since data suggest that pause reductions resulted from constraining variation in response number compared to equivalent periodic schedules in which response number was allowed to vary. Copyright © 1985. Published by Elsevier B.V.

  14. Reporting Point and Interval Estimates of Effect-Size for Planned Contrasts: Fixed within Effect Analyses of Variance

    ERIC Educational Resources Information Center

    Robey, Randall R.

    2004-01-01

    The purpose of this tutorial is threefold: (a) review the state of statistical science regarding effect-sizes, (b) illustrate the importance of effect-sizes for interpreting findings in all forms of research and particularly for results of clinical-outcome research, and (c) demonstrate just how easily a criterion on reporting effect-sizes in…

  15. The scaling relationship between baryonic mass and stellar disc size in morphologically late-type galaxies

    NASA Astrophysics Data System (ADS)

    Wu, Po-Feng

    2018-02-01

    Here I report the scaling relationship between the baryonic mass and scale-length of stellar discs for ∼1000 morphologically late-type galaxies. The baryonic mass-size relationship is a single power law R* ∝ Mb^0.38 across ∼3 orders of magnitude in baryonic mass. The scatter in size at fixed baryonic mass is nearly constant and there are no outliers. The baryonic mass-size relationship provides a more fundamental description of the structure of the disc than the stellar mass-size relationship. The slope and the scatter of the stellar mass-size relationship can be understood in the context of the baryonic mass-size relationship. For gas-rich galaxies, the stars are no longer a good tracer for the baryons. High-baryonic-mass, gas-rich galaxies appear to be much larger at fixed stellar mass because most of the baryonic content is gas. The stellar mass-size relationship thus deviates from the power-law baryonic relationship, and the scatter increases at the low-stellar-mass end. These extremely gas-rich low-mass galaxies can be classified as ultra-diffuse galaxies based on the structure.

  16. The preparation of uranium-adsorbed silica particles as a reference material for the fission track analysis

    NASA Astrophysics Data System (ADS)

    Park, Y. J.; Lee, M. H.; Pyo, H. Y.; Kim, H. A.; Sohn, S. C.; Jee, K. Y.; Kim, W. H.

    2005-06-01

    Uranium-adsorbed silica particles were prepared as a reference material for the fission track analysis (FTA) of swipe samples. A modified instrumental setup for particle generation, based on a commercial vibrating orifice aerosol generator to produce various sizes of droplets from a SiO2 solution, is described. The droplets were transferred into a weak acidic solution bath to produce spherical solid silica particles. The classification of the silica particles in the range from 5 to 20 μm was carried out by the gravitational sedimentation method. The size distribution and morphology of the classified silica particles were investigated by scanning electron microscopy. The physicochemical properties of the classified silica particles such as the surface area, pore size and pore volume were measured. After an adsorption of 5% 235U on the silica particles in a solution adjusted to pH 4.5, the uranium-adsorbed silica particles were calcined up to 950 °C in a furnace to fix the uranium strongly onto the silica particles. The various sizes of uranium-adsorbed silica particles were applied to the FTA for use as a reference material.

  17. Liposome retention in size exclusion chromatography

    PubMed Central

    Ruysschaert, Tristan; Marque, Audrey; Duteyrat, Jean-Luc; Lesieur, Sylviane; Winterhalter, Mathias; Fournier, Didier

    2005-01-01

    Background Size exclusion chromatography is the method of choice for separating free from liposome-encapsulated molecules. However, if the column is not presaturated with lipids this type of chromatography causes a significant loss of lipid material. To date, the mechanism of lipid retention is poorly understood. It has been speculated that lipid binds to the column material or the entire liposome is entrapped inside the void. Results Here we show that intact liposomes and their contents are retained in the exclusion gel. Retention depends on the pore size, the smaller the pores, the higher the retention. Retained liposomes are not tightly fixed to the beads and are slowly released from the gels upon direct or inverted eluent flow, long washing steps or column repacking. Further addition of free liposomes leads to the elution of part of the gel-trapped liposomes, showing that the retention is transitory. Trapping reversibility should be related to a mechanism of partitioning of the liposomes between the stationary phase, water-swelled polymeric gel, and the mobile aqueous phase. Conclusion Retention of liposomes by size exclusion gels is a dynamic and reversible process, which should be accounted for to control lipid loss and sample contamination during chromatography. PMID:15885140

  18. Dose-Response Analysis of RNA-Seq Profiles in Archival Formalin-Fixed Paraffin-Embedded (FFPE) Samples.

    EPA Science Inventory

    Use of archival resources has been limited to date by inconsistent methods for genomic profiling of degraded RNA from formalin-fixed paraffin-embedded (FFPE) samples. RNA-sequencing offers a promising way to address this problem. Here we evaluated transcriptomic dose responses us...

  19. Mining the archives: a cross-platform analysis of gene expression profiles in archival formalin-fixed paraffin-embedded (FFPE) tissue.

    EPA Science Inventory

    Formalin-fixed paraffin-embedded (FFPE) tissue samples represent a potentially invaluable resource for genomic research into the molecular basis of disease. However, use of FFPE samples in gene expression studies has been limited by technical challenges resulting from degradation...

  20. Computer-Based Oral Hygiene Instruction versus Verbal Method in Fixed Orthodontic Patients

    PubMed Central

    Moshkelgosha, V.; Mehrvarz, Sh.; Saki, M.; Golkari, A.

    2017-01-01

    Statement of Problem: Fixed orthodontic appliances in the oral cavity make tooth cleaning procedures more complicated. Objectives: This study aimed to compare the efficacy of computerized oral hygiene instruction with verbal technique among fixed orthodontic patients referred to the evening clinic of Orthodontics of Shiraz Dental School. Materials and Methods: A single-blind study was performed in Orthodontic Department of Shiraz, Islamic Republic of Iran, from January to May 2015 following the demonstrated exclusion and inclusion criteria. The sample size was considered 60 patients with 30 subjects in each group. Bleeding on probing and plaque indices and dental knowledge were assessed in the subjects to determine pre-intervention status. A questionnaire was designed for dental knowledge evaluation. The patients were randomly assigned into the computerized and verbal groups. Three weeks after the oral hygiene instruction, indices of bleeding on probing and plaque index and the dental knowledge were evaluated to investigate post-intervention outcome. The two groups were compared by chi-square and student t tests. The pre- and post-intervention scores in each group were compared using paired t-test. Results: In the computerized group, the mean score for plaque index and bleeding on probing index was significantly decreased while dental health knowledge was significantly increased after oral hygiene instruction, in contrast to the verbal group. Conclusions: Within the limitations of the current study, computerized oral hygiene instruction is proposed to be more effective in providing optimal oral health status compared to the conventional method in fixed orthodontic patients. PMID:28959765

  1. Ecogenomic sensor reveals controls on N2-fixing microorganisms in the North Pacific Ocean.

    PubMed

    Robidart, Julie C; Church, Matthew J; Ryan, John P; Ascani, François; Wilson, Samuel T; Bombar, Deniz; Marin, Roman; Richards, Kelvin J; Karl, David M; Scholin, Christopher A; Zehr, Jonathan P

    2014-06-01

    Nitrogen-fixing microorganisms (diazotrophs) are keystone species that reduce atmospheric dinitrogen (N2) gas to fixed nitrogen (N), thereby accounting for much of N-based new production annually in the oligotrophic North Pacific. However, current approaches to study N2 fixation provide relatively limited spatiotemporal sampling resolution; hence, little is known about the ecological controls on these microorganisms or the scales over which they change. In the present study, we used a drifting robotic gene sensor to obtain high-resolution data on the distributions and abundances of N2-fixing populations over small spatiotemporal scales. The resulting measurements demonstrate that concentrations of N2 fixers can be highly variable, changing in abundance by nearly three orders of magnitude in less than 2 days and 30 km. Concurrent shipboard measurements and long-term time-series sampling uncovered a striking and previously unrecognized correlation between phosphate, which is undergoing long-term change in the region, and N2-fixing cyanobacterial abundances. These results underscore the value of high-resolution sampling and its applications for modeling the effects of global change.

  2. Tailoring treatment of haemophilia B: accounting for the distribution and clearance of standard and extended half-life FIX concentrates.

    PubMed

    Iorio, Alfonso; Fischer, Kathelijn; Blanchette, Victor; Rangarajan, Savita; Young, Guy; Morfini, Massimo

    2017-06-02

    The prophylactic administration of factor IX (FIX) is considered the most effective treatment for haemophilia B. The inter-individual variability and complexity of the pharmacokinetics (PK) of FIX, and the rarity of the disease, have hampered identification of an optimal treatment regimen. The recent introduction of extended half-life recombinant FIX molecules (EHL-rFIX) has prompted a thorough reassessment of the clinical efficacy, PK and pharmacodynamics of plasma-derived and recombinant FIX. First, the use of longer sampling times and multi-compartmental PK models has led to more precise (and favourable) PK estimates for FIX than were appreciated in the past. Second, investigating the distribution of FIX in the body beyond the vascular space (which is implied by its complex kinetics) has opened a new research field on the role for extravascular FIX. Third, measuring plasma levels of EHL-rFIX has shown that different aPTT reagents differ in accuracy when measuring different FIX molecules. How will this new knowledge be reflected in clinical practice? Clinical decision making in haemophilia B requires some caution and expertise. First, comparisons between different FIX molecules must be assessed taking into consideration the comparability of the populations studied and the PK models used. Second, individual PK estimates must rely on multi-compartmental models, and would benefit from adopting a population PK approach. Optimal sampling times need to be adapted to the prolonged half-life of the new EHL FIX products. Finally, cost considerations may apply; these are beyond the scope of this manuscript but may be deeply connected with the PK considerations discussed in this communication.

  3. A benthic-macroinvertebrate index of biotic integrity and assessment of conditions in selected streams in Chester County, Pennsylvania, 1998-2009

    USGS Publications Warehouse

    Reif, Andrew G.

    2012-01-01

    The Stream Conditions of Chester County Biological Monitoring Network (Network) was established by the U.S. Geological Survey and the Chester County Water Resources Authority in 1969. Chester County encompasses 760 square miles in southeastern Pennsylvania and has a rapidly expanding population. Land-use change has occurred in response to this continual growth, as open space, agricultural lands, and wooded lands have been converted to residential and commercial lands. In 1998, the Network was modified to include 18 fixed-location sites and 9 flexible-location sites. Sites were sampled annually in the fall (October-November) during base-flow conditions for water chemistry, instream habitat, and benthic macroinvertebrates. A new set of 9 flexible-location sites was selected each year. From 1998 to 2009, 213 samples were collected from the 18 fixed-location sites and 107 samples were collected from the 84 flexible-location sites. Eighteen flexible-location sites were sampled more than once over the 12-year period; 66 sites were sampled only once. Benthic-macroinvertebrate data from samples collected during 1998-2009 were used to establish the Chester County Index of Biotic Integrity (CC-IBI). The CC-IBI was based on the methods and metrics outlined in the Pennsylvania Department of Environmental Protection's "A Benthic Index of Biotic Integrity for Wadeable Freestone Streams in Pennsylvania." The resulting CC-IBI consists of scores for benthic-macroinvertebrate samples collected from sites in the Network that related to reference conditions in Chester County. Mean CC-IBI scores for 18 fixed-location sites ranged from 37.21 to 88.92. Thirty-nine percent of the 213 samples collected at the 18 fixed-location sites had a CC-IBI score less than 50; 33 percent, 50 to 70; 28 percent, greater than 70. CC-IBI scores from the 107 flexible-location samples ranged from 23.48 to 99.96. 
Twenty-five percent of the 107 samples collected at the flexible-location sites had a CC-IBI score less than 50; 33 percent, 50 to 70; and 42 percent, greater than 70. Factors that were found to affect CC-IBI scores are nutrient concentrations, habitat conditions, and percent of wooded and urban land use. A positive relation was determined between mean CC-IBI scores and mean total habitat scores for the 18 fixed-location sites. CC-IBI scores were most strongly affected by stream bank vegetative protection, embeddedness, riparian zone width, and sediment deposition. The highest CC-IBI scores were associated with sites that had greater than 28 percent wooded-wetland-water land use, less than 5 percent urban land use, and no municipal wastewater discharges within 10 miles upstream from the sampling site. The lowest CC-IBI scores were associated with sites where urban land use was greater than 15 percent or a municipal wastewater discharge was within 10 miles upstream from the sampling reach. The Mann Kendall test for trends was used to determine trends in CC-IBI scores and concentrations of nitrate, orthophosphate, and chloride for the 18 fixed-location sites. A positive trend in CC-IBI was determined for six sites, and a negative trend was determined for one site. Positive trends in nitrate concentrations were determined for 4 of the 18 fixed-location sites, and a negative trend in orthophosphate concentrations was determined for 1 of the 18 fixed-location sites. Positive trends in chloride concentrations were determined for 16 of the 18 fixed-location sites.
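
    The trend statistic used here is simple to state: the Mann-Kendall S adds, over every ordered pair of observations, the sign of the later value minus the earlier one, with a significantly positive S indicating an upward trend. A minimal sketch (C++; annual scores as input, ties contributing zero; the significance test against the variance of S is not reproduced):

```cpp
#include <cstddef>
#include <vector>

// Mann-Kendall S statistic: for every pair (i, j) with i < j, add the sign
// of x[j] - x[i]. Positive S suggests an increasing trend over time.
int mann_kendall_s(const std::vector<double>& x) {
    int s = 0;
    for (std::size_t i = 0; i + 1 < x.size(); ++i)
        for (std::size_t j = i + 1; j < x.size(); ++j)
            s += (x[j] > x[i]) - (x[j] < x[i]);
    return s;
}
```

Because S depends only on the ordering of the values, the test is insensitive to outliers and to any monotone transformation of the scores, which suits index data such as the CC-IBI.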

  4. Suprathreshold contrast summation over area using drifting gratings.

    PubMed

    McDougall, Thomas J; Dickinson, J Edwin; Badcock, David R

    2018-04-01

    This study investigated contrast summation over area for moving targets applied to a fixed-size contrast pedestal, a technique originally developed by Meese and Summers (2007) to demonstrate strong spatial summation of contrast for static patterns at suprathreshold contrast levels. Target contrast increments (drifting gratings) were applied either to the entire 20% contrast pedestal (a full fixed-size drifting grating) or in the configuration of a checkerboard pattern in which the target increment was applied to every alternate check region. These checked stimuli are known as "Battenberg patterns", and the sizes of the checks were varied across conditions (within a fixed overall area) to measure summation behavior. Results showed that sensitivity to an increment covering the full pedestal was significantly higher than that for the Battenberg patterns (areal summation). Two observers showed strong summation across all check sizes (0.71°-3.33°), and for two other observers the summation ratio dropped to levels consistent with probability summation once check size reached 2.00°. Therefore, areal summation with moving targets does operate at high contrast, and is subserved by relatively large receptive fields covering a square area extending up to at least 3.33° × 3.33° for some observers. Previous studies in which the spatial structure of the pedestal and target covaried were unable to demonstrate spatial summation, potentially due to increasing suppression from gain-control mechanisms as pedestal size increases. This study shows that when this is controlled for, by keeping the pedestal the same across all conditions, extensive summation can be demonstrated.

  5. Improved capacitance characteristics of electrospun ACFs by pore size control and vanadium catalyst.

    PubMed

    Im, Ji Sun; Woo, Sang-Wook; Jung, Min-Jung; Lee, Young-Seak

    2008-11-01

    Nano-sized carbon fibers were prepared by using electrospinning, and their electrochemical properties were investigated as a possible electrode material for use as an electric double-layer capacitor (EDLC). To improve the electrode capacitance of the EDLC, we implemented a three-step optimization. First, a metal catalyst was introduced into the carbon fibers due to the excellent conductivity of metal. Vanadium pentoxide was used because it could be converted to vanadium for improved conductivity as the pore structure develops during the carbonization step. The vanadium catalyst was well dispersed in the carbon fibers, improving the capacitance of the electrode. Second, pore-size development was manipulated to obtain small mesopore sizes ranging from 2 to 5 nm. Through chemical activation, carbon fibers with controlled pore sizes were prepared with a high specific surface area and pore volume, and their pore structure was investigated by using a BET apparatus. Finally, polyacrylonitrile was used as a carbon precursor to enrich the nitrogen content in the final product because nitrogen is known to improve electrode capacitance. Ultimately, the electrospun activated carbon fibers containing vanadium showed improved performance in charge/discharge, cyclic voltammetry, and specific capacitance compared with other samples because of an optimal combination of vanadium, nitrogen, and fixed pore structures.

  6. Model of Tooth Morphogenesis Predicts Carabelli Cusp Expression, Size, and Symmetry in Humans

    PubMed Central

    Hunter, John P.; Guatelli-Steinberg, Debbie; Weston, Theresia C.; Durner, Ryan; Betsinger, Tracy K.

    2010-01-01

    Background The patterning cascade model of tooth morphogenesis accounts for shape development through the interaction of a small number of genes. In the model, gene expression both directs development and is controlled by the shape of developing teeth. Enamel knots (zones of nonproliferating epithelium) mark the future sites of cusps. In order to form, a new enamel knot must escape the inhibitory fields surrounding other enamel knots before crown components become spatially fixed as morphogenesis ceases. Because cusp location on a fully formed tooth reflects enamel knot placement and tooth size is limited by the cessation of morphogenesis, the model predicts that cusp expression varies with intercusp spacing relative to tooth size. Although previous studies in humans have supported the model's implications, here we directly test the model's predictions for the expression, size, and symmetry of Carabelli cusp, a variation present in many human populations. Methodology/Principal Findings In a dental cast sample of upper first molars (M1s) (187 rights, 189 lefts, and 185 antimeric pairs), we measured tooth area and intercusp distances with a Hirox digital microscope. We assessed Carabelli expression quantitatively as an area in a subsample and qualitatively using two typological schemes in the full sample. As predicted, low relative intercusp distance is associated with Carabelli expression in both right and left samples using either qualitative or quantitative measures. Furthermore, asymmetry in Carabelli area is associated with asymmetry in relative intercusp spacing. Conclusions/Significance These findings support the model's predictions for Carabelli cusp expression both across and within individuals. By comparing right-left pairs of the same individual, our data show that small variations in developmental timing or spacing of enamel knots can influence cusp pattern independently of genotype. 
Our findings suggest that during evolution new cusps may first appear as a result of small changes in the spacing of enamel knots relative to crown size. PMID:20689576

  7. Surrogate endpoints for overall survival in metastatic melanoma: a meta-analysis of randomised controlled trials

    PubMed Central

    Flaherty, Keith T; Hennig, Michael; Lee, Sandra J; Ascierto, Paolo A; Dummer, Reinhard; Eggermont, Alexander M M; Hauschild, Axel; Kefford, Richard; Kirkwood, John M; Long, Georgina V; Lorigan, Paul; Mackensen, Andreas; McArthur, Grant; O'Day, Steven; Patel, Poulam M; Robert, Caroline; Schadendorf, Dirk

    2015-01-01

    Summary Background Recent phase 3 trials have shown an overall survival benefit in metastatic melanoma. We aimed to assess whether progression-free survival (PFS) could be regarded as a reliable surrogate for overall survival through a meta-analysis of randomised trials. Methods We systematically reviewed randomised trials comparing treatment regimens in metastatic melanoma that included dacarbazine as the control arm, and which reported both PFS and overall survival with a standard hazard ratio (HR). We correlated HRs for overall survival and PFS, weighted by sample size or by precision of the HR estimate, assuming fixed and random effects. We did sensitivity analyses according to presence of crossover, trial size, and dacarbazine dose. Findings After screening 1649 reports and meeting abstracts published before Sept 8, 2013, we identified 12 eligible randomised trials that enrolled 4416 patients with metastatic melanoma. Irrespective of weighting strategy, we noted a strong correlation between the treatment effects for PFS and overall survival, which seemed independent of treatment type. Pearson correlation coefficients were 0.71 (95% CI 0.29–0.90) with a random-effects assumption, 0.85 (0.59–0.95) with a fixed-effects assumption, and 0.89 (0.68–0.97) with sample-size weighting. For nine trials without crossover, the correlation coefficient was 0.96 (0.81–0.99), which decreased to 0.93 (0.74–0.98) when two additional trials with less than 50% crossover were included. Inclusion of mature follow-up data after at least 50% crossover (in vemurafenib and dabrafenib phase 3 trials) weakened the PFS to overall survival correlation (0.55, 0.03–0.84). Inclusion of trials with no or little crossover with the random-effects assumption yielded a conservative statement of the PFS to overall survival correlation of 0.85 (0.51–0.96). 
Interpretation PFS can be regarded as a robust surrogate for overall survival in dacarbazine-controlled randomised trials of metastatic melanoma; we postulate that this association will hold as treatment standards evolve and are adopted as the control arm in future trials. Funding None. PMID:24485879

  8. Highly accurate adaptive TOF determination method for ultrasonic thickness measurement

    NASA Astrophysics Data System (ADS)

    Zhou, Lianjie; Liu, Haibo; Lian, Meng; Ying, Yangwei; Li, Te; Wang, Yongqing

    2018-04-01

    Determining the time of flight (TOF) is critical for precise ultrasonic thickness measurement. However, the relatively low signal-to-noise ratio (SNR) of the received signals would induce significant TOF determination errors. In this paper, an adaptive time delay estimation method has been developed to improve the accuracy of TOF determination. An improved variable step size adaptive algorithm with a comprehensive step size control function is proposed. Meanwhile, a cubic spline fitting approach is also employed to alleviate the restriction of the finite sampling interval. Simulation experiments under different SNR conditions were conducted for performance analysis. Simulation results demonstrated the performance advantage of the proposed TOF determination method over existing methods. Compared with the conventional fixed-step-size algorithm and the Kwong and Aboulnasr algorithms, the steady-state mean square deviation of the proposed algorithm was generally lower, which makes it more suitable for TOF determination. Further, ultrasonic thickness measurement experiments were performed on aluminum alloy plates with various thicknesses. They indicated that the proposed TOF determination method was more robust even under low SNR conditions, and the ultrasonic thickness measurement accuracy could be significantly improved.
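The abstract does not give the paper's comprehensive step-size control function, so the sketch below substitutes a generic Kwong-style variable-step-size LMS filter for time delay estimation: the step size grows with the squared error and is clipped to a safe range, and the index of the largest converged filter tap approximates the integer-sample delay. All parameter values and signals are illustrative assumptions:

```python
import numpy as np

def vss_lms_delay(ref, rx, n_taps=32, alpha=0.97, gamma=0.005,
                  mu_min=1e-4, mu_max=0.02):
    """Estimate an integer-sample delay between ref and rx using a
    variable-step-size LMS adaptive filter (Kwong-style mu update).
    The index of the largest converged tap approximates the delay."""
    w = np.zeros(n_taps)
    mu = mu_min
    for n in range(n_taps - 1, len(ref)):
        x = ref[n - n_taps + 1:n + 1][::-1]   # newest sample first
        e = rx[n] - w @ x                     # a-priori error
        mu = min(max(alpha * mu + gamma * e * e, mu_min), mu_max)
        w += mu * e * x
    return int(np.argmax(np.abs(w)))

rng = np.random.default_rng(0)
ref = rng.standard_normal(4000)
true_delay = 7
rx = np.concatenate([np.zeros(true_delay), ref[:-true_delay]])
rx = rx + 0.1 * rng.standard_normal(rx.size)   # additive noise, ~20 dB SNR
est = vss_lms_delay(ref, rx)
```

A sub-sample TOF would then be obtained by interpolating around the peak tap, which is where the paper's cubic spline fitting step comes in.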

  9. Onsite Calibration of a Precision IPRT Based on Gallium and Gallium-Based Small-Size Eutectic Points

    NASA Astrophysics Data System (ADS)

    Sun, Jianping; Hao, Xiaopeng; Zeng, Fanchao; Zhang, Lin; Fang, Xinyun

    2017-04-01

    Onsite thermometer calibration with temperature scale transfer technology based on fixed points can effectively improve the level of industrial temperature measurement and calibration. The present work performs an onsite calibration of a precision industrial platinum resistance thermometer near room temperature. The calibration is based on a series of small-size eutectic points, including Ga-In (15.7°C), Ga-Sn (20.5°C), Ga-Zn (25.2°C), and a Ga fixed point (29.7°C), developed in a portable multi-point automatic realization apparatus. The temperature plateaus of the Ga-In, Ga-Sn, and Ga-Zn eutectic points and the Ga fixed point lasted longer than 2 h, and their reproducibility was better than 5 mK. The device is suitable for calibrating non-detachable temperature sensors in advanced environmental laboratories and industrial fields.

  10. The MUSE-Wide survey: detection of a clustering signal from Lyman α emitters in the range 3 < z < 6

    NASA Astrophysics Data System (ADS)

    Diener, C.; Wisotzki, L.; Schmidt, K. B.; Herenz, E. C.; Urrutia, T.; Garel, T.; Kerutt, J.; Saust, R. L.; Bacon, R.; Cantalupo, S.; Contini, T.; Guiderdoni, B.; Marino, R. A.; Richard, J.; Schaye, J.; Soucail, G.; Weilbacher, P. M.

    2017-11-01

    We present a clustering analysis of a sample of 238 Ly α emitters at redshift 3 ≲ z ≲ 6 from the MUSE-Wide survey. This survey mosaics extragalactic legacy fields with 1h MUSE pointings to detect statistically relevant samples of emission line galaxies. We analysed the first year observations from MUSE-Wide making use of the clustering signal in the line-of-sight direction. This method relies on comparing pair-counts at close redshifts for a fixed transverse distance and thus exploits the full potential of the redshift range covered by our sample. A clear clustering signal with a correlation length of r0=2.9^{+1.0}_{-1.1} Mpc (comoving) is detected. Whilst this result is based on only about a quarter of the full survey size, it already shows the immense potential of MUSE for efficiently observing and studying the clustering of Ly α emitters.

  11. A high speed implementation of the random decrement algorithm

    NASA Technical Reports Server (NTRS)

    Kiraly, L. J.

    1982-01-01

    The algorithm is useful for measuring net system damping levels in stochastic processes and for the development of equivalent linearized system response models. The algorithm works by summing together all subrecords which occur after a predefined threshold level is crossed. The random decrement signature is normally developed by scanning stored data and adding subrecords together. The high speed implementation of the random decrement algorithm exploits the digital character of sampled data and uses fixed record lengths of 2^n samples to greatly speed up the process. The contributions to the random decrement signature of each data point were calculated only once and in the same sequence as the data were taken. A hardware implementation of the algorithm using random logic is diagrammed, and the process is shown to be limited only by the record size and the threshold crossing frequency of the sampled data. With a hardware cycle time of 200 ns and a 1024-point signature, a threshold crossing frequency of 5000 Hertz can be processed and a stably averaged signature presented in real time.
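The summing of threshold-triggered subrecords described above can be sketched as follows. A hypothetical narrowband AR(2) process stands in for measured vibration data, and the record length is 2^8 = 256 samples; the threshold choice (one RMS level) is an illustrative assumption:

```python
import math, random

def random_decrement(x, threshold, record_len):
    """Random decrement signature: the average of all length-record_len
    subrecords that start where x crosses the threshold upward."""
    sig = [0.0] * record_len
    count = 0
    for i in range(1, len(x) - record_len):
        if x[i - 1] < threshold <= x[i]:      # upward level crossing
            for k in range(record_len):
                sig[k] += x[i + k]
            count += 1
    if count == 0:
        raise ValueError("no threshold crossings found")
    return [v / count for v in sig], count

# Hypothetical narrowband response: an AR(2) process with lightly
# damped oscillatory poles, driven by white noise
random.seed(1)
r, w0 = 0.99, 0.2                  # pole radius, angular frequency
x = [0.0, 0.0]
for _ in range(20000):
    x.append(2 * r * math.cos(w0) * x[-1] - r * r * x[-2]
             + random.gauss(0.0, 1.0))

rms = math.sqrt(sum(v * v for v in x) / len(x))
sig, count = random_decrement(x, threshold=rms, record_len=2 ** 8)
```

The averaging cancels the random forcing while the deterministic decay implied by the trigger condition survives, so `sig` approximates the system's free decay and its decay rate yields the net damping.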

  12. A comparison of exact tests for trend with binary endpoints using Bartholomew's statistic.

    PubMed

    Consiglio, J D; Shan, G; Wilding, G E

    2014-01-01

    Tests for trend are important in a number of scientific fields when trends associated with binary variables are of interest. Implementing the standard Cochran-Armitage trend test requires an arbitrary choice of scores assigned to represent the grouping variable. Bartholomew proposed a test for qualitatively ordered samples using asymptotic critical values, but type I error control can be problematic in finite samples. To our knowledge, use of the exact probability distribution has not been explored, and we study its use in the present paper. Specifically we consider an approach based on conditioning on both sets of marginal totals and three unconditional approaches where only the marginal totals corresponding to the group sample sizes are treated as fixed. While slightly conservative, all four tests are guaranteed to have actual type I error rates below the nominal level. The unconditional tests are found to exhibit far less conservatism than the conditional test and thereby gain a power advantage.
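The two ingredients discussed above can be sketched minimally: the score-based Cochran-Armitage statistic, and a Monte Carlo "exact"-style p-value that conditions on both sets of marginal totals by permuting responses across subjects. The dose-response counts are hypothetical, and the paper's Bartholomew-statistic tests are not reproduced here:

```python
import math, random

def cochran_armitage_z(successes, totals, scores):
    """Cochran-Armitage trend statistic (Z) for binary outcomes in
    ordered groups, using caller-supplied group scores."""
    N = sum(totals)
    p = sum(successes) / N
    num = sum(sc * (s - t * p)
              for sc, s, t in zip(scores, successes, totals))
    sbar = sum(sc * t for sc, t in zip(scores, totals)) / N
    var = p * (1 - p) * sum(t * (sc - sbar) ** 2
                            for sc, t in zip(scores, totals))
    return num / math.sqrt(var)

def permutation_p_value(successes, totals, scores, n_perm=5000, seed=0):
    """Monte Carlo conditional p-value: responses are shuffled across
    subjects, keeping both group sizes and total successes fixed."""
    rng = random.Random(seed)
    z_obs = abs(cochran_armitage_z(successes, totals, scores))
    pool = [1] * sum(successes) + [0] * (sum(totals) - sum(successes))
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pool)
        idx, perm = 0, []
        for t in totals:
            perm.append(sum(pool[idx:idx + t]))
            idx += t
        if abs(cochran_armitage_z(perm, totals, scores)) >= z_obs - 1e-12:
            hits += 1
    return hits / n_perm

# Hypothetical data: responders per group in 4 ordered dose groups
successes = [2, 5, 9, 14]
totals = [20, 20, 20, 20]
z = cochran_armitage_z(successes, totals, [0, 1, 2, 3])
p = permutation_p_value(successes, totals, [0, 1, 2, 3])
```

Because the permutation reference set fixes both margins, this corresponds to the conditional approach; the unconditional tests discussed in the abstract instead fix only the group sample sizes.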

  13. Two Different Views on the World Around Us: The World of Uniformity versus Diversity.

    PubMed

    Kwon, JaeHwan; Nayakankuppam, Dhananjay

    2016-01-01

    We propose that when individuals believe in fixed traits of personality (entity theorists), they are likely to expect a world of "uniformity." As such, they easily infer a population statistic from a small sample of data with confidence. In contrast, individuals who believe in malleable traits of personality (incremental theorists) are likely to presume a world of "diversity," such that they "hesitate" to infer a population statistic from a similarly sized sample. In four laboratory experiments, we found that compared to incremental theorists, entity theorists estimated a population mean from a sample with a greater level of confidence (Studies 1a and 1b), expected more homogeneity among the entities within a population (Study 2), and perceived an extreme value to be more indicative of an outlier (Study 3). These results suggest that individuals are likely to use their implicit self-theory orientations (entity theory versus incremental theory) to see a population in general as a constitution either of homogeneous or heterogeneous entities.

  14. Effects of formalin fixation on tissue optical properties of in-vitro brain samples

    NASA Astrophysics Data System (ADS)

    Anand, Suresh; Cicchi, Riccardo; Martelli, Fabrizio; Giordano, Flavio; Buccoliero, Anna Maria; Guerrini, Renzo; Pavone, Francesco S.

    2015-03-01

    The application of light-spectroscopy-based techniques for the detection of cancers has emerged as a promising approach for tumor diagnostics. In-vivo or freshly excised samples are normally used for point spectroscopic studies. However, ethical issues related to in-vivo studies, the rapid decay of surgically excised tissues, and sample availability place limitations on in-vivo and in-vitro studies. A few studies have reported good discrimination capability using formalin-fixed samples. Usually formalin fixation is performed to prevent degradation of tissues after surgical resection. Fixing tissues in formalin prevents cell death by forming cross-linkages with proteins. Previous investigations have revealed that washing formalin-fixed tissues with phosphate-buffered saline reduces the effects of formalin during spectroscopic measurements. However, this may not be the case with reflectance measurements. Hemoglobin is a principal absorbing medium in biological tissues in the visible range. Formalin fixation causes hemoglobin to seep out from red blood cells. Also, there could be alterations in the refractive index of tissues when fixed in formalin. In this study, we propose to investigate the changes in tissue optical properties between freshly excised and formalin-fixed brain tissues. The results indicate a complete change in the spectral profile in the visible range where hemoglobin has its maximum absorption peaks. The characteristic bands of oxy-hemoglobin at 540, 580 nm and deoxy-hemoglobin at 555 nm disappear in the case of samples fixed in formalin. In addition, an increased spectral intensity was observed for wavelengths greater than 650 nm where scattering phenomena are presumed to dominate.

  15. Designing clinical trials to test disease-modifying agents: application to the treatment trials of Alzheimer's disease.

    PubMed

    Xiong, Chengjie; van Belle, Gerald; Miller, J Philip; Morris, John C

    2011-02-01

    Therapeutic trials of disease-modifying agents on Alzheimer's disease (AD) require novel designs and analyses involving switch of treatments for at least a portion of subjects enrolled. Randomized start and randomized withdrawal designs are two examples of such designs. Crucial design parameters such as sample size and the time of treatment switch are important to understand in designing such clinical trials. The purpose of this article is to provide methods to determine sample sizes and time of treatment switch as well as optimum statistical tests of treatment efficacy for clinical trials of disease-modifying agents on AD. A general linear mixed effects model is proposed to test the disease-modifying efficacy of novel therapeutic agents on AD. This model links the longitudinal growth from both the placebo arm and the treatment arm at the time of treatment switch for those in the delayed treatment arm or early withdrawal arm and incorporates the potential correlation on the rate of cognitive change before and after the treatment switch. Sample sizes and the optimum time for treatment switch of such trials as well as the optimum test statistic for the treatment efficacy are determined according to the model. Assuming an evenly spaced longitudinal design over a fixed duration, the optimum treatment switching time in a randomized start or a randomized withdrawal trial is halfway through the trial. With the optimum test statistic for the treatment efficacy and over a wide spectrum of model parameters, the optimum sample size allocations are fairly close to the simplest design with a sample size ratio of 1:1:1 among the treatment arm, the delayed treatment or early withdrawal arm, and the placebo arm. 
The application of the proposed methodology to AD provides evidence that much larger sample sizes are required to adequately power disease-modifying trials when compared with those for symptomatic agents, even when the treatment switch time and efficacy test are optimally chosen. The proposed method assumes that the only and immediate effect of treatment switch is on the rate of cognitive change. Crucial design parameters for the clinical trials of disease-modifying agents on AD can be optimally chosen. Government and industry officials as well as academia researchers should consider the optimum use of the clinical trials design for disease-modifying agents on AD in their effort to search for the treatments with the potential to modify the underlying pathophysiology of AD.

  16. Theoretical size distribution of fossil taxa: analysis of a null model

    PubMed Central

    Reed, William J; Hughes, Barry D

    2007-01-01

    Background This article deals with the theoretical size distribution (of number of sub-taxa) of a fossil taxon arising from a simple null model of macroevolution. Model New species arise through speciations occurring independently and at random at a fixed probability rate, while extinctions either occur independently and at random (background extinctions) or cataclysmically. In addition new genera are assumed to arise through speciations of a very radical nature, again assumed to occur independently and at random at a fixed probability rate. Conclusion The size distributions of the pioneering genus (following a cataclysm) and of derived genera are determined. Also the distribution of the number of genera is considered along with a comparison of the probability of a monospecific genus with that of a monogeneric family. PMID:17376249

  17. Comparing line-intersect, fixed-area, and point relascope sampling for dead and downed coarse woody material in a managed northern hardwood forest

    Treesearch

    G. J. Jordan; M. J. Ducey; J. H. Gove

    2004-01-01

    We present the results of a timed field trial comparing the bias characteristics and relative sampling efficiency of line-intersect, fixed-area, and point relascope sampling for downed coarse woody material. Seven stands in a managed northern hardwood forest in New Hampshire were inventoried. Significant differences were found among estimates in some stands, indicating...

  18. Galaxy properties in clusters. II. Backsplash galaxies

    NASA Astrophysics Data System (ADS)

    Muriel, H.; Coenda, V.

    2014-04-01

    Aims: We explore the properties of galaxies on the outskirts of clusters and their dependence on recent dynamical history in order to understand the real impact that the cluster core has on the evolution of galaxies. Methods: We analyse the properties of more than 1000 galaxies brighter than M0.1r = - 19.6 on the outskirts of 90 clusters (1 < r/rvir < 2) in the redshift range 0.05 < z < 0.10. Using the line of sight velocity of galaxies relative to the cluster's mean, we selected low and high velocity subsamples. Theoretical predictions indicate that a significant fraction of the first subsample should be backsplash galaxies, that is, objects that have already orbited near the cluster centre. A significant proportion of the sample of high relative velocity (HV) galaxies seems to be composed of infalling objects. Results: Our results suggest that, at fixed stellar mass, late-type galaxies in the low-velocity (LV) sample are systematically older, redder, and have formed fewer stars during the last 3 Gyrs than galaxies in the HV sample. This result is consistent with models that assume that the central regions of clusters are effective in quenching the star formation by means of processes such as ram pressure stripping or strangulation. At fixed stellar mass, LV galaxies show some evidence of having higher surface brightness and smaller size than HV galaxies. These results are consistent with the scenario where galaxies that have orbited the central regions of clusters are more likely to suffer tidal effects, producing loss of mass as well as a re-distribution of matter towards more compact configurations. Finally, we found a higher fraction of ET galaxies in the LV sample, supporting the idea that the central region of clusters of galaxies may contribute to the transformation of morphological types towards earlier types.

  19. Noninferiority trial designs for odds ratios and risk differences.

    PubMed

    Hilton, Joan F

    2010-04-30

    This study presents constrained maximum likelihood derivations of the design parameters of noninferiority trials for binary outcomes with the margin defined on the odds ratio (ψ) or risk-difference (δ) scale. The derivations show that, for trials in which the group-specific response rates are equal under the point-alternative hypothesis, the common response rate, π(N), is a fixed design parameter whose value lies between the control and experimental rates hypothesized at the point-null, {π(C), π(E)}. We show that setting π(N) equal to the value of π(C) that holds under H(0) underestimates the overall sample size requirement. Given {π(C), ψ} or {π(C), δ} and the type I and II error rates, our algorithm finds clinically meaningful design values of π(N), and the corresponding minimum asymptotic sample size, N=n(E)+n(C), and optimal allocation ratio, γ=n(E)/n(C). We find that optimal allocations are increasingly imbalanced as ψ increases, with γ(ψ)<1 and γ(δ)≈1/γ(ψ), and that ranges of allocation ratios map to the minimum sample size. The latter characteristic allows trialists to consider trade-offs between optimal allocation at a smaller N and a preferred allocation at a larger N. For designs with relatively large margins (e.g. ψ>2.5), trial results that are presented on both scales will differ in power, with more power lost if the study is designed on the risk-difference scale and reported on the odds ratio scale than vice versa. 2010 John Wiley & Sons, Ltd.
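For orientation, a textbook asymptotic sample-size calculation for a noninferiority margin on the odds-ratio scale can be sketched as below. This equal-rates approximation based on the standard log-odds-ratio variance is an assumption-laden stand-in, not the paper's constrained-ML derivation, and it does not reproduce the optimal unbalanced allocations γ(ψ)<1 described above:

```python
import math
from statistics import NormalDist

def ni_sample_size_or(pi, psi, alpha=0.025, power=0.9, gamma=1.0):
    """Approximate sample sizes for a noninferiority trial with margin
    psi (> 1) on the odds-ratio scale, assuming both arms share the
    true response rate pi under the point-alternative of no difference.
    One-sided level alpha; gamma = nE/nC is the allocation ratio."""
    z = NormalDist().inv_cdf
    za, zb = z(1 - alpha), z(power)
    # Var(log OR-hat) ~ 1/(nE*pi*(1-pi)) + 1/(nC*pi*(1-pi)), nE = gamma*nC
    n_c = math.ceil((za + zb) ** 2 * (1 + 1 / gamma)
                    / (pi * (1 - pi) * math.log(psi) ** 2))
    n_e = math.ceil(gamma * n_c)
    return n_e + n_c, n_e, n_c

# Example: control rate 0.7, margin psi = 2, 90% power, 1:1 allocation
total, n_e, n_c = ni_sample_size_or(pi=0.7, psi=2.0)
```

The paper's point that π(N) should not simply be set to π(C) corresponds here to the choice of the rate at which the variance is evaluated, which this simple sketch fixes at a single common value.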

  20. Effect of high-pressure homogenization preparation on mean globule size and large-diameter tail of oil-in-water injectable emulsions.

    PubMed

    Peng, Jie; Dong, Wu-Jun; Li, Ling; Xu, Jia-Ming; Jin, Du-Jia; Xia, Xue-Jun; Liu, Yu-Ling

    2015-12-01

    The effect of different high-pressure homogenization energy input parameters on mean droplet size (MDS) and droplets larger than 5 μm in lipid injectable emulsions was evaluated. All emulsions were prepared at different water bath temperatures or at different rotation speeds and rotor-stator system times, and using different homogenization pressures and numbers of high-pressure system recirculations. The MDS and polydispersity index (PI) value of the emulsions were determined using the dynamic light scattering (DLS) method, and large-diameter tail assessments were performed using the light-obscuration/single particle optical sensing (LO/SPOS) method. Using 1000 bar homogenization pressure and seven recirculations, the energy input parameters related to the rotor-stator system will not have an effect on the final particle size results. When rotor-stator system energy input parameters are fixed, homogenization pressure and recirculation will affect mean particle size and large-diameter droplets. Particle size will decrease with increasing homogenization pressure from 400 bar to 1300 bar when homogenization recirculation is fixed; when the homogenization pressure is fixed at 1000 bar, the particle size of both MDS and the percent of fat droplets exceeding 5 μm (PFAT5) will decrease with increasing homogenization recirculations. MDS dropped to 173 nm after five cycles and maintained this level, and volume-weighted PFAT5 dropped to 0.038% after three cycles, so the "plateau" of MDS arrives later than that of PFAT5, and the optimal particle size is produced when both remain at plateau. Excess homogenization recirculation, such as nine times at 1000 bar, may lead to PFAT5 increasing to 0.060% rather than decreasing; therefore, the high-pressure homogenization procedure is the key factor affecting the particle size distribution of emulsions. Varying storage conditions (4-25°C) also influenced particle size, especially the PFAT5. 
Copyright © 2015. Published by Elsevier B.V.

  1. Evaluation of aerial survey methods for Dall's sheep

    USGS Publications Warehouse

    Udevitz, Mark S.; Shults, Brad S.; Adams, Layne G.; Kleckner, Christopher

    2006-01-01

    Most Dall's sheep (Ovis dalli dalli) population-monitoring efforts use intensive aerial surveys with no attempt to estimate variance or adjust for potential sightability bias. We used radiocollared sheep to assess factors that could affect sightability of Dall's sheep in standard fixed-wing and helicopter surveys and to evaluate feasibility of methods that might account for sightability bias. Work was conducted in conjunction with annual aerial surveys of Dall's sheep in the western Baird Mountains, Alaska, USA, in 2000–2003. Overall sightability was relatively high compared with other aerial wildlife surveys, with 88% of the available, marked sheep detected in our fixed-wing surveys. Total counts from helicopter surveys were not consistently larger than counts from fixed-wing surveys of the same units, and detection probabilities did not differ for the 2 aircraft types. Our results suggest that total counts from helicopter surveys cannot be used to obtain reliable estimates of detection probabilities for fixed-wing surveys. Groups containing radiocollared sheep often changed in size and composition before they could be observed by a second crew in units that were double-surveyed. Double-observer methods that require determination of which groups were detected by each observer will be infeasible unless survey procedures can be modified so that groups remain more stable between observations. Mean group sizes increased during our study period, and our logistic regression sightability model indicated that detection probabilities increased with group size. Mark–resight estimates of annual population sizes were similar to sightability-model estimates, and confidence intervals overlapped broadly. We recommend the sightability-model approach as the most effective and feasible of the alternatives we considered for monitoring Dall's sheep populations.

  2. Surface-water-quality assessment of the lower Kansas River basin, Kansas and Nebraska; project data November 1986 through April 1990

    USGS Publications Warehouse

    Fallon, J.D.; McChesney, J.A.

    1993-01-01

    Surface-water-quality data were collected from the lower Kansas River Basin in Kansas and Nebraska. The data are presented in 17 tables consisting of physical properties, concentrations of dissolved solids and major ions, dissolved and total nutrients, dissolved and total major metals and trace elements, radioactivity, organic carbon, pesticides and other synthetic-organic compounds, bacteria and chlorophyll-a, in water; particle-size distributions and concentrations of major metals and trace elements in suspended and streambed sediment; and concentrations of synthetic-organic compounds in streambed sediment. The data are grouped within each table by sampling sites, arranged in downstream order. Ninety-one sites were sampled in the study area. These sampling sites are classified in three, non-exclusive categories (fixed, synoptic, and miscellaneous sites) on the basis of sampling frequency and location. Sampling sites are presented on a plate and in 3 tables, cross-referenced by downstream order, alphabetical order, U.S. Geological Survey identification number, sampling-site classification category, and types of analyses performed at each site. The methods used to collect, analyze, and verify the accuracy of the data also are presented. (USGS)

  3. Metagenomic analysis of the soil microbial N-cycling community in response to increased N deposition in the alpine PNW

    NASA Astrophysics Data System (ADS)

    Simpson, A.; Zabowski, D.

    2016-12-01

    The effects of nitrogen (N) deposition, caused by increasing agricultural activity and increased fossil fuel usage in populated areas, is of great concern to managers of formerly pristine, N-limited environments such as the alpine. Increasingly available mineral N can cause changes in the soil microbial community, including downshifting naturally N-fixing microbial populations, and increasing nitrification (and soil acidification) with concomitant increases in nitrous oxide release. As part of a larger study to determine critical N loads for PNW alpine ecosystems, we used inorganic N fertilization to mimic increasing levels of N deposition at alpine sites at Mount Rainier, North Cascades, and Olympic National Parks. After 3 years of N application, we isolated DNA from soil samples taken from the rooting zones of two different species categories - lupine spp. and heather (evergreen shrub) spp. Amplicon-based libraries for genes for nitrogenase and ammonia monooxygenase were sequenced for each level of fertilization. We will present changes in diversity and size of the N-fixing and nitrifying microbial communities by increasing N application, site, and plant community.

  4. Enhancement of RNA from Formalin-Fixed Paraffin-Embedded (FFPE) Samples

    EPA Science Inventory

    Enhancement of RNA from Formalin-Fixed Paraffin-Embedded (FFPE) Samples. Susan Hester, Leah Wehmas, Carole Yauk, Marc Roy, Mark M. Gosink, Deidre D. Wilk, Thomas Hill III, Charles E. Wood (Office of Research and Development, US EPA, RTP, NC 27709, USA; Environmental Health...

  5. 76 FR 57677 - Defense Federal Acquisition Regulation Supplement; Increase the Use of Fixed-Price Incentive...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-16

    ... Under Secretary of Defense for Acquisition, Technology, & Logistics (USD(AT&L)), dated November 3, 2010... cost, share lines, and ceiling price. This regulation is not a "one-size-fits-all" mandate. However.../optimistic weighted average and ensure that their cost curves do not mirror cost-plus-fixed-fee cost curves...

  6. Complement-fixing activity of fulvic acid from Shilajit and other natural sources.

    PubMed

    Schepetkin, Igor A; Xie, Gang; Jutila, Mark A; Quinn, Mark T

    2009-03-01

    Shilajit has been used traditionally in folk medicine for the treatment of a variety of disorders, including syndromes involving excessive complement activation. Extracts of Shilajit contain significant amounts of fulvic acid (FA), and it has been suggested that FA is responsible for many therapeutic properties of Shilajit. However, little is known regarding the physical and chemical properties of Shilajit extracts, and nothing is known about their effects on the complement system. To address this issue, extracts of commercial Shilajit were fractionated using anion exchange and size-exclusion chromatography. One neutral (S-I) and two acidic (S-II and S-III) fractions were isolated, characterized and compared with standardized FA samples. The most abundant fraction (S-II) was further fractionated into three sub-fractions (S-II-1 to S-II-3). The van Krevelen diagram showed that the Shilajit fractions are the products of polysaccharide degradation, and all fractions, except S-II-3, contained type II arabinogalactan. All Shilajit fractions exhibited dose-dependent complement-fixing activity in vitro with high potency. Furthermore, a strong correlation was found between the complement-fixing activity and carboxylic group content in the Shilajit fractions and other FA sources. These data provide a molecular basis to explain at least part of the beneficial therapeutic properties of Shilajit and other humic extracts. (c) 2008 John Wiley & Sons, Ltd.

  7. Estimating population abundance and mapping distribution of wintering sea ducks in coastal waters of the mid-Atlantic

    USGS Publications Warehouse

    Koneff, M.D.; Royle, J. Andrew; Forsell, D.J.; Wortham, J.S.; Boomer, G.S.; Perry, M.C.

    2005-01-01

    Survey design for wintering scoters (Melanitta sp.) and other sea ducks that occur in offshore waters is challenging because these species have large ranges, are subject to distributional shifts among years and within a season, and can occur in aggregations. Interest in winter sea duck population abundance surveys has grown in recent years. This interest stems from concern over the population status of some sea ducks; limitations of extant breeding waterfowl survey programs in North America and the logistical challenges and costs of conducting surveys in northern breeding regions; high winter area philopatry in some species and its potential conservation implications; and increasing concern over offshore development and other threats to sea duck wintering habitats. The efficiency and practicality of statistically rigorous monitoring strategies for mobile, aggregated wintering sea duck populations have not been sufficiently investigated. This study evaluated a 2-phase adaptive stratified strip transect sampling plan to estimate wintering population size of scoters, long-tailed ducks (Clangula hyemalis), and other sea ducks and to provide information on distribution. The sampling plan results in an optimal allocation of a fixed sampling effort among offshore strata in the U.S. mid-Atlantic coast region. Phase 1 transect selection probabilities were based on historic distribution and abundance data, while Phase 2 selection probabilities were based on observations made during Phase 1 flights. Distance sampling methods were used to estimate detection rates. Environmental variables thought to affect detection rates were recorded during the survey, and post-stratification and covariate modeling were investigated to reduce the effect of heterogeneity on detection estimation. We assessed cost-precision tradeoffs under a number of fixed-cost sampling scenarios using Monte Carlo simulation. We discuss advantages and limitations of this sampling design for estimating wintering sea duck abundance and mapping distribution and suggest improvements for future surveys.
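
    The "optimal allocation of a fixed sampling effort among offshore strata" mentioned above is commonly computed with Neyman allocation, which weights each stratum by its size times the variability of its counts. Below is a minimal sketch; the function name, stratum areas, and standard deviations are illustrative assumptions, not values from the survey.

```python
# Neyman allocation: split a fixed number of transects among strata in
# proportion to stratum size times stratum standard deviation.
# The stratum areas and SDs below are illustrative, not survey values.

def neyman_allocation(total_effort, sizes, sds):
    """Allocate total_effort transects among strata proportional to N_h * S_h."""
    weights = [n * s for n, s in zip(sizes, sds)]
    total = sum(weights)
    return [round(total_effort * w / total) for w in weights]

# three hypothetical offshore strata: relative area and SD of historic counts
sizes = [100, 60, 40]
sds = [5.0, 20.0, 2.5]
alloc = neyman_allocation(30, sizes, sds)
print(alloc)  # the high-variance middle stratum receives most of the effort
```

    The design intent is visible in the output: strata holding aggregated, highly variable counts absorb most of the fixed effort, which is exactly why aggregation-prone sea ducks motivate adaptive allocation.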

  8. Nickel and chromium levels in the saliva of a Saudi sample treated with fixed orthodontic appliances.

    PubMed

    Talic, Nabeel F; Alnahwi, Hasan H; Al-Faraj, Ali S

    2013-10-01

    The aim of this study was to measure the amount of nickel (Ni) and chromium (Cr) released into the saliva of Saudi patients treated with fixed orthodontic appliances. Ninety salivary samples were collected in a cross-sectional manner. Forty samples were collected from patients (17 males, 23 females) with fixed orthodontic appliances after different periods of orthodontic treatment, ranging from the first month up to 32 months into treatment. The fixed orthodontic appliance consisted of 4 bands, 20 stainless steel brackets, and upper and lower nickel-titanium or stainless-steel arch wires. The other 50 samples were collected from people without appliances (24 males, 26 females). Samples were analyzed using Inductively Coupled Plasma Mass Spectrometry and Inductively Coupled Plasma Optical Emission Spectroscopy to measure Ni and Cr levels, respectively. Student's t-test was used to compare Ni and Cr levels in the treated and untreated control groups. The mean Ni level was 4.197 μg/L in the experimental group and 2.3 μg/L in the control group (p < 0.05). The mean Cr level was 2.9 μg/L in the experimental group and 3.3 μg/L in the control group (p > 0.05). Fixed orthodontic appliances resulted in a non-toxic increase in salivary levels of Ni, but no change in Cr levels. Duration of orthodontic treatment did not affect Ni and Cr levels in the saliva.

  9. Selection of floating-point or fixed-point for adaptive noise canceller in somatosensory evoked potential measurement.

    PubMed

    Shen, Chongfei; Liu, Hongtao; Xie, Xb; Luk, Keith Dk; Hu, Yong

    2007-01-01

    Adaptive noise cancellers (ANC) have been used to improve the signal-to-noise ratio (SNR) of somatosensory evoked potentials (SEP). For efficient hardware implementation, a fixed-point ANC enables fast, cost-efficient construction and low power consumption in an FPGA design. However, it remains questionable whether the SNR improvement achieved by a fixed-point algorithm is as good as that of a floating-point algorithm. This study compares the outputs of floating-point and fixed-point ANC applied to SEP signals. The selection of the step-size parameter (μ) was found to differ between the fixed-point and floating-point algorithms. In this simulation study, the output of the fixed-point ANC showed greater distortion from the real SEP signal than that of the floating-point ANC; however, the difference decreased with increasing μ. With an optimal selection of μ, the fixed-point ANC can achieve results as good as the floating-point algorithm.
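
    As a rough illustration of the floating- versus fixed-point comparison, the sketch below runs a textbook LMS noise canceller twice: once with full-precision weights, and once with weights rounded to a Q15-style fixed-point grid after each update. The signal model, tap count, and step size are illustrative assumptions, not the parameters used in the paper.

```python
import numpy as np

def lms_anc(noisy, ref, mu, n_taps=8, qbits=None):
    """LMS adaptive noise canceller. If qbits is given, the filter weights
    are quantized to fixed point with that many fractional bits after
    every update, emulating a fixed-point FPGA implementation."""
    w = np.zeros(n_taps)
    out = np.zeros(len(noisy))
    for n in range(n_taps - 1, len(noisy)):
        x = ref[n - n_taps + 1:n + 1][::-1]   # reference tap vector
        e = noisy[n] - w @ x                  # error = cleaned output
        w = w + 2 * mu * e * x                # LMS weight update
        if qbits is not None:
            w = np.round(w * 2**qbits) / 2**qbits   # e.g. Q15 weight storage
        out[n] = e
    return out

rng = np.random.default_rng(0)
t = np.arange(8000)
signal = np.sin(2 * np.pi * t / 50)           # stand-in for an SEP waveform
noise = rng.standard_normal(len(t))
noisy = signal + 0.5 * noise                  # contaminated recording
ref = noise                                   # correlated noise reference

float_out = lms_anc(noisy, ref, mu=0.005)
fixed_out = lms_anc(noisy, ref, mu=0.005, qbits=15)
print(np.mean((float_out - fixed_out) ** 2))  # fixed- vs floating-point gap
```

    The mean-squared gap between the two outputs plays the role of the "distortion" studied in the abstract; rerunning with fewer fractional bits or a different μ shows how the gap grows or shrinks.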

  10. Confocal multispot microscope for fast and deep imaging in semicleared tissues

    NASA Astrophysics Data System (ADS)

    Adam, Marie-Pierre; Müllenbroich, Marie Caroline; Di Giovanna, Antonino Paolo; Alfieri, Domenico; Silvestri, Ludovico; Sacconi, Leonardo; Pavone, Francesco Saverio

    2018-02-01

    Although perfectly transparent specimens can be imaged quickly with light-sheet microscopy, less transparent samples are often imaged with two-photon microscopy, which is robust to scattering but comes at the price of increased acquisition times. Clearing methods capable of rendering strongly scattering samples such as brain tissue perfectly transparent are often complex, costly, and time intensive, even though for many applications a slightly lower level of tissue transparency is sufficient and easily achieved with simpler and faster methods. Here, we present a microscope geared toward the imaging of semicleared tissue by combining multispot two-photon excitation with rolling-shutter wide-field detection to image deep and fast inside semicleared mouse brain. We present a theoretical and experimental evaluation of the point spread function and contrast as a function of shutter size. Finally, we demonstrate microscope performance in fixed brain slices by imaging dendritic spines up to 400 μm deep.

  11. Evaluation of preparation methods for suspended nano-objects on substrates for dimensional measurements by atomic force microscopy

    PubMed Central

    Göhler, Daniel; Wessely, Benno; Stintz, Michael; Lazzerini, Giovanni Mattia; Yacoot, Andrew

    2017-01-01

    Dimensional measurements on nano-objects by atomic force microscopy (AFM) require samples of safely fixed and well-individualized particles with a suitable surface-specific particle number on flat and clean substrates. Several known and proven particle preparation methods, i.e., membrane filtration, drying, rinsing, and dip coating, as well as electrostatic and thermal precipitation, were performed and assessed by means of scanning electron microscopy to examine their suitability for preparing samples for dimensional AFM measurements. Different suspensions of nano-objects (of varying material, size and shape) stabilized in aqueous solutions were therefore prepared on different flat substrates. The drop-drying method was found to be the most suitable for the analysed suspensions, because it does not require expensive dedicated equipment and led to a uniform local distribution of individualized nano-objects. Traceable AFM measurements based on Si and SiO2 coated substrates confirmed the suitability of this technique. PMID:28904839

  12. Evaluation of preparation methods for suspended nano-objects on substrates for dimensional measurements by atomic force microscopy.

    PubMed

    Fiala, Petra; Göhler, Daniel; Wessely, Benno; Stintz, Michael; Lazzerini, Giovanni Mattia; Yacoot, Andrew

    2017-01-01

    Dimensional measurements on nano-objects by atomic force microscopy (AFM) require samples of safely fixed and well-individualized particles with a suitable surface-specific particle number on flat and clean substrates. Several known and proven particle preparation methods, i.e., membrane filtration, drying, rinsing, and dip coating, as well as electrostatic and thermal precipitation, were performed and assessed by means of scanning electron microscopy to examine their suitability for preparing samples for dimensional AFM measurements. Different suspensions of nano-objects (of varying material, size and shape) stabilized in aqueous solutions were therefore prepared on different flat substrates. The drop-drying method was found to be the most suitable for the analysed suspensions, because it does not require expensive dedicated equipment and led to a uniform local distribution of individualized nano-objects. Traceable AFM measurements based on Si and SiO2 coated substrates confirmed the suitability of this technique.

  13. Evaluation of Microstructure and Toughness of AISI D2 Steel by Bright Hardening in Comparison with Oil Quenching

    NASA Astrophysics Data System (ADS)

    Torkamani, H.; Raygan, Sh.; Rassizadehghani, J.

    2011-12-01

    AISI D2 is used widely in the manufacture of blanking and cold-forming dies on account of its excellent hardness and wear behavior. Increasing toughness at a fixed high level of hardness is a growing requirement for this kind of tool steel. Improving microstructural characteristics, especially refinement of coarse carbides, is an appropriate way to meet this requirement. In this study, the morphology and size of carbides in the martensite matrix were compared between two kinds of samples: those bright hardened (quenched in a hot alkaline salt bath consisting of 60% KOH and 40% NaOH) at 230 °C and those quenched in an oil bath at 60 °C. Results showed that the morphology and distribution of carbides in the bright-hardened samples were finer and almost spherical compared to those of the oil-quenched samples. This microstructure resulted in an improvement in the toughness and tensile properties of the alloy.

  14. 46 CFR 108.437 - Pipe sizes and discharge rates for enclosed ventilation systems for rotating electrical equipment.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 46 Shipping 4 2010-10-01 2010-10-01 false Pipe sizes and discharge rates for enclosed ventilation systems for rotating electrical equipment. 108.437 Section 108.437 Shipping COAST GUARD, DEPARTMENT OF... Systems Fixed Carbon Dioxide Fire Extinguishing Systems § 108.437 Pipe sizes and discharge rates for...

  15. Growth and sporulation of Bacillus subtilis under microgravity (7-IML-1)

    NASA Technical Reports Server (NTRS)

    Mennigmann, Horst-Dieter

    1992-01-01

    The experiment was aimed at measuring the growth and sporulation of Bacillus subtilis under microgravity. The hardware for the experiment consists of a culture chamber (15 ml) made from titanium and closed by a membrane permeable for gases but not for water. Two variants of this basic structure were built which fit into the standard Biorack container types 1 and 2 respectively. Growth of the bacteria will be monitored by continuously measuring the optical density with a built-in miniaturized photometer. Other parameters (viability, sporulation, fine structure, size distribution of cells and spores, growth kinetics, etc.) will be measured on the fixed samples and on those where metabolism was temporarily halted, respectively.

  16. The Fourier Imaging X-ray Spectrometer (FIXS) for the Argentinian, Scout-launched Satelite de Aplicaciones Cientificas-1 (SAC-1)

    NASA Technical Reports Server (NTRS)

    Dennis, Brian R.; Crannell, Carol JO; Desai, Upendra D.; Orwig, Larry E.; Kiplinger, Alan L.; Schwartz, Richard A.; Hurford, Gordon J.; Emslie, A. Gordon; Machado, Marcos; Wood, Kent

    1988-01-01

    The Fourier Imaging X-ray Spectrometer (FIXS) is one of four instruments on SAC-1, the Argentinian satellite being proposed for launch by NASA on a Scout rocket in 1992/3. The FIXS is designed to provide solar flare images at X-ray energies between 5 and 35 keV. Observations will be made on arcsecond size scales and subsecond time scales of the processes that modify the electron spectrum and the thermal distribution in flaring magnetic structures.

  17. SSPARR: Development of an efficient autonomous sampling strategy

    NASA Astrophysics Data System (ADS)

    Chayes, D. N.

    2013-12-01

    The Seafloor Sounding in Polar and Remote Regions (SSPARR) effort was launched in 2004 with funding from the US National Science Foundation (Anderson et al. 2005). Experiments with a prototype were encouraging (Greenspan et al., 2012; Chayes et al., 2012) and we are proceeding toward building and testing units for deployment during the 2014 season in ice-covered parts of the Arctic Ocean. The simplest operational mode for a SSPARR buoy will be to wake and sample on a fixed time interval. A slightly more complex mode will check the distance traveled since the previous sounding and potentially return to sleep mode if it has not traveled far enough to make a significant new measurement. We are developing a mode that will use a sampling strategy based on querying an on-board copy of the best available digital terrain model (DTM), e.g. IBCAO in the Arctic, to help decide if it is appropriate to turn on the echo sounder and make a new measurement. We anticipate that a robust strategy of this type will allow a buoy to operate substantially longer on a fixed battery size. Anderson, R., D. Chayes, et al. (2005). "Seafloor Soundings in Polar and Remote Regions - A new instrument for unattended bathymetric observations," Eos Trans. AGU 86(18): Abstract C43A-10. Greenspan, D., D. Porter, et al. (2012). "IBuoy: Expendable Echo Sounder Buoy with Satellite Telemetry." EOS Fall Meeting Supplement C13E-0660. Chayes, D. N., S. A. Goemmer, et al. (2012). "SSPARR-3: A cost-effective autonomous drifting echosounder." EOS Fall Meeting supplement C13E-0659.
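
    The DTM-gated wake-up logic described above can be sketched as a simple decision rule. Everything concrete here (the thresholds, the dictionary standing in for an IBCAO query, and the coordinate rounding used as a grid lookup) is an illustrative assumption, not SSPARR firmware.

```python
import math

def should_sample(prev_fix, cur_fix, prev_depth_m, dtm_lookup,
                  min_move_km=1.0, min_depth_change_m=50.0):
    """Decide whether to wake the echo sounder. Positions are (lat, lon) in
    degrees; dtm_lookup maps a coarsely rounded position to a predicted
    depth, standing in for a query of an on-board IBCAO grid."""
    # great-circle distance traveled since the previous sounding (haversine)
    lat1, lon1 = map(math.radians, prev_fix)
    lat2, lon2 = map(math.radians, cur_fix)
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    dist_km = 2 * 6371.0 * math.asin(math.sqrt(a))
    if dist_km < min_move_km:
        return False              # not far enough for a significant measurement
    key = (round(cur_fix[0], 1), round(cur_fix[1], 1))
    predicted = dtm_lookup.get(key)
    if predicted is None:
        return True               # no DTM coverage here: always measure
    # wake only where the DTM predicts a significantly different depth
    return abs(predicted - prev_depth_m) >= min_depth_change_m

dtm = {(85.0, 10.5): 3920.0}      # one hypothetical DTM cell, depth in meters
print(should_sample((85.0, 10.0), (85.0, 10.0), 4000.0, dtm))  # False: no drift
print(should_sample((85.0, 10.0), (85.0, 10.5), 4000.0, dtm))  # True: 80 m change
```

    The ordering of the checks mirrors the abstract's escalation: a fixed interval wakes the processor, the distance test handles a stationary buoy, and the DTM query saves the (expensive) echo-sounder power where the seafloor is already well predicted.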

  18. Formaldehyde substitute fixatives: effects on nucleic acid preservation.

    PubMed

    Moelans, Cathy B; Oostenrijk, Daphne; Moons, Michiel J; van Diest, Paul J

    2011-11-01

    In surgical pathology, formalin-fixed paraffin-embedded tissues are increasingly being used as a source of DNA and RNA for molecular assays in addition to histopathological evaluation. However, the commonly used formalin fixative is carcinogenic, and its crosslinking impairs DNA and RNA quality. The suitability of three new presumably less toxic, crosslinking (F-Solv) and non-crosslinking (FineFIX, RCL2) alcohol-based fixatives was tested for routine molecular pathology in comparison with neutral buffered formalin (NBF) as gold standard. Size ladder PCR, epidermal growth factor receptor sequence analysis, microsatellite instability (MSI), chromogenic (CISH), fluorescence in situ hybridisation (FISH) and qPCR were performed. The alcohol-based non-crosslinking fixatives (FineFIX and RCL2) resulted in a higher DNA yield and quality compared with crosslinking fixatives (NBF and F-Solv). Size ladder PCR resulted in a shorter amplicon size (300 bp) for both crosslinking fixatives compared with the non-crosslinking fixatives (400 bp). All four fixatives were directly applicable for MSI and epidermal growth factor receptor sequence analysis. All fixatives except F-Solv showed clear signals in CISH and FISH. RNA yield and quality were superior after non-crosslinking fixation. qPCR resulted in lower Ct values for RCL2 and FineFIX. The alcohol-based non-crosslinking fixatives performed better than crosslinking fixatives with regard to DNA and RNA yield, quality and applicability in molecular diagnostics. Given the higher yield, less starting material may be necessary, thereby increasing the applicability of biopsies for molecular studies.

  19. Water ring-bouncing on repellent singularities.

    PubMed

    Chantelot, Pierre; Mazloomi Moqaddam, Ali; Gauthier, Anaïs; Chikatamarla, Shyam S; Clanet, Christophe; Karlin, Ilya V; Quéré, David

    2018-03-28

    Texturing a flat superhydrophobic substrate with point-like superhydrophobic macrotextures of the same repellency makes impacting water droplets take off as rings, which leads to shorter bouncing times than on a flat substrate. We investigate the contact time reduction on such elementary macrotextures through experiment and simulations. We understand the observations by decomposing the impacting drop reshaped by the defect into sub-units (or blobs) whose size is fixed by the liquid ring width. We test the blob picture by looking at the reduction of contact time for off-centered impacts and for impacts in grooves that produce liquid ribbons where the blob size is fixed by the width of the channel.

  20. Method for correcting imperfections on a surface

    DOEpatents

    Sweatt, William C.; Weed, John W.

    1999-09-07

    A process for producing near perfect optical surfaces. A previously polished optical surface is measured to determine its deviations from the desired perfect surface. A multi-aperture mask is designed based on this measurement and fabricated such that deposition through the mask will correct the deviations in the surface to an acceptable level. Various mask geometries can be used: variable individual aperture sizes using a fixed grid for the apertures or fixed aperture sizes using a variable aperture spacing. The imperfections are filled in using a vacuum deposition process with a very thin thickness of material such as silicon monoxide to produce an amorphous surface that bonds well to a glass substrate.

  1. Technique for fixing a temporalis muscle using a titanium plate to the implanted hydroxyapatite ceramics for bone defects.

    PubMed

    Ono, I; Tateshita, T; Sasaki, T; Matsumoto, M; Kodama, N

    2001-05-01

    We devised a technique to fix the temporalis muscle to a transplanted hydroxyapatite implant using a titanium plate, which is fixed to the hydroxyapatite ceramic implant with screws; this technique achieves good clinical results. The size, shape, and curvature of the hydroxyapatite ceramic implants were determined according to full-scale models fabricated using the laser lithographic modeling method from computed tomography data. A titanium plate was then fixed with screws on the implant before implantation, and the temporalis muscle was refixed to the holes at both ends of the plate. The application of this technique reduced the hospitalization time and achieved good results esthetically.

  2. Effects of fusion relevant transient energetic radiation, plasma and thermal load on PLANSEE double forged tungsten samples in a low-energy plasma focus device

    NASA Astrophysics Data System (ADS)

    Javadi, S.; Ouyang, B.; Zhang, Z.; Ghoranneviss, M.; Salar Elahi, A.; Rawat, R. S.

    2018-06-01

    Tungsten is the leading candidate for plasma facing component (PFC) material for thermonuclear fusion reactors, and various efforts are ongoing to evaluate its performance and response to intense fusion-relevant radiation, plasma and thermal loads. This paper investigates the effects of hot dense decaying pinch plasma, highly energetic deuterium ions and fusion neutrons generated in a low-energy (3.0 kJ) plasma focus device on the structure, morphology and hardness of PLANSEE double forged tungsten (W) sample surfaces. The tungsten samples were provided by Forschungszentrum Juelich (FZJ), Germany via the International Atomic Energy Agency, Vienna, Austria. Tungsten samples were irradiated using different numbers of plasma focus (PF) shots (1, 5 and 10) at a fixed axial distance of 5 cm from the anode top, and also at various distances from the top of the anode (5, 7, 9 and 11 cm) using a fixed number (5) of plasma focus shots. The virgin tungsten sample had a bcc structure (α-W phase). After PF irradiation, XRD analysis showed (i) the presence of a low-intensity new diffraction peak corresponding to the β-W phase at the (211) crystalline plane, indicating partial structural phase transition in some of the samples, (ii) partial amorphization, and (iii) vacancy defect formation and compressive stress in irradiated tungsten samples. Field emission scanning electron microscopy showed distinctive changes to a non-uniform surface with nanometer-sized particles and particle agglomerates, along with large surface cracks at higher numbers of irradiation shots. X-ray photoelectron spectroscopy analysis demonstrated a reduction in relative tungsten oxide content and an increase in metallic tungsten after irradiation. The hardness of irradiated samples initially increased for one-shot exposure due to the reduction in the tungsten oxide phase, but then decreased with increasing number of shots due to the increasing concentration of defects. It is demonstrated that the plasma focus device provides appropriately intense fusion-relevant pulses for testing the structural, morphological and mechanical changes in irradiated tungsten samples.

  3. A Kernel-based Lagrangian method for imperfectly-mixed chemical reactions

    NASA Astrophysics Data System (ADS)

    Schmidt, Michael J.; Pankavich, Stephen; Benson, David A.

    2017-05-01

    Current Lagrangian (particle-tracking) algorithms used to simulate diffusion-reaction equations must employ a certain number of particles to properly emulate the system dynamics, particularly for imperfectly-mixed systems. The number of particles is tied to the statistics of the initial concentration fields of the system at hand. Systems with shorter-range correlation and/or smaller concentration variance require more particles, potentially limiting the computational feasibility of the method. For the well-known problem of bimolecular reaction, we show that using kernel-based, rather than Dirac delta, particles can significantly reduce the required number of particles. We derive the fixed width of a Gaussian kernel for a given reduced number of particles that analytically eliminates the error between kernel and Dirac solutions at any specified time. We also show how to solve for the fixed kernel size by minimizing the squared differences between solutions over any given time interval. Numerical results show that the width of the kernel should be kept below about 12% of the domain size, and that the analytic equations used to derive kernel width suffer significantly from the neglect of higher-order moments. The simulations with a kernel width given by least squares minimization perform better than those made to match at one specific time. A heuristic time-variable kernel size, based on the previous results, performs on par with the least squares fixed kernel size.
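
    The difference between Dirac delta and kernel particles can be seen by reconstructing a concentration field from a deliberately small particle ensemble. The Gaussian kernel and the roughly-12%-of-domain width cap come from the abstract; the domain, particle count, and particle distribution below are illustrative assumptions.

```python
import numpy as np

def concentration_field(particles, grid, width=None):
    """Reconstruct a 1-D concentration field from particle positions.
    width=None treats particles as Dirac deltas binned onto the grid;
    otherwise each particle carries a Gaussian kernel of that fixed width."""
    if width is None:
        hist, edges = np.histogram(particles, bins=len(grid),
                                   range=(grid[0], grid[-1]))
        return hist / (len(particles) * (edges[1] - edges[0]))
    # superpose one normalized Gaussian kernel per particle
    diffs = grid[None, :] - particles[:, None]
    kernels = np.exp(-diffs**2 / (2 * width**2)) / (width * np.sqrt(2 * np.pi))
    return kernels.mean(axis=0)

domain = 10.0
grid = np.linspace(0.0, domain, 200)
rng = np.random.default_rng(1)
particles = rng.normal(5.0, 1.0, size=50)   # deliberately few particles

rough = concentration_field(particles, grid)                       # Dirac picture
smooth = concentration_field(particles, grid, width=0.1 * domain)  # under 12% cap
```

    With only 50 particles the binned (delta) field is jagged while the fixed-width kernel field is smooth, which is the abstract's point: kernels let far fewer particles represent the same concentration statistics.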

  4. On fixed-area plot sampling for downed coarse woody debris

    Treesearch

    Jeffrey H. Gove; Paul C. Van Deusen

    2011-01-01

    The use of fixed-area plots for sampling down coarse woody debris is reviewed. A set of clearly defined protocols for two previously described methods is established and a new method, which we call the 'sausage' method, is developed. All methods (protocols) are shown to be unbiased for volume estimation, but not necessarily for estimation of population...

  5. Synchrotron-based XAS on structure investigation of La0.99-xSrx(Na, K, Ba)0.01MnO3 nanoparticles: Evidence of magnetic properties

    NASA Astrophysics Data System (ADS)

    Daengsakul, Sujittra; Saengplot, Saowalak; Kidkhunthod, Pinit; Pimsawat, Adulphan; Maensiri, Santi

    2018-04-01

    This work presents a structural study of La0.99-xSrx(Na, K, Ba)0.01MnO3 (LSAM) nanoparticles synthesized using a thermal-hydro decomposition method, where A denotes Na, K, Sr and Ba. The focus is the effect of the ionic radius rA of the A dopants, substituted for La and Sr, on the MnO6 octahedral structure, with the average size 〈rA〉 of the cations occupying the A-site fixed at ∼1.24 Å. The LSAM nanoparticles are carefully studied using X-ray diffraction (XRD) with Rietveld refinement and X-ray Absorption Spectroscopy (XAS), including X-ray Absorption Near Edge Structure (XANES) and Extended X-ray Absorption Fine Structure (EXAFS). The Rietveld refinement shows that all nano-powder samples have a rhombohedral structure. By the XANES technique we found that A substitution at the A-site causes a slight change in the mean oxidation state of Mn, between 3.54 and 3.60. Furthermore, the structural distortion of the MnO6 octahedra in the samples is analysed and obtained from EXAFS. The observed trend of ferromagnetism for all LSAM samples can be clearly explained by the evidence of A-site doping, structural distortion around Mn atoms, and mixed Mn3+/Mn4+ valence states.

  7. Nonantibiotic prophylaxis for recurrent urinary tract infections: a systematic review and meta-analysis of randomized controlled trials.

    PubMed

    Beerepoot, M A J; Geerlings, S E; van Haarst, E P; van Charante, N Mensing; ter Riet, G

    2013-12-01

    Increasing antimicrobial resistance has stimulated interest in nonantibiotic prophylaxis of recurrent urinary tract infections. We assessed the effectiveness, tolerability and safety of nonantibiotic prophylaxis in adults with recurrent urinary tract infections. MEDLINE®, EMBASE™, the Cochrane Library and reference lists of relevant reviews were searched to April 2013 for relevant English language citations. Two reviewers selected randomized controlled trials that met the predefined criteria for population, interventions and outcomes. The difference in the proportions of patients with at least 1 urinary tract infection was calculated for individual studies, and pooled risk ratios were calculated using random and fixed effects models. Adverse event rates were also extracted. The Jadad score was used to assess risk of bias (0 to 2 = high risk; 3 to 5 = low risk). We identified 5,413 records and included 17 studies with data for 2,165 patients. The oral immunostimulant OM-89 decreased the rate of urinary tract infection recurrence (4 trials, sample size 891, median Jadad score 3, RR 0.61, 95% CI 0.48-0.78) and had a good safety profile. The vaginal vaccine Urovac® slightly reduced urinary tract infection recurrence (3 trials, sample size 220, Jadad score 3, RR 0.81, 95% CI 0.68-0.96) and primary immunization followed by booster immunization increased the time to reinfection. Vaginal estrogens showed a trend toward preventing urinary tract infection recurrence (2 trials, sample size 201, Jadad score 2.5, RR 0.42, 95% CI 0.16-1.10) but vaginal irritation occurred in 6% to 20% of women. Cranberries decreased urinary tract infection recurrence (2 trials, sample size 250, Jadad score 4, RR 0.53, 95% CI 0.33-0.83) as did acupuncture (2 open label trials, sample size 165, Jadad score 2, RR 0.48, 95% CI 0.29-0.79). Oral estrogens and lactobacilli prophylaxis did not decrease the rate of urinary tract infection recurrence. The evidence of the effectiveness of the oral immunostimulant OM-89 is promising. Although sometimes statistically significant, pooled findings for the other interventions should be considered tentative until corroborated by more research. Large head-to-head trials should be performed to optimally inform clinical decision making. Copyright © 2013 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
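
    The fixed effects pooling used in such meta-analyses is typically inverse-variance weighting of log risk ratios. A minimal sketch with two invented trials follows; the event counts are made up for illustration, not data from this review.

```python
import math

def pooled_rr_fixed(trials):
    """Inverse-variance fixed-effect pooling of risk ratios.
    Each trial is (events_treat, n_treat, events_ctrl, n_ctrl)."""
    num = den = 0.0
    for a, n1, c, n2 in trials:
        log_rr = math.log((a / n1) / (c / n2))
        var = 1 / a - 1 / n1 + 1 / c - 1 / n2   # variance of the log RR
        w = 1 / var                              # inverse-variance weight
        num += w * log_rr
        den += w
    pooled = math.exp(num / den)
    se = math.sqrt(1 / den)                      # SE of the pooled log RR
    ci = (math.exp(num / den - 1.96 * se), math.exp(num / den + 1.96 * se))
    return pooled, ci

# two hypothetical trials of a prophylactic intervention
trials = [(20, 100, 35, 100), (15, 80, 28, 82)]
rr, (lo, hi) = pooled_rr_fixed(trials)
print(f"pooled RR {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

    A random effects model would add a between-trial variance term to each weight; with homogeneous trials the two models give similar pooled estimates.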

  8. Investigating the effect of Cd-Mn co-doped nano-sized BiFeO3 on its physical properties

    NASA Astrophysics Data System (ADS)

    Ishaq, B.; Murtaza, G.; Sharif, S.; Azhar Khan, M.; Akhtar, Naeem; Will, I. G.; Saleem, Murtaza; Ramay, Shahid M.

    This work investigates the effects of Cd and Mn doping on the structural, magnetic, electronic and dielectric properties of Bi0.75Cd0.25Fe1-xMnxO3 multiferroic samples, prepared with a fixed Cd ratio and a varying Mn ratio of x = 0.0, 0.05, 0.10 and 0.15. The Cd-Mn doped samples were synthesized chemically using a microemulsion method. All samples were finally sintered at 700 °C for 2 h to obtain the single-phase perovskite structure of BiFeO3. The synthesized samples were characterized by different techniques, such as X-ray diffractometry (XRD), scanning electron microscopy (SEM), Fourier transform infrared spectroscopy (FTIR), an LCR meter, and vibrating sample magnetometry (VSM) for magnetic properties. XRD results confirm that BFO has a perovskite structure with crystallite sizes in the range of 24-54 nm. XRD results also reveal structural distortion due to doping of Cd at the A-site and Mn at the B-site of BFO. SEM results show that, as the substitution of Cd-Mn in BFO increases, the grain size decreases to 30 nm. FTIR spectra showed prominent absorption bands at 555 cm-1 and 445 cm-1 corresponding to the stretching vibrations of the metal-ion complexes at site A and site B, respectively. The variation of the dielectric constant (ɛ‧) and loss tangent (tan δ) at room temperature in the range of 1 MHz to 3 GHz has been investigated. Results reveal a slight decrease in dielectric constant with Cd-Mn co-doping. The magnetic properties of the Cd-Mn doped and pure BFO samples were studied at 300 K. Results reveal that undoped BiFeO3 exhibits weak ferromagnetic ordering due to the canting of its spins. The increase in magnetization and decrease in coercivity indicate that the material can be used in high-density recording media and memory devices.

  9. Redshift evolution of the dynamical properties of massive galaxies from SDSS-III/BOSS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beifiori, Alessandra; Saglia, Roberto P.; Bender, Ralf

    2014-07-10

    We study the redshift evolution of the dynamical properties of ∼180,000 massive galaxies from SDSS-III/BOSS combined with a local early-type galaxy sample from SDSS-II in the redshift range 0.1 ≤ z ≤ 0.6. The typical stellar mass of this sample is M_* ∼ 2 × 10^11 M_☉. We analyze the evolution of the galaxy parameters effective radius, stellar velocity dispersion, and the dynamical to stellar mass ratio with redshift. As the effective radii of BOSS galaxies at these redshifts are not well resolved in the Sloan Digital Sky Survey (SDSS) imaging, we calibrate the SDSS size measurements with Hubble Space Telescope/COSMOS photometry for a sub-sample of galaxies. We further apply a correction for progenitor bias to build a sample which consists of a coeval, passively evolving population. Systematic errors due to size correction and the calculation of dynamical mass are assessed through Monte Carlo simulations. At fixed stellar or dynamical mass, we find moderate evolution in galaxy size and stellar velocity dispersion, in agreement with previous studies. We show that this results in a decrease of the dynamical to stellar mass ratio with redshift at >2σ significance. By combining our sample with high-redshift literature data, we find that this evolution of the dynamical to stellar mass ratio continues beyond z ∼ 0.7 up to z > 2 as M_dyn/M_* ∼ (1 + z)^(−0.30±0.12), further strengthening the evidence for an increase of M_dyn/M_* with cosmic time. This result is in line with recent predictions from galaxy formation simulations based on minor-merger-driven mass growth, in which the dark matter fraction within the half-light radius increases with cosmic time.
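
    For context on the "calculation of dynamical mass" mentioned above, a commonly used virial estimator is

```latex
M_{\mathrm{dyn}} \;=\; K\,\frac{\sigma_{e}^{2}\,R_{e}}{G},
```

    where σ_e is the stellar velocity dispersion, R_e the effective radius, and K a structure-dependent constant (often taken as ≈5 for early-type galaxies). The abstract does not state the exact calibration adopted; the expression is shown only as the standard form of such estimators.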

  10. Marginal adaptation of mineral trioxide aggregate (MTA) compared with amalgam as a root-end filling material: a low-vacuum (LV) versus high-vacuum (HV) SEM study.

    PubMed

    Shipper, G; Grossman, E S; Botha, A J; Cleaton-Jones, P E

    2004-05-01

    To compare the marginal adaptation of mineral trioxide aggregate (MTA) or amalgam root-end fillings in extracted teeth under low-vacuum (LV) versus high-vacuum (HV) scanning electron microscope (SEM) viewing conditions. Root-end fillings were placed in 20 extracted single-rooted maxillary teeth. Ten root ends were filled with MTA and the other 10 with amalgam. Two 1 mm thick transverse sections of each root-end filling were cut 0.50 mm (top) and 1.50 mm (bottom) from the apex. Gap size was recorded at eight fixed points along the dentine-filling material interface on each section in a JEOL JSM-5800 SEM, first uncoated and wet (LV wet, LVW), then uncoated and dry under LV (0.3 Torr) using backscatter emission (LV dry uncoated, LVDU). The sections were then air-dried and gold-coated, and gap size was recorded once again at the fixed points under HV (10^-6 Torr; HV dry coated, HVDC). Specimen cracking, and the size and extent of any crack, were noted. Gap sizes at fixed points were smallest under LVW and largest under HVDC SEM conditions. Gaps were smallest in MTA root-end fillings. A general linear models analysis, with gap size as the dependent variable, showed significant effects for extent of crack in dentine, material and viewing condition (P = 0.0001). This study showed that MTA produced superior marginal adaptation to amalgam, and that LVW conditions showed the smallest gap sizes. Gap size was influenced by the method of SEM viewing. If only HV SEM viewing conditions are used for MTA and amalgam root-end fillings, correction factors of 3.5 and 2.2, respectively, may be used to enable relative comparisons of gap size with LVW conditions.
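
    As a numeric illustration of the correction factors quoted above (a sketch only; the function name and the sample gap widths are invented for illustration, not taken from the paper):

```python
# Dividing a gap width measured under high-vacuum, dry, coated (HVDC)
# conditions by the material-specific factor from the abstract (3.5 for
# MTA, 2.2 for amalgam) approximates the gap that would have been
# measured under low-vacuum wet (LVW) conditions.
CORRECTION_FACTORS = {"MTA": 3.5, "amalgam": 2.2}

def estimate_lvw_gap(hvdc_gap_um: float, material: str) -> float:
    """Estimate the LVW-equivalent gap size (µm) from an HVDC measurement."""
    return hvdc_gap_um / CORRECTION_FACTORS[material]

print(estimate_lvw_gap(7.0, "MTA"))      # 7.0 / 3.5 = 2.0 µm
print(estimate_lvw_gap(4.4, "amalgam"))  # 4.4 / 2.2 = 2.0 µm
```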

  11. Assessing the use of existing data to compare plains fish assemblages collected from random and fixed sites in Colorado

    USGS Publications Warehouse

    Zuellig, Robert E.; Crockett, Harry J.

    2013-01-01

    The U.S. Geological Survey, in cooperation with Colorado Parks and Wildlife, assessed the potential use of combining recently (2007 to 2010) and formerly (1992 to 1996) collected data to compare plains fish assemblages sampled from random and fixed sites located in the South Platte and Arkansas River Basins in Colorado. The first step was to determine if fish assemblages collected between 1992 and 1996 were comparable to samples collected at the same sites between 2007 and 2010. If samples from the two time periods were comparable, then it was considered reasonable that the combined time-period data could be used to make comparisons between random and fixed sites. In contrast, if differences were found between the two time periods, then it was considered unreasonable to use these data to make comparisons between random and fixed sites. One hundred samples collected during the 1990s and 2000s from 50 sites dispersed among 19 streams in both basins were compiled from a database maintained by Colorado Parks and Wildlife. Nonparametric multivariate two-way analysis of similarities was used to test for fish-assemblage differences between time periods while accounting for stream-to-stream differences. Results indicated relatively weak but significant time-period differences in fish assemblages. Weak time-period differences in this case possibly were related to changes in fish assemblages associated with environmental factors; however, it is difficult to separate other possible explanations such as limited replication of paired time-period samples in many of the streams or perhaps differences in sampling efficiency and effort between the time periods. Regardless, using the 1990s data to fill data gaps to compare random and fixed-site fish-assemblage data is ill-advised, given the significant separation in fish assemblages between time periods and the inability to determine conclusive explanations for these results. These findings indicated that additional sampling will be necessary before unbiased comparisons can be made between fish assemblages collected from random and fixed sites in the South Platte and Arkansas River Basins.

  12. Development of an optimized protocol for the detection of classical swine fever virus in formalin-fixed, paraffin-embedded tissues by seminested reverse transcription-polymerase chain reaction and comparison with in situ hybridization.

    PubMed

    Ha, S-K; Choi, C; Chae, C

    2004-10-01

    An optimized protocol was developed for the detection of classical swine fever virus (CSFV) in formalin-fixed, paraffin-embedded tissues obtained from experimentally and naturally infected pigs by seminested reverse transcription-polymerase chain reaction (RT-PCR). The results of seminested RT-PCR were compared with those obtained by in situ hybridization. The results show that deparaffinization with xylene, digestion with proteinase K, and extraction with Trizol LS, followed by seminested RT-PCR, constitute a reliable detection method. An increase in sensitivity was observed as amplicon size decreased; the highest sensitivity for RT-PCR on RNA from formalin-fixed, paraffin-embedded tissues was obtained with amplicons smaller than approximately 200 base pairs. A hybridization signal for CSFV was detected in lymph nodes from 12 experimentally and 12 naturally infected pigs. The sensitivity of seminested RT-PCR compared with in situ hybridization was 100% for CSFV. When only formalin-fixed tissues are available, seminested RT-PCR and in situ hybridization are useful diagnostic methods for the detection of CSFV nucleic acid.

  13. Stochastic oscillations in models of epidemics on a network of cities

    NASA Astrophysics Data System (ADS)

    Rozhnova, G.; Nunes, A.; McKane, A. J.

    2011-11-01

    We carry out an analytic investigation of stochastic oscillations in a susceptible-infected-recovered model of disease spread on a network of n cities. In the model a fraction f_jk of individuals from city k commute to city j, where they may infect, or be infected by, others. Starting from a continuous-time Markov description of the model, the deterministic equations, which are valid in the limit when the population of each city is infinite, are recovered. The stochastic fluctuations about the fixed point of these equations are derived by use of the van Kampen system-size expansion. The fixed-point structure of the deterministic equations is remarkably simple: a unique nontrivial fixed point always exists and has the feature that the fractions of susceptible, infected, and recovered individuals are the same for each city irrespective of its size. We find that the stochastic fluctuations have an analogously simple dynamics: all oscillations have a single frequency, equal to that found in the one-city case. We interpret this phenomenon in terms of the properties of the spectrum of the matrix of the linear approximation of the deterministic equations at the fixed point.
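
    The endemic fixed point and the single oscillation frequency can be illustrated in the one-city limit, whose frequency the network oscillations are stated to share. The sketch below uses the standard one-city SIR model with demography and purely illustrative rates; it is not the paper's n-city formulation.

```python
import math

# One-city SIR with per-capita birth/death rate mu (illustrative rates,
# not taken from the paper). We compute the endemic fixed point and the
# damped-oscillation frequency from the 2x2 Jacobian.
beta, gamma, mu = 0.5, 0.1, 5e-5  # infection, recovery, birth/death (per day)

S_star = (gamma + mu) / beta                    # susceptible fraction at the fixed point
I_star = mu * (1.0 - S_star) / (beta * S_star)  # infected fraction at the fixed point

# Check it really is a fixed point of
#   dS/dt = mu - beta*S*I - mu*S,   dI/dt = (beta*S - gamma - mu)*I
dS = mu - beta * S_star * I_star - mu * S_star
dI = (beta * S_star - gamma - mu) * I_star

# Jacobian at the fixed point; a complex-conjugate eigenvalue pair
# (trace^2 < 4*det) means damped oscillations about the fixed point.
a11, a12 = -beta * I_star - mu, -beta * S_star
a21, a22 = beta * I_star, 0.0
trace, det = a11 + a22, a11 * a22 - a12 * a21
omega = math.sqrt(det - (trace / 2.0) ** 2)     # angular frequency, rad/day
period_years = 2.0 * math.pi / omega / 365.0
print(f"S*={S_star:.4f}, I*={I_star:.2e}, oscillation period ≈ {period_years:.1f} years")
```

    With these rates the eigenvalues are complex, so perturbations spiral back to the fixed point; in the stochastic model this damped frequency is the one picked out by the fluctuations.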

  14. Integration of Microdialysis Sampling and Microchip Electrophoresis with Electrochemical Detection

    PubMed Central

    Mecker, Laura C.; Martin, R. Scott

    2009-01-01

    Here we describe the fabrication, optimization, and application of a microfluidic device that integrates microdialysis (MD) sampling, microchip electrophoresis (ME), and electrochemical detection (EC). The manner in which the chip is produced is reproducible and enables fixed alignment of the MD/ME and ME/EC interfaces. Poly(dimethylsiloxane) (PDMS)-based valves were used for the discrete injection of sample from the hydrodynamic MD dialysate stream into a separation channel for analysis with ME. To enable the integration of ME with EC detection, a palladium decoupler was used to isolate the high voltages associated with electrophoresis from micron-sized carbon ink detection electrodes. Optimization of the ME/EC interface was needed to allow the use of biologically appropriate perfusate buffers containing high salt content. This optimization included changes in the fabrication procedure, increases in the decoupler surface area, and a programmed voltage shutoff. The ability of the MD/ME/EC system to sample a biological system was demonstrated by using a linear probe to monitor the stimulated release of dopamine from a confluent layer of PC12 cells. To our knowledge, this is the first report of a microchip-based system that couples microdialysis sampling with microchip electrophoresis and electrochemical detection. PMID:19551945

  15. Visual Acuity Using Head-fixed Displays During Passive Self and Surround Motion

    NASA Technical Reports Server (NTRS)

    Wood, Scott J.; Black, F. Owen; Stallings, Valerie; Peters, Brian

    2007-01-01

    The ability to read head-fixed displays on various motion platforms requires the suppression of vestibulo-ocular reflexes. This study examined dynamic visual acuity while viewing a head-fixed display during different self and surround rotation conditions. Twelve healthy subjects were asked to report the orientation of Landolt C optotypes presented on a micro-display fixed to a rotating chair at 50 cm distance. Acuity thresholds were determined by the lowest size at which the subjects correctly identified 3 of 5 optotype orientations at peak velocity. Visual acuity was compared across four different conditions, each tested at 0.05 and 0.4 Hz (peak amplitude of 57 deg/s). The four conditions included: subject rotated in semi-darkness (i.e., limited to background illumination of the display), subject stationary while visual scene rotated, subject rotated around a stationary visual background, and both subject and visual scene rotated together. Visual acuity performance was greatest when the subject rotated around a stationary visual background; i.e., when both vestibular and visual inputs provided concordant information about the motion. Visual acuity performance was most reduced when the subject and visual scene rotated together; i.e., when the visual scene provided discordant information about the motion. Ranges of 4-5 logMAR step sizes across the conditions indicated the acuity task was sufficient to discriminate visual performance levels. The background visual scene can influence the ability to read head-fixed displays during passive motion disturbances. Dynamic visual acuity using head-fixed displays can provide an operationally relevant screening tool for visual performance during exposure to novel acceleration environments.

  16. Effects of Lugol's iodine solution and formalin on cell volume of three bloom-forming dinoflagellates

    NASA Astrophysics Data System (ADS)

    Yang, Yang; Sun, Xiaoxia; Zhao, Yongfang

    2017-07-01

    Fixatives are traditionally used in marine ecosystem research, but the bias they introduce in plankton cell dimensions may lead to over- or underestimation of carbon biomass. To determine the impact of traditional fixatives on dinoflagellates during short- and long-term fixation, we analyzed the degree of change in three bloom-forming dinoflagellates (Prorocentrum micans, Scrippsiella trochoidea and Noctiluca scintillans) brought about by Lugol's iodine solution (hereafter Lugol's) and formalin. The fixation effects were species-specific. P. micans cell volume showed no significant change following long-term preservation, whereas S. trochoidea swelled relative to live cell volume by approximately 8.06% in Lugol's and 20.97% in formalin. N. scintillans shrank significantly in both fixatives. The volume change of N. scintillans in formalin was not concentration-dependent, whereas the volume shrinkage of N. scintillans cells fixed with Lugol's at a concentration of 2% was nearly six-fold that of cells fixed with Lugol's at 0.6%-0.8%. To better estimate the volume of N. scintillans fixed in formalin at a concentration of 5%, we suggest the following conversion: volume of live cell = volume of intact fixed cell / 0.61. Apart from size change, fixative-induced damage to N. scintillans was obvious; Lugol's is not a suitable fixative for N. scintillans owing to the high frequency of broken cells, so accurate carbon biomass estimates for this species should be based on live samples. These findings help to improve estimates of phytoplankton cell volume and carbon biomass in marine ecosystems.
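
    A minimal sketch applying the suggested conversion for formalin-fixed N. scintillans (the function name and the sample volume are illustrative, not from the paper):

```python
# The abstract's conversion for N. scintillans fixed in 5% formalin:
# live volume = intact fixed-cell volume / 0.61.
SHRINKAGE_RATIO = 0.61  # fixed volume as a fraction of live volume (5% formalin)

def live_volume_from_fixed(fixed_volume_um3: float) -> float:
    """Back-calculate the live cell volume from a fixed-cell measurement."""
    return fixed_volume_um3 / SHRINKAGE_RATIO

print(live_volume_from_fixed(6.1e6))  # a 6.1e6 µm^3 fixed cell ≈ 1.0e7 µm^3 live
```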

  17. NHEXAS PHASE I ARIZONA STUDY--STANDARD OPERATING PROCEDURE FOR COLLECTION OF FIXED SITE INDOOR AND OUTDOOR FORMALDEHYDE PASSIVE SAMPLES (UA-F-13.1)

    EPA Science Inventory

    The purpose of this SOP is to describe the methods used to sample residential indoor and outdoor atmospheres for the presence of formaldehyde using the PF-1 passive formaldehyde sampler. The PF-1 passive sampler is used as a fixed location monitor to determine time integrated ex...

  18. Microfluidic Chips for In Situ Crystal X-ray Diffraction and In Situ Dynamic Light Scattering for Serial Crystallography.

    PubMed

    Gicquel, Yannig; Schubert, Robin; Kapis, Svetlana; Bourenkov, Gleb; Schneider, Thomas; Perbandt, Markus; Betzel, Christian; Chapman, Henry N; Heymann, Michael

    2018-04-24

    This protocol describes fabricating microfluidic devices with low X-ray background optimized for goniometer-based fixed-target serial crystallography. The devices are patterned from epoxy glue using soft lithography and are suitable for in situ X-ray diffraction experiments at room temperature. The sample wells are lidded on both sides with polyimide foil windows that allow diffraction data collection with low X-ray background. This fabrication method is undemanding and inexpensive. After the sourcing of a SU-8 master wafer, all fabrication can be completed outside of a cleanroom in a typical research lab environment. The chip design and fabrication protocol utilize capillary valving to microfluidically split an aqueous reaction into defined nanoliter-sized droplets. This loading mechanism avoids sample loss from channel dead-volume and can easily be performed manually without pumps or other equipment for fluid actuation. We describe how isolated nanoliter-sized drops of protein solution can be monitored in situ by dynamic light scattering to control protein crystal nucleation and growth. After suitable crystals are grown, complete X-ray diffraction datasets can be collected using goniometer-based in situ fixed-target serial X-ray crystallography at room temperature. The protocol provides custom scripts to process diffraction datasets using a suite of software tools to solve and refine the protein crystal structure. This approach avoids the artefacts possibly induced during cryo-preservation or manual crystal handling in conventional crystallography experiments. We present and compare three protein structures that were solved using small crystals with dimensions of approximately 10-20 µm grown in chip. By crystallizing and diffracting in situ, handling of fragile crystals, and hence mechanical disturbance, is minimized. The protocol details how to fabricate a custom X-ray transparent microfluidic chip suitable for in situ serial crystallography. As almost every crystal can be used for diffraction data collection, these microfluidic chips are a very efficient crystal delivery method.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pawellek, Nicole; Krivov, Alexander V.; Marshall, Jonathan P.

    The radii of debris disks and the sizes of their dust grains are important tracers of the planetesimal formation mechanisms and physical processes operating in these systems. Here we use a representative sample of 34 debris disks resolved in various Herschel Space Observatory (Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA) programs to constrain the disk radii and the size distribution of their dust. While we modeled disks with both warm and cold components, and identified warm inner disks around about two-thirds of the stars, we focus our analysis only on the cold outer disks, i.e., Kuiper-belt analogs. We derive the disk radii from the resolved images and find a large dispersion for host stars of any spectral class, but no significant trend with the stellar luminosity. This argues against ice lines as a dominant player in setting the debris disk sizes, since the ice line location varies with the luminosity of the central star. Fixing the disk radii to those inferred from the resolved images, we model the spectral energy distribution to determine the dust temperature and the grain size distribution for each target. While the dust temperature systematically increases toward earlier spectral types, the ratio of the dust temperature to the blackbody temperature at the disk radius decreases with the stellar luminosity. This is explained by a clear trend of typical grain sizes increasing toward more luminous stars. The typical grain sizes are compared to the radiation pressure blowout limit s_blow, which is proportional to the stellar luminosity-to-mass ratio and thus also increases toward earlier spectral classes. The grain sizes in the disks of G- to A-stars are inferred to be several times s_blow at all stellar luminosities, in agreement with collisional models of debris disks. The sizes, measured in units of s_blow, appear to decrease with the luminosity, which may be suggestive of the disk's stirring level increasing toward earlier-type stars. The dust opacity index β ranges between zero and two, and the size distribution index q varies between three and five for all the disks in the sample.
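
    As a pointer to the scaling invoked above, the blowout radius in the standard radiation-pressure treatment (the abstract does not spell this out; the prefactor assumes compact spherical grains) is

```latex
s_{\mathrm{blow}} \;=\; \frac{3\,L_{*}\,\langle Q_{\mathrm{pr}}\rangle}{16\pi\,G\,M_{*}\,c\,\rho}
\;\propto\; \frac{L_{*}}{M_{*}},
```

    where ⟨Q_pr⟩ is the radiation-pressure efficiency averaged over the stellar spectrum and ρ the grain bulk density; grains smaller than s_blow are expelled on orbital timescales, which is why s_blow tracks the luminosity-to-mass ratio.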

  20. Enzyme specificity under dynamic control

    NASA Astrophysics Data System (ADS)

    Ota, Nobuyuki; Agard, David A.

    2002-03-01

    The contributions of conformational dynamics to substrate specificity have been examined by applying principal component analysis to molecular dynamics trajectories of alpha-lytic protease. The wild-type alpha-lytic protease is highly specific for substrates with small hydrophobic side chains at the specificity pocket, while the Met190Ala binding pocket mutant has a much broader specificity, actively hydrolyzing substrates ranging from Ala to Phe. We performed a principal component analysis on 1-nanosecond molecular dynamics simulations with a solvent boundary condition. We found that the walls of the wild-type substrate binding pocket move in tandem with one another, causing the pocket size to remain fixed so that only small substrates are recognized. In contrast, the M190A mutant shows uncoupled movement of the binding pocket walls, allowing the pocket to sample both smaller and larger sizes, which appears to be the cause of the observed broad specificity. The results suggest that the protein dynamics of alpha-lytic protease may play a significant role in defining the patterns of substrate specificity.

  1. One-Dimension Diffusion Preparation of Concentration-Gradient Fe₂O₃/SiO₂ Aerogel.

    PubMed

    Zhang, Ting; Wang, Haoran; Zhou, Bin; Ji, Xiujie; Wang, Hongqiang; Du, Ai

    2018-06-21

    Concentration-gradient Fe₂O₃/SiO₂ aerogels were prepared by placing an MTMS (methyltrimethoxysilane)-derived SiO₂ aerogel on an iron gauze in an HCl atmosphere, followed by one-dimensional diffusion, ammonia-atmosphere fixing, supercritical fluid drying and thermal treatment. Energy dispersive spectra show that the Fe/Si molar ratio changes gradually from 2.14% to 18.48% over a height of 40 mm. Pore-size distribution results show that the average pore size of the sample decreases from 15.8 nm to 3.1 nm after diffusion. This corresponds well with TEM results, indicating a pore-filling effect of the Fe compound. In order to precisely control the gradient, the diffusion kinetics were further studied by analyzing the influence of time and position on the concentration in the wet gel. Finally, it was found that the diffusion process could be fitted well with the one-dimensional model of Fick's second law, demonstrating the feasibility of precise design and control of the concentration gradient.
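
    The kind of one-dimensional Fick's-second-law model referred to above can be sketched with the classic semi-infinite constant-source solution, C(x, t) = C0·erfc(x / (2√(Dt))). The diffusivity, source concentration and geometry below are assumptions for illustration, not values fitted in the paper:

```python
import math

# Semi-infinite 1D diffusion from a constant source at x = 0:
# C(x, t) = C0 * erfc(x / (2*sqrt(D*t))). This is one common closed-form
# solution of Fick's second law; D and C0 here are purely illustrative.
def concentration(x_mm: float, t_s: float, D_mm2_per_s: float, C0: float) -> float:
    return C0 * math.erfc(x_mm / (2.0 * math.sqrt(D_mm2_per_s * t_s)))

# Concentration falls off monotonically with height (cf. the 40 mm gradient)
# and, at a fixed height, rises with diffusion time.
profile = [concentration(x, t_s=3600.0, D_mm2_per_s=1e-3, C0=1.0) for x in (0.0, 10.0, 20.0, 40.0)]
print(profile)
```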

  2. Analysis of multinomial models with unknown index using data augmentation

    USGS Publications Warehouse

    Royle, J. Andrew; Dorazio, R.M.; Link, W.A.

    2007-01-01

    Multinomial models with unknown index ('sample size') arise in many practical settings. In practice, Bayesian analysis of such models has proved difficult because the dimension of the parameter space is not fixed, being in some cases a function of the unknown index. We describe a data augmentation approach to the analysis of this class of models that provides for a generic and efficient Bayesian implementation. Under this approach, the data are augmented with all-zero detection histories. The resulting augmented dataset is modeled as a zero-inflated version of the complete-data model where an estimable zero-inflation parameter takes the place of the unknown multinomial index. Interestingly, data augmentation can be justified as being equivalent to imposing a discrete uniform prior on the multinomial index. We provide three examples involving estimating the size of an animal population, estimating the number of diabetes cases in a population using the Rasch model, and the motivating example of estimating the number of species in an animal community with latent probabilities of species occurrence and detection.
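
    The augmentation idea can be sketched numerically. The toy example below simulates detection histories, augments them with all-zero histories up to a super-population size M, and maximizes the resulting zero-inflated likelihood by a crude grid search, so that N̂ = ψ̂·M. The paper's actual implementation is Bayesian; all values and the grid search here are illustrative.

```python
from collections import Counter
import math, random

# Simulate T detection occasions for N_true individuals; only individuals
# detected at least once are observed (illustrative values).
random.seed(1)
N_true, T, p_true = 100, 5, 0.3
counts = [sum(random.random() < p_true for _ in range(T)) for _ in range(N_true)]
observed = [y for y in counts if y > 0]

# Augment with all-zero histories: M must exceed any plausible N.
M = 300
data = observed + [0] * (M - len(observed))
tally = Counter(data)  # detection count -> number of individuals

def log_lik(psi, p):
    """Zero-inflated binomial likelihood: each of the M rows is 'real' with
    probability psi, and an all-zero row may also be a structural zero."""
    ll = 0.0
    for y, n in tally.items():
        binom = math.comb(T, y) * p**y * (1 - p) ** (T - y)
        zero = 1.0 if y == 0 else 0.0
        ll += n * math.log(psi * binom + (1 - psi) * zero)
    return ll

# Crude grid-search MLE over (psi, p); psi plays the role of the unknown index.
best = max(((s / 100, q / 100) for s in range(1, 100) for q in range(1, 100)),
           key=lambda th: log_lik(*th))
N_hat = best[0] * M
print(f"psi_hat={best[0]:.2f}, p_hat={best[1]:.2f}, N_hat ≈ {N_hat:.0f}")
```

    The estimate N̂ should land near the true population size of 100; note that the answer is insensitive to the particular M chosen, which is the point of treating the index through a zero-inflation parameter.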

  3. Water-quality assessment of the eastern Iowa basins- nitrogen, phosphorus, suspended sediment, and organic carbon in surface water, 1996-98

    USGS Publications Warehouse

    Becher, Kent D.; Kalkhoff, Stephen J.; Schnoebelen, Douglas J.; Barnes, Kimberlee K.; Miller, Von E.

    2001-01-01

    Synoptic samples collected during low and high base flow had nitrogen, phosphorus, and organic-carbon concentrations that varied spatially and seasonally. Comparisons of water-quality data from six basic-fixed sampling sites and 19 other synoptic sites suggest that the water-quality data from basic-fixed sampling sites were representative of the entire study unit during periods of low and high base flow when most streamflow originates from ground water.

  4. An ethanol-based fixation method for anatomical and micro-morphological characterization of leaves of various tree species.

    PubMed

    Chieco, C; Rotondi, A; Morrone, L; Rapparini, F; Baraldi, R

    2013-02-01

    The use of formalin constitutes a serious health hazard for laboratory workers. We investigated the suitability and performance of the ethanol-based fixative FineFIX as a substitute for formalin for anatomical and cellular structure investigations of leaves by light microscopy and for leaf surface and ultrastructural analysis by scanning electron microscopy (SEM). We compared the anatomical features of leaf materials prepared using conventional formalin fixation with those prepared using FineFIX. Leaves were collected from ornamental tree species commonly used in urban areas. FineFIX was also compared with glutaraldehyde fixation and with the air drying normally used for scanning electron microscopy, to develop a new method for evaluating leaf morphology and microstructure in three ornamental tree species. The cytological features of the samples processed for histological analysis were well preserved by both fixatives, as demonstrated by the absence of nuclear swelling or shrinkage, cell wall detachment and tissue flaking, and by good preservation of cytoplasmic vacuolization. In addition, good preservation of surface details and the absence of shrinkage artefacts confirmed the efficacy of FineFIX fixation for SEM analysis. Cuticular wax was preserved only in air-dried samples; samples treated with chemical substances during the fixation and dehydration phases showed various alterations of the wax structures. In some air-dried samples a loss of cell turgidity was observed that caused general wrinkling of the epidermal surfaces. Commercial FineFIX is an adequate substitute for formalin in histology and can also be applied successfully for SEM investigation, while reducing the health risks of glutaraldehyde and other toxic fixatives. To investigate the potential for plants to absorb and capture airborne particulates, which requires preservation of the natural morphology of trichomes and epicuticular waxes, a combination of FineFIX fixation and air drying is recommended.

  5. Perceived beauty of random texture patterns: A preference for complexity.

    PubMed

    Friedenberg, Jay; Liby, Bruce

    2016-07-01

    We report two experiments on the perceived aesthetic quality of random-density texture patterns. In each experiment a square grid was filled with a progressively larger number of elements. Grid size in Experiment 1 was 10×10, with elements added to create textures ranging from 10% to 100% fill. Participants rated the beauty of the patterns. Average judgments across all observers showed an inverted U-shaped function that peaked near middle densities. In Experiment 2 grid size was increased to 15×15 to test whether observers preferred patterns with a fixed density or a fixed number of elements. The results of the second experiment were nearly identical to those of the first, showing a preference based on density rather than on a fixed number of elements. Ratings in both studies correlated positively with a GIF compression metric of complexity and with edge length. Within the range of stimuli used, observers judge more complex patterns to be more beautiful. Copyright © 2016 Elsevier B.V. All rights reserved.
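
    A compression-based complexity measure of the sort correlated with the ratings can be sketched as follows, using zlib as a stand-in for the paper's GIF metric (the 10×10 grid and 10%-100% fill levels follow Experiment 1; everything else is an assumption):

```python
import random, zlib

# Mean compressed length of random binary grids as a proxy for pattern
# complexity: textures near 50% fill carry the most information per cell
# and so compress worst, echoing the mid-density peak in the ratings.
random.seed(0)

def mean_compressed_size(fill: float, n: int = 10, trials: int = 200) -> float:
    """Average zlib-compressed length of n*n random binary grids at a fill level."""
    total = 0
    for _ in range(trials):
        bits = bytes(1 if random.random() < fill else 0 for _ in range(n * n))
        total += len(zlib.compress(bits))
    return total / trials

complexity = {f: mean_compressed_size(f / 10) for f in range(1, 11)}
print(complexity)  # largest values near 50% fill, smaller toward 10% and 100%
```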

  6. Renormalization-group theory for finite-size scaling in extreme statistics

    NASA Astrophysics Data System (ADS)

    Györgyi, G.; Moloney, N. R.; Ozogány, K.; Rácz, Z.; Droz, M.

    2010-04-01

    We present a renormalization-group (RG) approach to explain universal features of extreme statistics applied here to independent identically distributed variables. The outlines of the theory have been described in a previous paper, the main result being that finite-size shape corrections to the limit distribution can be obtained from a linearization of the RG transformation near a fixed point, leading to the computation of stable perturbations as eigenfunctions. Here we show details of the RG theory which exhibit remarkable similarities to the RG known in statistical physics. Besides the fixed points explaining universality, and the least stable eigendirections accounting for convergence rates and shape corrections, the similarities include marginally stable perturbations which turn out to be generic for the Fisher-Tippett-Gumbel class. Distribution functions containing unstable perturbations are also considered. We find that, after a transitory divergence, they return to the universal fixed line at the same or at a different point depending on the type of perturbation.

  7. The effects of geography on domestic fixed and broadcasting satellite systems in ITU Region 2

    NASA Technical Reports Server (NTRS)

    Sawitz, P. H.

    1980-01-01

    The paper discusses the effects of geography on service arcs and on the various techniques used to achieve frequency reuse, and applies the results to the domestic fixed and broadcasting satellite systems of International Telecommunication Union (ITU) Region 2. The effects of service-arc latitude, size, and shape are considered. Earth-station and satellite antenna discrimination is outlined.

  8. Application of asymmetric flow-field flow fractionation to the characterization of colloidal dispersions undergoing aggregation.

    PubMed

    Lattuada, Marco; Olivo, Carlos; Gauer, Cornelius; Storti, Giuseppe; Morbidelli, Massimo

    2010-05-18

    The characterization of complex colloidal dispersions is a relevant and challenging problem in colloidal science. In this work, we show how asymmetric flow-field flow fractionation (AF4) coupled to static light scattering can be used for this purpose. As examples of complex colloidal dispersions, we have chosen two systems undergoing aggregation. The first is a conventional polystyrene latex undergoing reaction-limited aggregation, which leads to the formation of fractal clusters with well-known structure. The second is a dispersion of elastomeric colloidal particles made of a polymer with a low glass transition temperature, which undergoes coalescence upon aggregation. Samples are withdrawn at fixed times during aggregation and fractionated with AF4, using a two-angle static light scattering unit as a detector. We have shown that the cluster size distribution can be recovered from the ratio of the scattered-light intensities at the two angles, without any need for calibration based on standard elution times, provided that the geometry and scattering properties of the particles and clusters are known. The nonfractionated samples were also characterized by conventional static and dynamic light scattering to determine their average radius of gyration and hydrodynamic radius. The size distribution of the coalescing particles was also investigated through image analysis of cryo-scanning electron microscopy (SEM) pictures. The average radius of gyration and average hydrodynamic radius of the nonfractionated samples have been calculated and successfully compared with the values obtained from the size distributions measured by AF4. In addition, the data are in good agreement with calculations made with population balance equations.

  9. Hierarchical multimodal tomographic x-ray imaging at a superbend

    NASA Astrophysics Data System (ADS)

    Stampanoni, M.; Marone, F.; Mikuljan, G.; Jefimovs, K.; Trtik, P.; Vila-Comamala, J.; David, C.; Abela, R.

    2008-08-01

    Over the last decade, synchrotron-based X-ray tomographic microscopy has established itself as a fundamental tool for non-invasive, quantitative investigations of a broad variety of samples, with applications ranging from space research and materials science to biology and medicine. Thanks to the brilliance of modern third-generation sources, voxel sizes in the micrometer range are routinely achieved by the major X-ray microtomography devices around the world, while the isotropic 100 nm barrier is reached and surpassed by only a few instruments. The beamline for TOmographic Microscopy and Coherent rAdiology experiments (TOMCAT) of the Swiss Light Source at the Paul Scherrer Institut operates a multimodal endstation which offers tomographic capabilities in the micrometer range in both absorption and phase contrast. Recently, the beamline has been equipped with a full-field hard X-ray microscope with a theoretical pixel size down to 30 nm and a field of view of 50 microns. The nanoscope performs well at X-ray energies between 8 and 12 keV, selected from the white beam of a 2.9 T superbend by a [Ru/C]100 fixed-exit multilayer monochromator. In this work we illustrate the experimental setup dedicated to the nanoscope, in particular the specially designed X-ray optics needed to produce a homogeneous, square illumination of the sample imaging plane, as well as the magnifying zone plate. Tomographic reconstructions at 60 nm voxel size are shown and discussed.

  10. Mixed effects versus fixed effects modelling of binary data with inter-subject variability.

    PubMed

    Murphy, Valda; Dunne, Adrian

    2005-04-01

    The question of whether or not a mixed effects model is required when modelling binary data with inter-subject variability and within subject correlation was reported in this journal by Yano et al. (J. Pharmacokin. Pharmacodyn. 28:389-412 [2001]). That report used simulation experiments to demonstrate that, under certain circumstances, the use of a fixed effects model produced more accurate estimates of the fixed effect parameters than those produced by a mixed effects model. The Laplace approximation to the likelihood was used when fitting the mixed effects model. This paper repeats one of those simulation experiments, with two binary observations recorded for every subject, and uses both the Laplace and the adaptive Gaussian quadrature approximations to the likelihood when fitting the mixed effects model. The results show that the estimates produced using the Laplace approximation include a small number of extreme outliers. This was not the case when using the adaptive Gaussian quadrature approximation. Further examination of these outliers shows that they arise in situations in which the Laplace approximation seriously overestimates the likelihood in an extreme region of the parameter space. It is also demonstrated that when the number of observations per subject is increased from two to three, the estimates based on the Laplace approximation no longer include any extreme outliers. The root mean squared error is a combination of the bias and the variability of the estimates. Increasing the sample size is known to reduce the variability of an estimator with a consequent reduction in its root mean squared error. The estimates based on the fixed effects model are inherently biased and this bias acts as a lower bound for the root mean squared error of these estimates. 
Consequently, it might be expected that for data sets with a greater number of subjects the estimates based on the mixed effects model would be more accurate than those based on the fixed effects model. This is borne out by the results of a further simulation experiment with an increased number of subjects in each set of data. The difference in the interpretation of the parameters of the fixed and mixed effects models is discussed. It is demonstrated that the mixed effects model and parameter estimates can be used to estimate the parameters of the fixed effects model but not vice versa.
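The gap between the Laplace approximation and a more exact evaluation of the marginal likelihood can be illustrated with a minimal pure-Python sketch (this is not the authors' code; the logistic random-intercept model, the values β = 0 and σ = 2, and the two binary observations per subject are illustrative assumptions, with a fine trapezoidal grid standing in for adaptive Gaussian quadrature as the "ground truth"):

```python
import math

def expit(x):
    return 1.0 / (1.0 + math.exp(-x))

def log_joint(b, ys, beta, sigma):
    """log p(y | b) + log N(b; 0, sigma^2) for a logistic random-intercept model."""
    lp = sum(math.log(expit(beta + b)) if y == 1 else math.log(1.0 - expit(beta + b))
             for y in ys)
    lp += -0.5 * math.log(2.0 * math.pi * sigma ** 2) - b * b / (2.0 * sigma ** 2)
    return lp

def marginal_grid(ys, beta, sigma, lo=-12.0, hi=12.0, n=4001):
    """Near-exact marginal likelihood via a fine trapezoidal grid over b."""
    h = (hi - lo) / (n - 1)
    total = 0.0
    for i in range(n):
        w = 0.5 if i in (0, n - 1) else 1.0
        total += w * math.exp(log_joint(lo + i * h, ys, beta, sigma))
    return total * h

def marginal_laplace(ys, beta, sigma):
    """Laplace approximation: Gaussian expansion of the integrand about its mode."""
    b, eps = 0.0, 1e-4
    for _ in range(200):  # Newton iterations on the (concave) log-integrand
        g = (log_joint(b + eps, ys, beta, sigma)
             - log_joint(b - eps, ys, beta, sigma)) / (2 * eps)
        H = (log_joint(b + eps, ys, beta, sigma)
             - 2 * log_joint(b, ys, beta, sigma)
             + log_joint(b - eps, ys, beta, sigma)) / eps ** 2
        step = g / H
        b -= step
        if abs(step) < 1e-10:
            break
    H = (log_joint(b + eps, ys, beta, sigma)
         - 2 * log_joint(b, ys, beta, sigma)
         + log_joint(b - eps, ys, beta, sigma)) / eps ** 2
    return math.exp(log_joint(b, ys, beta, sigma)) * math.sqrt(2.0 * math.pi / -H)

# Two binary observations per subject, as in the repeated simulation experiment.
exact = marginal_grid([1, 1], beta=0.0, sigma=2.0)
approx = marginal_laplace([1, 1], beta=0.0, sigma=2.0)
rel_err = abs(approx - exact) / exact
```

At this benign point of the parameter space the two evaluations agree to within a few percent; the pathology described above arises only in extreme regions where the Gaussian expansion badly overestimates the integral.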

  11. Scaling behavior of knotted random polygons and self-avoiding polygons: Topological swelling with enhanced exponent.

    PubMed

    Uehara, Erica; Deguchi, Tetsuo

    2017-12-07

    We show that the average size of self-avoiding polygons (SAPs) with a fixed knot is much larger than that with no topological constraint if the excluded volume is small and the number of segments is large. We call this topological swelling. We argue that there is an "enhancement" of the scaling exponent for random polygons with a fixed knot. We study them systematically through SAPs consisting of hard cylindrical segments with various values of the segment radius. Here we mean by the average size the mean-square radius of gyration. Furthermore, we show numerically that the topological balance length of a composite knot is given by the sum of those of all constituent prime knots. Here we define the topological balance length of a knot as the number of segments at which the topological entropic repulsion is balanced by the knot complexity in the average size. The additivity suggests the local knot picture.

  12. Analysis of bulk arrival queueing system with batch size dependent service and working vacation

    NASA Astrophysics Data System (ADS)

    Niranjan, S. P.; Indhira, K.; Chandrasekaran, V. M.

    2018-04-01

    This paper concentrates on a single-server bulk arrival queueing system with batch-size-dependent service and working vacation. The server provides service in two modes depending upon the queue length: single service if the queue length is at least `a', and fixed batch service, with some fixed batch size `k', if the queue length is at least `k' (k > a). After completion of service, if the queue length is less than `a', then the server leaves for a working vacation. During the working vacation, customers are served at a lower service rate than the regular service rate; service during the working vacation also comprises two modes. For the proposed model, the probability generating function of the queue length at an arbitrary time will be obtained by using the supplementary variable technique. Some performance measures will also be presented with suitable numerical illustrations.
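The threshold policy described above can be sketched with a toy simulation (a hypothetical simplification, not the paper's supplementary-variable analysis; all rates, thresholds, and the single-arrival stream are made-up illustrative values):

```python
import math
import random

def poisson(mean, rng):
    """Sample a Poisson count (Knuth's multiplication method; fine for small means)."""
    limit, p, n = math.exp(-mean), 1.0, 0
    while True:
        p *= rng.random()
        if p <= limit:
            return n
        n += 1

def simulate(lam=0.8, mu=1.0, vac_mu=0.5, a=2, k=5, cycles=20000, seed=7):
    """Toy threshold-policy queue: batch-of-k service when the queue holds at
    least k, single service when it holds at least a, and a slower 'working
    vacation' rate when it holds fewer than a customers."""
    rng = random.Random(seed)
    q, t, area = 0, 0.0, 0.0
    served_batch = served_single = served_vacation = 0
    for _ in range(cycles):
        if q == 0:
            dt = rng.expovariate(lam)          # empty: wait for the next arrival
            area += q * dt
            t += dt
            q += 1
            continue
        if q >= k:
            s, take = rng.expovariate(mu), k   # fixed batch service of size k
            served_batch += 1
        elif q >= a:
            s, take = rng.expovariate(mu), 1   # regular single service
            served_single += 1
        else:
            s, take = rng.expovariate(vac_mu), 1  # working vacation: slower rate
            served_vacation += 1
        area += q * s
        t += s
        q = q - take + poisson(lam * s, rng)   # arrivals during the service time
    return area / t, served_batch, served_single, served_vacation

mean_q, nb, ns, nv = simulate()
```

The time-averaged queue length returned here is the simulation analogue of the moment one would read off the probability generating function in the paper's analysis.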

  13. Scaling behavior of knotted random polygons and self-avoiding polygons: Topological swelling with enhanced exponent

    NASA Astrophysics Data System (ADS)

    Uehara, Erica; Deguchi, Tetsuo

    2017-12-01

    We show that the average size of self-avoiding polygons (SAPs) with a fixed knot is much larger than that with no topological constraint if the excluded volume is small and the number of segments is large. We call this topological swelling. We argue that there is an "enhancement" of the scaling exponent for random polygons with a fixed knot. We study them systematically through SAPs consisting of hard cylindrical segments with various values of the segment radius. Here we mean by the average size the mean-square radius of gyration. Furthermore, we show numerically that the topological balance length of a composite knot is given by the sum of those of all constituent prime knots. Here we define the topological balance length of a knot as the number of segments at which the topological entropic repulsion is balanced by the knot complexity in the average size. The additivity suggests the local knot picture.

  14. [Proximate analysis of straw by near infrared spectroscopy (NIRS)].

    PubMed

    Huang, Cai-jin; Han, Lu-jia; Liu, Xian; Yang, Zeng-ling

    2009-04-01

    Proximate analysis is one of the routine analysis procedures in the utilization of straw for biomass energy. The present paper studied the applicability of rapid proximate analysis of straw by near infrared spectroscopy (NIRS) technology, in which the authors constructed the first NIRS models to predict the volatile matter and fixed carbon contents of straw. NIRS models were developed using a Foss 6500 spectrometer with spectra in the range of 1,108-2,492 nm to predict the contents of moisture, ash, volatile matter and fixed carbon in directly cut straw samples, and to predict ash, volatile matter and fixed carbon in dried milled straw samples. For the models based on directly cut straw samples, the determination coefficient of independent validation (R2v) and standard error of prediction (SEP) were 0.92 and 0.76% for moisture, 0.94 and 0.84% for ash, 0.88 and 0.82% for volatile matter, and 0.75 and 0.65% for fixed carbon, respectively. For the models based on dried milled straw samples, R2v and SEP were 0.98 and 0.54% for ash, 0.95 and 0.57% for volatile matter, and 0.78 and 0.61% for fixed carbon, respectively. It was concluded that NIRS models can serve as an accurate alternative analysis method; rapid and simultaneous analysis of multiple components can thus be achieved by NIRS technology, decreasing the cost of proximate analysis for straw.
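The two validation statistics quoted above can be computed as follows (a generic sketch; the toy reference/predicted values are invented, and note that R2v itself is dimensionless even when SEP carries the % units of the measured constituents):

```python
def validation_stats(reference, predicted):
    """R^2 of independent validation and SEP, taking SEP as the bias-corrected
    standard deviation of the prediction residuals."""
    n = len(reference)
    resid = [p - r for r, p in zip(reference, predicted)]
    bias = sum(resid) / n
    sep = (sum((e - bias) ** 2 for e in resid) / (n - 1)) ** 0.5
    mean_ref = sum(reference) / n
    ss_tot = sum((r - mean_ref) ** 2 for r in reference)
    ss_err = sum(e ** 2 for e in resid)
    r2 = 1.0 - ss_err / ss_tot
    return r2, sep

# Invented toy data: four reference constituent contents (%) and predictions.
r2, sep = validation_stats([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8])
```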

  15. CONTRAST BETWEEN OSMIUM-FIXED AND PERMANGANATE-FIXED TOAD SPINAL GANGLIA

    PubMed Central

    Rosenbluth, Jack

    1963-01-01

    Chains of vesicles are prominent near the plasma membranes of both the neurons and satellite cells of osmium-fixed toad spinal ganglia. In permanganate-fixed specimens, however, such vesicles are absent, and in their place are continuous invaginations of the plasma membranes of these cells. The discrepancy suggests that the serried vesicles seen in osmium-fixed preparations arise through disintegration of plasma membrane invaginations, and do not represent active pinocytosis, as has been suggested previously. A second difference between ganglia fixed by these two methods is that rows of small, disconnected cytoplasmic globules occur in the sheaths of permanganate-fixed ganglia, but not in osmium-fixed samples. It is suggested that these globules arise from the breakdown of thin sheets of satellite cell cytoplasm which occur as continuous lamellae in osmium-fixed specimens. Possible mechanisms of these membrane reorganizations, and the relevance of these findings to other tissues, are discussed. PMID:13990905

  16. On the size of sports fields

    NASA Astrophysics Data System (ADS)

    Darbois Texier, Baptiste; Cohen, Caroline; Dupeux, Guillaume; Quéré, David; Clanet, Christophe

    2014-03-01

    The size of sports fields varies considerably, from a few meters for table tennis to hundreds of meters for golf. We first show that this size is mainly fixed by the range of the projectile, that is, by the aerodynamic properties of the ball (mass, surface, drag coefficient) and its maximal velocity in the game. This allows us to propose general classifications for sports played with a ball.
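The link between a ball's aerodynamics and its range can be explored with a short numerical sketch of flight under quadratic air drag (forward-Euler integration; the ball parameters below are rough soccer-ball values assumed for illustration, not taken from the paper):

```python
import math

def range_with_drag(v0=30.0, angle_deg=45.0, mass=0.43, radius=0.11,
                    cd=0.25, rho_air=1.2, g=9.81, dt=1e-4):
    """Horizontal range of a ball launched at speed v0, with drag force
    0.5 * rho_air * cd * A * |v| * v opposing the velocity."""
    area = math.pi * radius ** 2
    kdrag = 0.5 * rho_air * cd * area / mass   # drag acceleration per |v|*v
    th = math.radians(angle_deg)
    x, y = 0.0, 0.0
    vx, vy = v0 * math.cos(th), v0 * math.sin(th)
    while y >= 0.0:
        v = math.hypot(vx, vy)
        ax, ay = -kdrag * v * vx, -g - kdrag * v * vy
        x += vx * dt
        y += vy * dt
        vx += ax * dt
        vy += ay * dt
    return x

r_drag = range_with_drag()
r_vacuum = 30.0 ** 2 * math.sin(math.radians(90.0)) / 9.81  # drag-free range
```

With these assumed parameters the drag-free range of roughly 92 m shrinks substantially, illustrating why the aerodynamic properties, and not just the launch speed, set the scale of the field.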

  17. Effects of Class Size on Alternative Educational Outcomes across Disciplines

    ERIC Educational Resources Information Center

    Cheng, Dorothy A.

    2011-01-01

    This is the first study to use self-reported ratings of student learning, instructor recommendations, and course recommendations as the outcome measure to estimate class size effects, doing so across 24 disciplines. Fixed-effects models controlling for heterogeneous courses and instructors reveal that increasing enrollment has negative and…

  18. A Bayesian Nonparametric Meta-Analysis Model

    ERIC Educational Resources Information Center

    Karabatsos, George; Talbott, Elizabeth; Walker, Stephen G.

    2015-01-01

    In a meta-analysis, it is important to specify a model that adequately describes the effect-size distribution of the underlying population of studies. The conventional normal fixed-effect and normal random-effects models assume a normal effect-size population distribution, conditionally on parameters and covariates. For estimating the mean overall…

  19. Interplay among Coating Thickness, Strip Size, and Thermal and Solidification Characteristics in A356 Lost Foam Casting Alloy

    NASA Astrophysics Data System (ADS)

    Shabestari, S. G.; Divandari, M.; Ghoncheh, M. H.; Jamali, V.

    2017-10-01

    The aim of this research was evaluation of the solidification parameters of A356 alloy, e.g., dendrite arm spacing (DAS), correlation between cooling rate (CR) and DAS, hot tearing, and microstructural analysis at different coating thicknesses and strip sizes during the lost foam casting process (LFC). To achieve this goal, the DAS was measured at six coating thicknesses and six different strip sizes. In addition, thermal characteristics, such as the CR, temperatures of start and finish points of solidification, recalescence undercooling, and hot tearing susceptibility (HCSC), at five coating thicknesses were recognized from the cooling curves and their first derivative and the solid fraction curves, which have been plotted through the thermal analysis technique. The pouring temperature and strip size were fixed at 1063 K (790 °C) and 12 mm, respectively. Besides, to derive a numerical equation to predict the CR by measuring the DAS in this alloy, a microstructural evaluation was carried out on samples cast through 12-mm strip size. The results showed that both coating thickness and strip size had similar influences on the DAS, in which, by retaining one parameter at a constant value and simultaneous enhancement in the other parameter, the DAS increased significantly. Furthermore, at thinner coating layer, the higher amount of the CR was observed, which caused reduction in the temperatures of both the start and finish points of solidification. Also, increasing the CR caused a nonlinear increase in both the recalescence undercooling and the HCSC.
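Correlations between DAS and cooling rate are commonly expressed as a power law of the form DAS = A * CR^(-n); the sketch below (with invented synthetic data, not the paper's measurements, and an assumed exponent) shows how such a numerical equation can be fit by least squares in log-log space:

```python
import math

def fit_power_law(cr_values, das_values):
    """Least-squares fit of DAS = A * CR**(-n) in log-log space; returns (A, n)."""
    xs = [math.log(c) for c in cr_values]
    ys = [math.log(d) for d in das_values]
    m = len(xs)
    mx, my = sum(xs) / m, sum(ys) / m
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    return math.exp(intercept), -slope

# Synthetic data following an assumed relation DAS = 50 * CR^-0.33
crs = [0.5, 1.0, 2.0, 5.0, 10.0]
dass = [50.0 * c ** -0.33 for c in crs]
A, n = fit_power_law(crs, dass)
```

Inverting the fitted relation then predicts the cooling rate from a measured DAS, which is the direction of use described in the abstract.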

  20. Targeted or whole genome sequencing of formalin fixed tissue samples: potential applications in cancer genomics.

    PubMed

    Munchel, Sarah; Hoang, Yen; Zhao, Yue; Cottrell, Joseph; Klotzle, Brandy; Godwin, Andrew K; Koestler, Devin; Beyerlein, Peter; Fan, Jian-Bing; Bibikova, Marina; Chien, Jeremy

    2015-09-22

    Current genomic studies are limited by the poor availability of fresh-frozen tissue samples. Although formalin-fixed diagnostic samples are in abundance, they are seldom used in current genomic studies because of concern about formalin-fixation artifacts. Better characterization of these artifacts will allow the use of archived clinical specimens in translational and clinical research studies. To provide a systematic analysis of formalin-fixation artifacts on Illumina sequencing, we generated 26 DNA sequencing data sets from 13 pairs of matched formalin-fixed paraffin-embedded (FFPE) and fresh-frozen (FF) tissue samples. The results indicate a high rate of concordant calls between matched FF/FFPE pairs at reference and variant positions in three commonly used sequencing approaches (whole genome, whole exome, and targeted exon sequencing). Global mismatch rates and C·G > T·A substitutions were comparable between matched FF/FFPE samples, and discordant rates were low (<0.26%) in all samples. Finally, low-pass whole genome sequencing produces a similar pattern of copy number alterations between FF/FFPE pairs. The results from our studies suggest the potential use of diagnostic FFPE samples for cancer genomic studies to characterize and catalog variations in cancer genomes.
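The concordance rate between matched samples reduces to comparing calls at shared positions; a minimal sketch (the positions and bases below are invented for illustration):

```python
def concordance(calls_ff, calls_ffpe):
    """Fraction of shared positions called identically in the matched FF and
    FFPE samples; each argument maps genomic position -> called base."""
    shared = set(calls_ff) & set(calls_ffpe)
    if not shared:
        return 0.0
    agree = sum(1 for pos in shared if calls_ff[pos] == calls_ffpe[pos])
    return agree / len(shared)

ff = {100: "A", 101: "G", 102: "C", 103: "C"}
ffpe = {100: "A", 101: "G", 102: "T", 103: "C"}  # one C>T-style FFPE discordance
rate = concordance(ff, ffpe)
```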

  1. Preliminary CFD study of Pebble Size and its Effect on Heat Transfer in a Pebble Bed Reactor

    NASA Astrophysics Data System (ADS)

    Jones, Andrew; Enriquez, Christian; Spangler, Julian; Yee, Tein; Park, Jungkyu; Farfan, Eduardo

    2017-11-01

    In pebble bed reactors, the typical pebble diameter used is 6 cm, and within each pebble are thousands of nuclear fuel kernels. However, the efficiency of the reactor does not depend solely on the number of fuel kernels within each graphite sphere, but also on the type and motion of the coolant within the voids between the spheres and the reactor itself. In this work, a physical analysis of the pebble bed nuclear reactor's fluid dynamics is undertaken using Computational Fluid Dynamics software. The primary goal of this work is to observe the relationship between different pebble diameters in an idealized alignment and the thermal transport efficiency of the reactor. The model of our idealized arrangement will consist of eight stacked pebble columns fixed at the inlet of the reactor. Two pebble sizes, 4 cm and 6 cm, will be studied; helium will be supplied as the coolant at a fixed flow rate of 96 kg/s, and fixed pebble surface temperatures will be used. Comparisons will then be made to evaluate the efficiency of the coolant in transporting heat for the varying pebble sizes.

  2. Vertical Object Layout and Compression for Fixed Heaps

    NASA Astrophysics Data System (ADS)

    Titzer, Ben L.; Palsberg, Jens

    Research into embedded sensor networks has placed increased focus on the problem of developing reliable and flexible software for microcontroller-class devices. Languages such as nesC [10] and Virgil [20] have brought higher-level programming idioms to this lowest layer of software, thereby adding expressiveness. Both languages are marked by the absence of dynamic memory allocation, which removes the need for a runtime system to manage memory. While nesC offers code modules with statically allocated fields, arrays and structs, Virgil allows the application to allocate and initialize arbitrary objects during compilation, producing a fixed object heap for runtime. This paper explores techniques for compressing fixed object heaps with the goal of reducing the RAM footprint of a program. We explore table-based compression and introduce a novel form of object layout called vertical object layout. We provide experimental results that measure the impact on RAM size, code size, and execution time for a set of Virgil programs. Our results show that compressed vertical layout has better execution time and code size than table-based compression while achieving more than 20% heap reduction on 6 of 12 benchmark programs and 2-17% heap reduction on the remaining 6. We also present a formalization of vertical object layout and prove tight relationships between three styles of object layout.
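The core idea of vertical object layout can be sketched in a few lines (illustrative only, in Python rather than Virgil's compiler representation; the class and method names are invented): each field lives in its own contiguous array indexed by object id, so a field that turns out to be constant across the fixed heap can be elided to a single shared cell.

```python
class VerticalHeap:
    """Sketch of a 'vertical' (field-at-a-time) object layout for a heap that
    is fixed at compile time. Storing each field as a column makes per-field
    compression possible, e.g. eliding a column whose value never varies."""

    def __init__(self, field_names):
        self.columns = {f: [] for f in field_names}
        self.constants = {}   # fields elided to a single shared value
        self.count = 0

    def alloc(self, **values):
        oid = self.count
        self.count += 1
        for f in self.columns:
            self.columns[f].append(values[f])
        return oid

    def compress(self):
        """Once the heap is fixed (no more allocation), elide constant columns."""
        for f in list(self.columns):
            col = self.columns[f]
            if col and all(v == col[0] for v in col):
                self.constants[f] = col[0]
                del self.columns[f]

    def get(self, oid, field):
        if field in self.constants:
            return self.constants[field]
        return self.columns[field][oid]

    def cells_stored(self):
        return sum(len(c) for c in self.columns.values()) + len(self.constants)

heap = VerticalHeap(["kind", "flags", "payload"])
for i in range(100):
    heap.alloc(kind="sensor", flags=0, payload=i)  # two fields are constant
before = 3 * heap.count       # a horizontal layout stores every field per object
heap.compress()
after = heap.cells_stored()   # 100 payload cells + 2 shared constants
```

This toy version only captures the constant-column special case; the paper's table-based and vertical schemes compress non-constant columns too, but the access path (column lookup by field, then index by object id) is the same.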

  3. Towards a systematic assessment of errors in diffusion Monte Carlo calculations of semiconductors: Case study of zinc selenide and zinc oxide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Jaehyung; Wagner, Lucas K.; Ertekin, Elif, E-mail: ertekin@illinois.edu

    2015-12-14

    The fixed node diffusion Monte Carlo (DMC) method has attracted interest in recent years as a way to calculate properties of solid materials with high accuracy. However, the framework for the calculation of properties such as total energies, atomization energies, and excited state energies is not yet fully established. Several outstanding questions remain as to the effect of pseudopotentials, the magnitude of the fixed node error, and the size of supercell finite size effects. Here, we consider in detail the semiconductors ZnSe and ZnO and carry out systematic studies to assess the magnitude of the energy differences arising from controlled and uncontrolled approximations in DMC. The former include time step errors and supercell finite size effects for ground and optically excited states, and the latter include pseudopotentials, the pseudopotential localization approximation, and the fixed node approximation. We find that for these compounds, the errors can be controlled to good precision using modern computational resources and that quantum Monte Carlo calculations using Dirac-Fock pseudopotentials can offer good estimates of both the cohesive energy and the gap of these systems. We do however observe differences in calculated optical gaps that arise when different pseudopotentials are used.

  4. An Evaluation of the Gap Sizes of 3-Unit Fixed Dental Prostheses Milled from Sintering Metal Blocks.

    PubMed

    Jung, Jae-Kwan

    2017-01-01

    This study assessed the clinical acceptability of sintering metal-fabricated 3-unit fixed dental prostheses (FDPs) based on gap sizes. Ten specimens were prepared on research models by milling sintering metal blocks (SMB group) or by the lost-wax technique (LWC group). Gap sizes were assessed at 12 points per abutment (premolar and molar), 24 points per specimen (480 points in total in 20 specimens). The measured points were categorized as marginal, axial wall, and occlusal for assessment in a silicone replica. The silicone replica was cut through the mesiodistal and buccolingual center. The four sections were magnified at 160x, and the thickness of the light body silicone was measured to determine the gap size, and gap size means were compared. For the premolar part, the mean (standard deviation) gap size was nonsignificantly (p = 0.139) smaller in the SMB group (68.6 ± 35.6 μm) than in the LWC group (69.6 ± 16.9 μm). The mean molar gap was nonsignificantly smaller (p = 0.852) in the LWC (73.9 ± 25.6 μm) than in the SMB (78.1 ± 37.4 μm) group. The gap sizes were similar between the two groups. Because the gap sizes were within the previously proposed clinically accepted limit, FDPs prepared by sintered metal block milling are clinically acceptable.

  5. Robust organelle size extractions from elastic scattering measurements of single cells (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Cannaday, Ashley E.; Draham, Robert; Berger, Andrew J.

    2016-04-01

    The goal of this project is to estimate non-nuclear organelle size distributions in single cells by measuring angular scattering patterns and fitting them with Mie theory. Simulations have indicated that the large relative size distribution of organelles (mean:width≈2) leads to unstable Mie fits unless scattering is collected at polar angles less than 20 degrees. Our optical system has therefore been modified to collect angles down to 10 degrees. Initial validations will be performed on polystyrene bead populations whose size distributions resemble those of cell organelles. Unlike with the narrow bead distributions that are often used for calibration, we expect to see an order-of-magnitude improvement in the stability of the size estimates as the minimum angle decreases from 20 to 10 degrees. Scattering patterns will then be acquired and analyzed from single cells (EMT6 mouse cancer cells), both fixed and live, at multiple time points. Fixed cells, with no changes in organelle sizes over time, will be measured to determine the fluctuation level in estimated size distribution due to measurement imperfections alone. Subsequent measurements on live cells will determine whether there is a higher level of fluctuation that could be attributed to dynamic changes in organelle size. Studies on unperturbed cells are precursors to ones in which the effects of exogenous agents are monitored over time.

  6. An Evaluation of the Gap Sizes of 3-Unit Fixed Dental Prostheses Milled from Sintering Metal Blocks

    PubMed Central

    2017-01-01

    This study assessed the clinical acceptability of sintering metal-fabricated 3-unit fixed dental prostheses (FDPs) based on gap sizes. Ten specimens were prepared on research models by milling sintering metal blocks or by the lost-wax technique (LWC group). Gap sizes were assessed at 12 points per abutment (premolar and molar), 24 points per specimen (480 points in a total in 20 specimens). The measured points were categorized as marginal, axial wall, and occlusal for assessment in a silicone replica. The silicone replica was cut through the mesiodistal and buccolingual center. The four sections were magnified at 160x, and the thickness of the light body silicone was measured to determine the gap size, and gap size means were compared. For the premolar part, the mean (standard deviation) gap size was nonsignificantly (p = 0.139) smaller in the SMB group (68.6 ± 35.6 μm) than in the LWC group (69.6 ± 16.9 μm). The mean molar gap was nonsignificantly smaller (p = 0.852) in the LWC (73.9 ± 25.6 μm) than in the SMB (78.1 ± 37.4 μm) group. The gap sizes were similar between the two groups. Because the gap sizes were within the previously proposed clinically accepted limit, FDPs prepared by sintered metal block milling are clinically acceptable. PMID:28246605

  7. Determination of the lowest concentrations of aldehyde fixatives for completely fixing various cellular structures by real-time imaging and quantification.

    PubMed

    Zeng, Fangfa; Yang, Wen; Huang, Jie; Chen, Yuan; Chen, Yong

    2013-05-01

    The effectiveness of fixatives for fixing biological specimens has long been widely investigated. However, the lowest concentrations of fixatives needed to completely fix whole cells or various cellular structures remain unclear. Using real-time imaging and quantification, we determined the lowest concentrations of glutaraldehyde (0.001-0.005, ~0.005, 0.01-0.05, 0.01-0.05, and 0.01-0.1%) and formaldehyde/paraformaldehyde (0.01-0.05, ~0.05, 0.5-1, 1-1.5, and 0.5-1%) required to completely fix focal adhesions, cell-surface particles, stress fibers, the cell cortex, and the inner structures of human umbilical vein endothelial cells, respectively, within 20 min. With prolonged fixation times (>20 min), the concentration of fixative required to completely fix these structures will shift to even lower values. These data may help us understand and optimize fixation protocols and understand the potential effects of the small quantities of endogenously generated aldehydes in human cells. We also determined the lowest concentrations of glutaraldehyde (0.5%) and formaldehyde/paraformaldehyde (2%) required to induce cell blebbing. We found that the average number and size of the fixation-induced blebs per cell were dependent on both fixative concentration and cell spread area, but were independent of temperature. These data provide important information for understanding cell blebbing, and may help optimize the vesiculation-based technique used to isolate plasma membrane by suggesting ways of controlling the number or size of fixation-induced cell blebs.

  8. Use of synchrotron tomography to image naturalistic anatomy in insects

    NASA Astrophysics Data System (ADS)

    Socha, John J.; De Carlo, Francesco

    2008-08-01

    Understanding the morphology of anatomical structures is a cornerstone of biology. For small animals, classical methods such as histology have provided a wealth of data, but such techniques can be problematic due to destruction of the sample. More importantly, fixation and physical slicing can cause deformation of anatomy, a critical limitation when precise three-dimensional data are required. Modern techniques such as confocal microscopy, MRI, and tabletop x-ray microCT provide effective non-invasive methods, but each of these tools has limitations, including sample size constraints, resolution limits, and difficulty visualizing soft tissue. Our research group at the Advanced Photon Source (Argonne National Laboratory) studies physiological processes in insects, focusing on the dynamics of breathing and feeding. To determine the size, shape, and relative location of internal anatomy in insects, we use synchrotron microtomography at the beamline 2-BM to image structures including tracheal tubes, muscles, and gut. Because obtaining naturalistic, undeformed anatomical information is a key component of our studies, we have developed methods to image fresh and non-fixed whole animals and tissues. Although motion artifacts remain a problem, we have successfully imaged multiple species including beetles, ants, fruit flies, and butterflies. Here we discuss advances in biological imaging and highlight key findings in insect morphology.

  9. A semi-flexible model prediction for the polymerization force exerted by a living F-actin filament on a fixed wall

    NASA Astrophysics Data System (ADS)

    Pierleoni, Carlo; Ciccotti, Giovanni; Ryckaert, Jean-Paul

    2015-10-01

    We consider a single living semi-flexible filament with persistence length ℓp in chemical equilibrium with a solution of free monomers at fixed monomer chemical potential μ1 and fixed temperature T. While one end of the filament is chemically active with single monomer (de)polymerization steps, the other end is grafted normally to a rigid wall to mimic a rigid network from which the filament under consideration emerges. A second rigid wall, parallel to the grafting wall, is fixed at distance L << ℓp from the filament seed. In supercritical conditions where the monomer density ρ1 is higher than the critical density ρ1c, the filament tends to polymerize and impinges onto the second surface which, in suitable conditions (the non-escaping filament regime), stops the filament growth. We first establish the grand potential Ω(μ1, T, L) of this system treated as an ideal reactive mixture, and derive some general properties, in particular the filament size distribution and the force exerted by the living filament on the obstacle wall. We apply this formalism to the semi-flexible, living, discrete wormlike chain model with step size d and persistence length ℓp, hitting a hard wall. Explicit properties require the computation of the mean force f̄_i(L) exerted by the wall at L on a filament of fixed size i, and the associated potential W_i(L) with f̄_i(L) = -dW_i(L)/dL. By original Monte Carlo calculations for a few filament lengths over a wide range of compression, we justify the use of the weak-bending universal expressions of Gholami et al. [Phys. Rev. E 74, 041803 (2006)] over the whole non-escaping filament regime. For a filament of size i with contour length Lc = (i - 1)d, this universal form grows rapidly from zero (the uncompressed state) to the buckling value f_b(Lc, ℓp) = π²k_BT ℓp/(4Lc²) over a compression range much narrower than the size d of a monomer.
Employing this universal form for living filaments, we find that the average force exerted by a living filament on a wall at distance L is in practice independent of L and very close to the stalling force F_s^H = (k_BT/d) ln(ρ̂1) predicted by Hill, this expression being strictly valid in the rigid-filament limit. The average filament force results from the product of the cumulative size fraction x = x(L, ℓp, ρ̂1), for which the filament is in contact with the wall, times the buckling force on a filament of size Lc ≈ L, namely F_s^H = x f_b(L; ℓp). The observed L-independence of F_s^H implies that x ∝ L² for given (ℓp, ρ̂1) and x ∝ ln ρ̂1 for given (ℓp, L). At fixed (L, ρ̂1), one also has x ∝ ℓp⁻¹, which indicates that the rigid-filament limit ℓp → ∞ is a singular limit in which an infinite force has zero weight. Finally, we derive the physically relevant threshold for filament escaping in the case of actin filaments.
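The two closed-form forces appearing in this abstract, Hill's stalling force and the weak-bending buckling force, are easy to evaluate numerically. The sketch below uses rough literature-scale actin parameters (k_BT ≈ 4.1 pN·nm at room temperature, d ≈ 2.7 nm per added monomer, ℓp ≈ 10 μm); these values are assumptions for illustration, not taken from the paper:

```python
import math

def hill_stalling_force(rho_hat, kBT=4.1, d=2.7):
    """Hill's stalling force F_s^H = (k_B T / d) * ln(rho_hat), in pN,
    with k_B T in pN*nm and d the length added per monomer in nm."""
    return (kBT / d) * math.log(rho_hat)

def buckling_force(Lc, lp=10000.0, kBT=4.1):
    """Weak-bending buckling force f_b = pi^2 k_B T lp / (4 Lc^2), in pN,
    for a grafted filament of contour length Lc (nm)."""
    return math.pi ** 2 * kBT * lp / (4.0 * Lc ** 2)

f_stall = hill_stalling_force(rho_hat=10.0)  # supercritical, rho/rho_c = 10
f_buck = buckling_force(Lc=100.0)            # filament spanning a 100 nm gap
```

With these assumed numbers the stalling force is about 3.5 pN against a buckling force of about 10 pN at L = 100 nm, so in this toy setting the contact fraction would be x = f_stall/f_buck ≈ 0.35, illustrating how the average force can stay pinned at F_s^H even though f_b itself varies as L⁻².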

  10. Performance of intraclass correlation coefficient (ICC) as a reliability index under various distributions in scale reliability studies.

    PubMed

    Mehta, Shraddha; Bastero-Caballero, Rowena F; Sun, Yijun; Zhu, Ray; Murphy, Diane K; Hardas, Bhushan; Koch, Gary

    2018-04-29

    Many published scale validation studies determine inter-rater reliability using the intra-class correlation coefficient (ICC). However, the use of this statistic must consider its advantages, limitations, and applicability. This paper evaluates how interaction of subject distribution, sample size, and levels of rater disagreement affects ICC and provides an approach for obtaining relevant ICC estimates under suboptimal conditions. Simulation results suggest that for a fixed number of subjects, ICC from the convex distribution is smaller than ICC for the uniform distribution, which in turn is smaller than ICC for the concave distribution. The variance component estimates also show that the dissimilarity of ICC among distributions is attributed to the study design (ie, distribution of subjects) component of subject variability and not the scale quality component of rater error variability. The dependency of ICC on the distribution of subjects makes it difficult to compare results across reliability studies. Hence, it is proposed that reliability studies should be designed using a uniform distribution of subjects because of the standardization it provides for representing objective disagreement. In the absence of uniform distribution, a sampling method is proposed to reduce the non-uniformity. In addition, as expected, high levels of disagreement result in low ICC, and when the type of distribution is fixed, any increase in the number of subjects beyond a moderately large specification such as n = 80 does not have a major impact on ICC. Copyright © 2018 John Wiley & Sons, Ltd.
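A one-way random-effects ICC is computed from the ANOVA mean squares; the sketch below (a generic estimator, not the authors' simulation code, with invented toy rating sets) makes the dependence on between-subject versus within-subject variability concrete:

```python
def icc_oneway(ratings):
    """ICC(1) from a one-way random-effects ANOVA. `ratings` is a list of
    per-subject lists, each containing the same number of ratings k."""
    n = len(ratings)                      # number of subjects
    k = len(ratings[0])                   # ratings per subject
    grand = sum(sum(r) for r in ratings) / (n * k)
    means = [sum(r) / k for r in ratings]
    msb = k * sum((m - grand) ** 2 for m in means) / (n - 1)   # between-subject
    msw = sum((x - m) ** 2
              for r, m in zip(ratings, means) for x in r) / (n * (k - 1))
    var_subject = (msb - msw) / k         # subject variance component
    return var_subject / (var_subject + msw)

icc_perfect = icc_oneway([[1, 1], [2, 2], [3, 3]])  # no rater disagreement
icc_noisy = icc_oneway([[1, 2], [2, 1], [3, 3]])    # some disagreement
```

As the abstract notes, the subject variance component in the numerator reflects the study design (the distribution of subjects), while the error term msw reflects rater disagreement; changing either moves the ICC.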

  11. Development of enteric-coated fixed dose combinations of amorphous solid dispersions of ezetimibe and lovastatin: Investigation of formulation and process parameters.

    PubMed

    Riekes, Manoela K; Dereymaker, Aswin; Berben, Philippe; Augustijns, Patrick; Stulzer, Hellen K; Van den Mooter, Guy

    2017-03-30

    Enteric-coated fixed-dose combinations of ezetimibe and lovastatin were prepared by fluid bed coating, aiming to avoid the acidic conversion of lovastatin to its hydroxyacid derivative. In a two-step process, sucrose beads were layered with a glass solution of ezetimibe, lovastatin and Soluplus®, then top-coated with an enteric layer. The impact of different bead sizes, enteric polymers (Eudragit L100® and Eudragit L100-55®) and coating times was investigated. Samples were evaluated by X-ray diffraction, scanning electron microscopy, laser diffraction and in vitro studies in 0.1 M HCl and phosphate buffer pH 6.8. Results showed that smaller beads tend to agglomerate and release was jeopardized in acidic conditions, most likely due to an irregular coating layer. Eudragit L100-55® required longer processing, but thinner coating layers provided lower drug release. Both polymers showed low drug release in an acidic environment and fast release at pH 6.8. The off-line measurement of the coating thickness determined the ideal coating time as 15 and 30 min for Eudragit L100-55®- and Eudragit L100®-based samples, respectively. Both compounds were molecularly dispersed in Soluplus®, and Eudragit L100® formulations showed concave pores on the surface, presenting higher drug release in acidic conditions. Stability studies after 6 months showed unaltered physical properties and drug release. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Biomedical analysis of formalin-fixed, paraffin-embedded tissue samples: The Holy Grail for molecular diagnostics.

    PubMed

    Donczo, Boglarka; Guttman, Andras

    2018-06-05

    More than a century ago in 1893, a revolutionary idea about fixing biological tissue specimens was introduced by Ferdinand Blum, a German physician. Since then, a plethora of fixation methods have been investigated and used. Formalin fixation with paraffin embedment became the most widely used types of fixation and preservation method, due to its proper architectural conservation of tissue structures and cellular shape. The huge collection of formalin-fixed, paraffin-embedded (FFPE) sample archives worldwide holds a large amount of unearthed information about diseases that could be the Holy Grail in contemporary biomarker research utilizing analytical omics based molecular diagnostics. The aim of this review is to critically evaluate the omics options for FFPE tissue sample analysis in the molecular diagnostics field. Copyright © 2018. Published by Elsevier B.V.

  13. THE ROLE OF INHALATORY CORTICOSTEROIDS AND LONG ACTING β2 AGONISTS IN THE TREATMENT OF PATIENTS ADMITTED TO HOSPITAL DUE TO ACUTE EXACERBATIONS OF CHRONIC OBSTRUCTIVE PULMONARY DISEASE (AECOPD)

    PubMed Central

    Mehić, Bakir

    2007-01-01

    There is a question about the role of a fixed combination of inhaled corticosteroids and long-acting β2 agonists in the treatment of patients admitted to hospital due to AECOPD. The objective of this study was to determine the frequency of etiologic factors of AECOPD and to investigate the length of recovery time and the exacerbation-free time in patients treated with fixed-combination inhalers containing F/S versus patients not treated with this combination. This was a retrospective-prospective, randomized, clinical study with a sample size of 70 patients admitted to hospital due to AECOPD type I or II. Patients were randomized into two groups. The prospective group of 36 patients was treated with oral or parenteral corticosteroids for 7-14 days, other medications, and fixed-combination inhalers containing F/S. The second, retrospective group of 34 patients was treated with oral or parenteral corticosteroids for 7-14 days (at a time when fixed-combination inhalers containing F/S were not available) and other medications. In both groups (prospective and retrospective), the most frequent etiological factor of AECOPD was bacterial infection, followed by viral infection, other factors, and congestive heart failure. The average recovery time for symptoms of AECOPD was statistically significantly shorter in the group treated with fixed-combination inhalers containing F/S (prospective group) than in the group treated without this fixed combination. There were also significant differences in the average number of days needed for recovery in subgroups of patients by etiological factor of AECOPD, except in cases of AECOPD onset due to congestive heart failure. The average exacerbation-free time in patients treated with fixed-combination inhalers was statistically significantly longer than in the group of patients not treated with this combination.
    This study demonstrated the presence of pathogenic bacteria in 53% of our patients hospitalized due to AECOPD. In 26% of patients the exacerbation was of viral origin, and 11% of cases had congestive heart failure. The average recovery time for non-viral AECOPD was 14.8 days and for exacerbations of viral origin 27.4 days. The average exacerbation-free time in patients treated with fixed-combination inhalers containing F/S was statistically significantly longer than in the group of patients not treated with this combination. There were no statistically significant differences in the average number of exacerbations during the year between the observed groups. PMID:18039195

  14. Two Different Views on the World Around Us: The World of Uniformity versus Diversity

    PubMed Central

    Nayakankuppam, Dhananjay

    2016-01-01

    We propose that when individuals believe in fixed traits of personality (entity theorists), they are likely to expect a world of “uniformity.” As such, they easily infer a population statistic from a small sample of data with confidence. In contrast, individuals who believe in malleable traits of personality (incremental theorists) are likely to presume a world of “diversity,” such that they “hesitate” to infer a population statistic from a similarly sized sample. In four laboratory experiments, we found that compared to incremental theorists, entity theorists estimated a population mean from a sample with a greater level of confidence (Studies 1a and 1b), expected more homogeneity among the entities within a population (Study 2), and perceived an extreme value to be more indicative of an outlier (Study 3). These results suggest that individuals are likely to use their implicit self-theory orientations (entity theory versus incremental theory) to see a population in general as being constituted of either homogeneous or heterogeneous entities. PMID:27977788

  15. Microstructure study of direct laser fabricated Ti alloys using powder and wire

    NASA Astrophysics Data System (ADS)

    Wang, Fude; Mei, J.; Wu, Xinhua

    2006-11-01

    A compositionally graded material has been fabricated using direct laser fabrication (DLF). Two types of feedstock were fed simultaneously into the laser focal point: a burn-resistant (BurTi) alloy Ti-25V-15Cr-2Al-0.2C powder and a Ti-6Al-4V wire. The local composition of the alloy was changed by altering the ratio of powder to wire, varying the feed rate of the powder while maintaining a fixed wire feed rate. For the range of compositions between about 20% and 100% BurTi, only the beta phase was observed, and the composition and lattice parameter varied monotonically. The grain size was found to be much finer in these functionally graded samples than in laser-fabricated Ti64. Some samples were made using the wire feed alone, where it was found that the microstructure differs from that found when using powder feed alone. The results are discussed in terms of the power requirements for laser fabrication of powder and wire samples.

  16. A simple procedure for the extraction of DNA from long-term formalin-preserved brain tissues for the detection of EBV by PCR.

    PubMed

    Hassani, Asma; Khan, Gulfaraz

    2015-12-01

    Long-term formalin-fixed brain tissues are potentially an important source of material for molecular studies. Ironically, very few protocols have been published describing DNA extraction from such material for use in PCR analysis. In our attempt to investigate the role of Epstein-Barr virus (EBV) in the pathogenesis of multiple sclerosis (MS), extracting PCR-quality DNA from brain samples fixed in formalin for 2-22 years proved to be very difficult and challenging. As expected, DNA extracted from these samples was not only of poor quality and quantity but, more importantly, was frequently found to be non-amplifiable due to the presence of PCR inhibitors. Here, we describe a simple and reproducible procedure for extracting DNA using a modified proteinase K and phenol-chloroform methodology. Central to this protocol is the thorough pre-digestion washing of the tissues in PBS, extensive digestion with proteinase K in a low-SDS buffer, and the use of a low NaCl concentration during DNA precipitation. The optimized protocol was used to extract DNA from the meninges of 26 MS and 6 non-MS cases. Although the quality of DNA from these samples was generally poor, small amplicons (100-200 nucleotides) of the housekeeping gene β-globin could be reliably amplified from all the cases. PCR for EBV revealed positivity in 35% (9/26) of MS cases, but 0/6 non-MS cases. These findings indicate that the method described here is suitable for PCR detection of viral sequences in long-term formalin-preserved brain tissues. Our findings also support a possible role for EBV in the pathogenesis of MS. Copyright © 2015 Elsevier Inc. All rights reserved.

  17. Assessment of Different Sampling Methods for Measuring and Representing Macular Cone Density Using Flood-Illuminated Adaptive Optics.

    PubMed

    Feng, Shu; Gale, Michael J; Fay, Jonathan D; Faridi, Ambar; Titus, Hope E; Garg, Anupam K; Michaels, Keith V; Erker, Laura R; Peters, Dawn; Smith, Travis B; Pennesi, Mark E

    2015-09-01

    To describe a standardized flood-illuminated adaptive optics (AO) imaging protocol suitable for the clinical setting and to assess sampling methods for measuring cone density. Cone density was calculated following three measurement protocols: 50 × 50-μm sampling window values every 0.5° along the horizontal and vertical meridians (fixed-interval method), the mean density of expanding 0.5°-wide arcuate areas in the nasal, temporal, superior, and inferior quadrants (arcuate mean method), and the peak cone density of a 50 × 50-μm sampling window within expanding arcuate areas near the meridian (peak density method). Repeated imaging was performed in nine subjects to determine intersession repeatability of cone density. Cone density montages could be created for 67 of the 74 subjects. Image quality was determined to be adequate for automated cone counting for 35 (52%) of the 67 subjects. We found that cone density varied with different sampling methods and regions tested. In the nasal and temporal quadrants, peak density most closely resembled histological data, whereas the arcuate mean and fixed-interval methods tended to underestimate the density compared with histological data. However, in the inferior and superior quadrants, arcuate mean and fixed-interval methods most closely matched histological data, whereas the peak density method overestimated cone density compared with histological data. Intersession repeatability testing showed that repeatability was greatest when sampling by arcuate mean and lowest when sampling by fixed interval. We show that different methods of sampling can significantly affect cone density measurements. Therefore, care must be taken when interpreting cone density results, even in a normal population.
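
    The three sampling protocols can be caricatured on a toy density map. The sketch below is not the AO analysis pipeline; the map, grid step, and band are hypothetical stand-ins for the 50 × 50-μm windows, the fixed 0.5° intervals, and an arcuate region.

```python
import numpy as np

rng = np.random.default_rng(1)
# hypothetical cone-density map (cones per mm^2), one entry per candidate
# 50 x 50-um sampling window over a small retinal patch
density_map = rng.normal(12000, 800, size=(40, 40))

def fixed_interval_density(dmap, step=8):
    """Mean of single windows taken on a regular grid (fixed-interval method)."""
    return dmap[::step, ::step].mean()

def arcuate_mean_density(dmap, band):
    """Mean over a whole band of windows (stand-in for the arcuate mean method)."""
    return dmap[band, :].mean()

def peak_window_density(dmap):
    """Highest single-window value (peak density method)."""
    return dmap.max()

fixed_est = fixed_interval_density(density_map)
arcuate_est = arcuate_mean_density(density_map, slice(0, 5))
peak_est = peak_window_density(density_map)
# the peak method necessarily reads highest, which is one reason it can
# overestimate density relative to window-averaging methods
```

    Even this toy version shows why the choice of sampling method matters: the peak estimator is an order statistic and is biased upward relative to the two averaging estimators over the same data.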

  18. Assessment of Different Sampling Methods for Measuring and Representing Macular Cone Density Using Flood-Illuminated Adaptive Optics

    PubMed Central

    Feng, Shu; Gale, Michael J.; Fay, Jonathan D.; Faridi, Ambar; Titus, Hope E.; Garg, Anupam K.; Michaels, Keith V.; Erker, Laura R.; Peters, Dawn; Smith, Travis B.; Pennesi, Mark E.

    2015-01-01

    Purpose To describe a standardized flood-illuminated adaptive optics (AO) imaging protocol suitable for the clinical setting and to assess sampling methods for measuring cone density. Methods Cone density was calculated following three measurement protocols: 50 × 50-μm sampling window values every 0.5° along the horizontal and vertical meridians (fixed-interval method), the mean density of expanding 0.5°-wide arcuate areas in the nasal, temporal, superior, and inferior quadrants (arcuate mean method), and the peak cone density of a 50 × 50-μm sampling window within expanding arcuate areas near the meridian (peak density method). Repeated imaging was performed in nine subjects to determine intersession repeatability of cone density. Results Cone density montages could be created for 67 of the 74 subjects. Image quality was determined to be adequate for automated cone counting for 35 (52%) of the 67 subjects. We found that cone density varied with different sampling methods and regions tested. In the nasal and temporal quadrants, peak density most closely resembled histological data, whereas the arcuate mean and fixed-interval methods tended to underestimate the density compared with histological data. However, in the inferior and superior quadrants, arcuate mean and fixed-interval methods most closely matched histological data, whereas the peak density method overestimated cone density compared with histological data. Intersession repeatability testing showed that repeatability was greatest when sampling by arcuate mean and lowest when sampling by fixed interval. Conclusions We show that different methods of sampling can significantly affect cone density measurements. Therefore, care must be taken when interpreting cone density results, even in a normal population. PMID:26325414

  19. Investigations on effects of the hole size to fix electrodes and interconnection lines in polydimethylsiloxane

    NASA Astrophysics Data System (ADS)

    Behkami, Saber; Frounchi, Javad; Ghaderi Pakdel, Firouz; Stieglitz, Thomas

    2017-11-01

    Translational research in bioelectronic medicine and neural implants often relies on established material assemblies made of silicone rubber (polydimethylsiloxane, PDMS) and precious metals. Longevity of the compound is of utmost importance for implantable devices in therapeutic and rehabilitation applications. Therefore, secure mechanical fixation can be used in addition to chemical bonding mechanisms to interlock PDMS substrate and insulation layers with metal sheets for interconnection lines and electrodes. One of the best ways to fix metal lines and electrodes in PDMS is to design holes in the electrode rims that allow direct interconnection between the top- and bottom-layer silicone. Hence, the layouts and sizes of holes (up to six) that provide sufficient stability against lateral and vertical forces were investigated for line electrodes simulated and fabricated with different layouts, hole numbers, sizes and materials. Best stability was obtained with a single central hole of radius 100, 72 and 62 µm, respectively, in aluminum, platinum and MP35N foil line electrodes of 400 × 500 µm² size and 20 µm thickness. The study showed that the hole size that best immobilizes a line electrode (of thickness less than 30 µm) with a single central hole is proportional to the inverse of the Young's modulus of the material used. An array of line electrodes was therefore designed and fabricated to study this effect, and experimental results were compared with simulation data. Subsequently, an approximation curve was generated as a design rule to propose the best radius to fix line electrodes for material thicknesses between 10 and 200 µm using PDMS as the substrate material.

  20. The local surface plasmon resonance property and refractive index sensitivity of metal elliptical nano-ring arrays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Weihua, E-mail: linwh-whu@hotmail.com; Wang, Qian; Dong, Anhua

    2014-11-15

    In this paper, we systematically investigate the optical properties and refractive index sensitivity (RIS) of metal elliptical nano-rings (MENR) arranged in a rectangular lattice by the finite-difference time-domain method. The eight kinds of MENRs considered are divided into three classes, namely those fixed at the same outer size, at the same inner size, and at the same middle size. All MENR arrays show a bonding-mode local surface plasmon resonance (LSPR) peak in the near-infrared region under longitudinal and transverse polarizations, and lattice-diffraction-enhanced LSPR peaks emerge when the LSPR peak wavelength (LSPRPW) matches the effective lattice constant of the array. The LSPRPW is determined by the charge moving-path length, the parallel and cross interactions induced by the stably distributed charges, and the inter-attraction of the moving charges. High RIS can be achieved by small-particle-distance arrays composed of MENRs with big inner size and small ring-width. On the other hand, for a MENR array, the comprehensive RIS (including RIS and figure of merit) under transverse polarization is superior to that under longitudinal polarization. Furthermore, on the condition that the compared arrays are fixed at the same lattice constant, the RIS of big ring-width MENR arrays can exceed that of small ring-width MENR arrays only when the compared arrays have a relatively small lattice constant and are composed of MENRs fixed at the same inner size. Meanwhile, the LSPRPW of the former MENR arrays is also larger than that of the latter. Our systematic results may help experimentalists working with this type of system.

  1. Test plan for evaluating the operational performance of the prototype nested, fixed-depth fluidic sampler

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    REICH, F.R.

    The PHMC will provide Low Activity Wastes (LAW) tank wastes for final treatment by a privatization contractor from two double-shell feed tanks, 241-AP-102 and 241-AP-104. Concerns about the inability of the baseline "grab" sampling to provide large-volume samples within time constraints have led to the development of a nested, fixed-depth sampling system. This sampling system will provide large-volume, representative samples without the environmental, radiation-exposure, and sample-volume impacts of the current baseline "grab" sampling method. A plan has been developed for the cold testing of this nested, fixed-depth sampling system with simulant materials. The sampling system will fill the 500-ml bottles and provide inner packaging to interface with the Hanford Site's cask shipping systems (PAS-1 and/or "safe-send"). The sampling system will provide a waste stream that will be used for on-line, real-time measurements with an at-tank analysis system. The cold tests evaluate the performance and the ability to provide samples that are representative of the tanks' content within a 95 percent confidence interval, to sample while mixing pumps are operating, to provide large sample volumes (1-15 liters) within a short time interval, to sample supernatant wastes with over 25 wt% solids content, to recover from precipitation- and settling-based plugging, and the potential to operate over the 20-year expected time span of the privatization contract.

  2. 3D-HST+CANDELS: The Evolution of the Galaxy Size-Mass Distribution since z = 3

    NASA Astrophysics Data System (ADS)

    van der Wel, A.; Franx, M.; van Dokkum, P. G.; Skelton, R. E.; Momcheva, I. G.; Whitaker, K. E.; Brammer, G. B.; Bell, E. F.; Rix, H.-W.; Wuyts, S.; Ferguson, H. C.; Holden, B. P.; Barro, G.; Koekemoer, A. M.; Chang, Yu-Yen; McGrath, E. J.; Häussler, B.; Dekel, A.; Behroozi, P.; Fumagalli, M.; Leja, J.; Lundgren, B. F.; Maseda, M. V.; Nelson, E. J.; Wake, D. A.; Patel, S. G.; Labbé, I.; Faber, S. M.; Grogin, N. A.; Kocevski, D. D.

    2014-06-01

    Spectroscopic+photometric redshifts, stellar mass estimates, and rest-frame colors from the 3D-HST survey are combined with structural parameter measurements from CANDELS imaging to determine the galaxy size-mass distribution over the redshift range 0 < z < 3. Separating early- and late-type galaxies on the basis of star-formation activity, we confirm that early-type galaxies are on average smaller than late-type galaxies at all redshifts, and we find a significantly different rate of average size evolution at fixed galaxy mass, with fast evolution for the early-type population, R_eff ∝ (1 + z)^-1.48, and moderate evolution for the late-type population, R_eff ∝ (1 + z)^-0.75. The large sample size and dynamic range in both galaxy mass and redshift, in combination with the high fidelity of our measurements due to the extensive use of spectroscopic data, not only fortify previous results but also enable us to probe beyond simple average galaxy size measurements. At all redshifts the slope of the size-mass relation is shallow, R_eff ∝ M_*^0.22, for late-type galaxies with stellar mass >3 × 10^9 M_⊙, and steep, R_eff ∝ M_*^0.75, for early-type galaxies with stellar mass >2 × 10^10 M_⊙. The intrinsic scatter is ≲0.2 dex for all galaxy types and redshifts. For late-type galaxies, the logarithmic size distribution is not symmetric but is skewed toward small sizes: at all redshifts and masses, a tail of small late-type galaxies exists that overlaps in size with the early-type galaxy population. The number density of massive (~10^11 M_⊙), compact (R_eff < 2 kpc) early-type galaxies increases from z = 3 to z = 1.5-2 and then strongly decreases at later cosmic times.
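
    The quoted scaling relations are simple power laws, so their consequences are easy to evaluate numerically. The sketch below assumes a separable form R_eff = norm × (M/M0)^α × (1+z)^β; only the exponents come from the abstract, while the normalizations and pivot mass are illustrative assumptions.

```python
def mean_size_kpc(mass_msun, z, norm_kpc, alpha, beta, m0=5e10):
    """R_eff = norm * (M / M0)^alpha * (1 + z)^beta; only the exponents
    alpha and beta come from the abstract, norm and m0 are assumed."""
    return norm_kpc * (mass_msun / m0) ** alpha * (1 + z) ** beta

early = dict(norm_kpc=4.0, alpha=0.75, beta=-1.48)   # early-type exponents
late = dict(norm_kpc=5.0, alpha=0.22, beta=-0.75)    # late-type exponents

# size at z = 2 relative to z = 0 at fixed stellar mass (1e11 Msun);
# the mass terms cancel, leaving (1 + z)^beta
shrink_early = mean_size_kpc(1e11, 2.0, **early) / mean_size_kpc(1e11, 0.0, **early)
shrink_late = mean_size_kpc(1e11, 2.0, **late) / mean_size_kpc(1e11, 0.0, **late)
# early types are roughly 5x smaller at z = 2, late types only ~2.3x smaller
```

    This makes concrete what "fast" versus "moderate" evolution means: at fixed mass, the early-type exponent of -1.48 implies a factor of about five in size between z = 2 and z = 0, against a factor of about 2.3 for late types.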

  3. Effects of microgravity on the structural organization of the Brassica rapa photosynthetic apparatus

    NASA Astrophysics Data System (ADS)

    Adamchuk, N.; Kordyum, E.; Guikema, J.

    Leaf mesophyll cells of 13- and 15-day-old Brassica rapa plants grown on board the space shuttle Columbia (STS-87) and in the ground control were investigated using light and electron microscopy. The 13-day-old plants were fixed on orbit and the 15-day-old plants were fixed after landing. Essential differences in the quantitative anatomical and ultrastructural characteristics of the leaf mesophyll were found between the spaceflight and ground-control variants. Both the volume of palisade parenchyma cells and the number of chloroplasts in those cells increased in spaceflight samples. Simultaneously, chloroplast size decreased while the relative volume of stromal thylakoids, starch grains and plastoglobuli increased. An increase in stromal thylakoid length was also noted. At the same time, both the total length of thylakoids in grana and the grana number diminished in spaceflight. In addition, the interthylakoid space could be expanded and the thylakoid length was more variable in chloroplast grana in microgravity, which correlated with a shrinkage of thylakoids in granal stacks. The data obtained are discussed in relation to both the sensitivity of the photosynthetic apparatus to gravity and its capacity to adapt to microgravity.

  4. SRF test facility for the superconducting LINAC ``RAON'' — RRR property and e-beam welding

    NASA Astrophysics Data System (ADS)

    Jung, Yoochul; Hyun, Myungook; Joo, Jongdae; Joung, Mijoung

    2015-02-01

    Equipment such as a vacuum furnace, high pressure rinse (HPR), eddy current test (ECT) and buffered chemical polishing (BCP) is installed in the superconducting radio frequency (SRF) test facility. Three different sizes of cryostats (diameters of 600 mm for a quarter wave resonator (QWR), 900 mm for a half wave resonator (HWR), and 1200 mm for single spoke resonators 1&2 (SSR 1&2)) for vertical RF tests are installed for testing cavities. We confirmed that as-received niobium sheets (ASTM B393, RRR300) had good electrical properties because they showed average residual resistance ratio (RRR) values higher than 300. However, serious RRR degradation occurred after joining two pieces of Nb by e-beam welding: the average RRR value of the samples was ~179, only ~60% of the as-received RRR value. From various e-beam welding experiments in which the welding current and speed were changed at a fixed welding voltage, we confirmed that good welding results were obtained at a welding current of 53 mA and a welding speed of 20 mm/s at a fixed welding voltage of 150 kV.

  5. Optimized manual and automated recovery of amplifiable DNA from tissues preserved in buffered formalin and alcohol-based fixative.

    PubMed

    Duval, Kristin; Aubin, Rémy A; Elliott, James; Gorn-Hondermann, Ivan; Birnboim, H Chaim; Jonker, Derek; Fourney, Ron M; Frégeau, Chantal J

    2010-02-01

    Archival tissue preserved in fixative constitutes an invaluable resource for histological examination, molecular diagnostic procedures and for DNA typing analysis in forensic investigations. However, available material is often limited in size and quantity. Moreover, recovery of DNA is often severely compromised by the presence of covalent DNA-protein cross-links generated by formalin, the most prevalent fixative. We describe the evaluation of buffer formulations, sample lysis regimens and DNA recovery strategies and define optimized manual and automated procedures for the extraction of high quality DNA suitable for molecular diagnostics and genotyping. Using a 3-step enzymatic digestion protocol carried out in the absence of dithiothreitol, we demonstrate that DNA can be efficiently released from cells or tissues preserved in buffered formalin or the alcohol-based fixative GenoFix. This preparatory procedure can then be integrated to traditional phenol/chloroform extraction, a modified manual DNA IQ or automated DNA IQ/Te-Shake-based extraction in order to recover DNA for downstream applications. Quantitative recovery of high quality DNA was best achieved from specimens archived in GenoFix and extracted using magnetic bead capture.

  6. The allele-frequency spectrum in a decoupled Moran model with mutation, drift, and directional selection, assuming small mutation rates.

    PubMed

    Vogl, Claus; Clemente, Florian

    2012-05-01

    We analyze a decoupled Moran model with haploid population size N, a biallelic locus under mutation and drift with scaled forward and backward mutation rates θ(1)=μ(1)N and θ(0)=μ(0)N, and directional selection with scaled strength γ=sN. With small scaled mutation rates θ(0) and θ(1), which is appropriate for single nucleotide polymorphism data in highly recombining regions, we derive a simple approximate equilibrium distribution for polymorphic alleles with a constant of proportionality. We also put forth an even simpler model, where all mutations originate from monomorphic states. Using this model we derive the sojourn times, conditional on the ancestral and fixed allele, and under equilibrium the distributions of fixed and polymorphic alleles and fixation rates. Furthermore, we also derive the distribution of small samples in the diffusion limit and provide convenient recurrence relations for calculating this distribution. This enables us to give formulas analogous to the Ewens-Watterson estimator of θ for biased mutation rates and selection. We apply this theory to a polymorphism dataset of fourfold degenerate sites in Drosophila melanogaster. Copyright © 2012 Elsevier Inc. All rights reserved.
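
    For intuition about allele-frequency dynamics of this kind, a generic two-allele Moran simulation can be run. The sketch below is not the authors' decoupled model or its analytic approximations: it is a plain birth-death Moran scheme with fitness-weighted reproduction and two-way mutation, and all parameter values are illustrative.

```python
import random

def moran_trajectory(n=200, steps=60_000, s=0.2, mu01=1e-4, mu10=1e-4,
                     p0=0.5, seed=42):
    """Frequency of allele 1 under drift, directional selection
    (relative fitness 1 + s) and two-way mutation, one birth-death
    event per step (Moran dynamics)."""
    rng = random.Random(seed)
    i = int(p0 * n)              # current copies of allele 1
    freqs = []
    for _ in range(steps):
        p = i / n
        # birth: parent chosen with fitness weighting, offspring may mutate
        birth_is_1 = rng.random() < (p * (1 + s)) / (p * (1 + s) + (1 - p))
        if birth_is_1:
            birth_is_1 = rng.random() >= mu10   # 1 -> 0 back-mutation
        else:
            birth_is_1 = rng.random() < mu01    # 0 -> 1 mutation
        death_is_1 = rng.random() < p           # death: uniform over population
        i = min(max(i + int(birth_is_1) - int(death_is_1), 0), n)
        freqs.append(i / n)
    return freqs

freqs = moran_trajectory()
# with s > 0 the favored allele sweeps toward fixation, held off 1.0
# only by recurrent back-mutation
```

    With small scaled mutation rates, as in the paper's regime, the population spends most of its time at or near the monomorphic boundaries, which is what motivates the authors' boundary-mutation simplification.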

  7. Adsorption of Methyl Tertiary Butyl Ether on Granular Zeolites: Batch and Column Studies

    PubMed Central

    Abu-Lail, Laila; Bergendahl, John A.; Thompson, Robert W.

    2010-01-01

    Methyl tertiary butyl ether (MTBE) has been shown to be readily removed from water with powdered zeolites, but the passage of water through fixed beds of very small powdered zeolites produces high friction losses not encountered in flow through larger-sized granular materials. In this study, equilibrium and kinetic adsorption of MTBE onto granular zeolites, a coconut shell granular activated carbon (CS-1240), and a commercial carbon adsorbent (CCA) sample was evaluated. In addition, the effect of natural organic matter (NOM) on MTBE adsorption was evaluated. Batch adsorption experiments determined that ZSM-5 was the most effective granular zeolite for MTBE adsorption. Further equilibrium and kinetic experiments verified that granular ZSM-5 is superior to CS-1240 and CCA in removing MTBE from water. No competitive-adsorption effects between NOM and MTBE were observed for adsorption to granular ZSM-5 or CS-1240; however, there was competition between NOM and MTBE for adsorption onto the CCA granules. Fixed-bed adsorption experiments for longer run times were performed using granular ZSM-5. The bed depth service time (BDST) model was used to analyze the breakthrough data. PMID:20153106
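
    The bed depth service time (BDST) model mentioned at the end relates breakthrough time linearly to bed depth. A minimal sketch of its standard form follows; all parameter values are invented for illustration and are not fitted to the paper's data.

```python
import math

def bdst_service_time(depth_cm, n0, c0, v, k, cb):
    """Service time t until breakthrough concentration cb for bed depth Z
    (standard BDST form): t = (N0 / (C0 * v)) * Z - ln(C0/Cb - 1) / (k * C0)."""
    return (n0 / (c0 * v)) * depth_cm - math.log(c0 / cb - 1.0) / (k * c0)

# invented, order-of-magnitude parameters: n0 = bed adsorption capacity (mg/L),
# c0 / cb = inlet and breakthrough concentrations (mg/L),
# v = linear flow rate (cm/h), k = rate constant (L/(mg h))
t_10cm = bdst_service_time(10, n0=5000, c0=1.0, v=60, k=0.05, cb=0.1)
t_20cm = bdst_service_time(20, n0=5000, c0=1.0, v=60, k=0.05, cb=0.1)
# service time grows linearly with bed depth at slope n0 / (c0 * v)
```

    In practice the two coefficients are obtained by regressing measured breakthrough times against bed depth, which is how the model is used to analyze column data like the ZSM-5 experiments above.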

  8. Growth fraction in non-small cell lung cancer estimated by proliferating cell nuclear antigen and comparison with Ki-67 labeling and DNA flow cytometry data.

    PubMed Central

    Fontanini, G.; Pingitore, R.; Bigini, D.; Vignati, S.; Pepe, S.; Ruggiero, A.; Macchiarini, P.

    1992-01-01

    Results generated by immunohistochemical staining with PC10, a new monoclonal antibody recognizing PCNA (a nuclear protein associated with cell proliferation) in formalin-fixed and paraffin-embedded tissue, were compared with those of Ki-67 labeling and DNA flow cytometry in 47 consecutive non-small cell lung cancer (NSCLC) cases. PCNA reactivity was observed in all samples and confined to the nuclei of cancer cells. Its frequency ranged from 0 to 80% (37.7 +/- 23.6), and larger, early-stage and DNA-aneuploid tumors expressed a significantly higher number of PCNA-reactive cells. The PCNA and Ki-67 labeling rates were closely correlated (r = 0.383, P = 0.009). By flow cytometry, we observed a good correlation between PCNA labeling and the S-phase fraction (r = 0.422, P = 0.0093) and G1 phase (r = 0.303, P = 0.051) of the cell cycle. Results indicate that PCNA labeling with PC10 is a simple method for assessing proliferative activity in formalin-fixed, paraffin-embedded tissue of NSCLC and correlates well with Ki-67 labeling and the S-phase fraction of the cell cycle. Images: Figure 2. PMID:1361306

  9. Analysis of down wood volume and percent ground cover for the Missouri Ozark Forest Ecosystem Project

    Treesearch

    Laura A. Herbeck

    2000-01-01

    Volume and percent ground cover of down wood were estimated on the MOFEP sites from two separate sampling inventories, line transects and fixed-area plots. Line transects were used to sample down wood in the 1990-91 and 1994-95 inventories and fixed-area plots were used in an additional inventory in 1995. Line transect inventories estimated a range in ground cover...

  10. The effect of Au amount on size uniformity of self-assembled Au nanoparticles

    NASA Astrophysics Data System (ADS)

    Chen, S.-H.; Wang, D.-C.; Chen, G.-Y.; Chen, K.-Y.

    2008-03-01

    The self-assembled fabrication of nanostructures, a long-sought goal in fabrication engineering, is the ultimate aim of this research. Previous research showed that the size of self-assembled gold nanoparticles can be controlled through the mole ratio between AuCl4- and thiol. In this study, the moles of Au were fixed and only the moles of thiol were adjusted. Five different mole ratios of Au/S and their effect on size uniformity were investigated: 1:1/16, 1:1/8, 1:1, 1:8 and 1:16. The size distributions of the gold nanoparticles were analyzed with Mac-View analysis software, and HR-TEM was used to obtain images of the self-assembled gold nanoparticles. The results again showed that the higher the mole ratio between AuCl4- and thiol, the bigger the self-assembled gold nanoparticles. With the moles of Au fixed, the most homogeneous size distribution was obtained at a mole ratio of 1:1/8 between AuCl4- and thiol. The obtained nanoparticles could be used, for example, in uniform surface nanofabrication, leading to the fabrication of ordered arrays of quantum dots.

  11. Ultra-wideband radar motion sensor

    DOEpatents

    McEwan, Thomas E.

    1994-01-01

    A motion sensor is based on ultra-wideband (UWB) radar. UWB radar range is determined by a pulse-echo interval. For motion detection, the sensors operate by staring at a fixed range and then sensing any change in the averaged radar reflectivity at that range. A sampling gate is opened at a fixed delay after the emission of a transmit pulse. The resultant sampling gate output is averaged over repeated pulses. Changes in the averaged sampling gate output represent changes in the radar reflectivity at a particular range, and thus motion.
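
    The stare-and-average principle described in this patent abstract can be sketched in a few lines: average the sampling-gate output over blocks of repeated pulses, then flag motion when consecutive block averages differ. The block size, threshold, and signal values below are illustrative assumptions, not parameters from the patent.

```python
import numpy as np

def motion_flags(gate_output, pulses_per_avg=64, threshold=0.05):
    """Average the sampling-gate output over blocks of repeated pulses and
    flag motion when consecutive block averages differ by more than threshold."""
    n_blocks = len(gate_output) // pulses_per_avg
    blocks = gate_output[: n_blocks * pulses_per_avg].reshape(n_blocks, pulses_per_avg)
    means = blocks.mean(axis=1)
    # a change in the averaged reflectivity at the fixed range = motion
    return np.abs(np.diff(means)) > threshold

rng = np.random.default_rng(7)
static = 0.3 + rng.normal(0, 0.005, 1024)   # constant reflectivity plus noise
moved = static.copy()
moved[512:] += 0.2                          # reflectivity step: target enters the range cell
```

    Averaging over repeated pulses suppresses per-pulse noise, so even a modest reflectivity change at the stared-at range stands out against the block-to-block noise floor.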

  12. Ultra-wideband radar motion sensor

    DOEpatents

    McEwan, T.E.

    1994-11-01

    A motion sensor is based on ultra-wideband (UWB) radar. UWB radar range is determined by a pulse-echo interval. For motion detection, the sensors operate by staring at a fixed range and then sensing any change in the averaged radar reflectivity at that range. A sampling gate is opened at a fixed delay after the emission of a transmit pulse. The resultant sampling gate output is averaged over repeated pulses. Changes in the averaged sampling gate output represent changes in the radar reflectivity at a particular range, and thus motion. 15 figs.

  13. Lightweight GPS-tags, one giant leap for wildlife tracking? An assessment approach.

    PubMed

    Recio, Mariano R; Mathieu, Renaud; Denys, Paul; Sirguey, Pascal; Seddon, Philip J

    2011-01-01

    Recent technological improvements have made possible the development of lightweight GPS-tagging devices suitable to track medium-to-small sized animals. However, current inferences concerning GPS performance are based on heavier designs, suitable only for large mammals. Lightweight GPS-units are deployed close to the ground, on species selecting micro-topographical features and with different behavioural patterns in comparison to larger mammal species. We assessed the effects of vegetation, topography, motion, and behaviour on the fix success rate of lightweight GPS-collars across a range of natural environments, and at the scale of perception of feral cats (Felis catus). Units deployed at 20 cm above the ground in sites of varied vegetation and topography showed that trees (native forest) and shrub cover had the largest influence on fix success rate (89% on average), whereas tree cover, sky availability, number of satellites and horizontal dilution of position (HDOP) were the main variables affecting location error (±39.5 m and ±27.6 m before and after filtering outlier fixes). Tests of HDOP- or satellite-number-based screening methods to remove inaccurate locations achieved only a small reduction of error and discarded many accurate locations. Mobility tests were used to simulate cats' motion, revealing slightly lower performance compared with the fixed sites. GPS-collars deployed on 43 cats showed no difference in fix success rate by sex or season. Overall, fix success rate and location error values were within the range of previous tests carried out with collars designed for larger species. Lightweight GPS-tags are a suitable method to track medium- to small-sized species, hence increasing the range of opportunities for spatial ecology research. However, the effects of vegetation, topography and behaviour on location error and fix success rate need to be evaluated prior to deployment, for the particular study species and their habitats.

  14. Design of a multi-arm randomized clinical trial with no control arm.

    PubMed

    Magaret, Amalia; Angus, Derek C; Adhikari, Neill K J; Banura, Patrick; Kissoon, Niranjan; Lawler, James V; Jacob, Shevin T

    2016-01-01

    Clinical trial designs that include multiple treatments are currently limited to those that perform pairwise comparisons of each investigational treatment to a single control. However, there are settings, such as the recent Ebola outbreak, in which no treatment has been demonstrated to be effective; and therefore, no standard of care exists which would serve as an appropriate control. For illustrative purposes, we focused on the care of patients presenting in austere settings with critically ill 'sepsis-like' syndromes. Our approach involves a novel algorithm for comparing mortality among arms without requiring a single fixed control. The algorithm allows poorly-performing arms to be dropped during interim analyses. Consequently, the study may be completed earlier than planned. We used simulation to determine operating characteristics for the trial and to estimate the required sample size. We present a potential study design targeting a minimal effect size of a 23% relative reduction in mortality between any pair of arms. Using estimated power and spurious significance rates from the simulated scenarios, we show that such a trial would require 2550 participants. Over a range of scenarios, our study has 80 to 99% power to select the optimal treatment. Using a fixed control design, if the control arm is least efficacious, 640 subjects would be enrolled into the least efficacious arm, while our algorithm would enroll between 170 and 430. This simulation method can be easily extended to other settings or other binary outcomes. Early dropping of arms is efficient and ethical when conducting clinical trials with multiple arms. Copyright © 2015 Elsevier Inc. All rights reserved.
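
The interim-dropping idea can be illustrated with a toy simulation: compare observed mortality across arms at an interim look and drop any arm that falls too far behind the current best. The arm sizes, interim fraction, and dropping margin below are invented for the sketch; the paper's actual algorithm and operating characteristics differ:

```python
import random

random.seed(4)

def simulate_trial(true_mortality, n_per_arm=200, interim_frac=0.5,
                   drop_margin=0.10):
    """Sketch of a multi-arm trial with no fixed control arm: at one
    interim look, arms whose observed mortality exceeds the best arm's
    by more than drop_margin are dropped; survivors enroll to completion.
    The dropping rule here is illustrative, not the paper's algorithm."""
    k = len(true_mortality)
    n_interim = int(n_per_arm * interim_frac)
    deaths = [sum(random.random() < p for _ in range(n_interim))
              for p in true_mortality]
    rates = [d / n_interim for d in deaths]
    best = min(rates)
    active = [i for i, r in enumerate(rates) if r - best <= drop_margin]
    # Only surviving arms enroll the rest of their planned sample.
    enrolled = [n_per_arm if i in active else n_interim for i in range(k)]
    return active, enrolled

active, enrolled = simulate_trial([0.30, 0.45, 0.60])
print(active, enrolled)
```

Dropping inferior arms early is what lets the design enroll far fewer subjects into the least efficacious arm than a fixed-control design would.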

  15. Designing efficient nitrous oxide sampling strategies in agroecosystems using simulation models

    NASA Astrophysics Data System (ADS)

    Saha, Debasish; Kemanian, Armen R.; Rau, Benjamin M.; Adler, Paul R.; Montes, Felipe

    2017-04-01

    Annual cumulative soil nitrous oxide (N2O) emissions calculated from discrete chamber-based flux measurements have unknown uncertainty. We used outputs from simulations obtained with an agroecosystem model to design sampling strategies that yield accurate cumulative N2O flux estimates with a known uncertainty level. Daily soil N2O fluxes were simulated for Ames, IA (corn-soybean rotation), College Station, TX (corn-vetch rotation), Fort Collins, CO (irrigated corn), and Pullman, WA (winter wheat), representing diverse agro-ecoregions of the United States. Fertilization source, rate, and timing were site-specific. These simulated fluxes served as surrogates for daily measurements in the analysis. We "sampled" the fluxes using a fixed interval (1-32 days) or a rule-based (decision tree-based) sampling method. Two types of decision trees were built: a high-input tree (HI) that included soil inorganic nitrogen (SIN) as a predictor variable, and a low-input tree (LI) that excluded SIN. Other predictor variables were identified with Random Forest. The decision trees were inverted to be used as rules for sampling a representative number of members from each terminal node. The uncertainty of the annual N2O flux estimation increased along with the fixed interval length. A 4- and 8-day fixed sampling interval was required at College Station and Ames, respectively, to yield ±20% accuracy in the flux estimate; a 12-day interval rendered the same accuracy at Fort Collins and Pullman. Both the HI and the LI rule-based methods provided the same accuracy as that of the fixed interval method with up to a 60% reduction in sampling events, particularly at locations with greater temporal flux variability. For instance, at Ames, the HI rule-based and the fixed interval methods required 16 and 91 sampling events, respectively, to achieve the same absolute bias of 0.2 kg N ha-1 yr-1 in estimating cumulative N2O flux. These results suggest that using simulation models along with decision trees can reduce the cost and improve the accuracy of the estimations of cumulative N2O fluxes using the discrete chamber-based method.
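
The fixed-interval part of this design can be mimicked with a toy calculation: generate a synthetic year of daily fluxes with one emission pulse, subsample at a fixed interval, and scale up to an annual total. All values below are invented stand-ins for the model output, not the paper's simulations:

```python
import math
import random

random.seed(0)

# Hypothetical daily N2O fluxes for one year: a low baseline plus a
# fertilization-driven pulse around day 120 (shape and magnitudes invented).
flux = []
for d in range(365):
    pulse = 0.05 * math.exp(-((d - 120) / 4.0) ** 2)
    flux.append(0.002 + pulse + abs(random.gauss(0, 0.0005)))

true_annual = sum(flux)  # the "known" cumulative flux of the synthetic year

def fixed_interval_estimate(flux, interval):
    """Estimate annual cumulative flux from chamber visits made every
    `interval` days, each sample standing in for the whole interval."""
    sampled = flux[::interval]
    return sum(sampled) * interval

for interval in (1, 4, 8, 16, 32):
    est = fixed_interval_estimate(flux, interval)
    bias = 100 * (est - true_annual) / true_annual
    print(f"{interval:2d}-day interval: {bias:+.1f}% bias")
```

Longer intervals can miss or over-weight the short emission pulse, which is why sites with greater temporal flux variability need tighter sampling.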

  16. [Evaluation of 3 methods of DNA extraction from paraffin-embedded material for the amplification of genomic DNA using PCR].

    PubMed

    Mesquita, R A; Anzai, E K; Oliveira, R N; Nunes, F D

    2001-01-01

    There are several protocols reported in the literature for the extraction of genomic DNA from formalin-fixed paraffin-embedded samples. Genomic DNA is utilized in molecular analyses, including PCR. This study compares three different methods for the extraction of genomic DNA from formalin-fixed paraffin-embedded (inflammatory fibrous hyperplasia) and non-formalin-fixed (normal oral mucosa) samples: phenol with enzymatic digestion, and silica with and without enzymatic digestion. The amplification of DNA by means of the PCR technique was carried out with primers for the exon 7 of human keratin type 14. Amplicons were analyzed by means of electrophoresis in an 8% polyacrylamide gel with 5% glycerol, followed by silver-staining visualization. The phenol/enzymatic digestion and the silica/enzymatic digestion methods provided amplicons from both tissue samples. The method described is a potential aid in the establishment of the histopathologic diagnosis and in retrospective studies with archival paraffin-embedded samples.

  17. Comprehensive Analysis of Immunological Synapse Phenotypes Using Supported Lipid Bilayers.

    PubMed

    Valvo, Salvatore; Mayya, Viveka; Seraia, Elena; Afrose, Jehan; Novak-Kotzer, Hila; Ebner, Daniel; Dustin, Michael L

    2017-01-01

    Supported lipid bilayers (SLB) formed on glass substrates have been a useful tool for study of immune cell signaling since the early 1980s. The mobility of lipid-anchored proteins in the system, first described for antibodies binding to synthetic phospholipid head groups, allows for the measurement of two-dimensional binding reactions and signaling processes in a single imaging plane over time or for fixed samples. The fragility of SLB and the challenges of building and validating individual substrates limit most experimenters to ~10 samples per day, perhaps increasing this few-fold when examining fixed samples. Successful experiments might then require further days to fully analyze. We present methods for automation of many steps in SLB formation, imaging in 96-well glass bottom plates, and analysis that enables >100-fold increase in throughput for fixed samples and wide-field fluorescence. This increased throughput will allow better coverage of relevant parameters and more comprehensive analysis of aspects of the immunological synapse that are well reconstituted by SLB.

  18. Grain size effect on Lcr elastic wave for surface stress measurement of carbon steel

    NASA Astrophysics Data System (ADS)

    Liu, Bin; Miao, Wenbing; Dong, Shiyun; He, Peng

    2018-04-01

    Based on critically refracted longitudinal wave (Lcr wave) acoustoelastic theory, a correction method for the effect of grain size on surface stress measurement is discussed in this paper. Two Lcr wave transducers at a fixed distance were used to collect Lcr waves, the difference in time of flight between the Lcr waves was calculated with a cross-correlation coefficient function, and the relationship between the Lcr wave acoustoelastic coefficient and grain size was obtained. Results show that as grain size increases, the propagation velocity of the Lcr wave decreases, and one cycle is the optimal step length for calculating the difference in time of flight between Lcr waves. When the stress is below the stress turning point, the relationship between the difference in time of flight and stress is essentially consistent with Lcr wave acoustoelastic theory, while above it a deviation appears and grows gradually with increasing stress. Inhomogeneous elastic-plastic deformation caused by inhomogeneous microstructure, and the fact that the Lcr wave measures the average surface stress over a fixed distance, were considered the two main reasons for these results. As grain size increases, the Lcr wave acoustoelastic coefficient decreases as a power function, and a correction method for the grain size effect on surface stress measurement was proposed accordingly. Finally, the theoretical discussion was verified by fracture morphology observation.
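
The cross-correlation step for the time-of-flight difference can be sketched as follows: the delay between the two received waves is the lag at which their cross-correlation peaks. The waveform model, sampling rate, and delay below are all invented for illustration:

```python
import numpy as np

fs = 50e6  # hypothetical sampling rate, 50 MHz
t = np.arange(0, 20e-6, 1 / fs)

def lcr_burst(t, t0, f0=2e6):
    """Toy tone burst standing in for a received Lcr wave arriving at t0."""
    env = np.exp(-((t - t0) / 1e-6) ** 2)
    return env * np.sin(2 * np.pi * f0 * (t - t0))

# Waves at the two fixed-distance transducers, offset by a small delay
# (stress changes the velocity, hence the arrival-time difference).
true_delay = 0.40e-6
a = lcr_burst(t, 5.0e-6)
b = lcr_burst(t, 5.0e-6 + true_delay)

# Time-of-flight difference = lag of the cross-correlation peak.
xc = np.correlate(b, a, mode="full")
lag = np.argmax(xc) - (len(a) - 1)
print(f"estimated delay: {lag / fs * 1e6:.2f} us")  # prints "estimated delay: 0.40 us"
```

In practice the correlation is computed between windowed arrivals, and sub-sample interpolation around the peak refines the delay estimate.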

  19. Gardnerella vaginalis and Lactobacillus sp in liquid-based cervical samples in healthy and disturbed vaginal flora using cultivation-independent methods.

    PubMed

    Klomp, Johanna M; Verbruggen, Banut-Sabine M; Korporaal, Hans; Boon, Mathilde E; de Jong, Pauline; Kramer, Gerco C; van Haaften, Maarten; Heintz, A Peter M

    2008-05-01

    Our objective was to determine the morphotype of the adherent bacteria in liquid-based cytology (LBC) in smears with healthy and disturbed vaginal flora, and to use PCR technology on the same fixed cell sample to establish DNA patterns of the 16S RNA genes of the bacteria in the sample. Thirty samples were randomly selected from a large group of cervical cell samples suspended in a commercial coagulant fixative (BoonFix). PCR was used to amplify DNA of five bacterial species: Lactobacillus acidophilus, Lactobacillus crispatus, Lactobacillus jensenii, Gardnerella vaginalis, and Mycoplasma hominis. The LBC slides were then analyzed by light microscopy to estimate bacterial adhesion. DNA of lactobacilli was detected in all cell samples. Seventeen smears showed colonization with Gardnerella vaginalis (range 2.6 x 10(2)-3.0 x 10(5) bacteria/µl BoonFix sample). Two cases were identified as dysbacteriotic with high DNA values for Gardnerella vaginalis and low values for Lactobacillus crispatus. The sample with the highest concentration of Gardnerella vaginalis showed an unequivocal Gardnerella infection. This study indicates that the adherence pattern of a disturbed flora in liquid-based cervical samples can be identified unequivocally, and that these samples are suitable for quantitative PCR analysis. This cultivation-independent method reveals a strong inverse relationship between Gardnerella vaginalis and Lactobacillus crispatus in dysbacteriosis and unequivocal Gardnerella infection.

  20. Image reconstructions from super-sampled data sets with resolution modeling in PET imaging.

    PubMed

    Li, Yusheng; Matej, Samuel; Metzler, Scott D

    2014-12-01

    Spatial resolution in positron emission tomography (PET) is still a limiting factor in many imaging applications. To improve the spatial resolution for an existing scanner with fixed crystal sizes, mechanical movements such as scanner wobbling and object shifting have been considered for PET systems. Multiple acquisitions from different positions can provide complementary information and increased spatial sampling. The objective of this paper is to explore an efficient and useful reconstruction framework to reconstruct super-resolution images from super-sampled low-resolution data sets. The authors introduce a super-sampling data acquisition model based on the physical processes with tomographic, downsampling, and shifting matrices as its building blocks. Based on the model, the authors extend the MLEM and Landweber algorithms to reconstruct images from super-sampled data sets. The authors also derive a backprojection-filtration-like (BPF-like) method for the super-sampling reconstruction. Furthermore, they explore variant methods for super-sampling reconstructions: the separate super-sampling resolution-modeling reconstruction and the reconstruction without downsampling to further improve image quality at the cost of more computation. The authors use simulated reconstruction of a resolution phantom to evaluate the three types of algorithms with different super-samplings at different count levels. Contrast recovery coefficient (CRC) versus background variability, as an image-quality metric, is calculated at each iteration for all reconstructions. The authors observe that all three algorithms can significantly and consistently achieve increased CRCs at fixed background variability and reduce background artifacts with super-sampled data sets at the same count levels. For the same super-sampled data sets, the MLEM method achieves better image quality than the Landweber method, which in turn achieves better image quality than the BPF-like method.
The authors also demonstrate that the reconstructions from super-sampled data sets using a fine system matrix yield improved image quality compared to the reconstructions using a coarse system matrix. Super-sampling reconstructions with different count levels showed that more spatial-resolution improvement can be obtained with higher counts at larger iteration numbers. The authors developed a super-sampling reconstruction framework that can reconstruct super-resolution images using the super-sampling data sets simultaneously with known acquisition motion. The super-sampling PET acquisition using the proposed algorithms provides an effective and economic way to improve image quality for PET imaging, which has important implications for preclinical and clinical region-of-interest PET imaging applications.

  1. Sediment mobility and bedload transport rates in a high-elevation glacier-fed stream (Saldur river, Eastern Italian Alps)

    NASA Astrophysics Data System (ADS)

    Dell'Agnese, A.; Mao, L.; Comiti, F.

    2012-04-01

    The assessment of bedload transport in high-gradient streams is necessary to evaluate and mitigate flood hazards and to understand morphological processes taking place in the whole river network. Bedload transport in steep channels is particularly difficult to predict due to the complex and varying types of flow resistance, the very coarse and heterogeneous sediments, and the activity and connections of sediment sources at the basin scale. Yet, bedload measurements in these environments are still relatively scarce, and long-term monitoring programs are highly valuable to explore spatial and temporal variability of bedload processes. Investigations conducted in high-elevation glacierized basins are even scarcer, despite their relevance in many regions worldwide. The poster will present bedload transport measurements at a newly established (spring 2011) monitoring station in the Saldur basin (Eastern Italian Alps), which contains a 3.3 km2 glacier in its upper part. At 2100 m a.s.l. (20 km2 drainage area), a pressure transducer measures flow stage and bedload transport is monitored continuously by means of a hydrophone (a cylindrical steel pipe with microphones registering particle collisions) and by 4 fixed antennas for tracing clasts equipped with PITs (Passive Integrated Transponders). At the same location bedload samples are collected by using both a "Bunte" bedload trap and a "Helley-Smith" sampler at 5 positions along a 5 m wide cross-section. Bedload was measured from June to August 2011 during daily discharge fluctuations due to snow- and ice-melt flows. Samples were taken at a large range of discharges (1.1 to 4.6 m3 s-1) and bedload rates (0.01 to 700 g s-1 m-1). As expected, samples taken using the two samplers are not directly comparable even if taken virtually at the same time and at the same location across the section.
Results indicate that the grain size of the transported material increases with the shear stress acting on the channel bed and with the bedload transport rate. The coarsest particles collected reached the median diameter of the bed surface (around 100 mm), and the exponent of the relationship between the dimensionless critical shear stress and the relative transported size is about -0.80. This indicates that size-selective mobility conditions dominate within the range of explored discharges, and this evidence is confirmed by the analysis of the fractional transport rates of the collected sediment samples. The mobility of coarser (from 50 to 500 mm) sediment particles was explored using 360 PITs; the passage of 176 of them (from 50 to 250 mm in size) has been recorded by the fixed antennas. Clasts up to about the D84 of the bed surface were seen mobilized after the larger snow/ice melt flows, but relevant morphological changes were observed only after a rainfall flood (favored by a preceding high ice-melt flow) featuring a peak discharge of about 14 m3 s-1 (above bankfull stage). A preliminary analysis of PITs data shows a lesser degree of transport selectivity, suggesting that at medium to high flow rates sediments are transported at conditions closer to equal mobility.

  2. Reducing Class Size: What Do We Know?

    ERIC Educational Resources Information Center

    Bascia, Nina

    2010-01-01

    This report provides an overview of findings from the research on primary class size reduction as a strategy to improve student learning. Its purpose is to provide a comprehensive and balanced picture of a very popular educational reform strategy that has often been seen as a "quick fix" for improving students' opportunities to learn in…

  3. 40 CFR 113.4 - Size classes and associated liability limits for fixed onshore oil storage facilities, 1,000...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 21 2010-07-01 2010-07-01 false Size classes and associated liability... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS LIABILITY LIMITS FOR... privity and knowledge of the owner or operator, the following limits of liability are established for...

  4. Average size of random polygons with fixed knot topology.

    PubMed

    Matsuda, Hiroshi; Yao, Akihisa; Tsukahara, Hiroshi; Deguchi, Tetsuo; Furuta, Ko; Inami, Takeo

    2003-07-01

    We have evaluated by numerical simulation the average size R(K) of random polygons of fixed knot topology K = 3(1), 3(1)#4(1), and we have confirmed the scaling law R(K)^2 approximately N^(2nu(K)) for the number N of polygonal nodes in a wide range, N = 100-2200. The best fit gives 2nu(K) approximately 1.11-1.16 with good fitting curves in the whole range of N. The estimate of 2nu(K) is consistent with the exponent of self-avoiding polygons. In a limited range of N (N ≳ 600), however, we have another fit with 2nu(K) approximately 1.01-1.07, which is close to the exponent of random polygons.
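
The exponent 2nu(K) in a scaling law of this form is typically obtained as the slope of a least-squares fit in log-log coordinates. A minimal sketch with synthetic (N, R^2) pairs; the amplitude, noise level, and true exponent below are assumed for illustration, not the paper's data:

```python
import math
import random

random.seed(2)

# Synthetic data following R(K)^2 = A * N^(2nu) with 2nu = 1.14, standing
# in for simulated random-polygon sizes at fixed knot type.
two_nu_true = 1.14
Ns = [100, 200, 400, 800, 1600, 2200]
R2 = [0.3 * n ** two_nu_true * math.exp(random.gauss(0, 0.01)) for n in Ns]

# The least-squares slope in log-log coordinates estimates 2nu(K).
x = [math.log(n) for n in Ns]
y = [math.log(r) for r in R2]
mx, my = sum(x) / len(x), sum(y) / len(y)
slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
print(f"fitted 2nu = {slope:.3f}")
```

Fitting over a restricted range of N, as in the abstract's N ≳ 600 case, simply means repeating the same regression on a subset of the points.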

  5. Combinatoric analysis of heterogeneous stochastic self-assembly.

    PubMed

    D'Orsogna, Maria R; Zhao, Bingyu; Berenji, Bijan; Chou, Tom

    2013-09-28

    We analyze a fully stochastic model of heterogeneous nucleation and self-assembly in a closed system with a fixed total particle number M, and a fixed number of seeds Ns. Each seed can bind a maximum of N particles. A discrete master equation for the probability distribution of the cluster sizes is derived and the corresponding cluster concentrations are found using kinetic Monte-Carlo simulations in terms of the density of seeds, the total mass, and the maximum cluster size. In the limit of slow detachment, we also find new analytic expressions and recursion relations for the cluster densities at intermediate times and at equilibrium. Our analytic and numerical findings are compared with those obtained from classical mass-action equations and the discrepancies between the two approaches analyzed.
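
A crude kinetic Monte Carlo picture of seeded assembly with a conserved total particle number can be sketched as follows. The rates, sizes, and update rule below are invented and far simpler than the paper's master-equation model; the point is only that attachment and detachment moves conserve M while clusters are capped at N per seed:

```python
import random

random.seed(3)

def kmc_seeded_assembly(M=60, Ns=6, N=10, p_on=1.0, p_off=0.05,
                        steps=20000):
    """Toy kinetic Monte Carlo of heterogeneous self-assembly: M free
    particles attach to (and slowly detach from) Ns seeds, each seed
    holding at most N particles."""
    free = M
    clusters = [0] * Ns  # particles currently bound to each seed
    for _ in range(steps):
        s = random.randrange(Ns)
        if free > 0 and clusters[s] < N and random.random() < p_on:
            clusters[s] += 1   # attachment
            free -= 1
        elif clusters[s] > 0 and random.random() < p_off:
            clusters[s] -= 1   # slow detachment
            free += 1
    return free, sorted(clusters)

free, clusters = kmc_seeded_assembly()
print(free, clusters)  # total particle number M is conserved throughout
```

In the slow-detachment limit (small p_off), the clusters fill toward the cap N, which is the regime where the paper derives its analytic equilibrium expressions.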

  6. Measurement system

    NASA Technical Reports Server (NTRS)

    Turner, J. W. (Inventor)

    1973-01-01

    A measurement system is described for providing an indication of a varying physical quantity represented by or converted to a variable frequency signal. Timing pulses are obtained marking the duration of a fixed number, or set, of cycles of the sampled signal and these timing pulses are employed to control the period of counting of cycles of a higher fixed and known frequency source. The counts of cycles obtained from the fixed frequency source provide a precise measurement of the average frequency of each set of cycles sampled, and thus successive discrete values of the quantity being measured. The frequency of the known frequency source is made such that each measurement is presented as a direct digital representation of the quantity measured.
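
The measurement principle reduces to one formula: if the gate spans a fixed number of cycles of the sampled signal while cycles of a known reference are counted, the average signal frequency over that set is n_signal_cycles * f_ref / ref_count. A minimal sketch (the numbers are illustrative):

```python
def measure_frequency(n_signal_cycles, ref_count, f_ref):
    """Reciprocal counting: the gate stays open for n_signal_cycles of
    the unknown signal, during which ref_count cycles of the known
    reference are counted, so
        f_signal = n_signal_cycles * f_ref / ref_count."""
    return n_signal_cycles * f_ref / ref_count

# A 1 MHz reference and a gate spanning 100 cycles of the sampled signal:
# counting 80_000 reference cycles means the signal averaged 1.25 kHz.
f = measure_frequency(100, 80_000, 1e6)
print(f)  # 1250.0
```

Choosing f_ref so that the count is a decimal multiple of the unit makes each count a direct digital representation of the measured quantity, as the abstract notes.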

  7. libFLASM: a software library for fixed-length approximate string matching.

    PubMed

    Ayad, Lorraine A K; Pissis, Solon P P; Retha, Ahmad

    2016-11-10

    Approximate string matching is the problem of finding all factors of a given text that are at a distance at most k from a given pattern. Fixed-length approximate string matching is the problem of finding all factors of a text of length n that are at a distance at most k from any factor of length ℓ of a pattern of length m. There exist bit-vector techniques to solve the fixed-length approximate string matching problem in time [Formula: see text] and space [Formula: see text] under the edit and Hamming distance models, where w is the size of the computer word; as such these techniques are independent of the distance threshold k or the alphabet size. Fixed-length approximate string matching is a generalisation of approximate string matching and, hence, has numerous direct applications in computational molecular biology and elsewhere. We present and make available libFLASM, a free open-source C++ software library for solving fixed-length approximate string matching under both the edit and the Hamming distance models. Moreover we describe how fixed-length approximate string matching is applied to solve real problems by incorporating libFLASM into established applications for multiple circular sequence alignment as well as single and structured motif extraction. Specifically, we describe how it can be used to improve the accuracy of multiple circular sequence alignment in terms of the inferred likelihood-based phylogenies; and we also describe how it is used to efficiently find motifs in molecular sequences representing regulatory or functional regions. A comparison of the library's performance with other algorithms shows that it is competitive, especially with increasing distance thresholds. Fixed-length approximate string matching is a generalisation of the classic approximate string matching problem. We present libFLASM, a free open-source C++ software library for solving fixed-length approximate string matching.
The extensive experimental results presented here suggest that other applications could benefit from using libFLASM, and thus further maintenance and development of libFLASM is desirable.
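
The problem libFLASM solves can be stated in a few lines of reference code. This naive quadratic sketch under the Hamming model only defines the problem; it is not the library's bit-vector algorithm, and the function name is invented:

```python
def flasm_hamming(text, pattern, ell, k):
    """Naive fixed-length approximate string matching under Hamming
    distance: report (i, j) whenever the length-ell factor text[i:i+ell]
    is within distance k of the length-ell factor pattern[j:j+ell]."""
    hits = []
    for i in range(len(text) - ell + 1):
        for j in range(len(pattern) - ell + 1):
            d = sum(a != b
                    for a, b in zip(text[i:i + ell], pattern[j:j + ell]))
            if d <= k:
                hits.append((i, j))
    return hits

print(flasm_hamming("ACGTACGT", "CGTT", 3, 1))
# → [(1, 0), (2, 1), (5, 0)]
```

The bit-vector techniques cited in the abstract replace the inner character comparison with word-parallel operations, which is how they stay independent of k and the alphabet size.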

  8. Fixed-time Insemination in Pasture-based Medium-sized Dairy Operations of Northern Germany and an Attempt to Replace GnRH by hCG.

    PubMed

    Marthold, D; Detterer, J; Koenig von Borstel, U; Gauly, M; Holtz, W

    2016-02-01

    A field study was conducted aimed at (i) evaluating the practicability of a fixed-time insemination regime for medium-sized dairy operations of north-western Germany, representative for many regions of Central Europe and (ii) substituting hCG for GnRH as ovulation-inducing agent at the end of a presynch or ovsynch protocol in an attempt to reduce the incidence of premature luteal regression. Cows of two herds synchronized by presynch and two herds synchronized by ovsynch protocol were randomly allotted to three subgroups; in one group ovulation was induced by the GnRH analog buserelin, in another by hCG, whereas a third group remained untreated. The synchronized groups were fixed-time inseminated; the untreated group bred to observed oestrus. Relative to untreated herd mates, pregnancy rate in cows subjected to a presynch protocol with buserelin as ovulation-inducing agent was 74%; for hCG it was 60%. In cows subjected to an ovsynch protocol, the corresponding relative pregnancy rates reached 138% in the case of buserelin and 95% in the case of hCG. Average service interval was shortened by 1 week in the presynch and delayed by 2 weeks in the ovsynch group. It may be concluded that fixed-time insemination of cows synchronized via ovsynch protocol with buserelin as ovulation-inducing agent is practicable and may help improve efficiency and reduce the work load involved with herd management in medium-sized dairy operations. The substitution of hCG for buserelin was found to be not advisable. © 2015 Blackwell Verlag GmbH.

  9. Miniaturized double latching solenoid valve

    NASA Technical Reports Server (NTRS)

    Smith, James T. (Inventor)

    2010-01-01

    A valve includes a generally elongate pintle; a spacer having a rounded surface that bears against the pintle; a bulbous tip fixed to the spacer; and a hollow, generally cylindrical collar fixed to the pintle, the collar enclosing the spacer and the tip and including an opening through which a portion of the tip extends, the opening in the collar and interior of the collar being of a size such that the tip floats therein.

  10. Variant calling in low-coverage whole genome sequencing of a Native American population sample.

    PubMed

    Bizon, Chris; Spiegel, Michael; Chasse, Scott A; Gizer, Ian R; Li, Yun; Malc, Ewa P; Mieczkowski, Piotr A; Sailsbery, Josh K; Wang, Xiaoshu; Ehlers, Cindy L; Wilhelmsen, Kirk C

    2014-01-30

    The reduction in the cost of sequencing a human genome has led to the use of genotype sampling strategies in order to impute and infer the presence of sequence variants that can then be tested for associations with traits of interest. Low-coverage Whole Genome Sequencing (WGS) is a sampling strategy that overcomes some of the deficiencies seen in fixed content SNP array studies. Linkage-disequilibrium (LD) aware variant callers, such as the program Thunder, may provide a calling rate and accuracy that makes a low-coverage sequencing strategy viable. We examined the performance of an LD-aware variant calling strategy in a population of 708 low-coverage whole genome sequences from a community sample of Native Americans. We assessed variant calling through a comparison of the sequencing results to genotypes measured in 641 of the same subjects using a fixed content first generation exome array. The comparison was made using the variant calling routines GATK Unified Genotyper program and the LD-aware variant caller Thunder. Thunder was found to improve concordance in a coverage dependent fashion, while correctly calling nearly all of the common variants as well as a high percentage of the rare variants present in the sample. Low-coverage WGS is a strategy that appears to collect genetic information intermediate in scope between fixed content genotyping arrays and deep-coverage WGS. Our data suggests that low-coverage WGS is a viable strategy with a greater chance of discovering novel variants and associations than fixed content arrays for large sample association analyses.

  11. Robust model selection and the statistical classification of languages

    NASA Astrophysics Data System (ADS)

    García, J. E.; González-López, V. A.; Viola, M. L. L.

    2012-10-01

    In this paper we address the problem of model selection for the set of finite memory stochastic processes with finite alphabet, when the data is contaminated. We consider m independent samples, with more than half of them being realizations of the same stochastic process with law Q, which is the one we want to retrieve. We devise a model selection procedure such that for a sample size large enough, the selected process is the one with law Q. Our model selection strategy is based on estimating relative entropies to select a subset of samples that are realizations of the same law. Although the procedure is valid for any family of finite order Markov models, we will focus on the family of variable length Markov chain models, which include the fixed order Markov chain model family. We define the asymptotic breakdown point (ABDP) for a model selection procedure, and we show the ABDP for our procedure. This means that if the proportion of contaminated samples is smaller than the ABDP, then, as the sample size grows, our procedure selects a model for the process with law Q. We also use our procedure in a setting where we have one sample formed by the concatenation of subsamples of two or more stochastic processes, with most of the subsamples having law Q. We conducted a simulation study. In the application section we address the question of the statistical classification of languages according to their rhythmic features using speech samples. This is an important open problem in phonology. A persistent difficulty with this problem is that the speech samples correspond to several sentences produced by diverse speakers, corresponding to a mixture of distributions. The usual procedure to deal with this problem has been to choose a subset of the original sample which seems to best represent each language. The selection is made by listening to the samples. In our application we use the full dataset without any preselection of samples.
We apply our robust methodology to estimate a model representing the main law for each language. Our findings agree with the linguistic conjectures related to the rhythm of the languages included in our dataset.
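
The sample-selection idea (estimate a law per sample, then keep the mutually closest majority) can be sketched with first-order two-letter chains. Everything below (the alphabet, the add-one smoothing, the symmetrized relative entropy, and the scoring rule) is an invented toy, not the authors' procedure:

```python
import math
import random
from collections import Counter

def transition_probs(seq, alphabet="ab"):
    """Empirical first-order transition probabilities with +1 smoothing."""
    counts = Counter(zip(seq, seq[1:]))
    probs = {}
    for x in alphabet:
        total = sum(counts[(x, y)] for y in alphabet) + len(alphabet)
        for y in alphabet:
            probs[(x, y)] = (counts[(x, y)] + 1) / total
    return probs

def sym_rel_entropy(p, q):
    """Symmetrized relative entropy between two transition laws."""
    return sum(p[k] * math.log(p[k] / q[k]) +
               q[k] * math.log(q[k] / p[k]) for k in p)

def gen_chain(p_stay, n=2000):
    """Two-state Markov chain that keeps its current letter w.p. p_stay."""
    s, out = "a", []
    for _ in range(n):
        out.append(s)
        if random.random() > p_stay:
            s = "b" if s == "a" else "a"
    return "".join(out)

random.seed(5)
# Four samples from the majority law Q, two contaminated samples.
samples = [gen_chain(0.9) for _ in range(4)] + [gen_chain(0.5) for _ in range(2)]
laws = [transition_probs(s) for s in samples]
# Score each sample by its total divergence to all others; realizations
# of the majority law Q score lowest.
score = [sum(sym_rel_entropy(laws[i], laws[j]) for j in range(6) if j != i)
         for i in range(6)]
majority = sorted(range(6), key=lambda i: score[i])[:4]
print(sorted(majority))
```

With more than half of the samples drawn from Q, the lowest-scoring subset recovers exactly the uncontaminated samples, which is the intuition behind the breakdown-point guarantee.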

  12. [Surface aspect of fixed restorations and periodontal influences].

    PubMed

    Ciocan-Pendefunda, Alice-Arina; Forna, Norina Consuela

    2012-01-01

    Any new class of materials requires a new cutting technology which, unless complied with properly, may negatively impact the advantages and performance of the material. The modifications that the technological processes introduce in the structure or surface aspect of the materials not only affect the mechanical resistance of the restorations but also cast doubt on their biological qualities. This study evaluates the impact of the biomaterials involved in fixed restorations on the periodontal architecture, which has extremely important connotations in the long run. The "in vitro" testing of the cytotoxic effect of certain restorative materials (metallic alloys used in prosthetic restorations, composite materials) was conducted on cell cultures in collaboration with the Virology Laboratory of the Public Health Institute. The tested materials were metallic alloys, composite materials and acrylic resins used for the construction of standard-sized plates (one out of each material) in order to avoid differences that might arise from the technological process. Artificial saliva processed to reach pH = 7 was prepared in the Biophysics Laboratory of UMF Iasi. Material samples, and the saliva inoculated with them, were tested. The cytotoxic effect of the tested materials on the cell cultures takes extremely diverse forms, from discrete morphological modifications of the cells with regard to size, shape and internal structure (for the noble and semi-noble alloys) up to partial stripping of the cellular film and modification of its density and coloration. In the case of the non-inoculated control culture, testing showed a continuous film of cells of the same size, transparency and colouring, with an unaltered polyhedral contour and visible nuclei, an image also retained in the case of the saliva control.
The involvement of restaurative materials in triggering, maintaining and aggravating a periodontal pathology indicates the capital role played by the dentist in the identications of lesions during measures by avoiding or excluding etiological agents before an obvious lesions occurs in the process of active dispensarization.

  13. The Leaching of Aluminium In Spanish Clays, Coal Mining Wastes and Coal Fly Ashes by Sulphuric Acid.

    NASA Astrophysics Data System (ADS)

    Fernández, A. M.; Ibáñez, J. L.; Llavona, M. A.; Zapico, R.

    The acid leaching of aluminium from several non-traditional ores (bayerite, kaolinite, different clays, coal mining wastes and coal fly ashes) and the kinetics of their dissolution are described. The effects of time, temperature, acid concentration, sample calcination and particle size were examined. The leaching of aluminium depends on acid concentration and, strongly, on temperature. Generally, the time to reach a fixed percentage of dissolution decreases with increasing acid concentration in the range 6% to 40% acid by weight. For clays and coal mining wastes, a good correlation between Al removal and the kaolinite/illite ratio was also observed at all temperatures and acid concentrations tested. Coal fly ash particles were heated to very high temperatures in the power station, transforming the Al compounds into mullite, so Al recovery was lower. Several rate equations describing the kinetics of the leach reaction are discussed, and kinetic parameters and activation energy values of the samples are presented.

  14. [Preparation of titanium dioxide particles and properties for flue gas desulfurization].

    PubMed

    Luo, Yonggang; Li, Daji; Huang, Zhen

    2003-01-01

    Four TiO2 particle samples were prepared at different sintering temperatures (340 °C, 440 °C, 540 °C and 640 °C). XRD showed that all four samples had the anatase crystal structure. Low-temperature (77 K) N2 adsorption gave surface areas between 79 and 124 m2/g and average pore sizes between 56.8 and 254.8 Å. The pore structure of the TiO2 particles was characterized by scanning electron microscopy (SEM). Adsorption dynamics for flue gas desulfurization (FGD) and SO2-removal performance were investigated for the different samples in a fixed-bed system. The results show that the SG540 sample, made at a sintering temperature of 540 °C, performed best among the four samples: one gram of SG540 can adsorb 38.9 mg of SO2. Different operating conditions for SG540, such as adsorption temperature, SO2 concentration in the flue gas and the superficial velocity of the flue gas, were investigated. TiO2 particles were more efficient for FGD than other physical sorbents such as activated carbon and zeolite. Infrared (IR) spectroscopy and desorption test results showed the SO2-removal mechanism to be mainly physical adsorption.

  15. A Unimodal Model for Double Observer Distance Sampling Surveys.

    PubMed

    Becker, Earl F; Christ, Aaron M

    2015-01-01

    Distance sampling is a widely used method to estimate animal population size. Most distance sampling models utilize a monotonically decreasing detection function such as a half-normal. Recent advances in distance sampling modeling allow for the incorporation of covariates into the distance model, and the elimination of the assumption of perfect detection at some fixed distance (usually the transect line) with the use of double-observer models. The assumption of full observer independence in the double-observer model is problematic, but can be addressed by using the point independence assumption, which assumes there is one distance, the apex of the detection function, where the 2 observers are assumed independent. Aerially collected distance sampling data can have a unimodal shape and have been successfully modeled with a gamma detection function. Covariates in gamma detection models cause the apex of detection to shift depending upon covariate levels, making this model incompatible with the point independence assumption when using double-observer data. This paper reports a unimodal detection model based on a two-piece normal distribution that allows covariates, has only one apex, and is consistent with the point independence assumption when double-observer data are utilized. An aerial line-transect survey of black bears in Alaska illustrates how this method can be applied.
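
    The two-piece normal idea above can be sketched directly: a detection curve built from two half-normals that share a single apex, so covariates can shift the spreads without creating a second peak. This is a minimal illustration, not the authors' fitted model; the apex and scale values below are invented.

    ```python
    import math

    def two_piece_normal(x, apex, sigma_left, sigma_right):
        """Unimodal detection curve with a single apex (a two-piece or
        'split' normal): detection is highest at the apex and falls off
        with different spreads on either side, so there is exactly one peak."""
        sigma = sigma_left if x < apex else sigma_right
        return math.exp(-0.5 * ((x - apex) / sigma) ** 2)

    # Detection peaks at the apex and declines on both sides of it.
    g = [two_piece_normal(d, apex=50.0, sigma_left=30.0, sigma_right=80.0)
         for d in (0.0, 50.0, 200.0)]
    ```

    Because the apex location is a single explicit parameter, covariates can act on the two spreads while leaving the apex, and hence the point-independence distance, fixed.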

  16. Using next-generation sequencing for high resolution multiplex analysis of copy number variation from nanogram quantities of DNA from formalin-fixed paraffin-embedded specimens.

    PubMed

    Wood, Henry M; Belvedere, Ornella; Conway, Caroline; Daly, Catherine; Chalkley, Rebecca; Bickerdike, Melissa; McKinley, Claire; Egan, Phil; Ross, Lisa; Hayward, Bruce; Morgan, Joanne; Davidson, Leslie; MacLennan, Ken; Ong, Thian K; Papagiannopoulos, Kostas; Cook, Ian; Adams, David J; Taylor, Graham R; Rabbitts, Pamela

    2010-08-01

    The use of next-generation sequencing technologies to produce genomic copy number data has recently been described. Most approaches, however, rely on optimal starting DNA, and are therefore unsuitable for the analysis of formalin-fixed paraffin-embedded (FFPE) samples, which largely precludes the analysis of many tumour series. We have sought to challenge the limits of this technique with regards to quality and quantity of starting material and the depth of sequencing required. We confirm that the technique can be used to interrogate DNA from cell lines, fresh frozen material and FFPE samples to assess copy number variation. We show that as little as 5 ng of DNA is needed to generate a copy number karyogram, and follow this up with data from a series of FFPE biopsies and surgical samples. We have used various levels of sample multiplexing to demonstrate the adjustable resolution of the methodology, depending on the number of samples and available resources. We also demonstrate reproducibility by use of replicate samples and comparison with microarray-based comparative genomic hybridization (aCGH) and digital PCR. This technique can be valuable in both the analysis of routine diagnostic samples and in examining large repositories of fixed archival material.
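
    As a rough sketch of the statistic that underlies sequencing-based copy-number karyograms (not the authors' actual pipeline), binned read counts from a test sample can be compared against a normal reference as per-bin log2 ratios; the counts below are invented.

    ```python
    import math

    def copy_number_log2(test_counts, normal_counts):
        """Per-bin log2 ratio of test to reference read counts, each
        normalised by its total, so gains show up as positive bins and
        losses as negative bins in a copy-number profile."""
        t_total, n_total = sum(test_counts), sum(normal_counts)
        return [math.log2((t / t_total) / (n / n_total))
                for t, n in zip(test_counts, normal_counts)]

    # The bin with doubled relative coverage stands out as the only
    # positive log2 ratio (a gain); the flat bins dip slightly below zero
    # because the totals are normalised.
    ratios = copy_number_log2([100, 200, 100], [100, 100, 100])
    ```

    With low-input FFPE DNA the bins are simply made wider (fewer, larger bins), which trades resolution for robustness to shallow coverage.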

  17. AC electroosmosis in microchannels packed with a porous medium

    NASA Astrophysics Data System (ADS)

    Kang, Yuejun; Yang, Chun; Huang, Xiaoyang

    2004-08-01

    This paper presents a theoretical study on ac-driven electroosmotic flow in both open-end and closed-end microchannels packed with uniform charged spherical microparticles. The time-periodic oscillating electroosmotic flow in an open-end capillary in response to the application of an alternating (ac) electric field is obtained using the Green function approach. The analysis is based on the Carman-Kozeny theory. The backpressure associated with the counter-flow in a closed-end capillary is obtained by analytically solving the modified Brinkman momentum equation. It is demonstrated that in a microchannel with its two ends connected to reservoirs and subject to ambient pressure, the oscillating Darcy velocity profile depends on both the pore size and the excitation frequency; such effects are coupled through an important aspect ratio of the tubule radius to the Stokes penetration depth. For a fixed pore size, the magnitude of the ac electroosmotic flow decreases with increasing frequency. With increasing pore size, however, the magnitude of the maximum velocity shows two different trends with respect to the excitation frequency: it gets higher in the low frequency domain, and gets lower in the high frequency domain. In a microchannel with closed ends, for a fixed excitation frequency, use of smaller packing particles can generate higher backpressure. For a fixed pore size, the backpressure magnitude shows two different trends changing with the excitation frequency. When the excitation frequency is lower than the system characteristic frequency, the backpressure decreases with increasing excitation frequency. When the excitation frequency is higher than the system characteristic frequency, the backpressure increases with increasing excitation frequency.
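
    The coupling through the ratio of pore (tubule) radius to Stokes penetration depth can be illustrated numerically. The sketch below uses the standard definition of the penetration depth; the viscosity and pore radius are assumed values, not taken from the paper.

    ```python
    import math

    def stokes_penetration_depth(nu, f):
        """Stokes penetration depth delta = sqrt(2*nu/omega), omega = 2*pi*f,
        for an oscillating flow in a fluid of kinematic viscosity nu."""
        omega = 2.0 * math.pi * f
        return math.sqrt(2.0 * nu / omega)

    # Water near room temperature: nu ~ 0.9e-6 m^2/s (assumed value).
    nu = 0.9e-6
    a = 1e-6  # hypothetical pore (tubule) radius of 1 micrometre

    # The governing aspect ratio a/delta grows with excitation frequency,
    # which is why the ac electroosmotic flow magnitude falls off at high f.
    ratios = {f: a / stokes_penetration_depth(nu, f) for f in (1e2, 1e4, 1e6)}
    ```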

  18. Evaluating sampling designs by computer simulation: A case study with the Missouri bladderpod

    USGS Publications Warehouse

    Morrison, L.W.; Smith, D.R.; Young, C.; Nichols, D.W.

    2008-01-01

    To effectively manage rare populations, accurate monitoring data are critical. Yet many monitoring programs are initiated without careful consideration of whether chosen sampling designs will provide accurate estimates of population parameters. Obtaining accurate estimates is especially difficult when natural variability is high, or limited budgets determine that only a small fraction of the population can be sampled. The Missouri bladderpod, Lesquerella filiformis Rollins, is a federally threatened winter annual that has an aggregated distribution pattern and exhibits dramatic interannual population fluctuations. Using the simulation program SAMPLE, we evaluated five candidate sampling designs appropriate for rare populations, based on 4 years of field data: (1) simple random sampling, (2) adaptive simple random sampling, (3) grid-based systematic sampling, (4) adaptive grid-based systematic sampling, and (5) GIS-based adaptive sampling. We compared the designs based on the precision of density estimates for fixed sample size, cost, and distance traveled. Sampling fraction and cost were the most important factors determining precision of density estimates, and relative design performance changed across the range of sampling fractions. Adaptive designs did not provide uniformly more precise estimates than conventional designs, in part because the spatial distribution of L. filiformis was relatively widespread within the study site. Adaptive designs tended to perform better as sampling fraction increased and when sampling costs, particularly distance traveled, were taken into account. The rate at which units occupied by L. filiformis were encountered was higher for adaptive than for conventional designs. Overall, grid-based systematic designs were more efficient and more practical to implement than the others. © 2008 The Society of Population Ecology and Springer.
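
    The simulation logic of this kind of evaluation, repeatedly sampling a known population and measuring the spread of the resulting density estimates, can be sketched as follows. This is a toy stand-in for the SAMPLE program, with an invented clumped population.

    ```python
    import random
    import statistics

    def simulate_srs(population, n_units, n_reps=2000, seed=1):
        """Monte Carlo precision of a density estimate under simple random
        sampling: repeatedly draw n_units quadrats without replacement and
        record the mean count, then summarise the estimates."""
        rng = random.Random(seed)
        estimates = [statistics.mean(rng.sample(population, n_units))
                     for _ in range(n_reps)]
        return statistics.mean(estimates), statistics.stdev(estimates)

    # Hypothetical aggregated population: most quadrats empty, a few dense
    # patches, mimicking the clumped distribution of a rare annual plant.
    rng = random.Random(0)
    quadrats = [0] * 160 + [rng.randint(5, 50) for _ in range(40)]

    # Precision improves with sampling fraction: the standard error of the
    # density estimate shrinks as more of the 200 quadrats are sampled.
    est_small, se_small = simulate_srs(quadrats, n_units=10)
    est_large, se_large = simulate_srs(quadrats, n_units=50)
    ```

    Comparing candidate designs then amounts to repeating this with each design's unit-selection rule and, as in the study, folding in cost and distance traveled.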

  19. Extended Pausing by Humans on Multiple Fixed-Ratio Schedules with Varied Reinforcer Magnitude and Response Requirements

    PubMed Central

    Williams, Dean C; Saunders, Kathryn J; Perone, Michael

    2011-01-01

    We conducted three experiments to reproduce and extend Perone and Courtney's (1992) study of pausing at the beginning of fixed-ratio schedules. In a multiple schedule with unequal amounts of food across two components, they found that pigeons paused longest in the component associated with the smaller amount of food (the lean component), but only when it was preceded by the rich component. In our studies, adults with mild intellectual disabilities responded on a touch-sensitive computer monitor to produce money. In Experiment 1, the multiple-schedule components differed in both response requirement and reinforcer magnitude (i.e., the rich component required fewer responses and produced more money than the lean component). Effects shown with pigeons were reproduced in all 7 participants. In Experiment 2, we removed the stimuli that signaled the two schedule components, and participants' extended pausing was eliminated. In Experiment 3, to assess sensitivity to reinforcer magnitude versus fixed-ratio size, we presented conditions with equal ratio sizes but disparate magnitudes and conditions with equal magnitudes but disparate ratio sizes. Sensitivity to these manipulations was idiosyncratic. The present experiments obtained schedule control in verbally competent human participants and, despite procedural differences, we reproduced findings with animal participants. We showed that pausing is jointly determined by past conditions of reinforcement and stimuli correlated with upcoming conditions. PMID:21541121

  20. The correlation of social support with mental health: A meta-analysis.

    PubMed

    Harandi, Tayebeh Fasihi; Taghinasab, Maryam Mohammad; Nayeri, Tayebeh Dehghan

    2017-09-01

    Social support is an important factor that can affect mental health. In recent decades, many studies have been done on the impact of social support on mental health. The purpose of the present study is to investigate the effect size of the relationship between social support and mental health in studies conducted in Iran. This meta-analysis covered studies performed from 1996 through 2015. Databases included SID and Magiran, the comprehensive portal of human sciences, Noor specialized magazine databases, IRANDOC, Proquest, PubMed, Scopus, ERIC, Iranmedex and Google Scholar. The keywords used to search these websites included "mental health or general health," "Iran" and "social support." In total, 64 studies met the inclusion criteria for the meta-analysis. Data were collected using a researcher-made meta-analysis worksheet, and the CMA-2 software was used for data analysis. The mean effect size of the 64 studies was 0.356 in the fixed-effect model and 0.330 in the random-effect model, indicating a moderate effect size of social support on mental health. The studies showed no publication bias, and their effect sizes were heterogeneous. The target population and the social support questionnaire were moderator variables, but sex, sampling method, and the mental health questionnaire were not. Given the relatively high effect size of the correlation between social support and mental health, it is necessary to provide greater social support, especially for women, the elderly, patients, workers, and students.
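
    The fixed-effect pooling behind figures like the 0.356 reported above is inverse-variance weighting of study-level effect sizes. A sketch with invented study correlations, Fisher z transformed with variance 1/(n - 3); the numbers are illustrative, not from the paper.

    ```python
    import math

    def fixed_effect_pool(effects, variances):
        """Inverse-variance-weighted fixed-effect pooled estimate and its
        standard error: each study is weighted by 1/variance."""
        weights = [1.0 / v for v in variances]
        pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
        se = math.sqrt(1.0 / sum(weights))
        return pooled, se

    # Hypothetical study-level correlations and sample sizes.
    rs = [0.30, 0.40, 0.25, 0.35]
    ns = [120, 200, 90, 150]
    zs = [0.5 * math.log((1 + r) / (1 - r)) for r in rs]  # Fisher z
    vs = [1.0 / (n - 3) for n in ns]
    z_pooled, z_se = fixed_effect_pool(zs, vs)
    r_pooled = math.tanh(z_pooled)  # back-transform to the correlation scale
    ```

    A random-effects model would add a between-study variance component to each study's weight, which is why its pooled value (0.330 here) differs from the fixed-effect one when effect sizes are heterogeneous.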

  1. Escape from the cryptic species trap: lichen evolution on both sides of a cyanobacterial acquisition event.

    PubMed

    Schneider, Kevin; Resl, Philipp; Spribille, Toby

    2016-07-01

    Large, architecturally complex lichen symbioses arose only a few times in evolution, increasing thallus size by orders of magnitude over those from which they evolved. The innovations that enabled symbiotic assemblages to acquire and maintain large sizes are unknown. We mapped morphometric data against an eight-locus fungal phylogeny across one of the best-sampled thallus size transition events, the origins of the Placopsis lichen symbiosis, and used a phylogenetic comparative framework to explore the role of nitrogen-fixing cyanobacteria in size differences. Thallus thickness increased by >150% and fruiting body core volume increased ninefold on average after acquisition of cyanobacteria. Volume of cyanobacteria-containing structures (cephalodia), once acquired, correlates with thallus thickness in both phylogenetic generalized least squares and phylogenetic generalized linear mixed-effects analyses. Our results suggest that the availability of nitrogen is an important factor in the formation of large thalli. Cyanobacterial symbiosis appears to have enabled lichens to overcome size constraints in oligotrophic environments such as acidic, rain-washed rock surfaces. In the case of the Placopsis fungal symbiont, this has led to an adaptive radiation of more than 60 recognized species from related crustose members of the genus Trapelia. Our data suggest that precyanobacterial symbiotic lineages were constrained to forming a narrow range of phenotypes, so-called cryptic species, leading systematists until now to recognize only six of the 13 species clusters we identified in Trapelia. © 2016 The Authors. Molecular Ecology Published by John Wiley & Sons Ltd.

  2. Abdominal Obesity and Risk of Hip Fracture: A Systematic Review and Meta-Analysis of Prospective Studies.

    PubMed

    Sadeghi, Omid; Saneei, Parvaneh; Nasiri, Morteza; Larijani, Bagher; Esmaillzadeh, Ahmad

    2017-09-01

    Data on the association between general obesity and hip fracture were summarized in a 2013 meta-analysis; however, to our knowledge, no study has examined the association between abdominal obesity and the risk of hip fracture. The present systematic review and meta-analysis of prospective studies was undertaken to summarize the association between abdominal obesity and the risk of hip fracture. We searched online databases for relevant publications up to February 2017, using relevant keywords. In total, 14 studies were included in the systematic review and 9 studies, with a total sample size of 295,674 individuals (129,964 men and 165,703 women), were included in the meta-analysis. Participants were apparently healthy and aged ≥40 y. We found that abdominal obesity (defined by various waist-hip ratios) was positively associated with the risk of hip fracture (combined RR: 1.24, 95% CI: 1.05, 1.46, P = 0.01). Combining 8 effect sizes from 6 studies, we noted a marginally significant positive association between abdominal obesity (defined by various waist circumferences) and the risk of hip fracture (combined RR: 1.36; 95% CI: 0.97, 1.89, P = 0.07). This association became significant in a fixed-effects model (combined effect size: 1.40, 95% CI: 1.25, 1.58, P < 0.001). Based on 5 effect sizes, we found that a 0.1-U increase in the waist-hip ratio was associated with a 16% increase in the risk of hip fracture (combined RR: 1.16, 95% CI: 1.04, 1.29, P = 0.007), whereas a 10-cm increase in waist circumference was not significantly associated with a higher risk of hip fracture (combined RR: 1.13, 95% CI: 0.94, 1.36, P = 0.19). This association became significant, however, when we applied a fixed-effects model (combined effect size: 1.21, 95% CI: 1.15, 1.27, P < 0.001). We found that abdominal obesity was associated with a higher risk of hip fracture in 295,674 individuals. 
Further studies are needed to test whether there are associations between abdominal obesity and fractures at other bone sites. © 2017 American Society for Nutrition.

  3. The square lattice Ising model on the rectangle II: finite-size scaling limit

    NASA Astrophysics Data System (ADS)

    Hucht, Alfred

    2017-06-01

    Based on the results published recently (Hucht 2017 J. Phys. A: Math. Theor. 50 065201), the universal finite-size contributions to the free energy of the square lattice Ising model on the L × M rectangle, with open boundary conditions in both directions, are calculated exactly in the finite-size scaling limit L, M → ∞, T → T_c, with fixed temperature scaling variable x ∝ (T/T_c - 1)M and fixed aspect ratio ρ ∝ L/M. We derive exponentially fast converging series for the related Casimir potential and Casimir force scaling functions. At the critical point T = T_c we confirm predictions from conformal field theory (Cardy and Peschel 1988 Nucl. Phys. B 300 377, Kleban and Vassileva 1991 J. Phys. A: Math. Gen. 24 3407). The presence of corners and the related corner free energy has dramatic impact on the Casimir scaling functions and leads to a logarithmic divergence of the Casimir potential scaling function at criticality.

  4. Mass Estimation and Its Applications

    DTIC Science & Technology

    2012-02-23

    parameters); e.g., the rectangular kernel function has fixed width or fixed per unit size. But the rectangular function used in mass has no parameter... MassTER is implemented in Java, and we use DBSCAN in WEKA [13] and a version of DENCLUE implemented in R (www.r-project.org) in our empirical evaluation... Proceedings of SIGKDD, 2010, 989-998. [13] I.H. Witten and E. Frank, Data Mining: Practical Machine Learning Tools and Techniques with Java Implementations

  5. Bayesian adaptive trials offer advantages in comparative effectiveness trials: an example in status epilepticus.

    PubMed

    Connor, Jason T; Elm, Jordan J; Broglio, Kristine R

    2013-08-01

    We present a novel Bayesian adaptive comparative effectiveness trial comparing three treatments for status epilepticus that uses adaptive randomization with potential early stopping. The trial will enroll 720 unique patients in emergency departments. Compared with a design without adaptive randomization, the Bayesian adaptive design produces an efficient trial in which a higher proportion of patients are likely to be randomized to the most effective treatment arm, while generally using fewer total patients, and it offers higher power than an analogous trial with fixed randomization when identifying a superior treatment. When one treatment is superior to the other two, the trial design provides better patient care, higher power, and a lower expected sample size. Copyright © 2013 Elsevier Inc. All rights reserved.
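
    One common mechanism for response-adaptive randomization of this kind is Thompson sampling on Beta posteriors. The sketch below is a generic illustration with invented response rates, not the trial's actual algorithm or stopping rules.

    ```python
    import random

    def thompson_allocate(successes, failures, rng):
        """Pick an arm by Thompson sampling: draw once from each arm's
        Beta(successes+1, failures+1) posterior and choose the largest draw,
        so allocation probability tracks the posterior chance of being best."""
        draws = [rng.betavariate(s + 1, f + 1)
                 for s, f in zip(successes, failures)]
        return draws.index(max(draws))

    # Hypothetical response rates for three treatment arms.
    p_true = [0.45, 0.55, 0.70]
    rng = random.Random(42)
    succ, fail = [0, 0, 0], [0, 0, 0]
    for _ in range(720):  # same enrollment target as the trial described above
        arm = thompson_allocate(succ, fail, rng)
        if rng.random() < p_true[arm]:
            succ[arm] += 1
        else:
            fail[arm] += 1
    n_per_arm = [s + f for s, f in zip(succ, fail)]
    ```

    As evidence accumulates, allocation drifts toward the best-performing arm, which is how such designs treat more enrolled patients effectively while retaining power.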

  6. Numerical considerations on control of motion of nanoparticles using scattering field of laser light

    NASA Astrophysics Data System (ADS)

    Yokoi, Naomichi; Aizu, Yoshihisa

    2017-05-01

    Most optical manipulation techniques proposed so far depend on carefully fabricated setups and samples. Such conditions can be established in laboratories; however, it remains challenging to manipulate nanoparticles when the environment is not well controlled or is unknown in advance. Nonetheless, coherent light scattered by a rough object generates a speckle pattern, which consists of random interference speckle grains with well-defined statistical properties. In the present study, we numerically investigate the motion of a Brownian particle suspended in water under the illumination of a speckle pattern. Particle-capture time and the size of the particle-capture area are quantitatively estimated in relation to the optical force and the speckle diameter to confirm the feasibility of the present method for optical manipulation tasks such as trapping and guiding.

  7. Model Comparison of Nonlinear Structural Equation Models with Fixed Covariates.

    ERIC Educational Resources Information Center

    Lee, Sik-Yum; Song, Xin-Yuan

    2003-01-01

    Proposed a new nonlinear structural equation model with fixed covariates to deal with some complicated substantive theory and developed a Bayesian path sampling procedure for model comparison. Illustrated the approach with an illustrative example using data from an international study. (SLD)

  8. Measuring β-diversity with species abundance data.

    PubMed

    Barwell, Louise J; Isaac, Nick J B; Kunin, William E

    2015-07-01

    In 2003, 24 presence-absence β-diversity metrics were reviewed and a number of trade-offs and redundancies identified. We present a parallel investigation into the performance of abundance-based metrics of β-diversity. β-diversity is a multi-faceted concept, central to spatial ecology. There are multiple metrics available to quantify it: the choice of metric is an important decision. We test 16 conceptual properties and two sampling properties of a β-diversity metric: metrics should be 1) independent of α-diversity and 2) cumulative along a gradient of species turnover. Similarity should be 3) probabilistic when assemblages are independently and identically distributed. Metrics should have 4) a minimum of zero and increase monotonically with the degree of 5) species turnover, 6) decoupling of species ranks and 7) evenness differences. However, complete species turnover should always generate greater values of β than extreme 8) rank shifts or 9) evenness differences. Metrics should 10) have a fixed upper limit, 11) symmetry (βA,B = βB,A), 12) double-zero asymmetry for double absences and double presences and 13) not decrease in a series of nested assemblages. Additionally, metrics should be independent of 14) species replication, 15) the units of abundance and 16) differences in total abundance between sampling units. When samples are used to infer β-diversity, metrics should be 1) independent of sample sizes and 2) independent of unequal sample sizes. We test 29 metrics for these properties and five 'personality' properties. Thirteen metrics were outperformed or equalled across all conceptual and sampling properties. Differences in sensitivity to species' abundance lead to a performance trade-off between sample size bias and the ability to detect turnover among rare species. In general, abundance-based metrics are substantially less biased in the face of undersampling, although the presence-absence metric, βsim, performed well overall. 
Only βBaselga R-turn, βBaselga B-C-turn and βsim measured purely species turnover and were independent of nestedness. Among the other metrics, sensitivity to nestedness varied >4-fold. Our results indicate large amounts of redundancy among existing β-diversity metrics, whilst the estimation of unseen shared and unshared species is lacking and should be addressed in the design of new abundance-based metrics. © 2015 The Authors. Journal of Animal Ecology published by John Wiley & Sons Ltd on behalf of British Ecological Society.
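
    For reference, βsim (Simpson dissimilarity), the presence-absence metric singled out above, is simple to compute; a sketch with toy assemblages:

    ```python
    def beta_sim(assemblage1, assemblage2):
        """Simpson dissimilarity: beta_sim = min(b, c) / (min(b, c) + a),
        where a = shared species and b, c = species unique to each
        assemblage. It captures turnover while discounting nestedness."""
        s1, s2 = set(assemblage1), set(assemblage2)
        a = len(s1 & s2)
        b, c = len(s1 - s2), len(s2 - s1)
        m = min(b, c)
        return m / (m + a) if (m + a) > 0 else 0.0

    # A nested pair (one assemblage a subset of the other) scores 0,
    # whereas complete turnover scores 1.
    nested = beta_sim({"sp1", "sp2", "sp3"}, {"sp1", "sp2"})
    turnover = beta_sim({"sp1", "sp2"}, {"sp3", "sp4"})
    ```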

  9. Moments of catchment storm area

    NASA Technical Reports Server (NTRS)

    Eagleson, P. S.; Wang, Q.

    1985-01-01

    The portion of a catchment covered by a stationary rainstorm is modeled by the common area of two overlapping circles. Given that rain occurs within the catchment and conditioned by fixed storm and catchment sizes, the first two moments of the distribution of the common area are derived from purely geometrical considerations. The variance of the wetted fraction is shown to peak when the catchment size is equal to the size of the predominant storm. The conditioning on storm size is removed by assuming a probability distribution based upon the observed fractal behavior of cloud and rainstorm areas.
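
    The geometric core of this model, the common (lens) area of two overlapping circles, has a closed form; a sketch with hypothetical radii and separations:

    ```python
    import math

    def circle_overlap_area(r1, r2, d):
        """Common (lens) area of two circles of radii r1 and r2 whose
        centers are a distance d apart -- the storm/catchment overlap model."""
        if d >= r1 + r2:          # disjoint circles: no overlap
            return 0.0
        if d <= abs(r1 - r2):     # one circle entirely inside the other
            return math.pi * min(r1, r2) ** 2
        a1 = r1**2 * math.acos((d**2 + r1**2 - r2**2) / (2 * d * r1))
        a2 = r2**2 * math.acos((d**2 + r2**2 - r1**2) / (2 * d * r2))
        tri = 0.5 * math.sqrt((-d + r1 + r2) * (d + r1 - r2)
                              * (d - r1 + r2) * (d + r1 + r2))
        return a1 + a2 - tri

    full = circle_overlap_area(1.0, 1.0, 0.0)      # complete overlap: area pi
    touching = circle_overlap_area(1.0, 1.0, 2.0)  # just touching: area 0
    ```

    Integrating this area over the distribution of storm position (and, in the paper, storm size) yields the moments of the wetted fraction.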

  10. Imaging Determinants of Clinical Effectiveness of Lumbar Transforaminal Epidural Steroid Injections.

    PubMed

    Maus, Timothy P; El-Yahchouchi, Christine A; Geske, Jennifer R; Carter, Rickey E; Kaufmann, Timothy J; Wald, John T; Diehn, Felix E

    2016-12-01

    To examine associations between imaging characteristics of compressive lesions and patient outcomes after lumbar transforaminal epidural steroid injections (TFESIs) stratified by steroid formulation (solution versus suspension). Retrospective observational study, academic radiology practice. A 516-patient sample was selected from 2,634 consecutive patients receiving lumbar TFESI for radicular pain. The advanced imaging study(s) preceding sampled TFESI were reviewed. Compressive lesions were described by a) nature of the lesion [disc herniation, fixed stenosis, synovial cyst, epidural fibrosis, no lesion] b) degree of neural compression [4 part scale], and c) presence of a tandem lesion. Associations between 2-month categorical outcomes (responder rates for pain, functional recovery) and imaging characteristics, stratified by steroid formulation, were examined with chi-squared tests of categorical outcomes and multivariable logistic regression models. Disc herniation patients had more responders for functional recovery than patients with fixed lesions (54% versus 38%, P = 0.01). Patients with fixed lesions receiving steroid solution (dexamethasone) had more responders for pain relief, with a similar trend for functional recovery, than patients receiving suspensions (59% versus 40%, P = 0.01). Outcomes for patients with fixed lesions treated with dexamethasone were not statistically different from those for disc herniation patients. Patients with single compressive lesions had more responders than those with tandem lesions (55% versus 41%, P = 0.03). In the entire sample, outcomes for disc herniations were more favorable than for fixed lesions. However, fixed lesions treated with dexamethasone had outcomes indistinguishable from disc herniations. Single lesions had better outcomes than tandem lesions. © 2016 American Academy of Pain Medicine. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  11. Analysis of Noise Mechanisms in Cell-Size Control.

    PubMed

    Modi, Saurabh; Vargas-Garcia, Cesar Augusto; Ghusinga, Khem Raj; Singh, Abhyudai

    2017-06-06

    At the single-cell level, noise arises from multiple sources, such as inherent stochasticity of biomolecular processes, random partitioning of resources at division, and fluctuations in cellular growth rates. How these diverse noise mechanisms combine to drive variations in cell size within an isoclonal population is not well understood. Here, we investigate the contributions of different noise sources in well-known paradigms of cell-size control, such as adder (division occurs after adding a fixed size from birth), sizer (division occurs after reaching a size threshold), and timer (division occurs after a fixed time from birth). Analysis reveals that variation in cell size is most sensitive to errors in partitioning of volume among daughter cells, and not surprisingly, this process is well regulated among microbes. Moreover, depending on the dominant noise mechanism, different size-control strategies (or a combination of them) provide efficient buffering of size variations. We further explore mixer models of size control, where a timer phase precedes/follows an adder, as has been proposed in Caulobacter crescentus. Although mixing a timer and an adder can sometimes attenuate size variations, it invariably leads to higher-order moments growing unboundedly over time. This results in a power-law distribution for the cell size, with an exponent that depends inversely on the noise in the timer phase. Consistent with theory, we find evidence of power-law statistics in the tail of C. crescentus cell-size distribution, although there is a discrepancy between the observed power-law exponent and that predicted from the noise parameters. The discrepancy, however, is removed after data reveal that the size added by individual newborns in the adder phase itself exhibits power-law statistics. 
Taken together, this study provides key insights into the role of noise mechanisms in size homeostasis, and suggests an inextricable link between timer-based models of size control and heavy-tailed cell-size distributions. Copyright © 2017 Biophysical Society. Published by Elsevier Inc. All rights reserved.
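
    A minimal simulation of the adder rule discussed above (division after adding a fixed size Δ from birth, here with Gaussian noise on the added size and deterministic halving at division; all parameter values are invented) shows birth sizes homeostatically converging toward Δ:

    ```python
    import random
    import statistics

    def simulate_adder(n_gen=3000, delta=1.0, noise=0.1, seed=3):
        """Adder model of size control: each cell divides after adding a
        (noisy) fixed size delta from birth, and the tracked daughter
        inherits half the division size. Birth sizes relax to a
        stationary distribution centred on delta."""
        rng = random.Random(seed)
        s = 2.0  # start deliberately far from the steady-state birth size
        births = []
        for _ in range(n_gen):
            added = max(0.0, rng.gauss(delta, noise))
            s = 0.5 * (s + added)  # follow one daughter lineage
            births.append(s)
        return births

    births = simulate_adder()
    mean_birth = statistics.mean(births[500:])  # discard the transient
    ```

    The recursion s' = (s + Δ)/2 halves any deviation from Δ each generation, which is the adder's buffering of size variation; partition noise at division (well regulated in microbes, per the study) would enter as noise on the factor 0.5.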

  12. The choice of product indicators in latent variable interaction models: post hoc analyses.

    PubMed

    Foldnes, Njål; Hagtvet, Knut Arne

    2014-09-01

    The unconstrained product indicator (PI) approach is a simple and popular approach for modeling nonlinear effects among latent variables. This approach leaves the practitioner to choose the PIs to be included in the model, introducing arbitrariness into the modeling. In contrast to previous Monte Carlo studies, we evaluated the PI approach by 3 post hoc analyses applied to a real-world case adopted from a research effort in social psychology. The measurement design applied 3 and 4 indicators for the 2 latent 1st-order variables, leaving the researcher with a choice among more than 4,000 possible PI configurations. Sixty so-called matched-pair configurations that have been recommended in previous literature are of special interest. In the 1st post hoc analysis we estimated the interaction effect for all PI configurations, keeping the real-world sample fixed. The estimated interaction effect was substantially affected by the choice of PIs, also across matched-pair configurations. Subsequently, a post hoc Monte Carlo study was conducted, with varying sample sizes and data distributions. Convergence, bias, Type I error and power of the interaction test were investigated for each matched-pair configuration and the all-pairs configuration. Variation in estimates across matched-pair configurations for a typical sample was substantial. The choice of specific configuration significantly affected convergence and the interaction test's outcome. The all-pairs configuration performed overall better than the matched-pair configurations. A further advantage of the all-pairs over the matched-pairs approach is its unambiguity. The final study evaluates the all-pairs configuration for small sample sizes and compares it to the non-PI approach of latent moderated structural equations. PsycINFO Database Record (c) 2014 APA, all rights reserved.
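
    A hedged sketch of the all-pairs configuration favoured above: with 3 indicators for one first-order latent variable and 4 for the other, the all-pairs approach forms every cross-product (12 product indicators), removing the arbitrariness of choosing among matched-pair configurations. The data below are invented.

    ```python
    from itertools import product

    def all_pairs_products(x_indicators, z_indicators):
        """All-pairs product-indicator configuration: one product term for
        every combination of first-order indicators (here 3 x 4 = 12 PIs),
        computed per observation."""
        return [[xi * zi for xi, zi in product(xrow, zrow)]
                for xrow, zrow in zip(x_indicators, z_indicators)]

    # Two hypothetical observations with 3 x-indicators and 4 z-indicators.
    X = [[1.0, 2.0, 3.0], [0.5, 1.0, 1.5]]
    Z = [[1.0, 0.0, 2.0, 1.0], [2.0, 1.0, 0.0, 1.0]]
    PIs = all_pairs_products(X, Z)
    ```

    A matched-pair configuration would instead select a subset of these products, which is exactly the arbitrary choice the post hoc analyses show can sway the estimated interaction effect.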

  13. Bayes factors for testing inequality constrained hypotheses: Issues with prior specification.

    PubMed

    Mulder, Joris

    2014-02-01

    Several issues are discussed when testing inequality constrained hypotheses using a Bayesian approach. First, the complexity (or size) of the inequality constrained parameter spaces can be ignored. This is the case when using the posterior probability that the inequality constraints of a hypothesis hold, Bayes factors based on non-informative improper priors, and partial Bayes factors based on posterior priors. Second, the Bayes factor may not be invariant for linear one-to-one transformations of the data. This can be observed when using balanced priors which are centred on the boundary of the constrained parameter space with a diagonal covariance structure. Third, the information paradox can be observed. When testing inequality constrained hypotheses, the information paradox occurs when the Bayes factor of an inequality constrained hypothesis against its complement converges to a constant as the evidence for the first hypothesis accumulates while keeping the sample size fixed. This paradox occurs when using Zellner's g prior as a result of too much prior shrinkage. Therefore, two new methods are proposed that avoid these issues. First, partial Bayes factors are proposed based on transformed minimal training samples. These training samples result in posterior priors that are centred on the boundary of the constrained parameter space with the same covariance structure as in the sample. Second, a g prior approach is proposed by letting g go to infinity. This is possible because the Jeffreys-Lindley paradox is not an issue when testing inequality constrained hypotheses. A simulation study indicated that the Bayes factor based on this g prior approach converges fastest to the true inequality constrained hypothesis. © 2013 The British Psychological Society.
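    The complexity issue the abstract opens with has a compact statement under encompassing priors (a standard result in this literature, not a formula taken verbatim from the paper): the Bayes factor of an inequality-constrained hypothesis $H_1: \theta \in \Theta_1$ against the unconstrained $H_u$ is the ratio of posterior to prior mass on the constrained region,

```latex
BF_{1u} \;=\; \frac{P(\theta \in \Theta_1 \mid y, H_u)}{P(\theta \in \Theta_1 \mid H_u)} \;=\; \frac{f_1}{c_1},
```

    so any method that reports only the posterior probability $f_1$ that the constraints hold ignores the prior mass $c_1$, i.e., the relative size of the constrained parameter space.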

  14. Surface-water-quality assessment of the upper Illinois River Basin in Illinois, Indiana, and Wisconsin; fixed-station network and selected water-quality data for April 1987-September 1990

    USGS Publications Warehouse

    Sullivan, Daniel J.; Blanchard, Stephen F.

    1994-01-01

    This report describes and presents the sampling design, methods, quality assurance methods and results, and information on how to obtain data collected at eight fixed stations in the upper Illinois River Basin as part of the pilot phase of the National Water-Quality Assessment program. Data were collected monthly from April 1987-August 1990; these data were supplemented with data collected during special events, including high and low flows. Each fixed station represents a cross section at which the transport of selected dissolved and suspended materials can be computed. Samples collected monthly and during special events were analyzed for concentrations of major ions, nutrients, trace elements, organic carbon, chlorophyll-a, suspended sediment, and other constituents. Field measurements of water temperature, pH, dissolved oxygen, specific conductance, and indicator bacteria also were made at each site. Samples of suspended sediment were analyzed for concentrations of major ions and trace elements. In addition, samples were analyzed seasonally for concentrations of antimony, bromide, molybdenum, and the radionuclides gross alpha and gross beta.

  15. Isolation of exosomes by differential centrifugation: Theoretical analysis of a commonly used protocol

    NASA Astrophysics Data System (ADS)

    Livshits, Mikhail A.; Khomyakova, Elena; Evtushenko, Evgeniy G.; Lazarev, Vassili N.; Kulemin, Nikolay A.; Semina, Svetlana E.; Generozov, Edward V.; Govorun, Vadim M.

    2015-11-01

    Exosomes, small (40-100 nm) extracellular membranous vesicles, attract enormous research interest because they are carriers of disease markers and a prospective delivery system for therapeutic agents. Differential centrifugation, the prevalent method of exosome isolation, frequently produces dissimilar and improper results because of the faulty practice of using a common centrifugation protocol with different rotors. Moreover, as recommended by suppliers, adjusting the centrifugation duration according to rotor K-factors does not work for “fixed-angle” rotors. For both types of rotors - “swinging bucket” and “fixed-angle” - we express the theoretically expected proportion of pelleted vesicles of a given size and the “cut-off” size of completely sedimented vesicles as dependent on the centrifugation force and duration and the sedimentation path-lengths. The proper centrifugation conditions can be selected using relatively simple theoretical estimates of the “cut-off” sizes of vesicles. Experimental verification on exosomes isolated from HT29 cell culture supernatant confirmed the main theoretical statements. Measured by the nanoparticle tracking analysis (NTA) technique, the concentration and size distribution of the vesicles after centrifugation agree with those theoretically expected. To simplify this “cut-off”-size-based adjustment of centrifugation protocol for any rotor, we developed a web-calculator.
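    The "cut-off" size the authors compute can be sketched from Stokes sedimentation for a swinging-bucket rotor. The density difference and viscosity below are illustrative assumptions, not values from the paper:

```python
import math

def cutoff_diameter_nm(rpm, t_min, r_min_cm, r_max_cm,
                       delta_rho=100.0, eta=1.0e-3):
    """Smallest vesicle diameter (nm) fully pelleted in time t:
    a particle starting at the meniscus (r_min) must reach the
    tube bottom (r_max). Assumes Stokes drag in a swinging-bucket
    rotor; delta_rho (kg/m^3) and eta (Pa*s) are assumed values."""
    omega = 2.0 * math.pi * rpm / 60.0      # angular velocity, rad/s
    t = t_min * 60.0                        # duration, s
    # Sedimentation time: t = 18*eta*ln(r_max/r_min) / (d^2 * delta_rho * omega^2)
    d = math.sqrt(18.0 * eta * math.log(r_max_cm / r_min_cm)
                  / (delta_rho * omega**2 * t))
    return d * 1e9
```

    For example, 70 min at 30,000 rpm over a 4.5-10.5 cm sedimentation path gives a cut-off of roughly 60 nm under these assumptions, and doubling the duration lowers the cut-off by a factor of sqrt(2).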

  16. Improved repeatability of nasal potential difference with a larger surface catheter.

    PubMed

    Vermeulen, François; Proesmans, Marijke; Boon, Mieke; De Boeck, Kris

    2015-05-01

    To increase the power of nasal potential difference (NPD) as a biomarker of CFTR function, improvement of its repeatability is needed. We evaluated the improvement in repeatability resulting from measuring NPD (1) over a larger surface area and (2) at a fixed location. To assess repeatability, NPD was measured on two occasions with a new method using a larger surface catheter at fixed locations on the nasal floor (LSC-floor(5cm) and LSC-floor(3cm)) or at the most negative basal potential (LSC-floor(max)); with a sidehole catheter on the nasal floor at 5 cm from the nasal margin (SHC-floor(5cm)) or at the most negative potential (SHC-floor(max)); and with an endhole catheter below the inferior surface of the lower turbinate at the most negative potential (EHC-turb(max)). The within-subject standard deviation (Sw) for repeated measurements of the total chloride response in the controls was smallest with the LSC-floor at a fixed location (LSC-floor(5cm) 3.1 mV; 95% CI 2.3-4.6 mV) and highest with the SHC-floor (SHC-floor(max) 14.6 mV; 95% CI 10.9-22.2 mV) or the EHC-turbinate (EHC-turb(max) 12.5 mV; 95% CI 10.7-23.0 mV) at the most negative basal potential. Measuring with the LSC-floor at the maximal potential increased the Sw (LSC-floor(max) 8.8 mV, 95% CI 6.0-16.1 mV, p=0.009 vs LSC-floor(5cm)), while measuring with the SHC-floor at a fixed location slightly decreased the Sw (SHC-floor(5cm) 9.8 mV, 95% CI 8.9-20.6 mV, p=0.06 vs SHC-floor(max)). In patients with cystic fibrosis, the Sw was comparable, between 2.2 mV and 4.3 mV. Sample size calculations for trials using NPD to assess changes in ion transport showed that the number of subjects to be included could be approximately halved by measuring with the larger surface catheter at a fixed location vs SHC or EHC at fixed locations. Measuring the NPD at a fixed location and over a larger surface resulted in increased repeatability and thereby also power as a biomarker of CFTR modulation.
Copyright © 2014 European Cystic Fibrosis Society. Published by Elsevier B.V. All rights reserved.
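    The link between the within-subject SD and trial size can be sketched with a standard normal-approximation two-group calculation. Treating Sw as the outcome SD and using a 5 mV effect are illustrative assumptions, not the paper's computation:

```python
import math
from statistics import NormalDist

def n_per_group(sd, delta, alpha=0.05, power=0.8):
    """Subjects per group to detect a mean difference `delta`
    with outcome SD `sd` (two-sided z-approximation)."""
    z_a = NormalDist().inv_cdf(1.0 - alpha / 2.0)
    z_b = NormalDist().inv_cdf(power)
    return math.ceil(2.0 * ((z_a + z_b) * sd / delta) ** 2)
```

    With a hypothetical 5 mV treatment effect, Sw = 3.1 mV needs 7 subjects per group versus 49 for Sw = 8.8 mV, illustrating how repeatability drives power.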

  17. Effects of tissue fixation and dehydration on tendon collagen nanostructure.

    PubMed

    Turunen, Mikael J; Khayyeri, Hanifeh; Guizar-Sicairos, Manuel; Isaksson, Hanna

    2017-09-01

    Collagen is the most prominent protein in biological tissues. Tissue fixation is often required for preservation or sectioning of the tissue. This may affect collagen nanostructure and potentially provide incorrect information when analyzed after fixation. We aimed to unravel the effect of 1) ethanol and formalin fixation and 2) 24h air-dehydration on the organization and structure of collagen fibers at the nano-scale using small and wide angle X-ray scattering. Samples were divided into 4 groups: ethanol fixed, formalin fixed, and two untreated sample groups. Samples were allowed to air-dehydrate in handmade Kapton pockets during the measurements (24h) except for one untreated group. Ethanol fixation affected the collagen organization and nanostructure substantially and during 24h of dehydration dramatic changes were evident. Formalin fixation had minor effects on the collagen organization but after 12h of air-dehydration the spatial variation increased substantially, not evident in the untreated samples. Generally, collagen shrinkage and loss of alignment was evident in all samples during 24h of dehydration but the changes were subtle in all groups except the ethanol fixed samples. This study shows that tissue fixation needs to be chosen carefully in order to preserve the features of interest in the tissue. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. Sources of variation in detection of wading birds from aerial surveys in the Florida Everglades

    USGS Publications Warehouse

    Conroy, M.J.; Peterson, J.T.; Bass, O.L.; Fonnesbeck, C.J.; Howell, J.E.; Moore, C.T.; Runge, J.P.

    2008-01-01

    We conducted dual-observer trials to estimate detection probabilities (probability that a group that is present and available is detected) for fixed-wing aerial surveys of wading birds in the Everglades system, Florida. Detection probability ranged from <0.2 to approximately 0.75 and varied according to species, group size, observer, and the observer's position in the aircraft (front or rear seat). Aerial-survey simulations indicated that incomplete detection can have a substantial effect on assessment of population trends, particularly over relatively short intervals (<= 3 years) and small annual changes in population size (<= 3%). We conclude that detection bias is an important consideration for interpreting observations from aerial surveys of wading birds, potentially limiting the use of these data for comparative purposes and trend analyses. We recommend that workers conducting aerial surveys for wading birds endeavor to reduce observer and other controllable sources of detection bias and account for uncontrollable sources through incorporation of dual-observer or other calibration methods as part of survey design (e.g., using double sampling).

  19. The SAMI Galaxy Survey: can we trust aperture corrections to predict star formation?

    NASA Astrophysics Data System (ADS)

    Richards, S. N.; Bryant, J. J.; Croom, S. M.; Hopkins, A. M.; Schaefer, A. L.; Bland-Hawthorn, J.; Allen, J. T.; Brough, S.; Cecil, G.; Cortese, L.; Fogarty, L. M. R.; Gunawardhana, M. L. P.; Goodwin, M.; Green, A. W.; Ho, I.-T.; Kewley, L. J.; Konstantopoulos, I. S.; Lawrence, J. S.; Lorente, N. P. F.; Medling, A. M.; Owers, M. S.; Sharp, R.; Sweet, S. M.; Taylor, E. N.

    2016-01-01

    In the low-redshift Universe (z < 0.3), our view of galaxy evolution is primarily based on fibre optic spectroscopy surveys. Elaborate methods have been developed to address aperture effects when fixed aperture sizes only probe the inner regions for galaxies of ever decreasing redshift or increasing physical size. These aperture corrections rely on assumptions about the physical properties of galaxies. The adequacy of these aperture corrections can be tested with integral-field spectroscopic data. We use integral-field spectra drawn from 1212 galaxies observed as part of the SAMI Galaxy Survey to investigate the validity of two aperture correction methods that attempt to estimate a galaxy's total instantaneous star formation rate. We show that biases arise when assuming that instantaneous star formation is traced by broad-band imaging, and when the aperture correction is built only from spectra of the nuclear region of galaxies. These biases may be significant depending on the selection criteria of a survey sample. Understanding the sensitivities of these aperture corrections is essential for correct handling of systematic errors in galaxy evolution studies.

  20. Image Discrimination Models With Stochastic Channel Selection

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.; Beard, Bettina L.; Null, Cynthia H. (Technical Monitor)

    1995-01-01

    Many models of human image processing feature a large fixed number of channels representing cortical units varying in spatial position (visual field direction and eccentricity) and spatial frequency (radial frequency and orientation). The values of these parameters are usually sampled at fixed values selected to ensure adequate overlap considering the bandwidth and/or spread parameters, which are usually fixed. Even high levels of overlap do not always ensure that the performance of the model will vary smoothly with image translation or scale changes. Physiological measurements of bandwidth and/or spread parameters result in a broad distribution of estimated parameter values, and the prediction of some psychophysical results is facilitated by the assumption that these parameters also take on a range of values. Selecting a sample of channels from a continuum of channels rather than using a fixed set can make model performance vary smoothly with changes in image position, scale, and orientation. It also facilitates the addition of spatial inhomogeneity, nonlinear feature channels, and focus of attention to channel models.

  1. Optimal sampling and quantization of synthetic aperture radar signals

    NASA Technical Reports Server (NTRS)

    Wu, C.

    1978-01-01

    Some theoretical and experimental results on optimal sampling and quantization of synthetic aperture radar (SAR) signals are presented, including a derived theoretical relationship between the pixel signal-to-noise ratio of processed SAR images and the number of quantization bits per sampled signal, assuming homogeneous extended targets. With this relationship known, the problem of optimally allocating a fixed data bit-volume (for a specified surface area and resolution criterion) between the number of samples and the number of bits per sample can be solved. The results indicate that to achieve the best possible image quality for a fixed bit rate and a given resolution criterion, one should quantize individual samples coarsely and thereby maximize the number of multiple looks. The theoretical results are then compared with simulation results obtained by processing aircraft SAR data.
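    The sample-versus-bits trade-off can be illustrated with a toy model (not the paper's derivation): for a fixed bit budget, coarser quantization buys more independent looks, which average down speckle, while quantization noise falls exponentially with bits per sample. The normalizations below are assumptions for illustration only:

```python
import math

def image_quality_db(total_bits, bits_per_sample, signal_var=1.0):
    """Toy radiometric-quality model under a fixed bit budget.
    Speckle variance shrinks as 1/looks (multilook averaging);
    quantization noise shrinks as 2^(-2b) (~6 dB per bit)."""
    looks = total_bits // bits_per_sample
    speckle_var = signal_var / looks
    quant_var = signal_var / 2.0 ** (2 * bits_per_sample)
    return 10.0 * math.log10(signal_var / (speckle_var + quant_var))
```

    For a hypothetical 64-bit budget this toy model prefers fewer bits and more looks (e.g. 4 bits/sample beats 8), echoing the paper's conclusion that coarse quantization with more looks gives better image quality.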

  2. Applying the J-optimal channelized quadratic observer to SPECT myocardial perfusion defect detection

    NASA Astrophysics Data System (ADS)

    Kupinski, Meredith K.; Clarkson, Eric; Ghaly, Michael; Frey, Eric C.

    2016-03-01

    To evaluate performance on a perfusion defect detection task from 540 image pairs of myocardial perfusion SPECT image data, we apply the J-optimal channelized quadratic observer (J-CQO). We compare AUC values of the linear Hotelling observer and J-CQO when the defect location is fixed and when it occurs in one of two locations. As expected, when the location is fixed a single channel maximizes the AUC; location variability requires multiple channels to maximize the AUC. The AUC is estimated from both the projection data and reconstructed images. J-CQO is quadratic since it uses the first- and second-order statistics of the image data from both classes. The linear data reduction by the channels is described by an L x M channel matrix, and in prior work we introduced an iterative gradient-based method for calculating the channel matrix. The dimensionality reduction from M measurements to L channels yields better estimates of these sample statistics from smaller sample sizes, and since the channelized covariance matrix is L x L instead of M x M, the matrix inverse is easier to compute. The novelty of our approach is the use of Jeffrey's divergence (J) as the figure of merit (FOM) for optimizing the channel matrix. We previously showed that the J-optimal channels are also the optimum channels for the AUC and the Bhattacharyya distance when the channel outputs are Gaussian distributed with equal means. This work evaluates the use of J as a surrogate FOM (SFOM) for AUC when these statistical conditions are not satisfied.
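    Jeffrey's divergence, the figure of merit used here, has a closed form for two multivariate Gaussian classes (the symmetrized KL divergence). A minimal sketch of that standard formula, not code from the paper:

```python
import numpy as np

def jeffreys_divergence(m0, S0, m1, S1):
    """J = KL(p0||p1) + KL(p1||p0) for Gaussians N(m0,S0), N(m1,S1)."""
    def kl(ma, Sa, mb, Sb):
        k = len(ma)
        Sb_inv = np.linalg.inv(Sb)
        diff = mb - ma
        return 0.5 * (np.trace(Sb_inv @ Sa) + diff @ Sb_inv @ diff
                      - k + np.log(np.linalg.det(Sb) / np.linalg.det(Sa)))
    return kl(m0, S0, m1, S1) + kl(m1, S1, m0, S0)
```

    In a channelized observer these would be the L-dimensional channel-output means and covariances, so the L x L inverses and determinants stay cheap.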

  3. Spatial cluster analysis of nanoscopically mapped serotonin receptors for classification of fixed brain tissue

    NASA Astrophysics Data System (ADS)

    Sams, Michael; Silye, Rene; Göhring, Janett; Muresan, Leila; Schilcher, Kurt; Jacak, Jaroslaw

    2014-01-01

    We present a cluster spatial analysis method using nanoscopic dSTORM images to determine changes in protein cluster distributions within brain tissue. Such methods are suitable to investigate human brain tissue and will help to achieve a deeper understanding of brain disease along with aiding drug development. Human brain tissue samples are usually treated postmortem via standard fixation protocols, which are established in clinical laboratories. Therefore, our localization microscopy-based method was adapted to characterize protein density and protein cluster localization in samples fixed using different protocols followed by common fluorescent immunohistochemistry techniques. The localization microscopy allows nanoscopic mapping of serotonin 5-HT1A receptor groups within a two-dimensional image of a brain tissue slice. These nanoscopically mapped proteins can be confined to clusters by applying the proposed statistical spatial analysis. Selected features of such clusters were subsequently used to characterize and classify the tissue. Samples were obtained from different types of patients, fixed with different preparation methods, and finally stored in a human tissue bank. To verify the proposed method, samples of a cryopreserved healthy brain have been compared with epitope-retrieved and paraffin-fixed tissues. Furthermore, samples of healthy brain tissues were compared with data obtained from patients suffering from mental illnesses (e.g., major depressive disorder). Our work demonstrates the applicability of localization microscopy and image analysis methods for comparison and classification of human brain tissues at a nanoscopic level. Furthermore, the presented workflow marks a unique technological advance in the characterization of protein distributions in brain tissue sections.

  4. Scaling in the vicinity of the four-state Potts fixed point

    NASA Astrophysics Data System (ADS)

    Blöte, H. W. J.; Guo, Wenan; Nightingale, M. P.

    2017-08-01

    We study a self-dual generalization of the Baxter-Wu model, employing results obtained by transfer matrix calculations of the magnetic scaling dimension and the free energy. While the pure critical Baxter-Wu model displays the critical behavior of the four-state Potts fixed point in two dimensions, in the sense that logarithmic corrections are absent, the introduction of different couplings in the up- and down triangles moves the model away from this fixed point, so that logarithmic corrections appear. Real couplings move the model into the first-order range, away from the behavior displayed by the nearest-neighbor, four-state Potts model. We also use complex couplings, which bring the model in the opposite direction characterized by the same type of logarithmic corrections as present in the four-state Potts model. Our finite-size analysis confirms in detail the existing renormalization theory describing the immediate vicinity of the four-state Potts fixed point.

  5. Mark-resight superpopulation estimation of a wintering elk Cervus elaphus canadensis herd

    USGS Publications Warehouse

    Gould, W.R.; Smallidge, S.T.; Thompson, B.C.

    2005-01-01

    We executed four mark-resight helicopter surveys during the winter months January-February for each of the three years 1999-2001 at 7-10 day intervals to estimate population size of a wintering elk Cervus elaphus canadensis herd in northern New Mexico. We counted numbers of radio-collared and uncollared elk on a simple random sample of quadrats from the study area. Because we were unable to survey the entire study area, we adopted a superpopulation approach to estimating population size, in which the total number of collared animals within and proximate to the entire study area was determined from an independent fixed-wing aircraft. The total number of collared animals available on the quadrats surveyed was also determined and facilitated detectability estimation. We executed superpopulation estimation via the joint hypergeometric estimator using the ratio of marked elk counted to the known number extant as an estimate of effective detectability. Superpopulation size estimates were approximately four times larger than previously suspected in the vicinity of the study area. Despite consistent survey methodology, actual detection rates varied within winter periods, indicating that multiple resight flights are important for improved estimator performance. Variable detectability also suggests that reliance on mere counts of observed individuals in our area may not accurately reflect abundance. ?? Wildlife Biology (2005).
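    The detectability logic the authors describe can be sketched for a single survey (the paper's joint hypergeometric estimator pools multiple surveys via a likelihood, so this is only the core ratio idea):

```python
def superpopulation_estimate(marked_seen, marked_total, total_seen):
    """Scale the raw count by the effective detectability estimated
    from the known number of collared (marked) animals."""
    p_hat = marked_seen / marked_total   # fraction of marked animals resighted
    return total_seen / p_hat            # estimated population size
```

    For example, resighting 20 of 40 collared elk while counting 300 animals in total implies a detectability of 0.5 and an estimate of 600 elk.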

  6. Sound absorption characteristics of aluminum foam with spherical cells

    NASA Astrophysics Data System (ADS)

    Li, Yunjie; Wang, Xinfu; Wang, Xingfu; Ren, Yuelu; Han, Fusheng; Wen, Cuie

    2011-12-01

    Aluminum foams were fabricated by an infiltration process. The foams possess spherical cells with a fixed porosity of 65% and varied pore sizes ranging from 1.3 to 1.9 mm. The spherical cells are interconnected by small pores or pore openings on the cell walls, which cause the foams to behave as open-cell structures. The sound absorption coefficient of the aluminum foams was measured by a standing wave tube and calculated by a transfer function method. It is shown that the sound absorption coefficient increases with an increase in the number of pore openings per unit area or with a decrease of the diameter of the pore openings in the range of 0.3 to 0.4 mm. If backed with an air cavity, the resonant absorption peaks in the sound absorption coefficient versus frequency curves are shifted toward lower frequencies as the cavity depth is increased. The samples with the same pore opening size but different pore size show almost the same absorption behavior, especially in the low frequency range. The present results are in good agreement with some theoretical predictions based on the acoustic impedance measurements of metal foams with circular apertures and cylindrical cavities and the principle of electroacoustic analogy.
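    The cavity-backed resonant absorption described here is commonly modeled as a Helmholtz resonator: a pore opening (neck) backed by an air volume. A textbook sketch, with the end-correction factor as an assumption; this is not the paper's impedance model:

```python
import math

def helmholtz_resonance_hz(neck_area_m2, cavity_vol_m3, neck_len_m, c=343.0):
    """Resonant frequency f0 = (c / 2*pi) * sqrt(S / (V * L_eff)),
    with an assumed end correction L_eff = L + 1.7 r."""
    r = math.sqrt(neck_area_m2 / math.pi)   # neck radius from its area
    l_eff = neck_len_m + 1.7 * r            # end correction (assumption)
    return (c / (2.0 * math.pi)) * math.sqrt(
        neck_area_m2 / (cavity_vol_m3 * l_eff))
```

    A deeper backing cavity (larger V) lowers f0, consistent with the reported shift of absorption peaks toward lower frequencies.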

  8. [Fractal features of soil particle size in the process of desertification in desert grassland of Ningxia, China].

    PubMed

    Yan, Xin; An, Hui

    2017-10-01

    The variation of soil properties, the fractal dimension of soil particle size, and the relationships between the fractal dimension of soil particle size and soil properties in the process of desertification in desert grassland of Ningxia were discussed. The results showed that the fractal dimension (D) at different desertification stages in desert grassland varied greatly, with values of D between 1.69 and 2.62. Except for the 10-20 cm soil layer, the value of D gradually declined with increasing desertification of desert grassland in the 0-30 cm soil layer. In the process of desertification in desert grassland, the grassland had the highest values of D and of the volume percentage of clay and silt, and the lowest values of the volume percentage of very fine sand and fine sand. Conversely, the mobile dunes had the lowest values of D and of the volume percentage of clay and silt, and the highest values of the volume percentage of very fine sand and fine sand. There was a significant positive correlation between the soil fractal dimension and the volume percentage of soil particles <50 μm, and a significant negative correlation between the soil fractal dimension and the volume percentage of soil particles >50 μm. The grain size of 50 μm was thus the critical value deciding the relationship between the soil particle fractal dimension and the volume percentage. Soil organic matter (SOM) and total nitrogen (TN) decreased gradually with increasing desertification of desert grassland, whereas soil bulk density increased gradually. The qualitative change from fixed dunes to semi-fixed dunes was marked by a rapid decrease of the volume percentage of clay and silt, SOM, and TN, and a rapid increase of the volume percentage of very fine sand and fine sand and of soil bulk density. The fractal dimension was significantly correlated with SOM, TN, and soil bulk density. A fractal dimension of 2.58 was the critical value separating fixed dunes from semi-fixed dunes, so the fractal dimension of 2.58 could be taken as a desertification indicator for desert grassland.
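    One common way to obtain such a particle-size fractal dimension is the Tyler-Wheatcraft mass relation V(<d)/V_T = (d/d_max)^(3-D), estimated as a log-log slope. A minimal sketch with hypothetical inputs, not the authors' data or exact procedure:

```python
import math

def fractal_dimension(sizes_um, cum_vol_frac):
    """Least-squares slope of log(cumulative volume fraction)
    vs log(particle size); D = 3 - slope."""
    xs = [math.log(d) for d in sizes_um]
    ys = [math.log(v) for v in cum_vol_frac]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return 3.0 - slope
```

    Finer-skewed distributions (more cumulative volume at small sizes) give steeper slopes and thus smaller D, matching the reported drop in D from grassland to mobile dunes.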

  9. Rapid assessment of pulmonary gas transport with hyperpolarized 129Xe MRI using a 3D radial double golden-means acquisition with variable flip angles.

    PubMed

    Ruppert, Kai; Amzajerdian, Faraz; Hamedani, Hooman; Xin, Yi; Loza, Luis; Achekzai, Tahmina; Duncan, Ian F; Profka, Harrilla; Siddiqui, Sarmad; Pourfathi, Mehrdad; Cereda, Maurizio F; Kadlecek, Stephen; Rizi, Rahim R

    2018-04-22

    To demonstrate the feasibility of using a 3D radial double golden-means acquisition with variable flip angles to monitor pulmonary gas transport in a single breath hold with hyperpolarized xenon-129 MRI. Hyperpolarized xenon-129 MRI scans with interleaved gas-phase and dissolved-phase excitations were performed using a 3D radial double golden-means acquisition in mechanically ventilated rabbits. The flip angle was either held fixed at 15° or 5°, or it was varied linearly in ascending or descending order between 5° and 15° over a sampling interval of 1000 spokes. Dissolved-phase and gas-phase images were reconstructed at high resolution (32 × 32 × 32 matrix size) using all 1000 spokes, or at low resolution (22 × 22 × 22 matrix size) using 400 spokes at a time in a sliding-window fashion. Based on these sliding-window images, relative change maps were obtained using the highest mean flip angle as the reference, and aggregated pixel-based changes were tracked. Although the signal intensities in the dissolved-phase maps were mostly constant in the fixed flip-angle acquisitions, they varied significantly as a function of average flip angle in the variable flip-angle acquisitions. The latter trend reflects the underlying changes in the observed dissolved-phase magnetization distribution due to pulmonary gas uptake and transport. 3D radial double golden-means acquisitions with variable flip angles provide a robust means for rapidly assessing lung function during a single breath hold, thereby constituting a particularly valuable tool for imaging uncooperative or pediatric patient populations. © 2018 International Society for Magnetic Resonance in Medicine.

  10. The Constant Average Relationship Between Dust-obscured Star Formation and Stellar Mass from z=0 to z=2.5

    NASA Astrophysics Data System (ADS)

    Whitaker, Katherine E.; Pope, Alexandra; Cybulski, Ryan; Casey, Caitlin M.; Popping, Gergo; Yun, Min; 3D-HST Collaboration

    2018-01-01

    The total star formation budget of galaxies consists of the sum of the unobscured star formation, as observed in the rest-frame ultraviolet (UV), together with the obscured component that is absorbed and re-radiated by dust grains in the infrared. We explore how the fraction of obscured star formation depends on star formation rate (SFR) and stellar mass for mass-complete samples of galaxies at 0 < z < 2.5. We combine GALEX and WISE photometry for SDSS-selected galaxies with the 3D-HST treasury program and Spitzer/MIPS 24μm photometry in the five well-studied extragalactic CANDELS fields. We find a strong dependence of the fraction of obscured star formation (f_obscured = SFR_IR/SFR_(UV+IR)) on stellar mass, with remarkably little evolution in this fraction with redshift out to z=2.5. 50% of star formation is obscured for galaxies with log(M/M⊙)=9.4; although unobscured star formation dominates the budget at lower masses, there exists a tail of low-mass, extremely obscured star-forming galaxies at z > 1. For log(M/M⊙)>10.5, >90% of star formation is obscured at all redshifts. We also show that at fixed total SFR, f_obscured is lower at higher redshift. At fixed mass, high-redshift galaxies are observed to have more compact sizes and much higher star formation rates, gas fractions, and hence surface densities (implying higher dust obscuration), yet we observe no redshift evolution in f_obscured with stellar mass. This poses a challenge for theoretical models to reproduce, where the observed compact sizes at high redshift seem in tension with lower dust obscuration.
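    The obscured fraction used throughout is simple arithmetic on the two SFR components; a one-line restatement:

```python
def f_obscured(sfr_ir, sfr_uv):
    """Fraction of total star formation that is dust-obscured:
    f = SFR_IR / (SFR_UV + SFR_IR)."""
    return sfr_ir / (sfr_uv + sfr_ir)
```

    For example, a galaxy with SFR_IR = 9 and SFR_UV = 1 (in the same units) is 90% obscured, the level the paper reports above log(M/M⊙) > 10.5.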

  11. The Constant Average Relationship between Dust-obscured Star Formation and Stellar Mass from z = 0 to z = 2.5

    NASA Astrophysics Data System (ADS)

    Whitaker, Katherine E.; Pope, Alexandra; Cybulski, Ryan; Casey, Caitlin M.; Popping, Gergö; Yun, Min S.

    2017-12-01

    The total star formation budget of galaxies consists of the sum of the unobscured star formation, as observed in the rest-frame ultraviolet (UV), together with the obscured component that is absorbed and re-radiated by dust grains in the infrared. We explore how the fraction of obscured star formation depends on stellar mass for mass-complete samples of galaxies at 0 < z < 2.5. We combine GALEX and WISE photometry for SDSS-selected galaxies with the 3D-HST treasury program and Spitzer/MIPS 24 μm photometry in the five well-studied extragalactic Cosmic Assembly Near-IR Deep Extragalactic Legacy Survey (CANDELS) fields. We find a strong dependence of the fraction of obscured star formation (f_obscured = SFR_IR/SFR_(UV+IR)) on stellar mass, with remarkably little evolution in this fraction with redshift out to z = 2.5. 50% of star formation is obscured for galaxies with log(M/M⊙) = 9.4; although unobscured star formation dominates the budget at lower masses, there exists a tail of low-mass, extremely obscured star-forming galaxies at z > 1. For log(M/M⊙) > 10.5, >90% of star formation is obscured at all redshifts. We also show that at fixed total SFR, f_obscured is lower at higher redshift. At fixed mass, high-redshift galaxies are observed to have more compact sizes and much higher star formation rates, gas fractions, and hence surface densities (implying higher dust obscuration), yet we observe no redshift evolution in f_obscured with stellar mass. This poses a challenge to theoretical models, where the observed compact sizes at high redshift seem in tension with lower dust obscuration.

  12. Bolt and nut evaluator

    NASA Technical Reports Server (NTRS)

    Kerley, James J. (Inventor); Burkhardt, Raymond (Inventor); White, Steven (Inventor)

    1994-01-01

    A device for testing fasteners such as nuts and bolts is described which consists of a fixed base plate having a number of threaded and unthreaded holes of varying size for receiving the fasteners to be tested, a torque marking paper taped on top of the fixed base plate for marking torque-angle indicia, a torque wrench for applying torque to the fasteners being tested, and an indicator for showing the torque applied to the fastener. These elements provide a low-cost, nondestructive device for verifying the strength of bolts and nuts.

  13. How Long Can Stool Samples Be Fixed for an Accurate Diagnosis of Soil-Transmitted Helminth Infection Using Mini-FLOTAC?

    PubMed Central

    Barda, Beatrice; Albonico, Marco; Ianniello, Davide; Ame, Shaali M.; Keiser, Jennifer; Speich, Benjamin; Rinaldi, Laura; Cringoli, Giuseppe; Burioni, Roberto; Montresor, Antonio; Utzinger, Jürg

    2015-01-01

    Background: Kato-Katz is a widely used method for the diagnosis of soil-transmitted helminth infection. Fecal samples cannot be preserved, and hence, should be processed on the day of collection and examined under a microscope within 60 min of slide preparation. Mini-FLOTAC is a technique that allows examining fixed fecal samples. We assessed the performance of Mini-FLOTAC using formalin-fixed stool samples compared to Kato-Katz and determined the dynamics of prevalence and intensity estimates of soil-transmitted helminth infection over a 31-day time period. Methodology: The study was carried out in late 2013 on Pemba Island, Tanzania. Forty-one children were enrolled and stool samples were subjected on the day of collection to a single Kato-Katz thick smear and Mini-FLOTAC examination; 12 aliquots of stool were fixed in 5% formalin and subsequently examined by Mini-FLOTAC up to 31 days after collection. Principal Findings: The combined results from Kato-Katz and Mini-FLOTAC revealed that 100% of children were positive for Trichuris trichiura, 85% for Ascaris lumbricoides, and 54% for hookworm. Kato-Katz and Mini-FLOTAC techniques found similar prevalence estimates for A. lumbricoides (85% versus 76%), T. trichiura (98% versus 100%), and hookworm (42% versus 51%). The mean eggs per gram of stool (EPG) according to Kato-Katz and Mini-FLOTAC was 12,075 and 11,679 for A. lumbricoides, 1,074 and 1,592 for T. trichiura, and 255 and 220 for hookworm, respectively. The mean EPG from day 1 to 31 of fixation was stable for A. lumbricoides and T. trichiura, but gradually declined for hookworm, starting at day 15. Conclusions/Significance: The findings of our study suggest that for a qualitative diagnosis of soil-transmitted helminth infection, stool samples can be fixed in 5% formalin for at least 30 days. However, for an accurate quantitative diagnosis of hookworm, we suggest a limit of 15 days of preservation. 
Our results have direct implications for integrating soil-transmitted helminthiasis into transmission assessment surveys for lymphatic filariasis. PMID:25848772

  14. Mixed quantum/classical theory of rotationally and vibrationally inelastic scattering in space-fixed and body-fixed reference frames

    NASA Astrophysics Data System (ADS)

    Semenov, Alexander; Babikov, Dmitri

    2013-11-01

    We formulated a mixed quantum/classical theory for rotationally and vibrationally inelastic scattering processes in the diatomic molecule + atom system. Two versions of the theory are presented, the first in the space-fixed and the second in the body-fixed reference frame. The first version is easy to derive and the resultant equations of motion are transparent, but the state-to-state transition matrix is complex-valued and dense. Such calculations may be computationally demanding for heavier molecules and/or higher temperatures, when the number of accessible channels becomes large. In contrast, the second version of the theory requires some tedious derivations and the final equations of motion are rather complicated (not particularly intuitive). However, the state-to-state transitions are driven by real-valued sparse matrices of much smaller size. Thus, this formulation is the method of choice from the computational point of view, while the space-fixed formulation can serve as a test of the body-fixed equations of motion and the code. Rigorous numerical tests were carried out for a model system to ensure that all equations, matrices, and computer codes in both formulations are correct.

  15. Dose Rationalization of Pembrolizumab and Nivolumab Using Pharmacokinetic Modeling and Simulation and Cost Analysis.

    PubMed

    Ogungbenro, Kayode; Patel, Alkesh; Duncombe, Robert; Nuttall, Richard; Clark, James; Lorigan, Paul

    2018-04-01

    Pembrolizumab and nivolumab are highly selective anti-programmed cell death 1 (PD-1) antibodies approved for the treatment of advanced malignancies. Variable exposure and significant wastage have been associated with body-size dosing of monoclonal antibodies (mAbs). The following dosing strategies were evaluated using simulations: body weight, dose banding, fixed dose, and pharmacokinetic (PK)-based methods. The costs relative to body-weight dosing for band, fixed 150 mg, fixed 200 mg, and PK-derived strategies were -15%, -25%, +7%, and -16% for pembrolizumab, and -8%, -6%, and -10% for band, fixed, and PK-derived strategies for nivolumab, respectively. Relative to mg/kg doses, the median exposures were -1.0%, -4.6%, +27.1%, and +3.0% for band, fixed 150 mg, fixed 200 mg, and PK-derived strategies, respectively, for pembrolizumab, and -3.1%, +1.9%, and +1.4% for band, fixed 240 mg, and PK-derived strategies, respectively, for nivolumab. Significant wastage can be reduced by alternative dosing strategies without compromising exposure and efficacy. © 2017 American Society for Clinical Pharmacology and Therapeutics.
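    The percentage figures quoted above are ordinary relative differences with body-weight (mg/kg) dosing as the reference. A minimal sketch, using hypothetical cost values rather than the study's data:

    ```python
    def relative_pct(alternative, reference):
        """Percent change of an alternative dosing strategy relative to
        body-weight (mg/kg) dosing, the reference used in the abstract."""
        return 100.0 * (alternative - reference) / reference

    # Hypothetical per-cycle costs (arbitrary units), not from the study:
    print(relative_pct(75.0, 100.0))  # -25.0 (a fixed dose that saves 25%)
    ```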

  16. Multicenter evaluation of a synthetic single-crystal diamond detector for CyberKnife small field size output factors.

    PubMed

    Russo, Serenella; Masi, Laura; Francescon, Paolo; Frassanito, Maria Cristina; Fumagalli, Maria Luisa; Marinelli, Marco; Falco, Maria Daniela; Martinotti, Anna Stefania; Pimpinella, Maria; Reggiori, Giacomo; Verona Rinati, Gianluca; Vigorito, Sabrina; Mancosu, Pietro

    2016-04-01

    The aim of the present work was to evaluate small-field-size output factors (OFs) using the latest commercially available diamond detector, the PTW-60019 microDiamond, on different CyberKnife systems. OFs were also measured with the silicon detectors routinely used by each center, considered as reference. Five Italian CyberKnife centers performed OF measurements for field sizes ranging from 5 to 60 mm, defined by fixed circular collimators (5 centers) and by the Iris(™) variable-aperture collimator (4 centers). Setup conditions were 80 cm source-to-detector distance and 1.5 cm depth in water. To speed up measurements, two diamond detectors were used and their equivalence was evaluated. Monte Carlo (MC) correction factors for silicon detectors were used when comparing the OF measurements. Considering OF values averaged over all centers, the diamond data were lower than the uncorrected silicon diode data. The agreement between diamond and MC-corrected silicon values was within 0.6% for all fixed circular collimators. Relative differences between microDiamond and MC-corrected silicon diode data for the Iris(™) collimator were lower than 1.0% for all apertures in all centers. The two microDiamond detectors showed similar characteristics, in agreement with the technical specifications. Excellent agreement between microDiamond and MC-corrected silicon diode OFs was obtained for both collimation systems, fixed cones and Iris(™), demonstrating that the microDiamond could be a suitable detector for CyberKnife commissioning and routine checks. These results, obtained in five centers, suggest that for CyberKnife systems the microDiamond can be used without corrections even at the smallest field size. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  17. High dimensional linear regression models under long memory dependence and measurement error

    NASA Astrophysics Data System (ADS)

    Kaul, Abhishek

    This dissertation consists of three chapters. The first chapter introduces the models under consideration and motivates the problems of interest. A brief literature review is also provided in this chapter. The second chapter investigates the properties of the Lasso under long-range dependent model errors. The Lasso is a computationally efficient approach to model selection and estimation, and its properties are well studied when the regression errors are independent and identically distributed. We study the case where the regression errors form a long memory moving average process. We establish a finite sample oracle inequality for the Lasso solution. We then show asymptotic sign consistency in this setup. These results are established in the high dimensional setup (p > n), where p can increase exponentially with n. Finally, we show the consistency, specifically the n^(1/2-d)-consistency, of the Lasso, along with the oracle property of the adaptive Lasso, in the case where p is fixed. Here d is the memory parameter of the stationary error sequence. The performance of the Lasso is also analysed in the present setup with a simulation study. The third chapter proposes and investigates the properties of a penalized quantile based estimator for measurement error models. Standard formulations of prediction problems in high dimension regression models assume the availability of fully observed covariates and sub-Gaussian and homogeneous model errors. This makes these methods inapplicable to measurement error models, where covariates are unobservable and observations are possibly non sub-Gaussian and heterogeneous. We propose weighted penalized corrected quantile estimators for the regression parameter vector in linear regression models with additive measurement errors, where the unobservable covariates are nonrandom. The proposed estimators forgo the need for the above mentioned model assumptions. 
We study these estimators in both the fixed dimension and high dimensional sparse setups; in the latter, the dimensionality can grow exponentially with the sample size. In the fixed dimensional setting we provide the oracle properties associated with the proposed estimators. In the high dimensional setting, we provide bounds for the statistical error associated with the estimation that hold with asymptotic probability 1, thereby establishing the ℓ1-consistency of the proposed estimator. We also establish model selection consistency in terms of the correctly estimated zero components of the parameter vector. A simulation study that investigates the finite sample accuracy of the proposed estimator is also included in this chapter.

  18. High-Grading Lunar Samples

    NASA Technical Reports Server (NTRS)

    Allen, Carlton; Sellar, Glenn; Nunez, Jorge; Mosie, Andrea; Schwarz, Carol; Parker, Terry; Winterhalter, Daniel; Farmer, Jack

    2009-01-01

    Astronauts on long-duration lunar missions will need the capability to high-grade their samples to select the highest value samples for transport to Earth and to leave others on the Moon. We are supporting studies to define the necessary and sufficient measurements and techniques for high-grading samples at a lunar outpost. A glovebox, dedicated to testing instruments and techniques for high-grading samples, is in operation at the JSC Lunar Experiment Laboratory. A reference suite of lunar rocks and soils, spanning the full compositional range found in the Apollo collection, is available for testing in this laboratory. Thin sections of these samples are available for direct comparison. The Lunar Sample Compendium, on-line at http://www-curator.jsc.nasa.gov/lunar/compendium.cfm, summarizes previous analyses of these samples. The laboratory, sample suite, and Compendium are available to the lunar research and exploration community. In the first test of possible instruments for lunar sample high-grading, we imaged 18 lunar rocks and four soils from the reference suite using the Multispectral Microscopic Imager (MMI) developed by Arizona State University and JPL (see Farmer et al. abstract). The MMI is a fixed-focus digital imaging system with a resolution of 62.5 microns/pixel, a field size of 40 x 32 mm, and a depth-of-field of approximately 5 mm. Samples are illuminated sequentially by 21 light emitting diodes in discrete wavelengths spanning the visible to shortwave infrared. Measurements of reflectance standards and background allow calibration to absolute reflectance. ENVI-based software is used to produce spectra for specific minerals as well as multi-spectral images of rock textures.

  19. Bayesian dose selection design for a binary outcome using restricted response adaptive randomization.

    PubMed

    Meinzer, Caitlyn; Martin, Renee; Suarez, Jose I

    2017-09-08

    In phase II trials, the most efficacious dose is usually not known. Moreover, given limited resources, it is difficult to robustly identify a dose while also testing for a signal of efficacy that would support a phase III trial. Recent designs have sought to be more efficient by exploring multiple doses through the use of adaptive strategies. However, the added flexibility may potentially increase the risk of making incorrect assumptions and reduce the total amount of information available across the dose range as a function of imbalanced sample size. To balance these challenges, a novel placebo-controlled design is presented in which a restricted Bayesian response adaptive randomization (RAR) is used to allocate a majority of subjects to the optimal dose of active drug, defined as the dose with the lowest probability of poor outcome. However, the allocation between subjects who receive active drug or placebo is held constant to retain the maximum possible power for a hypothesis test of overall efficacy comparing the optimal dose to placebo. The design properties and optimization of the design are presented in the context of a phase II trial for subarachnoid hemorrhage. For a fixed total sample size, a trade-off exists between the ability to select the optimal dose and the probability of rejecting the null hypothesis. This relationship is modified by the allocation ratio between active and control subjects, the choice of RAR algorithm, and the number of subjects allocated to an initial fixed allocation period. While a responsive RAR algorithm improves the ability to select the correct dose, there is an increased risk of assigning more subjects to a worse arm as a function of ephemeral trends in the data. A subarachnoid treatment trial is used to illustrate how this design can be customized for specific objectives and available data. 
Bayesian adaptive designs are a flexible approach to addressing multiple questions surrounding the optimal dose for treatment efficacy within the context of limited resources. While the design is general enough to apply to many situations, future work is needed to address interim analyses and the incorporation of models for dose response.
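    As a rough illustration of restricted response adaptive randomization (not the trial's actual algorithm), the sketch below keeps a fixed placebo allocation while Thompson-sampling Beta posteriors to steer active-arm subjects toward the dose with the lowest probability of poor outcome; all counts and the placebo ratio are hypothetical:

    ```python
    import random

    def allocate(poor, total, placebo_ratio=0.25, rng=random.Random(0)):
        """One allocation step of a restricted RAR sketch (NOT the trial's
        published algorithm). poor[d]/total[d] count poor outcomes per dose.
        With probability placebo_ratio assign placebo (held constant to
        retain power for the overall efficacy test); otherwise draw from
        each dose's Beta posterior and pick the dose with the lowest
        sampled probability of poor outcome."""
        if rng.random() < placebo_ratio:
            return "placebo"
        draws = {d: rng.betavariate(1 + poor[d], 1 + total[d] - poor[d])
                 for d in total}
        return min(draws, key=draws.get)

    # Hypothetical interim data for three doses:
    poor = {"low": 6, "mid": 3, "high": 5}
    total = {"low": 20, "mid": 20, "high": 20}
    arms = [allocate(poor, total) for _ in range(1000)]
    # "mid", with the lowest observed poor-outcome rate, should receive
    # the majority of active-arm allocations.
    ```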

  20. Effect of Ni-P Plating Temperature on Growth of Interfacial Intermetallic Compound in Electroless Nickel Immersion Gold/Sn-Ag-Cu Solder Joints

    NASA Astrophysics Data System (ADS)

    Seo, Wonil; Kim, Kyoung-Ho; Kim, Young-Ho; Yoo, Sehoon

    2018-01-01

    The growth of interfacial intermetallic compound and the brittle fracture behavior of Sn-3.0Ag-0.5-Cu solder (SAC305) joints on electroless nickel immersion gold (ENIG) surface finish have been investigated using Ni-P plating solution at temperatures from 75°C to 85°C and fixed pH of 4.5. SAC305 solder balls with diameter of 450 μm were mounted on the prepared ENIG-finished Cu pads and reflowed with peak temperature of 250°C. The interfacial intermetallic compound (IMC) thickness after reflow decreased with increasing Ni-P plating temperature. After 800 h of thermal aging, the IMC thickness of the sample prepared at 85°C was higher than for that prepared at 75°C. Scanning electron microscopy of the Ni-P surface after removal of the Au layer revealed a nodular structure on the Ni-P surface. The nodule size of the Ni-P decreased with increasing Ni-P plating temperature. The Cu content near the IMC layer increased to 0.6 wt.%, higher than the original Cu content of 0.5 wt.%, indicating that Cu diffused from the Cu pad to the solder ball through the Ni-P layer at a rate depending on the nodule size. The sample prepared at 75°C with thicker interfacial IMC showed greater high-speed shear strength than the sample prepared at 85°C. Brittle fracture increased with decreasing Ni-P plating temperature.

  1. Analysis of formalin-fixed, paraffin-embedded (FFPE) tissue via proteomic techniques and misconceptions of antigen retrieval.

    PubMed

    O'Rourke, Matthew B; Padula, Matthew P

    2016-01-01

    Since emerging in the late 19th century, formaldehyde fixation has become a standard method for preservation of tissues from clinical samples. The advantage of formaldehyde fixation is that fixed tissues can be stored at room temperature for decades without concern for degradation. This has led to the generation of huge tissue banks containing thousands of clinically significant samples. Here we review techniques for proteomic analysis of formalin-fixed, paraffin-embedded (FFPE) tissue samples with a specific focus on the methods used to extract and break formaldehyde crosslinks. We also discuss an error-of-interpretation associated with the technique known as "antigen retrieval." We have discovered that this term has been mistakenly applied to two disparate molecular techniques; therefore, we argue that a terminology change is needed to ensure accurate reporting of experimental results. Finally, we suggest that more investigation is required to fully understand the process of formaldehyde fixation and its subsequent reversal.

  2. Continuous-variable quantum cryptography is secure against non-Gaussian attacks.

    PubMed

    Grosshans, Frédéric; Cerf, Nicolas J

    2004-01-30

    A general study of arbitrary finite-size coherent attacks against continuous-variable quantum cryptographic schemes is presented. It is shown that, if the size of the blocks that can be coherently attacked by an eavesdropper is fixed and much smaller than the key size, then the optimal attack for a given signal-to-noise ratio in the transmission line is an individual Gaussian attack. Consequently, non-Gaussian coherent attacks do not need to be considered in the security analysis of such quantum cryptosystems.

  3. Influence of fragment size and postoperative joint congruency on long-term outcome of posterior malleolar fractures.

    PubMed

    Drijfhout van Hooff, Cornelis Christiaan; Verhage, Samuel Marinus; Hoogendoorn, Jochem Maarten

    2015-06-01

    One of the factors contributing to long-term outcome of posterior malleolar fractures is the development of osteoarthritis. Based on biomechanical, cadaveric, and small population studies, fixation of posterior malleolar fracture fragments (PMFFs) is usually performed when fragment size exceeds 25-33%. However, the influence of fragment size on long-term clinical and radiological outcome remains unclear. A retrospective cohort study of 131 patients treated for an isolated ankle fracture with involvement of the posterior malleolus was performed. Mean follow-up was 6.9 (range, 2.5-15.9) years. Patients were divided into groups depending on the size of the fragment, small (<5%, n = 20), medium (5-25%, n = 86), or large (>25%, n = 25), and the presence of step-off after operative treatment. We compared functional outcome measures (AOFAS, AAOS), pain (VAS), and dorsiflexion restriction relative to the contralateral ankle, and the incidence of osteoarthritis on X-ray. There were no nonunions, 56% of patients had no radiographic osteoarthritis, VAS was 10 of 100, and the median clinical score was 90 of 100. More osteoarthritis occurred in ankle fractures with medium and large PMFFs compared to small fragments (small 16%, medium 48%, large 54%; P = .006), also when comparing small with medium-sized fragments alone (P = .02). Larger fragment size did not lead to a significantly decreased function (median AOFAS 95 vs 88, P = .16). If the PMFF size was >5%, osteoarthritis occurred more frequently when there was a postoperative step-off ≥1 mm in the tibiotalar joint surface (41% vs 61%, P = .02), whether the posterior fragment had been fixed or not. In this group, fixing the PMFF did not influence the development of osteoarthritis. However, in 42% of the cases with fixation of the fragment a postoperative step-off remained (vs 45% in the group without fixation). 
Osteoarthritis is 1 component of long-term outcome of malleolar fractures, and the results of this study demonstrate that there was more radiographic osteoarthritis in patients with medium and large posterior fragments than in those with small fragments. Radiographic osteoarthritis also occurred more frequently when postoperative step-off was 1 mm or more, whether the posterior fragment was fixed or not. However, clinical scores were not different for these groups. Level IV, retrospective case series. © The Author(s) 2015.

  4. Sequential Measurement of Intermodal Variability in Public Transportation PM2.5 and CO Exposure Concentrations.

    PubMed

    Che, W W; Frey, H Christopher; Lau, Alexis K H

    2016-08-16

    A sequential measurement method is demonstrated for quantifying the variability in exposure concentration during public transportation. This method was applied in Hong Kong by measuring PM2.5 and CO concentrations along a route connecting 13 transportation-related microenvironments within 3-4 h. The study design takes into account ventilation, proximity to local sources, area-wide air quality, and meteorological conditions. Portable instruments were compacted into a backpack to facilitate measurement under crowded transportation conditions and to quantify personal exposure by sampling at nose level. The route included stops next to three roadside monitors to enable comparison of fixed site and exposure concentrations. PM2.5 exposure concentrations were correlated with the roadside monitors, despite differences in averaging time, detection method, and sampling location. Although highly correlated in temporal trend, PM2.5 concentrations varied significantly among microenvironments, with mean concentration ratios versus roadside monitor ranging from 0.5 for MTR train to 1.3 for bus terminal. Measured inter-run variability provides insight regarding the sample size needed to discriminate between microenvironments with increased statistical significance. The study results illustrate the utility of sequential measurement of microenvironments and policy-relevant insights for exposure mitigation and management.
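    The remark on inter-run variability and sample size can be made concrete with the standard two-sample size formula, n = 2(z_α + z_β)² σ²/Δ² per group; a generic sketch with hypothetical numbers, not the study's own calculation:

    ```python
    import math

    def runs_needed(sigma, delta, z_alpha=1.96, z_beta=0.84):
        """Runs needed per microenvironment to detect a mean concentration
        difference `delta` given inter-run standard deviation `sigma`,
        using the standard formula n = 2*(z_a + z_b)^2 * sigma^2 / delta^2
        (80% power, two-sided alpha = 0.05). A generic sketch, not the
        paper's analysis."""
        return math.ceil(2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2)

    # Hypothetical: inter-run SD of 10 ug/m^3, target difference of 8 ug/m^3:
    print(runs_needed(10.0, 8.0))  # 25 runs per microenvironment
    ```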

  5. On the efficacy of spatial sampling using manual scanning paths to determine the spatial average sound pressure level in rooms.

    PubMed

    Hopkins, Carl

    2011-05-01

    In architectural acoustics, noise control, and environmental noise, there are often steady-state signals for which it is necessary to measure the spatial-average sound pressure level inside rooms. This requires using fixed microphone positions, mechanical scanning devices, or manual scanning. In comparison with mechanical scanning devices, the human body allows manual scanning to trace out complex geometrical paths in three-dimensional space. To determine the efficacy of manual scanning paths in terms of an equivalent number of uncorrelated samples, an analytical approach is solved numerically. The benchmark used to assess these paths is a minimum of five uncorrelated fixed microphone positions at frequencies above 200 Hz. For paths involving an operator walking across the room, potential problems exist with walking noise and non-uniform scanning speeds. Hence, paths are considered based on a fixed standing position or rotation of the body about a fixed point. In empty rooms, it is shown that a circle, helix, or cylindrical-type path satisfies the benchmark requirement, with the latter two paths being highly efficient at generating large numbers of uncorrelated samples. In furnished rooms where there is limited space for the operator to move, an efficient path comprises three semicircles with 45°-60° separations.
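    The notion of uncorrelated microphone positions rests on the standard diffuse-field result that the normalized spatial correlation of sound pressure between two points a distance r apart is sinc(kr) = sin(kr)/(kr); a minimal sketch of that result (not the paper's full numerical analysis):

    ```python
    import math

    def diffuse_corr(distance_m, freq_hz, c=343.0):
        """Normalized spatial correlation of sound pressure between two
        points in an ideal diffuse field: sinc(kr) = sin(kr)/(kr), with
        k = 2*pi*f/c. A textbook result used here as a rough sketch."""
        kr = 2 * math.pi * freq_hz / c * distance_m
        return 1.0 if kr == 0 else math.sin(kr) / kr

    # Points half a wavelength apart (kr = pi) are essentially uncorrelated,
    # which is why a handful of well-spaced fixed positions can suffice
    # above 200 Hz (half-wavelength ~0.86 m at 200 Hz).
    half_wavelength = 343.0 / (2 * 200.0)
    print(abs(diffuse_corr(half_wavelength, 200.0)) < 1e-9)
    ```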

  6. Imaging the Drosophila retina: zwitterionic buffers PIPES and HEPES induce morphological artifacts in tissue fixation.

    PubMed

    Nie, Jing; Mahato, Simpla; Zelhof, Andrew C

    2015-02-03

    Tissue fixation is crucial for preserving the morphology of biological structures and cytological details to prevent postmortem degradation and autolysis. Improper fixation conditions can lead to artifacts and thus incorrect conclusions in immunofluorescence or histology experiments. To resolve reported structural anomalies with respect to Drosophila photoreceptor cell organization, we developed and utilized a combination of live imaging and fixed samples to investigate the exact biogenesis and to identify the underlying source of the reported discrepancies in structure. We found that piperazine-N,N'-bis(ethanesulfonic acid) (PIPES) and 4-(2-hydroxyethyl)-1-piperazineethanesulfonic acid (HEPES), two zwitterionic buffers commonly used in tissue fixation, can cause severe lumen and cell morphological defects in the Drosophila pupal and adult retina; the inter-rhabdomeral lumen becomes dilated and the photoreceptor cells are significantly reduced in size. Correspondingly, the localization pattern of Eyes shut (EYS), a luminal protein, is severely altered. In contrast, fixing tissues in phosphate buffered saline (PBS) buffer results in lumen and cell morphologies that are consistent with live imaging. We suggest that PIPES and HEPES buffers should be used with caution for fixation when examining the interplay between cells and their extracellular environment, especially in Drosophila pupal and adult retina research.

  7. Efficacy of Ginseng Supplements on Fatigue and Physical Performance: a Meta-analysis

    PubMed Central

    2016-01-01

    We conducted a meta-analysis to investigate the efficacy of ginseng supplements on fatigue reduction and physical performance enhancement as reported by randomized controlled trials (RCTs). RCTs that investigated the efficacy of ginseng supplements on fatigue reduction and physical performance enhancement compared with placebos were included. The main outcome measures were fatigue reduction and physical performance enhancement. Out of 155 articles meeting initial criteria, 12 RCTs involving 630 participants (311 participants in the intervention group and 319 participants in the placebo group) were included in the final analysis. In the fixed-effect meta-analysis of four RCTs, there was a statistically significant efficacy of ginseng supplements on fatigue reduction (standardized mean difference, SMD = 0.34; 95% confidence interval [CI] = 0.16 to 0.52). However, ginseng supplements were not associated with physical performance enhancement in the fixed-effect meta-analysis of eight RCTs (SMD = −0.01; 95% CI = −0.29 to 0.27). We found that there was insufficient clinical evidence to support the use of ginseng supplements for reducing fatigue and enhancing physical performance because only a few RCTs with small sample sizes have been published so far. Further larger RCTs are required to confirm the efficacy of ginseng supplements on fatigue reduction. PMID:27822924
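    A fixed-effect meta-analysis of this kind pools standardized mean differences by inverse-variance weighting, the standard generic method; a minimal sketch with hypothetical trial inputs, not the RCTs analysed above:

    ```python
    import math

    def fixed_effect_pool(smds, ses):
        """Inverse-variance fixed-effect pooling of standardized mean
        differences: weights w_i = 1/SE_i^2, pooled effect = sum(w*y)/sum(w),
        pooled SE = sqrt(1/sum(w)), with a 95% Wald confidence interval."""
        weights = [1.0 / se ** 2 for se in ses]
        pooled = sum(w * y for w, y in zip(weights, smds)) / sum(weights)
        se = math.sqrt(1.0 / sum(weights))
        return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

    # Hypothetical per-trial SMDs and standard errors (not from this paper):
    pooled, (lo, hi) = fixed_effect_pool([0.4, 0.25, 0.35], [0.12, 0.15, 0.10])
    ```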

  8. Imputation-Based Meta-Analysis of Severe Malaria in Three African Populations

    PubMed Central

    Band, Gavin; Le, Quang Si; Jostins, Luke; Pirinen, Matti; Kivinen, Katja; Jallow, Muminatou; Sisay-Joof, Fatoumatta; Bojang, Kalifa; Pinder, Margaret; Sirugo, Giorgio; Conway, David J.; Nyirongo, Vysaul; Kachala, David; Molyneux, Malcolm; Taylor, Terrie; Ndila, Carolyne; Peshu, Norbert; Marsh, Kevin; Williams, Thomas N.; Alcock, Daniel; Andrews, Robert; Edkins, Sarah; Gray, Emma; Hubbart, Christina; Jeffreys, Anna; Rowlands, Kate; Schuldt, Kathrin; Clark, Taane G.; Small, Kerrin S.; Teo, Yik Ying; Kwiatkowski, Dominic P.; Rockett, Kirk A.; Barrett, Jeffrey C.; Spencer, Chris C. A.

    2013-01-01

    Combining data from genome-wide association studies (GWAS) conducted at different locations, using genotype imputation and fixed-effects meta-analysis, has been a powerful approach for dissecting complex disease genetics in populations of European ancestry. Here we investigate the feasibility of applying the same approach in Africa, where genetic diversity, both within and between populations, is far more extensive. We analyse genome-wide data from approximately 5,000 individuals with severe malaria and 7,000 population controls from three different locations in Africa. Our results show that the standard approach is well powered to detect known malaria susceptibility loci when sample sizes are large, and that modern methods for association analysis can control the potential confounding effects of population structure. We show that the pattern of association around the haemoglobin S allele differs substantially across populations due to differences in haplotype structure. Motivated by these observations we consider new approaches to association analysis that might prove valuable for multicentre GWAS in Africa: we relax the assumptions of SNP–based fixed effect analysis; we apply Bayesian approaches to allow for heterogeneity in the effect of an allele on risk across studies; and we introduce a region-based test to allow for heterogeneity in the location of causal alleles. PMID:23717212

  9. System to measure accurate temperature dependence of electric conductivity down to 20 K in ultrahigh vacuum.

    PubMed

    Sakai, C; Takeda, S N; Daimon, H

    2013-07-01

    We have developed a new in situ electrical-conductivity measurement system which can be operated in ultrahigh vacuum (UHV) with accurate temperature measurement down to 20 K. This system is mainly composed of a new sample-holder fixing mechanism, a new movable conductivity-measurement mechanism, a cryostat, and two receptors for sample- and four-probe holders. The sample-holder is pushed strongly against the receptor, which is connected to a cryostat, using this new sample-holder fixing mechanism to obtain high thermal conductivity. Test pieces on the sample-holders have been cooled down to about 20 K using this fixing mechanism, whereas they were cooled down to only about 60 K without it. Four probes can be brought into contact with a sample surface using this new movable conductivity-measurement mechanism, allowing electrical conductivity to be measured after growing films on substrates or obtaining clean surfaces by cleavage, flashing, and so on. Accurate temperature measurement is possible since the sample can be transferred with a thermocouple and/or diode attached directly to the sample. A single crystal of a Bi-based copper oxide high-Tc superconductor (HTSC) was cleaved in UHV to obtain a clean surface, and its superconducting critical temperature was successfully measured in situ. The importance of in situ measurement of resistance in UHV was demonstrated for this HTSC before and after cesium (Cs) adsorption on its surface. An increase in the Tc onset and a decrease in the Tc offset upon Cs adsorption were observed.

  10. Extraction of DNA from human embryos after long-term preservation in formalin and Bouin's solutions.

    PubMed

    Nagai, Momoko; Minegishi, Katsura; Komada, Munekazu; Tsuchiya, Maiko; Kameda, Tomomi; Yamada, Shigehito

    2016-05-01

    The "Kyoto Collection of Human Embryos" at Kyoto University was begun in 1961. Although morphological analyses of samples in the Kyoto Collection have been performed, these embryos have been considered difficult to analyze genetically because they have been preserved in formalin or Bouin's solution for 20-50 years. Owing to recent advances in molecular biology, it has become possible to extract DNA from long-term fixed tissues. The purpose of this study was to extract DNA from wet preparations of human embryo samples after long-term preservation in fixing solution. We optimized the DNA extraction protocol to be suitable for tissues that have been damaged by long-term fixation, including DNA-protein crosslinking damage. Diluting Li2CO3 with 70% ethanol effectively removed picric acid from samples fixed in Bouin's solution. Additionally, 20.0 mg/mL proteinase was effective in lysing the long-term fixed samples. The extracted DNA was checked by PCR amplification using several sets of primers and by sequence analysis. The PCR products included at least 295- and 838-bp amplicons. These results show that the extracted DNA is applicable for genetic analyses, and indicate that old embryos in the Kyoto Collection should be made available for future studies. The protocol described in this study can successfully extract DNA from old specimens and, with improvements, should be applicable in research aiming to understand the molecular mechanisms of human congenital anomalies. © 2015 Japanese Teratology Society.

  11. High-Throughput Sequencing and Copy Number Variation Detection Using Formalin Fixed Embedded Tissue in Metastatic Gastric Cancer

    PubMed Central

    Hong, Min Eui; Do, In-Gu; Kang, So Young; Ha, Sang Yun; Kim, Seung Tae; Park, Se Hoon; Kang, Won Ki; Choi, Min-Gew; Lee, Jun Ho; Sohn, Tae Sung; Bae, Jae Moon; Kim, Sung; Kim, Duk-Hwan; Kim, Kyoung-Mee

    2014-01-01

    In the era of targeted therapy, mutation profiling of cancer is a crucial aspect of making therapeutic decisions. To characterize cancer at a molecular level, the use of formalin-fixed paraffin-embedded tissue is important. We tested the Ion AmpliSeq Cancer Hotspot Panel v2 and nCounter Copy Number Variation Assay in 89 formalin-fixed paraffin-embedded gastric cancer samples to determine whether they are applicable in archival clinical samples for personalized targeted therapies. We validated the results with Sanger sequencing, real-time quantitative PCR, fluorescence in situ hybridization and immunohistochemistry. Frequently detected somatic mutations included TP53 (28.17%), APC (10.1%), PIK3CA (5.6%), KRAS (4.5%), SMO (3.4%), STK11 (3.4%), CDKN2A (3.4%) and SMAD4 (3.4%). Amplifications of HER2, CCNE1, MYC, KRAS and EGFR genes were observed in 8 (8.9%), 4 (4.5%), 2 (2.2%), 1 (1.1%) and 1 (1.1%) cases, respectively. In the cases with amplification, fluorescence in situ hybridization for HER2 verified gene amplification and immunohistochemistry for HER2, EGFR and CCNE1 verified the overexpression of proteins in tumor cells. In conclusion, we successfully performed semiconductor-based sequencing and nCounter copy number variation analyses in formalin-fixed paraffin-embedded gastric cancer samples. High-throughput screening in archival clinical samples enables faster, more accurate and cost-effective detection of hotspot mutations or amplification in genes. PMID:25372287

  12. Description of CASCOMP Comprehensive Airship Sizing and Performance Computer Program, Volume 2

    NASA Technical Reports Server (NTRS)

    Davis, J.

    1975-01-01

    The computer program CASCOMP, which may be used in comparative design studies of lighter than air vehicles by rapidly providing airship size and mission performance data, was prepared and documented. The program can be used to define design requirements such as weight breakdown, required propulsive power, and physical dimensions of airships which are designed to meet specified mission requirements. The program is also useful in sensitivity studies involving both design trade-offs and performance trade-offs. The input to the program primarily consists of a series of single point values such as hull overall fineness ratio, number of engines, airship hull and empennage drag coefficients, description of the mission profile, and weights of fixed equipment, fixed useful load and payload. In order to minimize computation time, the program makes ample use of optional computation paths.

  13. From particle condensation to polymer aggregation

    NASA Astrophysics Data System (ADS)

    Janke, Wolfhard; Zierenberg, Johannes

    2018-01-01

    We draw an analogy between droplet formation in dilute particle and polymer systems. Our arguments are based on finite-size scaling results from studies ranging from a two-dimensional lattice gas to three-dimensional bead-spring polymers. To set the results in perspective, we compare with partly rigorous theoretical scaling laws for canonical condensation in a supersaturated gas at fixed temperature, and derive corresponding scaling predictions for an undercooled gas at fixed density. The latter allows one to efficiently employ parallel multicanonical simulations and to reach previously inaccessible scaling regimes. While the asymptotic scaling cannot be observed for the comparatively small polymer system sizes, our results demonstrate an intermediate scaling regime that is also observable for particle condensation. Altogether, our extensive computer simulations provide clear evidence for the close analogy between particle condensation and polymer aggregation in dilute systems.

  14. No difference in joint awareness after mobile- and fixed-bearing total knee arthroplasty: 3-year follow-up of a randomized controlled trial.

    PubMed

    Schotanus, M G M; Pilot, P; Vos, R; Kort, N P

    2017-12-01

    To compare the ability of patients randomized to mobile- or fixed-bearing total knee arthroplasty (TKA) to forget the artificial knee joint in everyday life. This single-center randomized controlled trial evaluated the 3-year follow-up of cemented mobile- and fixed-bearing TKA from the same brand in a series of 41 patients. Clinical examination took place preoperatively and at 6-week, 6-month, 1-, 2- and 3-year follow-up, with multiple patient-reported outcome measures (PROMs) including the 12-item Forgotten Joint Score (FJS-12) at 3 years. Effect sizes were calculated for each PROM at 3-year follow-up to quantify the difference between the two bearings. At 3-year follow-up, general linear mixed model analysis showed no significant or clinically relevant differences between the two groups for any outcome measure. Calculated effect sizes were small (<0.3) for all PROMs except the FJS-12, for which they were moderate (0.5). The results of this study demonstrate that joint awareness was slightly lower in patients operated with the mobile-bearing TKA, with comparably improved clinical outcome and PROMs at 3-year follow-up. Measuring joint awareness with the FJS-12 provides more stringent information at 3-year follow-up than the other PROMs and should be the PROM of choice at each follow-up after TKA. Level I, randomized controlled trial.

  15. ACVP-11: Quantitative Assessment of Lymphoid Tissue Fibrosis in Formalin-Fixed Tissue Specimens | Frederick National Laboratory for Cancer Research

    Cancer.gov

    The Tissue Analysis Core (TAC) within the AIDS and Cancer Virus Program will process, embed, and perform microtomy on fixed tissue samples presented in ethanol. Collagen I, Collagen III, or Fibronectin immunohistochemistry will be performed, in order

  16. Improved belief propagation algorithm finds many Bethe states in the random-field Ising model on random graphs

    NASA Astrophysics Data System (ADS)

    Perugini, G.; Ricci-Tersenghi, F.

    2018-01-01

    We first present an empirical study of the Belief Propagation (BP) algorithm, run on the random-field Ising model defined on random regular graphs in the zero-temperature limit. We introduce the notion of extremal solutions of the BP equations, and we use them to fix a fraction of spins in their ground-state configuration. At the phase transition point the fraction of unconstrained spins percolates and their number diverges with the system size. This in turn makes the associated optimization problem highly nontrivial in the critical region. Using the bounds on the BP messages provided by the extremal solutions, we design a new and easy-to-implement BP scheme that outputs a large number of stable fixed points. On the one hand, this new algorithm provides the minimum-energy configuration with high probability in a competitive time. On the other hand, we found that the number of fixed points of the BP algorithm grows with the system size in the critical region. This unexpected feature poses new and relevant questions about the physics of this class of models.
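
    The message-passing structure described above can be illustrated with a generic sum-product sketch. A caveat: the paper works in the zero-temperature limit on random regular graphs, where sums over spin states are replaced by maxima (max-sum BP); the toy code below instead runs ordinary finite-temperature sum-product BP for an Ising model with local fields h and coupling J. All parameter values and function names are illustrative, and the brute-force check is included only because BP is exact on trees.

```python
import itertools
import math

def bp_magnetizations(n, edges, h, J=1.0, beta=1.0, iters=100):
    """Sum-product belief propagation for an Ising model on a graph.

    Finite-temperature sketch (the paper's zero-temperature limit would
    replace the sums over si by maxima). Exact on trees."""
    spins = (-1, 1)
    nbrs = {i: [] for i in range(n)}
    for i, j in edges:
        nbrs[i].append(j)
        nbrs[j].append(i)
    # m[(i, j)][s]: normalized message from spin i to spin j, evaluated at s_j
    m = {}
    for i, j in edges:
        m[(i, j)] = {s: 0.5 for s in spins}
        m[(j, i)] = {s: 0.5 for s in spins}
    for _ in range(iters):
        new = {}
        for (i, j) in m:
            out = {}
            for sj in spins:
                total = 0.0
                for si in spins:
                    # pairwise coupling and local field on the sending spin
                    w = math.exp(beta * (J * si * sj + h[i] * si))
                    for k in nbrs[i]:
                        if k != j:
                            w *= m[(k, i)][si]
                    total += w
                out[sj] = total
            z = out[-1] + out[1]
            new[(i, j)] = {s: out[s] / z for s in spins}
        m = new
    # single-spin beliefs -> magnetizations <s_i>
    mags = []
    for i in range(n):
        b = {s: math.exp(beta * h[i] * s) for s in spins}
        for k in nbrs[i]:
            for s in spins:
                b[s] *= m[(k, i)][s]
        z = b[-1] + b[1]
        mags.append((b[1] - b[-1]) / z)
    return mags

def exact_magnetizations(n, edges, h, J=1.0, beta=1.0):
    """Brute-force check: enumerate all 2^n spin configurations."""
    z = 0.0
    mag = [0.0] * n
    for cfg in itertools.product((-1, 1), repeat=n):
        energy = sum(J * cfg[i] * cfg[j] for i, j in edges)
        energy += sum(h[i] * cfg[i] for i in range(n))
        w = math.exp(beta * energy)
        z += w
        for i in range(n):
            mag[i] += w * cfg[i]
    return [v / z for v in mag]
```

    On a small chain (a tree), the BP magnetizations agree with exact enumeration to machine precision, which is a handy sanity check before moving to loopy graphs like the random regular graphs of the paper.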

  17. Sequential sampling and biorational chemistries for management of lepidopteran pests of vegetable amaranth in the Caribbean.

    PubMed

    Clarke-Harris, Dionne; Fleischer, Shelby J

    2003-06-01

    Although the production and economic importance of vegetable amaranth, Amaranthus viridis L. and A. dubius Mart. ex Thell., are increasing in diversified peri-urban farms in Jamaica, lepidopteran herbivory remains common even under weekly pyrethroid applications. We developed and validated a sampling plan, and investigated insecticides with new modes of action, for a complex of five species (Pyralidae: Spoladea recurvalis (F.), Herpetogramma bipunctalis (F.); Noctuidae: Spodoptera exigua (Hubner), S. frugiperda (J. E. Smith), and S. eridania Stoll). Significant within-plant variation occurred with H. bipunctalis, and a six-leaf sample unit including leaves from the inner and outer whorl was selected to sample all species. Larval counts best fit a negative binomial distribution. We developed a sequential sampling plan using a threshold of one larva per sample unit and the fitted distribution with a common k of 0.645. When compared with a fixed plan of 25 plants, sequential sampling recommended the same management decision on 87.5% of 32 farms, additional samples on 9.4%, and inaccurate recommendations on 3.1%, while reducing sample size by 46%. Insecticide applications were reduced by 33-60% when management decisions were based on sampled data rather than grower standards, with no effect on crop damage. Damage remained high or variable (10-46%) under pyrethroid applications. Lepidopteran control was dramatically improved with ecdysone agonists (tebufenozide) or microbial metabolites (spinosyns and emamectin benzoate). This work facilitates resistance management efforts concurrent with the introduction of newer modes of action for lepidopteran control in leafy vegetable production in the Caribbean.
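
    A sequential plan of this kind can be sketched as a Wald sequential probability ratio test on negative binomial counts. Only the common k of 0.645 and the one-larva action threshold come from the abstract; the hypothesized mean densities m0 and m1, the error rates, and the function name below are illustrative assumptions, not the authors' published stop lines.

```python
import math

def nb_sprt(counts, m0=0.5, m1=1.5, k=0.645, alpha=0.1, beta=0.1):
    """Wald SPRT on negative binomial larval counts, one plant at a time.

    m0/m1 are the mean densities under the "no treatment"/"treatment"
    hypotheses (illustrative values straddling the one-larva threshold).
    Returns ("treat" | "no_treat", samples_used), or ("continue", n) if
    the boundaries were never crossed."""
    # Per-observation log-likelihood-ratio terms for NB with common k
    slope = math.log(m1 * (m0 + k) / (m0 * (m1 + k)))  # multiplies each count
    const = k * math.log((m0 + k) / (m1 + k))          # added per sample unit
    upper = math.log((1 - beta) / alpha)               # accept H1 -> treat
    lower = math.log(beta / (1 - alpha))               # accept H0 -> no treatment
    llr = 0.0
    for n, x in enumerate(counts, start=1):
        llr += x * slope + const
        if llr >= upper:
            return "treat", n
        if llr <= lower:
            return "no_treat", n
    return "continue", len(counts)
```

    A run of zero counts reaches the "no treatment" boundary within a handful of plants, which is exactly how a sequential plan saves samples relative to a fixed plan of 25 plants.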

  18. Nonparametric estimation and testing of fixed effects panel data models

    PubMed Central

    Henderson, Daniel J.; Carroll, Raymond J.; Li, Qi

    2009-01-01

    In this paper we consider the problem of estimating nonparametric panel data models with fixed effects. We introduce an iterative nonparametric kernel estimator. We also extend the estimation method to the case of a semiparametric partially linear fixed effects model. To determine whether a parametric, semiparametric or nonparametric model is appropriate, we propose test statistics to test between the three alternatives in practice. We further propose a test statistic for testing the null hypothesis of random effects against fixed effects in a nonparametric panel data regression model. Simulations are used to examine the finite sample performance of the proposed estimators and the test statistics. PMID:19444335
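
    As a point of reference for the parametric alternative the paper tests against, the classic fixed-effects (within) estimator removes unit-specific intercepts by demeaning y and x inside each panel unit and then fitting OLS on the demeaned data. The sketch below is this plain parametric benchmark, not the authors' iterative kernel estimator; the function and variable names are our own.

```python
from collections import defaultdict

def within_estimator(y, x, groups):
    """Fixed-effects (within) estimator for y_it = beta * x_it + alpha_i + e_it:
    demean y and x within each panel unit, then take the OLS slope."""
    sums = defaultdict(lambda: [0.0, 0.0, 0])  # group -> [sum_y, sum_x, count]
    for yi, xi, g in zip(y, x, groups):
        s = sums[g]
        s[0] += yi
        s[1] += xi
        s[2] += 1
    num = den = 0.0
    for yi, xi, g in zip(y, x, groups):
        sy, sx, cnt = sums[g]
        dy = yi - sy / cnt  # demeaned outcome
        dx = xi - sx / cnt  # demeaned regressor
        num += dx * dy
        den += dx * dx
    return num / den
```

    With y = 2x plus an arbitrary unit-specific intercept, the within slope recovers 2 exactly, whatever the intercepts are; a nonparametric version must achieve the same removal of alpha_i without assuming linearity, which is what makes the kernel estimator iterative.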

  19. Comparison of methods for estimating the attributable risk in the context of survival analysis.

    PubMed

    Gassama, Malamine; Bénichou, Jacques; Dartois, Laureen; Thiébaut, Anne C M

    2017-01-23

    The attributable risk (AR) measures the proportion of disease cases that can be attributed to an exposure in the population. Several definitions and estimation methods have been proposed for survival data. Using simulations, we compared four methods for estimating AR defined in terms of survival functions: two nonparametric methods based on Kaplan-Meier's estimator, one semiparametric based on Cox's model, and one parametric based on the piecewise constant hazards model, as well as one simpler method based on estimated exposure prevalence at baseline and Cox's model hazard ratio. We considered a fixed binary exposure with varying exposure probabilities and strengths of association, and generated event times from a proportional hazards model with constant or monotonic (decreasing or increasing) Weibull baseline hazard, as well as from a nonproportional hazards model. We simulated 1,000 independent samples of size 1,000 or 10,000. The methods were compared in terms of mean bias, mean estimated standard error, empirical standard deviation and 95% confidence interval coverage probability at four equally spaced time points. Under proportional hazards, all five methods yielded unbiased results regardless of sample size. Nonparametric methods displayed greater variability than other approaches. All methods showed satisfactory coverage except for nonparametric methods at the end of follow-up for a sample size of 1,000 especially. With nonproportional hazards, nonparametric methods yielded similar results to those under proportional hazards, whereas semiparametric and parametric approaches that both relied on the proportional hazards assumption performed poorly. These methods were applied to estimate the AR of breast cancer due to menopausal hormone therapy in 38,359 women of the E3N cohort. 
In practice, our study suggests using the semiparametric or parametric approach to estimate AR as a function of time in cohort studies when the proportional hazards assumption appears appropriate.
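
    The "simpler method based on estimated exposure prevalence at baseline and Cox's model hazard ratio" mentioned above is usually written as a Levin-type formula. A minimal sketch, assuming the hazard ratio can be plugged in place of the relative risk (an approximation that is reasonable when the outcome is not too common):

```python
def attributable_risk(prevalence, hazard_ratio):
    """Levin-type attributable risk from baseline exposure prevalence
    and a Cox hazard ratio (used here as a stand-in for relative risk):
    AR = p * (HR - 1) / (1 + p * (HR - 1))."""
    excess = prevalence * (hazard_ratio - 1.0)
    return excess / (1.0 + excess)
```

    For example, a 20% exposure prevalence with a hazard ratio of 2 gives AR of about 17%. Unlike the survival-function-based definitions compared in the study, this shortcut is a single number and cannot describe how AR evolves over follow-up time.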

  20. Influence of Ba/Fe mole ratios on magnetic properties, crystallite size and shifting of X-ray diffraction peaks of nanocrystalline BaFe12O19 powder, prepared by sol gel auto combustion

    NASA Astrophysics Data System (ADS)

    Suastiyanti, Dwita; Sudarmaji, Arif; Soegijono, Bambang

    2012-06-01

    Barium hexaferrite BaFe12O19 (BFO) is of great importance for permanent magnets, particularly for magnetic recording as well as in microwave devices. Nanocrystalline BFO powders were prepared by the sol-gel auto-combustion method in a citric acid - metal nitrates system. The Ba/Fe mole ratios were varied at 1:12, 1:11.5 and 1:11, with the cation-to-fuel ratio fixed at 1:1. An appropriate amount of ammonia solution was added dropwise with constant stirring until the pH reached 7 in all cases. Each sample was heated at 850 °C for 10 hours to complete the formation of nanocrystalline BFO. The XRD data, giving the lattice parameters a and c and the unit-cell volume V, confirm that BFO with ratio 1:12 has the same crystal parameters as ratio 1:11, and the 1:12 and 1:11 samples show similar diffraction patterns at almost every 2θ. The 1:11.5 ratio yields the finest crystallite size, 22 nm. Most diffraction peaks of the 1:11.5 sample shift to the left relative to the 1:12 sample and return to the 1:12 pattern at ratio 1:11. SEM observations show particle sizes below 100 nm and the same shape for every sample. The 1:12 ratio gives the highest intrinsic coercivity, Hc = 427.3 kA/m, while the highest remanent magnetization, Mr = 0.170 T, occurs at ratio 1:11. BFO with mole ratio 1:11.5 has the finest grains (22 nm), good magnetic properties and the best figure of merit, 89%.

  1. Quality Control of RNA Preservation and Extraction from Paraffin-Embedded Tissue: Implications for RT-PCR and Microarray Analysis

    PubMed Central

    Pichler, Martin; Zatloukal, Kurt

    2013-01-01

    Analysis of RNA isolated from fixed and paraffin-embedded tissues is widely used in biomedical research and molecular pathological diagnostics. We have performed a comprehensive and systematic investigation of the impact of factors in the pre-analytical workflow, such as different fixatives, fixation time, RNA extraction method and storage of tissues in paraffin blocks, on several downstream reactions including complementary DNA (cDNA) synthesis, quantitative reverse transcription polymerase chain reaction (qRT-PCR) and microarray hybridization. We compared the effects of routine formalin fixation with the non-crosslinking, alcohol-based Tissue Tek Xpress Molecular Fixative (TTXMF, Sakura Finetek), and cryopreservation as gold standard for molecular analyses. Formalin fixation introduced major changes into microarray gene expression data and led to marked gene-to-gene variations in delta-ct values of qRT-PCR. We found that qRT-PCR efficiency and gene-to-gene variations were mainly attributed to differences in the efficiency of cDNA synthesis as the most sensitive step. These differences could not be reliably detected by quality assessment of total RNA isolated from formalin-fixed tissues by electrophoresis or spectrophotometry. Although RNA from TTXMF fixed samples was as fragmented as RNA from formalin fixed samples, much higher cDNA yield and lower ct-values were obtained in qRT-PCR underlining the negative impact of crosslinking by formalin. In order to better estimate the impact of pre-analytical procedures such as fixation on the reliability of downstream analysis, we applied a qRT-PCR-based assay using amplicons of different length and an assay measuring the efficiency of cDNA generation. Together these two assays allowed better quality assessment of RNA extracted from fixed and paraffin-embedded tissues and should be used to supplement quality scores derived from automated electrophoresis. 
A better standardization of the pre-analytical workflow, application of additional quality controls and detailed sample information would markedly improve the comparability and reliability of molecular studies based on formalin-fixed and paraffin-embedded tissue samples. PMID:23936242

  2. Seasonal variations in the diversity and abundance of diazotrophic communities across soils.

    PubMed

    Pereira e Silva, Michele C; Semenov, Alexander V; van Elsas, Jan Dirk; Salles, Joana Falcão

    2011-07-01

    The nitrogen (N)-fixing community is a key functional community in soil, as it replenishes the pool of biologically available N that is lost to the atmosphere via anaerobic ammonium oxidation and denitrification. We characterized the structure and dynamic changes in diazotrophic communities, based on the nifH gene, across eight different representative Dutch soils during one complete growing season, to evaluate the amplitude of the natural variation in abundance and diversity, and identify possible relationships with abiotic factors. Overall, our results indicate that soil type is the main factor influencing the N-fixing communities, which were more abundant and diverse in the clay soils (n=4) than in the sandy soils (n=4). On average, the amplitude of variation in community size as well as the range-weighted richness were also found to be higher in the clay soils. These results indicate that N-fixing communities associated with sandy and clay soil show a distinct amplitude of variation under field conditions, and suggest that the diazotrophic communities associated with clay soil might be more sensitive to fluctuations associated with the season and agricultural practices. Moreover, soil characteristics such as ammonium content, pH and texture most strongly correlated with the variations observed in the diversity, size and structure of N-fixing communities, whose relative importance was determined across a temporal and spatial scale. © 2011 Federation of European Microbiological Societies. Published by Blackwell Publishing Ltd. All rights reserved.

  3. Comparison of potentially pathogenic free-living amoeba hosts by Legionella spp. in substrate-associated biofilms and floating biofilms from spring environments.

    PubMed

    Hsu, Bing-Mu; Huang, Chin-Chun; Chen, Jung-Sheng; Chen, Nai-Hsiung; Huang, Jen-Te

    2011-10-15

    This study compares five genera of free-living amoeba (FLA) hosts of Legionella spp. in fixed (substrate-associated) and floating biofilm samples from spring environments. The detection rate of Legionella spp. was 26.9% in the floating biofilms and 3.1% in the fixed biofilms. Acanthamoeba spp., Hartmanella vermiformis, and Naegleria spp. were detected more frequently in floating than in fixed biofilm samples. Pathogenic Acanthamoeba spp. accounted for 19.6% of all Acanthamoeba-positive samples, and potentially pathogenic Naegleria spp. (for example, Naegleria australiensis, Naegleria philippinensis, and Naegleria italica) accounted for 54.2% of all Naegleria-positive samples. In the study, 12 serotypes of Legionella spp. capable of causing pneumonia were detected, accounting for 42.4% of all Legionella-containing samples. The FLA parasitized by Legionella included an unnamed Acanthamoeba genotype, Acanthamoeba griffini, Acanthamoeba jacobsi, H. vermiformis, and N. australiensis. Significant differences were also observed between the presence/absence of H. vermiformis and Legionella parasitism in FLA. Comparisons between the culture-confirmed method and the PCR-based detection method for detecting FLA and Legionella in biofilms showed great variation; using both analysis methods together to detect FLA and Legionella is therefore recommended. Copyright © 2011 Elsevier Ltd. All rights reserved.

  4. Cotton fiber quality characterization with light scattering and fourier transform infrared techniques.

    PubMed

    Thomasson, J A; Manickavasagam, S; Mengüç, M P

    2009-03-01

    Fiber quality measurement is critical to assessing the value of a bale of cotton for various textile purposes. An instrument that could measure numerous cotton quality properties by optical means could be made simpler and faster than current fiber quality measurement instruments, and it might be more amenable to on-line measurement at processing facilities. To that end, a laser system was used to investigate cotton fiber samples with respect to electromagnetic scattering at various wavelengths, polarization angles, and scattering angles. A Fourier transform infrared (FT-IR) instrument was also used to investigate the transmission of electromagnetic energy at various mid-infrared wavelengths. Cotton samples were selected to represent a wide range of micronaire values. Varying the wavelength of the laser at a fixed polarization resulted in little variation in scattered light among the cotton samples. However, varying the polarization at a fixed wavelength produced notable variation, indicating that polarization might be used to differentiate among cotton samples with respect to certain fiber properties. The FT-IR data in the 12 to 22 μm range produced relatively large differences in the amount of scattered light among all samples, and FT-IR data at certain combinations of fixed wavelengths were highly linearly related to certain measures of cotton quality including micronaire.

  5. A modular and compact portable mini-endstation for high-precision, high-speed fixed target serial crystallography at FEL and synchrotron sources

    DOE PAGES

    Sherrell, Darren A.; Foster, Andrew J.; Hudson, Lee; ...

    2015-01-01

    The design and implementation of a compact and portable sample alignment system suitable for use at both synchrotron and free-electron laser (FEL) sources and its performance are described. The system provides the ability to quickly and reliably deliver large numbers of samples using the minimum amount of sample possible, through positioning of fixed target arrays into the X-ray beam. The combination of high-precision stages, high-quality sample viewing, a fast controller and a software layer overcome many of the challenges associated with sample alignment. A straightforward interface that minimizes setup and sample changeover time as well as simplifying communication with the stages during the experiment is also described, together with an intuitive naming convention for defining, tracking and locating sample positions. Lastly, the setup allows the precise delivery of samples in predefined locations to a specific position in space and time, reliably and simply.

  6. Using Lagrangian sampling to study water quality during downstream transport in the San Luis Drain, California, USA

    USGS Publications Warehouse

    Volkmar, E.C.; Dahlgren, R.A.; Stringfellow, W.T.; Henson, S.S.; Borglin, S.E.; Kendall, C.; Van Nieuwenhuyse, E. E.

    2011-01-01

    To investigate the mechanism for diel (24 h) changes commonly observed at fixed sampling locations and how these diel changes relate to downstream transport in hypereutrophic surface waters, we studied a parcel of agricultural drainage water as it traveled for 84 h in a concrete-lined channel having no additional water inputs or outputs. Algal fluorescence, dissolved oxygen, temperature, pH, conductivity, and turbidity were measured every 30 min. Grab samples were collected every 2 h for water quality analyses, including nutrients, suspended sediment, and chlorophyll/pheophytin. Strong diel patterns were observed for dissolved oxygen, pH, and temperature within the parcel of water. In contrast, algal pigments and nitrate did not exhibit diel patterns within the parcel of water, but did exhibit strong diel patterns in samples collected at a fixed sampling location. The diel patterns observed at fixed sampling locations for these constituents can be attributed to algal growth during the day and downstream transport (washout) of algae at night. Algal pigments showed a rapid daytime increase during the first 48 h followed by a general decrease for the remainder of the study, possibly due to sedimentation and photobleaching. Algal growth (primarily diatoms) was apparent each day during the study, as measured by increasing dissolved oxygen concentrations, despite low phosphate concentrations (<0.01 mg L-1). © 2011 Elsevier B.V.

  7. The correlation of social support with mental health: A meta-analysis

    PubMed Central

    Harandi, Tayebeh Fasihi; Taghinasab, Maryam Mohammad; Nayeri, Tayebeh Dehghan

    2017-01-01

    Background and aim Social support is an important factor that can affect mental health. In recent decades, many studies have examined the impact of social support on mental health. The purpose of the present study is to estimate the effect size of the relationship between social support and mental health in studies conducted in Iran. Methods This meta-analysis covered studies performed from 1996 through 2015. Databases included SID and Magiran, the comprehensive portal of human sciences, Noor specialized magazine databases, IRANDOC, Proquest, PubMed, Scopus, ERIC, Iranmedex and Google Scholar. The keywords used to search these websites included "mental health or general health," "Iran" and "social support." In total, 64 studies met the inclusion criteria for the meta-analysis. Data were collected with a researcher-made meta-analysis worksheet, and the CMA-2 software was used for data analysis. Results The mean effect size of the 64 studies was 0.356 in the fixed-effect model and 0.330 in the random-effect model, indicating a moderate effect of social support on mental health. The studies showed no publication bias but had heterogeneous effect sizes. The target population and the social support questionnaire were moderator variables, whereas sex, sampling method, and the mental health questionnaire were not. Conclusion Given the relatively high effect size of the correlation between social support and mental health, it is necessary to provide greater social support, especially for women, the elderly, patients, workers, and students. PMID:29038699
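
    The fixed-effect and random-effect pooled estimates reported above come from standard inverse-variance weighting. A minimal sketch, assuming the usual DerSimonian-Laird estimate of the between-study variance; the toy numbers in the tests are invented, not the study's data:

```python
def pooled_effects(effects, variances):
    """Inverse-variance pooled effect size: fixed-effect estimate and a
    DerSimonian-Laird random-effects estimate. Returns (fixed, random)."""
    w = [1.0 / v for v in variances]
    sw = sum(w)
    fixed = sum(wi * y for wi, y in zip(w, effects)) / sw
    # Heterogeneity: Cochran's Q and between-study variance tau^2
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, effects))
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (len(effects) - 1)) / c) if c > 0 else 0.0
    wr = [1.0 / (v + tau2) for v in variances]
    random = sum(wi * y for wi, y in zip(wr, effects)) / sum(wr)
    return fixed, random
```

    When the studies are homogeneous (tau^2 = 0) the two estimates coincide, which is why the fixed- and random-effect means above (0.356 vs 0.330) are close but not identical: the heterogeneous effect sizes pull the random-effects weights toward equality.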

  8. A simple quantitative diagnostic alternative for MGMT DNA-methylation testing on RCL2 fixed paraffin embedded tumors using restriction coupled qPCR.

    PubMed

    Pulverer, Walter; Hofner, Manuela; Preusser, Matthias; Dirnberger, Elisabeth; Hainfellner, Johannes A; Weinhaeusel, Andreas

    2014-01-01

    MGMT promoter methylation is associated with favorable prognosis and chemosensitivity in glioblastoma multiforme (GBM), especially in elderly patients. We aimed to develop a simple methylation-sensitive restriction enzyme (MSRE)-based quantitative PCR (qPCR) assay allowing the quantification of MGMT promoter methylation. DNA was extracted from non-neoplastic brain (n = 24) and GBM samples (n = 20) under 3 different sample conservation conditions (-80 °C; formalin-fixed and paraffin-embedded (FFPE); RCL2-fixed). We evaluated the suitability of each fixation method with respect to the MSRE-coupled qPCR methylation analyses. Methylation data were validated by MALDI-TOF. qPCR was used for evaluation of alternative tissue conservation procedures. DNA from FFPE tissue failed reliable testing; DNA from both RCL2-fixed and fresh frozen tissues performed equally well and was further used for validation of the quantitative MGMT methylation assay (limit of detection (LOD): 19.58 pg), using the individual's undigested sample DNA for calibration. MGMT methylation analysis in non-neoplastic brain identified a background methylation of 0.10 ± 11%, which we used for defining a cut-off of 0.32% for patient stratification. Of the GBM patients, 9 were MGMT methylation-positive (range: 0.56 - 91.95%) and 11 tested negative. MALDI-TOF measurements resulted in a concordant classification of 94% of GBM samples in comparison to qPCR. The presented methodology allows quantitative MGMT promoter methylation analyses. An amount of 200 ng DNA is sufficient for triplicate analyses including control reactions and individual calibration curves, thus excluding any DNA quality-derived bias. The combination of RCL2 fixation and quantitative methylation analyses improves pathological routine examination when histological and molecular analyses on limited amounts of tumor samples are necessary for patient stratification.
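
    The MSRE-qPCR readout rests on a simple piece of arithmetic: methylated templates survive the methylation-sensitive digest, so the Ct shift between the digested aliquot and the individual's undigested calibrator gives the surviving (methylated) fraction. A simplified sketch assuming 100% PCR efficiency (a perfect doubling per cycle); the abstract does not publish this exact formula, so treat it as an illustration of the principle:

```python
def percent_methylation(ct_digested, ct_undigested):
    """MSRE-qPCR delta-Ct model: the methylated fraction is the fraction
    of template surviving digestion, 2^-(Ct_digested - Ct_undigested),
    expressed as a percentage. Assumes ideal doubling per cycle."""
    return 100.0 * 2.0 ** -(ct_digested - ct_undigested)
```

    A fully methylated sample shows no Ct shift (100%), while a shift of about 3.32 cycles corresponds to a tenfold loss of template, i.e. roughly 10% methylation; in practice a standard curve per individual, as in the assay above, corrects for non-ideal efficiency.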

  9. A single mitochondrial haplotype and nuclear genetic differentiation in sympatric colour morphs of a riverine cichlid fish.

    PubMed

    Koblmüller, S; Sefc, K M; Duftner, N; Katongo, C; Tomljanovic, T; Sturmbauer, C

    2008-01-01

    Some of the diversity of lacustrine cichlid fishes has been ascribed to sympatric divergence, whereas diversification in rivers is generally driven by vicariance and geographic isolation. In the riverine Pseudocrenilabrus philander species complex, several morphologically highly distinct populations are restricted to particular river systems, sinkholes and springs in southern Africa. One of these populations consists of a prevalent yellow morph in sympatry with a less frequent blue morph, and no individuals bear intermediate phenotypes. Genetic variation in microsatellites and AFLP markers was very low in both morphs and one single mtDNA haplotype was fixed in all samples, indicating a very young evolutionary age and small effective population size. Nevertheless, the nuclear markers detected low but significant differentiation between the two morphs. The data suggest recent and perhaps sympatric divergence in the riverine habitat.

  10. Sequential CFAR detectors using a dead-zone limiter

    NASA Astrophysics Data System (ADS)

    Tantaratana, Sawasd

    1990-09-01

    The performances of some proposed sequential constant-false-alarm-rate (CFAR) detectors are evaluated. The observations are passed through a dead-zone limiter, whose output is -1, 0, or +1 depending on whether the input is less than -c, between -c and c, or greater than c, where c is a constant. The test statistic is the sum of the limiter outputs; equivalently, the test operates on the reduced set of data (those with absolute value larger than c), summing their signs. Both constant and linear boundaries are considered. Numerical results show a significant reduction in the average number of observations needed to achieve the same false alarm and detection probabilities as a fixed-sample-size CFAR detector using the same kind of test statistic.
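
    The detector structure described above is easy to sketch: each observation is quantized to -1, 0, or +1 by the dead-zone limiter, the quantized values are summed, and sampling stops when the running sum crosses a boundary. The constant boundaries and threshold c used below are illustrative placeholders, not the paper's designed values:

```python
def dead_zone(x, c):
    """Dead-zone limiter: +1 above c, -1 below -c, 0 inside the dead zone."""
    if x > c:
        return 1
    if x < -c:
        return -1
    return 0

def sequential_detect(samples, c, lower, upper):
    """Sequential CFAR-style test with constant boundaries: accumulate the
    limiter outputs and stop as soon as the sum crosses a boundary.
    Returns ("signal" | "noise", samples_used) or ("continue", n)."""
    s = 0
    for n, x in enumerate(samples, start=1):
        s += dead_zone(x, c)
        if s >= upper:
            return "signal", n
        if s <= lower:
            return "noise", n
    return "continue", len(samples)
```

    Observations inside the dead zone contribute nothing to the sum, which is what makes the statistic a sum of signs over the reduced data set; the sequential stopping rule is what cuts the average sample number relative to a fixed-sample-size test.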

  11. A Study of the Errors of the Fixed-Node Approximation in Diffusion Monte Carlo

    NASA Astrophysics Data System (ADS)

    Rasch, Kevin M.

    Quantum Monte Carlo techniques stochastically evaluate integrals to solve the many-body Schrodinger equation. QMC algorithms scale favorably in the number of particles simulated and enjoy applicability to a wide range of quantum systems. Advances in the core algorithms of the method and their implementations paired with the steady development of computational assets have carried the applicability of QMC beyond analytically treatable systems, such as the Homogeneous Electron Gas, and have extended QMC's domain to treat atoms, molecules, and solids containing as many as several hundred electrons. FN-DMC projects out the ground state of a wave function subject to constraints imposed by our ansatz to the problem. The constraints imposed by the fixed-node Approximation are poorly understood. One key step in developing any scientific theory or method is to qualify where the theory is inaccurate and to quantify how erroneous it is under these circumstances. I investigate the fixed-node errors as they evolve over changing charge density, system size, and effective core potentials. I begin by studying a simple system for which the nodes of the trial wave function can be solved almost exactly. By comparing two trial wave functions, a single determinant wave function flawed in a known way and a nearly exact wave function, I show that the fixed-node error increases when the charge density is increased. Next, I investigate a sequence of Lithium systems increasing in size from a single atom, to small molecules, up to the bulk metal form. Over these systems, FN-DMC calculations consistently recover 95% or more of the correlation energy of the system. Given this accuracy, I make a prediction for the binding energy of Li4 molecule. Last, I turn to analyzing the fixed-node error in first and second row atoms and their molecules. With the appropriate pseudo-potentials, these systems are iso-electronic, show similar geometries and states. 
One would expect that, with an identical number of particles involved in the calculation, the errors in the respective total energies of the two iso-electronic species would be quite similar. I observe, instead, that the first row atoms and their molecules have errors twice or more as large. I identify a cause for this difference between iso-electronic species. The fixed-node errors in all of these cases are calculated by careful comparison to experimental results, showing FN-DMC to be a robust tool for understanding quantum systems and a method for new investigations into the nature of many-body effects.
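
    The projection idea described above can be illustrated with a toy diffusion Monte Carlo run for the 1D harmonic oscillator. This is a sketch only: it omits importance sampling and the fixed-node constraint (which matters only for fermionic systems), and every parameter below (walker count, time step, seed) is an illustrative choice, not the author's settings.

```python
import math
import random

def dmc_harmonic_oscillator(n_target=500, n_steps=2000, dt=0.01, seed=42):
    """Toy fixed-population diffusion Monte Carlo for V(x) = x^2 / 2.

    Walkers diffuse freely, then branch with weight exp(-(V - E_ref) * dt);
    E_ref is steered each step to hold the population near n_target.  The
    ground-state energy (exactly 0.5 in units hbar = m = omega = 1) is
    estimated as the time average of the mean potential over the ensemble.
    """
    rng = random.Random(seed)
    walkers = [rng.gauss(0.0, 1.0) for _ in range(n_target)]
    e_ref = 0.5
    samples = []
    for step in range(n_steps):
        new_walkers = []
        for x in walkers:
            x += rng.gauss(0.0, math.sqrt(dt))               # free diffusion move
            w = math.exp(-(0.5 * x * x - e_ref) * dt)        # branching weight
            new_walkers.extend([x] * int(w + rng.random()))  # stochastic rounding
        walkers = new_walkers
        v_mean = sum(0.5 * x * x for x in walkers) / len(walkers)
        e_ref = v_mean + (1.0 - len(walkers) / n_target) / dt  # population control
        if step >= n_steps // 2:                             # discard equilibration
            samples.append(v_mean)
    return sum(samples) / len(samples)
```

    The returned estimate converges toward the exact ground-state energy 0.5; in FN-DMC the same projection runs inside the nodal pockets of the trial wave function, which is where the fixed-node errors analyzed in this work arise.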

  12. 50 CFR 660.230 - Fixed gear fishery-management measures.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... limit, size limit, scientific sorting designation, quota, harvest guideline, ACL or ACT or OY, if the... designation, quota, harvest guideline, ACL or ACT or OY applied.” The States of Washington, Oregon, and...

  13. 50 CFR 660.230 - Fixed gear fishery-management measures.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... limit, size limit, scientific sorting designation, quota, harvest guideline, ACL or ACT or OY, if the... designation, quota, harvest guideline, ACL or ACT or OY applied.” The States of Washington, Oregon, and...

  14. Alternative Parameterizations for Cluster Editing

    NASA Astrophysics Data System (ADS)

    Komusiewicz, Christian; Uhlmann, Johannes

    Given an undirected graph G and a nonnegative integer k, the NP-hard Cluster Editing problem asks whether G can be transformed into a disjoint union of cliques by applying at most k edge modifications. In the field of parameterized algorithmics, Cluster Editing has almost exclusively been studied parameterized by the solution size k. In contrast, in many real-world instances the parameter k is not actually small. This observation motivates our investigation of parameterizations of Cluster Editing different from the solution size k. Our results are as follows. Cluster Editing is fixed-parameter tractable with respect to the parameter "size of a minimum cluster vertex deletion set of G", a typically much smaller parameter than k. Cluster Editing remains NP-hard on graphs with maximum degree six. A restricted but practically relevant version of Cluster Editing is fixed-parameter tractable with respect to the combined parameter "number of clusters in the target graph" and "maximum number of modified edges incident to any vertex in G". Many of our results also transfer to the NP-hard Cluster Deletion problem, where only edge deletions are allowed.
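
    The solution-size parameterization discussed above admits a classic O(3^k)-time search tree: a graph is a disjoint union of cliques exactly when it contains no induced path on three vertices, so one can branch on such a "conflict triple" by deleting one of its two edges or inserting the missing one. A minimal sketch (the adjacency-set representation and function names are illustrative, not from the paper):

```python
from itertools import combinations

def find_conflict(adj):
    """Return an induced path u-v-w (edges uv, vw; no edge uw), or None if
    the graph is already a disjoint union of cliques."""
    for v in adj:
        for u, w in combinations(sorted(adj[v]), 2):
            if w not in adj[u]:
                return u, v, w
    return None

def cluster_editing(adj, k):
    """Depth-bounded O(3^k) search tree: branch on a conflict triple by
    deleting edge uv, deleting edge vw, or inserting edge uw.  Returns True
    iff at most k edge modifications make adj a cluster graph; the input
    graph is restored before returning."""
    conflict = find_conflict(adj)
    if conflict is None:
        return True
    if k == 0:
        return False
    u, v, w = conflict
    for a, b, insert in ((u, v, False), (v, w, False), (u, w, True)):
        if insert:
            adj[a].add(b); adj[b].add(a)
        else:
            adj[a].remove(b); adj[b].remove(a)
        found = cluster_editing(adj, k - 1)
        if insert:                       # undo before trying the next branch
            adj[a].remove(b); adj[b].remove(a)
        else:
            adj[a].add(b); adj[b].add(a)
        if found:
            return True
    return False
```

    For example, a path on three vertices needs exactly one modification: `cluster_editing` returns False for k = 0 and True for k = 1.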

  15. Representing perturbed dynamics in biological network models

    NASA Astrophysics Data System (ADS)

    Stoll, Gautier; Rougemont, Jacques; Naef, Felix

    2007-07-01

    We study the dynamics of gene activities in relatively small biological networks (up to a few tens of nodes), e.g., the activities of cell-cycle proteins during the mitotic cell-cycle progression. Using the framework of deterministic discrete dynamical models, we characterize the dynamical modifications in response to structural perturbations in the network connectivities. In particular, we focus on how perturbations affect the set of fixed points and the sizes of the basins of attraction. Our approach uses two analytical measures: the basin entropy H and the perturbation size Δ, a quantity that reflects the distance between the set of fixed points of the perturbed network and that of the unperturbed network. Applying our approach to the yeast-cell-cycle network introduced by Li et al. [Proc. Natl. Acad. Sci. U.S.A. 101, 4781 (2004)] provides a low-dimensional and informative fingerprint of network behavior under large classes of perturbations. We identify interactions that are crucial for proper network function, and also pinpoint functionally redundant network connections. Selected perturbations exemplify the breadth of dynamical responses in this cell-cycle model.
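
    For networks of this size the state space (2^n configurations) can be enumerated exhaustively, so attractors, basin sizes, and the basin entropy H = -Σ p_i ln p_i are directly computable. A minimal sketch on a made-up three-node rule (not the yeast cell-cycle network itself):

```python
import math

def attractor(state, update):
    """Iterate a deterministic update from `state` until a configuration
    repeats; return the attractor (fixed point or cycle) as a frozenset."""
    seen, trajectory = {}, []
    while state not in seen:
        seen[state] = len(trajectory)
        trajectory.append(state)
        state = update(state)
    return frozenset(trajectory[seen[state]:])

def basin_entropy(n_nodes, update):
    """Map each of the 2^n states to its attractor; return the basin sizes
    and the basin entropy H = -sum_i p_i ln p_i."""
    basins = {}
    for s in range(2 ** n_nodes):
        a = attractor(s, update)
        basins[a] = basins.get(a, 0) + 1
    total = 2 ** n_nodes
    h = -sum(b / total * math.log(b / total) for b in basins.values())
    return basins, h

def and_of_others(state):
    """Toy 3-node rule: each node becomes the AND of the other two."""
    b = [(state >> i) & 1 for i in range(3)]
    nb = [b[1] & b[2], b[0] & b[2], b[0] & b[1]]
    return nb[0] | (nb[1] << 1) | (nb[2] << 2)
```

    This toy rule has two fixed points, all-off and all-on, with basin sizes 7 and 1, giving H = -(7/8)ln(7/8) - (1/8)ln(1/8) ≈ 0.377; rerunning `basin_entropy` on a structurally perturbed update function is what yields the Δ and H fingerprints described above.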

  16. Collector Size or Range Independence of SNR in Fixed-Focus Remote Raman Spectrometry.

    PubMed

    Hirschfeld, T

    1974-07-01

    When sensitivity allows, remote Raman spectrometers can be operated at a fixed focus with purely electronic (easily multiplexable) range gating. To keep the background small, the system etendue must be minimized. For a maximum range larger than the hyperfocal one, this is done by focusing the system at roughly twice the minimum range at which etendue matching is still required. Under these conditions the etendue varies as the fourth power of the collector diameter, causing the background shot noise to vary as its square. As the signal also varies with the same power, and background noise is usually limiting in this type of instrument, the SNR becomes independent of the collector size. Below this minimum etendue-matched range, the transmission at the limiting aperture grows with the square of the range, canceling the inverse square loss of signal with range. The SNR is thus range independent below the minimum etendue-matched range and collector size independent above it, with the location of the transition determined by the system etendue and collector diameter. The range of validity of these outrageous statements is discussed.
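
    The collector-size cancellation can be checked numerically under the scalings stated in the abstract; the prefactors below are arbitrary illustrative constants.

```python
import math

def snr_background_limited(diameter, k_signal=3.0, k_background=5.0):
    """Background-shot-noise-limited SNR under the abstract's scalings:
    signal ~ D^2, background ~ etendue ~ D^4, noise = sqrt(background).
    k_signal and k_background are arbitrary illustrative prefactors."""
    signal = k_signal * diameter ** 2
    noise = math.sqrt(k_background * diameter ** 4)
    return signal / noise

# The D^2 factors cancel, so the SNR is identical for any collector size:
snrs = [snr_background_limited(d) for d in (0.1, 0.5, 2.0)]
```

    Both signal and background shot noise scale as D², so their ratio reduces to k_signal / sqrt(k_background), independent of the collector diameter.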

  17. Twyman effect mechanics in grinding and microgrinding.

    PubMed

    Lambropoulos, J C; Xu, S; Fang, T; Golini, D

    1996-10-01

    In the Twyman effect (1905), when one side of a thin plate with both sides polished is ground, the plate bends: The ground side becomes convex and is in a state of compressive residual stress, described in terms of force per unit length (Newtons per meter) induced by grinding, the stress (Newtons per square meter) induced by grinding, and the depth of the compressive layer (micrometers). We describe and correlate experiments on optical glasses from the literature in conditions of loose abrasive grinding (lapping at fixed nominal pressure, with abrasives 4-400 μm in size) and deterministic microgrinding experiments (at a fixed infeed rate) conducted at the Center for Optics Manufacturing with bound diamond abrasive tools (with a diamond size of 3-40 μm, embedded in metallic bond) and loose abrasive microgrinding (abrasives of less than 3 μm in size). In brittle grinding conditions, the grinding force and the depth of the compressive layer correlate well with glass mechanical properties describing the fracture process, such as indentation crack size. The maximum surface residual compressive stress decreases, and the depth of the compressive layer increases with increasing abrasive size. In lapping conditions the depth of the abrasive grain penetration into the glass surface scales with the surface roughness, and both are determined primarily by glass hardness and secondarily by Young's modulus for various abrasive sizes and coolants. In the limit of small abrasive size (ductile-mode grinding), the maximum surface compressive stress achieved is near the yield stress of the glass, in agreement with finite-element simulations of indentation in elastic-plastic solids.
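
    The force per unit length quoted above is conventionally recovered from the measured bending of the plate with a Stoney-type thin-film relation. A sketch with illustrative glass parameters (not the authors' exact analysis or data):

```python
def stoney_force_per_length(youngs_modulus, poisson_ratio, thickness, radius):
    """Grinding-induced surface force per unit length (N/m) from plate
    bending, via the Stoney relation F = E * t^2 / (6 * (1 - nu) * R).
    Inputs in SI units: Pa, dimensionless, m, m (radius of curvature)."""
    return youngs_modulus * thickness ** 2 / (6.0 * (1.0 - poisson_ratio) * radius)

# Illustrative numbers (not from the paper): a BK7-like glass plate with
# E = 81 GPa and nu = 0.21, 2 mm thick, bent to a 100 m radius by grinding.
force = stoney_force_per_length(81e9, 0.21, 2e-3, 100.0)   # roughly 680 N/m
```

    Dividing this force per unit length by an independently estimated compressive-layer depth (micrometers, per the abstract) gives the average residual stress in the ground layer.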

  18. Statistical controversies in clinical research: building the bridge to phase II-efficacy estimation in dose-expansion cohorts.

    PubMed

    Boonstra, P S; Braun, T M; Taylor, J M G; Kidwell, K M; Bellile, E L; Daignault, S; Zhao, L; Griffith, K A; Lawrence, T S; Kalemkerian, G P; Schipper, M J

    2017-07-01

    Regulatory agencies and others have expressed concern about the uncritical use of dose expansion cohorts (DECs) in phase I oncology trials. Nonetheless, by several metrics-prevalence, size, and number-their popularity is increasing. Although early efficacy estimation in defined populations is a common primary endpoint of DECs, the types of designs best equipped to identify efficacy signals have not been established. We conducted a simulation study of six phase I design templates with multiple DECs: three dose-assignment/adjustment mechanisms crossed with two analytic approaches for estimating efficacy after the trial is complete. We also investigated the effect of sample size and interim futility analysis on trial performance. Identifying populations in which the treatment is efficacious (true positives) and weeding out inefficacious treatment/populations (true negatives) are competing goals in these trials. Thus, we estimated true and false positive rates for each design. Adaptively updating the maximum tolerated dose (MTD) during the DEC improved true positive rates by 8-43% compared with fixing the dose during the DEC phase while maintaining false positive rates. Inclusion of an interim futility analysis decreased the number of patients treated under inefficacious DECs without hurting performance. A substantial gain in efficiency is obtainable using a design template that statistically models toxicity and efficacy against dose level during expansion. Design choices for dose expansion should be motivated by and based upon expected performance. Similar to the common practice in single-arm phase II trials, cohort sample sizes should be justified with respect to their primary aim and include interim analyses to allow for early stopping. © The Author 2017. Published by Oxford University Press on behalf of the European Society for Medical Oncology. All rights reserved. For permissions, please email: journals.permissions@oup.com.
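
    The trade-off the authors simulate can be sketched for a single cohort with one interim futility look. All design numbers below (cohort size, thresholds, response rates) are illustrative choices, not taken from the paper:

```python
import random

def simulate_dec(p_response, n_interim=10, futility_max=1, n_total=25,
                 success_min=6, n_trials=20000, seed=7):
    """Simulate one expansion cohort with a single interim futility look:
    stop early if at most `futility_max` responses occur in the first
    `n_interim` patients; otherwise enroll to `n_total` and call the cohort
    positive if at least `success_min` responses are observed.  Returns
    (positive rate, expected number of patients treated)."""
    rng = random.Random(seed)
    positives = 0
    patients = 0
    for _ in range(n_trials):
        interim = sum(rng.random() < p_response for _ in range(n_interim))
        if interim <= futility_max:
            patients += n_interim          # stopped for futility
            continue
        rest = sum(rng.random() < p_response
                   for _ in range(n_total - n_interim))
        patients += n_total
        if interim + rest >= success_min:
            positives += 1
    return positives / n_trials, patients / n_trials

# False positive rate under an inactive drug (p = 0.10) vs. power under an
# active one (p = 0.30), plus expected patients treated in each scenario:
fp_rate, n_inactive = simulate_dec(0.10)
tp_rate, n_active = simulate_dec(0.30)
```

    In this sketch the futility look substantially cuts the expected enrollment under the inactive scenario while costing only a modest amount of power, mirroring the paper's finding that interim futility analyses reduce patients treated under inefficacious DECs without hurting performance.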

  19. Genetic variation of Sargassum horneri populations detected by inter-simple sequence repeats.

    PubMed

    Ren, J R; Yang, R; He, Y Y; Sun, Q H

    2015-01-30

    The seaweed Sargassum horneri is an important brown alga in the marine environment, and it is an important raw material in the alginate industry. Unfortunately, the fixed (attached) resource originally reported has declined or disappeared, while increasing floating populations have been reported in recent years. We sampled a floating population and 4 fixed cultivated populations of S. horneri along the coast of Zhejiang, China. Inter-simple sequence repeat (ISSR) markers were used to analyze the genetic variation between the floating and fixed cultivated populations of S. horneri. In total, 220 loci were amplified with 23 ISSR primers. The percentage of polymorphic loci within each population ranged from 53.64 to 95.45%. The highest diversity was observed in population 3, the local strain that was suspension cultured in the lab and then fixed cultivated in the Nanji Islands before sampling. The lowest diversity was observed in the floating population 4. The genetic distances among the 5 S. horneri populations ranged from 0.0819 to 0.2889, and the pattern of distances was consistent with the diversity estimates. The results suggest that the floating population had the lowest genetic diversity and did not cluster with the fixed cultivated populations.
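
    Two of the quantities reported above can be computed directly from a binary band-presence matrix. A sketch with made-up data; the Nei & Li (1979) band-sharing distance below is a standard choice for dominant markers such as ISSR, used here as a stand-in since the abstract does not name the exact distance measure:

```python
def percent_polymorphic(band_matrix):
    """Percentage of loci (columns of a 0/1 band-presence matrix) that are
    polymorphic, i.e. neither present in nor absent from every individual."""
    n_ind, n_loci = len(band_matrix), len(band_matrix[0])
    poly = sum(0 < sum(row[j] for row in band_matrix) < n_ind
               for j in range(n_loci))
    return 100.0 * poly / n_loci

def nei_li_distance(bands_x, bands_y):
    """Nei & Li (1979) distance between two 0/1 band profiles:
    1 - 2 * shared_bands / (total bands in x + total bands in y)."""
    shared = sum(a and b for a, b in zip(bands_x, bands_y))
    return 1.0 - 2.0 * shared / (sum(bands_x) + sum(bands_y))

# Hypothetical mini-population: 3 individuals scored at 5 ISSR loci.
pop = [[1, 0, 1, 1, 0],
       [1, 1, 1, 0, 0],
       [1, 0, 0, 1, 0]]
```

    For this toy matrix, loci 1, 3, and 4 are polymorphic (60%), and the distance between the first two individuals is 1 - 4/6 = 1/3; population-level distance matrices built this way are what feed the clustering described in the abstract.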

  20. Fracture resistance and failure mode of posterior fixed dental prostheses fabricated with two zirconia CAD/CAM systems

    PubMed Central

    López-Suárez, Carlos; Gonzalo, Esther; Peláez, Jesús; Rodríguez, Verónica

    2015-01-01

    Background In recent years there has been an improvement of zirconia ceramic materials to replace posterior missing teeth. To date, few in vitro studies have been carried out on the fracture resistance of zirconia veneered posterior fixed dental prostheses. This study investigated the fracture resistance and the failure mode of 3-unit zirconia-based posterior fixed dental prostheses fabricated with two CAD/CAM systems. Material and Methods Twenty posterior fixed dental prostheses were studied. Samples were randomly divided into two groups (n=10 each) according to the zirconia ceramic analyzed: Lava and Procera. Specimens were loaded until fracture under static load. Data were analyzed using Wilcoxon's rank sum test and Wilcoxon's signed-rank test (P<0.05). Results Partial fracture of the veneering porcelain occurred in 100% of the samples. Within each group, significant differences were shown between the veneering and the framework fracture resistance (P=0.002). The failure occurred in the connector cervical area in 80% of the cases. Conclusions All fracture load values of the zirconia frameworks could be considered clinically acceptable. The connector area is the weak point of the restorations. Key words: Fixed dental prostheses, zirconium-dioxide, zirconia, fracture resistance, failure mode. PMID:26155341
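
    The between-group comparison here uses Wilcoxon's rank sum test, which can be sketched in pure Python with the usual normal approximation for the p-value. The fracture-load numbers below are hypothetical, not the study's data:

```python
import math

def rank_sum_test(sample_a, sample_b):
    """Two-sided Wilcoxon rank-sum (Mann-Whitney U) test using the normal
    approximation for the p-value (reasonable for ~10 per group, no ties)."""
    n_a, n_b = len(sample_a), len(sample_b)
    # U counts pairs where a beats b (ties count half).
    u = sum((a > b) + 0.5 * (a == b) for a in sample_a for b in sample_b)
    z = (u - n_a * n_b / 2.0) / math.sqrt(n_a * n_b * (n_a + n_b + 1) / 12.0)
    p = math.erfc(abs(z) / math.sqrt(2.0))   # two-sided tail probability
    return u, p

# Hypothetical fracture loads (N) for two groups of ten specimens each:
group_1 = [1480, 1510, 1495, 1530, 1475, 1502, 1518, 1490, 1508, 1522]
group_2 = [1320, 1290, 1350, 1310, 1335, 1298, 1342, 1305, 1328, 1315]
u_stat, p_value = rank_sum_test(group_1, group_2)
```

    With every load in group 1 exceeding every load in group 2, U takes its maximum value of 100 and the two-sided p-value falls well below the study's 0.05 threshold.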
