Sample records for projected sample size

  1. Simple, Defensible Sample Sizes Based on Cost Efficiency

    PubMed Central

    Bacchetti, Peter; McCulloch, Charles E.; Segal, Mark R.

    2009-01-01

    Summary The conventional approach of choosing sample size to provide 80% or greater power ignores the cost implications of different sample size choices. Costs, however, are often impossible for investigators and funders to ignore in actual practice. Here, we propose and justify a new approach for choosing sample size based on cost efficiency, the ratio of a study’s projected scientific and/or practical value to its total cost. By showing that a study’s projected value exhibits diminishing marginal returns as a function of increasing sample size for a wide variety of definitions of study value, we are able to develop two simple choices that can be defended as more cost efficient than any larger sample size. The first is to choose the sample size that minimizes the average cost per subject. The second is to choose sample size to minimize total cost divided by the square root of sample size. This latter method is theoretically more justifiable for innovative studies, but also performs reasonably well and has some justification in other cases. For example, if projected study value is assumed to be proportional to power at a specific alternative and total cost is a linear function of sample size, then this approach is guaranteed either to produce more than 90% power or to be more cost efficient than any sample size that does. These methods are easy to implement, based on reliable inputs, and well justified, so they should be regarded as acceptable alternatives to current conventional approaches. PMID:18482055
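
    A minimal sketch of the two rules described in this abstract, under an assumed nonlinear cost function (the fixed, per-subject, and escalation costs below are made up, not values from the paper): grid-search for the sample size that minimizes average cost per subject and for the one that minimizes total cost divided by the square root of sample size.

    ```python
    import numpy as np

    def total_cost(n, fixed=100_000.0, per_subject=500.0, escalation=2.0):
        # Assumed study cost: overhead + per-subject cost + a quadratic term
        # standing in for recruitment getting harder as n grows.
        return fixed + per_subject * n + escalation * n**2

    n = np.arange(1, 5001)
    cost = total_cost(n)

    n_rule1 = n[np.argmin(cost / n)]             # minimize average cost per subject
    n_rule2 = n[np.argmin(cost / np.sqrt(n))]    # minimize cost / sqrt(n)

    print(f"rule 1 (cost/n minimized):       n = {n_rule1}")
    print(f"rule 2 (cost/sqrt(n) minimized): n = {n_rule2}")
    ```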

  2. [Formal sample size calculation and its limited validity in animal studies of medical basic research].

    PubMed

    Mayer, B; Muche, R

    2013-01-01

    Animal studies are highly relevant for basic medical research, although their use is controversial in public debate. From a biometrical point of view, an optimal sample size should therefore be sought for these projects. Statistical sample size calculation is usually the appropriate methodology in planning medical research projects. However, the required information is often not valid or becomes available only during the course of an animal experiment. This article critically discusses the validity of formal sample size calculation for animal studies. Within the discussion, some requirements are formulated to fundamentally regulate the process of sample size determination for animal experiments.

  3. Determining the Population Size of Pond Phytoplankton.

    ERIC Educational Resources Information Center

    Hummer, Paul J.

    1980-01-01

    Discusses methods for determining the population size of pond phytoplankton, including water sampling techniques, laboratory analysis of samples, and additional studies worthy of investigation in class or as individual projects. (CS)

  4. 40 CFR 1042.310 - Engine selection for Category 1 and Category 2 engines.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Category 2 engines. (a) Determine minimum sample sizes as follows: (1) For Category 1 engines, the minimum sample size is one engine or one percent of the projected U.S.-directed production volume for all your Category 1 engine families, whichever is greater. (2) For Category 2 engines, the minimum sample size is...

  5. ENHANCEMENT OF LEARNING ON SAMPLE SIZE CALCULATION WITH A SMARTPHONE APPLICATION: A CLUSTER-RANDOMIZED CONTROLLED TRIAL.

    PubMed

    Ngamjarus, Chetta; Chongsuvivatwong, Virasakdi; McNeil, Edward; Holling, Heinz

    2017-01-01

    Sample size determination is usually taught on the basis of theory and is difficult to understand. Using a smartphone application to teach sample size calculation ought to be more attractive to students than lectures alone. This study compared levels of understanding of sample size calculations for research studies between participants attending a lecture only versus a lecture combined with using a smartphone application to calculate sample sizes, explored factors affecting the post-test score after training in sample size calculation, and investigated participants' attitudes toward a sample size application. A cluster-randomized controlled trial involving a number of health institutes in Thailand was carried out from October 2014 to March 2015. A total of 673 professional participants were enrolled and randomly allocated to one of two groups: 341 participants in 10 workshops to the control group and 332 participants in 9 workshops to the intervention group. Lectures on sample size calculation were given in the control group, while lectures using a smartphone application were given in the intervention group. Participants in the intervention group had better learning of sample size calculation (2.7 points out of a maximum of 10 points, 95% CI: 2.4 - 2.9) than participants in the control group (1.6 points, 95% CI: 1.4 - 1.8). Participants doing research projects had a higher post-test score than those who did not plan to conduct research projects (0.9 point, 95% CI: 0.5 - 1.4). The majority of the participants had a positive attitude towards the use of a smartphone application for learning sample size calculation.

  6. Reexamining Sample Size Requirements for Multivariate, Abundance-Based Community Research: When Resources are Limited, the Research Does Not Have to Be.

    PubMed

    Forcino, Frank L; Leighton, Lindsey R; Twerdy, Pamela; Cahill, James F

    2015-01-01

    Community ecologists commonly perform multivariate techniques (e.g., ordination, cluster analysis) to assess patterns and gradients of taxonomic variation. A critical requirement for a meaningful statistical analysis is accurate information on the taxa found within an ecological sample. However, oversampling (too many individuals counted per sample) also comes at a cost, particularly for ecological systems in which identification and quantification is substantially more resource consuming than the field expedition itself. In such systems, an increasingly larger sample size will eventually result in diminishing returns in improving any pattern or gradient revealed by the data, but will also lead to continually increasing costs. Here, we examine 396 datasets: 44 previously published and 352 created datasets. Using meta-analytic and simulation-based approaches, the research within the present paper seeks (1) to determine minimal sample sizes required to produce robust multivariate statistical results when conducting abundance-based, community ecology research. Furthermore, we seek (2) to determine the dataset parameters (i.e., evenness, number of taxa, number of samples) that require larger sample sizes, regardless of resource availability. We found that in the 44 previously published and the 220 created datasets with randomly chosen abundances, a conservative estimate of a sample size of 58 produced the same multivariate results as all larger sample sizes. However, this minimal number varies as a function of evenness, where increased evenness resulted in increased minimal sample sizes. Sample sizes as small as 58 individuals are sufficient for a broad range of multivariate abundance-based research. In cases when resource availability is the limiting factor for conducting a project (e.g., small university, time to conduct the research project), statistically viable results can still be obtained with less of an investment.
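
    The kind of rarefaction experiment described above can be sketched in a few lines: repeatedly subsample individuals within each sample and check when the multivariate (Bray-Curtis) structure stops changing. The synthetic community, the subsampling sizes, and the use of a simple matrix correlation are illustrative assumptions, not the authors' exact procedure.

    ```python
    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import pearsonr

    rng = np.random.default_rng(0)

    # Fake community: 20 samples x 30 taxa, overdispersed abundances.
    abundances = rng.poisson(lam=rng.gamma(1.0, 5.0, size=(20, 30)))

    def rarefy(counts, n_individuals, rng):
        """Randomly draw n_individuals from one sample without replacement."""
        pool = np.repeat(np.arange(counts.size), counts)
        if pool.size <= n_individuals:
            return counts.copy()
        keep = rng.choice(pool, size=n_individuals, replace=False)
        return np.bincount(keep, minlength=counts.size)

    full_dist = pdist(abundances, metric="braycurtis")

    for n in (25, 50, 100, 200):
        sub = np.array([rarefy(row, n, rng) for row in abundances])
        sub_dist = pdist(sub, metric="braycurtis")
        r, _ = pearsonr(full_dist, sub_dist)
        print(f"n = {n:4d} individuals/sample: correlation with full data = {r:.3f}")
    ```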

  7. 40 CFR 94.505 - Sample selection for testing.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... engine family. The required sample size is zero if a manufacturer's projected annual production for all Category 1 engine families is less than 100. (ii) The required sample size for a Category 2 engine family... manufacturer will begin to select engines from each Category 1 and Category 2 engine family for production line...

  8. Breaking Free of Sample Size Dogma to Perform Innovative Translational Research

    PubMed Central

    Bacchetti, Peter; Deeks, Steven G.; McCune, Joseph M.

    2011-01-01

    Innovative clinical and translational research is often delayed or prevented by reviewers’ expectations that any study performed in humans must be shown in advance to have high statistical power. This supposed requirement is not justifiable and is contradicted by the reality that increasing sample size produces diminishing marginal returns. Studies of new ideas often must start small (sometimes even with an N of 1) because of cost and feasibility concerns, and recent statistical work shows that small sample sizes for such research can produce more projected scientific value per dollar spent than larger sample sizes. Renouncing false dogma about sample size would remove a serious barrier to innovation and translation. PMID:21677197

  9. Investigating the unification of LOFAR-detected powerful AGN in the Boötes field

    NASA Astrophysics Data System (ADS)

    Morabito, Leah K.; Williams, W. L.; Duncan, Kenneth J.; Röttgering, H. J. A.; Miley, George; Saxena, Aayush; Barthel, Peter; Best, P. N.; Bruggen, M.; Brunetti, G.; Chyży, K. T.; Engels, D.; Hardcastle, M. J.; Harwood, J. J.; Jarvis, Matt J.; Mahony, E. K.; Prandoni, I.; Shimwell, T. W.; Shulevski, A.; Tasse, C.

    2017-08-01

    Low radio frequency surveys are important for testing unified models of radio-loud quasars and radio galaxies. Intrinsically similar sources that are randomly oriented on the sky will have different projected linear sizes. Measuring the projected linear sizes of these sources provides an indication of their orientation. Steep-spectrum isotropic radio emission allows for orientation-free sample selection at low radio frequencies. We use a new radio survey of the Boötes field at 150 MHz made with the Low-Frequency Array (LOFAR) to select a sample of radio sources. We identify 60 radio sources with powers P > 10^25.5 W Hz^-1 at 150 MHz using cross-matched multiwavelength information from the AGN and Galaxy Evolution Survey, which provides spectroscopic redshifts and photometric identification of 16 quasars and 44 radio galaxies. When considering the radio spectral slope only, we find that radio sources with steep spectra have projected linear sizes that are on average 4.4 ± 1.4 times larger than those with flat spectra. The projected linear sizes of radio galaxies are on average 3.1 ± 1.0 times larger than those of quasars (2.0 ± 0.3 after correcting for redshift evolution). Combining these results with three previous surveys, we find that the projected linear sizes of radio galaxies and quasars depend on redshift but not on power. The projected linear size ratio does not correlate with either parameter. The LOFAR data are consistent within the uncertainties with theoretical predictions of the correlation between the quasar fraction and linear size ratio, based on an orientation-based unification scheme.
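
    For context, a projected linear size of the kind discussed above can be computed from an angular size and a redshift with a standard cosmology package; the cosmology, redshift, and angular size in this hedged example are illustrative, not values from the LOFAR study.

    ```python
    from astropy.cosmology import Planck15
    import astropy.units as u

    z = 1.2                       # assumed source redshift
    angular_size = 12 * u.arcsec  # assumed largest angular size

    # Proper transverse scale at z, then projected linear size.
    scale = Planck15.kpc_proper_per_arcmin(z).to(u.kpc / u.arcsec)
    linear_size = (angular_size * scale).to(u.kpc)
    print(f"projected linear size at z={z}: {linear_size:.1f}")
    ```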

  10. MSeq-CNV: accurate detection of Copy Number Variation from Sequencing of Multiple samples.

    PubMed

    Malekpour, Seyed Amir; Pezeshk, Hamid; Sadeghi, Mehdi

    2018-03-05

    Currently a few tools are capable of detecting genome-wide Copy Number Variations (CNVs) based on sequencing of multiple samples. Although aberrations in mate pair insertion sizes provide additional hints for CNV detection based on multiple samples, the majority of current tools rely only on the depth of coverage. Here, we propose a new algorithm (MSeq-CNV) which allows detecting common CNVs across multiple samples. MSeq-CNV applies a mixture density for modeling aberrations in depth of coverage and abnormalities in the mate pair insertion sizes. Each component in this mixture density applies a Binomial distribution for modeling the number of mate pairs with aberration in the insertion size and a Poisson distribution for emitting the read counts, at each genomic position. MSeq-CNV is applied to simulated data and also to real data of six HapMap individuals with high-coverage sequencing in the 1000 Genomes Project. These individuals include a CEU trio of European ancestry and a YRI trio of Nigerian ethnicity. Ancestry of these individuals is studied by clustering the identified CNVs. MSeq-CNV is also applied for detecting CNVs in two samples with low-coverage sequencing in the 1000 Genomes Project and six samples from the Simons Genome Diversity Project.
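
    A toy sketch of the mixture idea summarized above: each copy-number state emits a read count (Poisson) and a count of mate pairs with aberrant insert size (Binomial), and a per-position posterior follows from the product. The states, rates, and probabilities are illustrative assumptions, not MSeq-CNV's fitted parameters.

    ```python
    import numpy as np
    from scipy.stats import poisson, binom

    # Assumed states: deletion, normal (2 copies), duplication.
    states = ["del", "normal", "dup"]
    read_rate = {"del": 15.0, "normal": 30.0, "dup": 45.0}     # expected depth
    p_aberrant = {"del": 0.40, "normal": 0.02, "dup": 0.30}    # aberrant mate pairs
    prior = {"del": 0.05, "normal": 0.90, "dup": 0.05}

    def posterior(read_count, n_pairs, n_aberrant):
        """Posterior probability of each copy-number state at one position."""
        w = np.array([
            prior[s]
            * poisson.pmf(read_count, read_rate[s])
            * binom.pmf(n_aberrant, n_pairs, p_aberrant[s])
            for s in states
        ])
        return dict(zip(states, w / w.sum()))

    print(posterior(read_count=14, n_pairs=40, n_aberrant=15))
    ```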

  11. A Bayesian Perspective on the Reproducibility Project: Psychology

    PubMed Central

    Etz, Alexander; Vandekerckhove, Joachim

    2016-01-01

    We revisit the results of the recent Reproducibility Project: Psychology by the Open Science Collaboration. We compute Bayes factors—a quantity that can be used to express comparative evidence for an hypothesis but also for the null hypothesis—for a large subset (N = 72) of the original papers and their corresponding replication attempts. In our computation, we take into account the likely scenario that publication bias had distorted the originally published results. Overall, 75% of studies gave qualitatively similar results in terms of the amount of evidence provided. However, the evidence was often weak (i.e., Bayes factor < 10). The majority of the studies (64%) did not provide strong evidence for either the null or the alternative hypothesis in either the original or the replication, and no replication attempts provided strong evidence in favor of the null. In all cases where the original paper provided strong evidence but the replication did not (15%), the sample size in the replication was smaller than the original. Where the replication provided strong evidence but the original did not (10%), the replication sample size was larger. We conclude that the apparent failure of the Reproducibility Project to replicate many target effects can be adequately explained by overestimation of effect sizes (or overestimation of evidence against the null hypothesis) due to small sample sizes and publication bias in the psychological literature. We further conclude that traditional sample sizes are insufficient and that a more widespread adoption of Bayesian methods is desirable. PMID:26919473
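
    One common way to compute the kind of Bayes factors discussed above is the default (JZS) Bayes factor for a t test, following Rouder et al. (2009); the sketch below is that generic calculation, not necessarily the exact procedure used by Etz and Vandekerckhove, and the t value and sample size are made up.

    ```python
    import numpy as np
    from scipy.integrate import quad

    def jzs_bf10(t, n, r=1.0):
        """Bayes factor BF10 for H1: effect != 0 vs H0: effect = 0 (one-sample t)."""
        nu = n - 1
        null_like = (1 + t**2 / nu) ** (-(nu + 1) / 2)

        def integrand(g):
            # Marginal likelihood under H1 with a Cauchy(0, r) effect-size prior,
            # expressed via the inverse-chi-square prior on g.
            ng = n * g * r**2
            return ((1 + ng) ** -0.5
                    * (1 + t**2 / ((1 + ng) * nu)) ** (-(nu + 1) / 2)
                    * (2 * np.pi) ** -0.5 * g ** -1.5 * np.exp(-1 / (2 * g)))

        alt_like, _ = quad(integrand, 0, np.inf)
        return alt_like / null_like

    # Example: a modest effect in a small replication sample.
    print(f"BF10 = {jzs_bf10(t=2.3, n=25):.2f}")
    ```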

  12. A Bayesian Perspective on the Reproducibility Project: Psychology.

    PubMed

    Etz, Alexander; Vandekerckhove, Joachim

    2016-01-01

    We revisit the results of the recent Reproducibility Project: Psychology by the Open Science Collaboration. We compute Bayes factors-a quantity that can be used to express comparative evidence for an hypothesis but also for the null hypothesis-for a large subset (N = 72) of the original papers and their corresponding replication attempts. In our computation, we take into account the likely scenario that publication bias had distorted the originally published results. Overall, 75% of studies gave qualitatively similar results in terms of the amount of evidence provided. However, the evidence was often weak (i.e., Bayes factor < 10). The majority of the studies (64%) did not provide strong evidence for either the null or the alternative hypothesis in either the original or the replication, and no replication attempts provided strong evidence in favor of the null. In all cases where the original paper provided strong evidence but the replication did not (15%), the sample size in the replication was smaller than the original. Where the replication provided strong evidence but the original did not (10%), the replication sample size was larger. We conclude that the apparent failure of the Reproducibility Project to replicate many target effects can be adequately explained by overestimation of effect sizes (or overestimation of evidence against the null hypothesis) due to small sample sizes and publication bias in the psychological literature. We further conclude that traditional sample sizes are insufficient and that a more widespread adoption of Bayesian methods is desirable.

  13. The Effect of Pixel Size on the Accuracy of Orthophoto Production

    NASA Astrophysics Data System (ADS)

    Kulur, S.; Yildiz, F.; Selcuk, O.; Yildiz, M. A.

    2016-06-01

    In our country, orthophoto products are used by the public and private sectors for engineering services and infrastructure projects. Orthophotos are particularly preferred because their production is faster and more economical than vector-based digital photogrammetric production. Today, digital orthophotos provide the accuracy expected for engineering and infrastructure projects. In this study, the accuracy of orthophotos produced with pixel sizes at different sampling intervals is tested against the expectations of engineering and infrastructure projects.

  14. Using Sieving and Unknown Sand Samples for a Sedimentation-Stratigraphy Class Project with Linkage to Introductory Courses

    ERIC Educational Resources Information Center

    Videtich, Patricia E.; Neal, William J.

    2012-01-01

    Using sieving and sample "unknowns" for instructional grain-size analysis and interpretation of sands in undergraduate sedimentology courses has advantages over other techniques. Students (1) learn to calculate and use statistics; (2) visually observe differences in the grain-size fractions, thereby developing a sense of specific size…

  15. Project report : road weather information system phase II & IIb

    DOT National Transportation Integrated Search

    1997-09-15

    Data were collected on route choice and travel habits in the Lexington, Kentucky metropolitan area. The sample comprised 100 households, with the average household size 2.94 persons and with an average of 2.17 vehicles. This research project configur...

  16. ESTIMATING SAMPLE REQUIREMENTS FOR FIELD EVALUATIONS OF PESTICIDE LEACHING

    EPA Science Inventory

    A method is presented for estimating the number of samples needed to evaluate pesticide leaching threats to ground water at a desired level of precision. Sample size projections are based on desired precision (exhibited as relative tolerable error), level of confidence (90 or 95%...

  17. Size selective isocyanate aerosols personal air sampling using porous plastic foams

    NASA Astrophysics Data System (ADS)

    Khanh Huynh, Cong; Duc, Trinh Vu

    2009-02-01

    As part of a European project (SMT4-CT96-2137), various European institutions specialized in occupational hygiene (BGIA, HSL, IOM, INRS, IST, Ambiente e Lavoro) established a program of scientific collaboration to develop one or more prototypes of European personal samplers for the simultaneous collection of three dust fractions: inhalable, thoracic and respirable. These samplers, based on existing sampling heads (IOM, GSP and cassettes), use polyurethane plastic foam (PUF), selected according to its porosity, as both the sampling substrate and the particle size separator. In this study, the authors present an original application of size-selective personal air sampling using chemically impregnated PUF to capture and derivatize isocyanate aerosols in industrial spray-painting shops.

  18. Sediment laboratory quality-assurance project: studies of methods and materials

    USGS Publications Warehouse

    Gordon, J.D.; Newland, C.A.; Gray, J.R.

    2001-01-01

    In August 1996 the U.S. Geological Survey initiated the Sediment Laboratory Quality-Assurance project. The Sediment Laboratory Quality-Assurance project is part of the National Sediment Laboratory Quality-Assurance program. This paper addresses the findings of the sand/fine separation analysis completed for the single-blind reference sediment-sample project and differences in reported results between two different analytical procedures. From the results it is evident that an incomplete separation of fine- and sand-size material commonly occurs, resulting in the classification of some of the fine-size material as sand-size material. Electron microscopy analysis supported the hypothesis that the negative bias for fine-size material and the positive bias for sand-size material is largely due to aggregation of some of the fine-size material into sand-size particles and adherence of fine-size material to the sand-size grains. Electron microscopy analysis showed that preserved river water, which was low in dissolved solids, specific conductance, and neutral pH, showed less aggregation and adhesion than preserved river water that was higher in dissolved solids and specific conductance with a basic pH. Bacteria were also found growing in the matrix, which may enhance fine-size material aggregation through their adhesive properties. Differences between sediment-analysis methods were also investigated as part of this study. Suspended-sediment concentration results obtained from one participating laboratory that used a total-suspended solids (TSS) method had greater variability and larger negative biases than results obtained when this laboratory used a suspended-sediment concentration method. When TSS methods were used to analyze the reference samples, the median suspended-sediment concentration percent difference was -18.04 percent. When the laboratory used a suspended-sediment concentration method, the median suspended-sediment concentration percent difference was -2.74 percent. The percent difference was calculated as follows: Percent difference = ((reported mass - known mass)/known mass) X 100.

  19. Evaluation of single and two-stage adaptive sampling designs for estimation of density and abundance of freshwater mussels in a large river

    USGS Publications Warehouse

    Smith, D.R.; Rogala, J.T.; Gray, B.R.; Zigler, S.J.; Newton, T.J.

    2011-01-01

    Reliable estimates of abundance are needed to assess consequences of proposed habitat restoration and enhancement projects on freshwater mussels in the Upper Mississippi River (UMR). Although there is general guidance on sampling techniques for population assessment of freshwater mussels, the actual performance of sampling designs can depend critically on the population density and spatial distribution at the project site. To evaluate various sampling designs, we simulated sampling of populations, which varied in density and degree of spatial clustering. Because of logistics and costs of large river sampling and spatial clustering of freshwater mussels, we focused on adaptive and non-adaptive versions of single and two-stage sampling. The candidate designs performed similarly in terms of precision (CV) and probability of species detection for fixed sample size. Both CV and species detection were determined largely by density, spatial distribution and sample size. However, designs did differ in the rate that occupied quadrats were encountered. Occupied units had a higher probability of selection using adaptive designs than conventional designs. We used two measures of cost: sample size (i.e. number of quadrats) and distance travelled between the quadrats. Adaptive and two-stage designs tended to reduce distance between sampling units, and thus performed better when distance travelled was considered. Based on the comparisons, we provide general recommendations on the sampling designs for the freshwater mussels in the UMR, and presumably other large rivers.
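
    A much-simplified version of the sampling simulation described above: quadrat counts are drawn from a clustered (negative binomial) population, and simple random samples of quadrats of increasing size are used to track the CV of the density estimate and the probability of detecting the species. The population parameters are assumptions, and the adaptive and two-stage designs of the study are not reproduced here.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Assumed population: 2,000 quadrats, mean 0.5 mussels/quadrat, clustered.
    n_quadrats, mean_density, clustering_k = 2000, 0.5, 0.2
    p = clustering_k / (clustering_k + mean_density)
    population = rng.negative_binomial(clustering_k, p, size=n_quadrats)

    for n_sample in (25, 50, 100, 200):
        estimates, detections = [], []
        for _ in range(2000):
            sample = rng.choice(population, size=n_sample, replace=False)
            estimates.append(sample.mean())
            detections.append(sample.sum() > 0)
        estimates = np.array(estimates)
        cv = estimates.std() / estimates.mean()
        print(f"n = {n_sample:3d}: CV = {cv:.2f}, "
              f"P(detect species) = {np.mean(detections):.2f}")
    ```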

  20. Globally maximizing, locally minimizing: unsupervised discriminant projection with applications to face and palm biometrics.

    PubMed

    Yang, Jian; Zhang, David; Yang, Jing-Yu; Niu, Ben

    2007-04-01

    This paper develops an unsupervised discriminant projection (UDP) technique for dimensionality reduction of high-dimensional data in small sample size cases. UDP can be seen as a linear approximation of a multimanifolds-based learning framework which takes into account both the local and nonlocal quantities. UDP characterizes the local scatter as well as the nonlocal scatter, seeking to find a projection that simultaneously maximizes the nonlocal scatter and minimizes the local scatter. This characteristic makes UDP more intuitive and more powerful than the most up-to-date method, Locality Preserving Projection (LPP), which considers only the local scatter for clustering or classification tasks. The proposed method is applied to face and palm biometrics and is examined using the Yale, FERET, and AR face image databases and the PolyU palmprint database. The experimental results show that UDP consistently outperforms LPP and PCA and outperforms LDA when the training sample size per class is small. This demonstrates that UDP is a good choice for real-world biometrics applications.
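
    A toy sketch of the idea behind UDP as summarized above: build a local scatter matrix from k-nearest-neighbor pairs and a nonlocal scatter matrix from the remaining pairs, then take projection directions that maximize nonlocal relative to local scatter via a generalized eigenproblem. Weighting, PCA preprocessing, and other details of the published method are simplified assumptions.

    ```python
    import numpy as np
    from scipy.linalg import eigh
    from scipy.spatial.distance import pdist, squareform

    def udp_like_projection(X, n_components=2, k=5):
        Xc = X - X.mean(axis=0)
        n, d = Xc.shape

        # Adjacency: 1 for k-nearest-neighbor pairs (local), 0 otherwise.
        dist = squareform(pdist(Xc))
        np.fill_diagonal(dist, np.inf)
        H = np.zeros((n, n))
        for i in range(n):
            H[i, np.argsort(dist[i])[:k]] = 1.0
        H = np.maximum(H, H.T)                       # symmetrize

        diffs = Xc[:, None, :] - Xc[None, :, :]      # pairwise differences
        outer = np.einsum("ijk,ijl->ijkl", diffs, diffs)
        S_local = np.einsum("ij,ijkl->kl", H, outer) / (2 * n * n)
        S_nonlocal = np.einsum("ij,ijkl->kl", 1.0 - H, outer) / (2 * n * n)

        # Generalized eigenproblem: maximize w' S_nonlocal w / w' S_local w.
        reg = 1e-6 * np.trace(S_local) / d
        vals, vecs = eigh(S_nonlocal, S_local + reg * np.eye(d))
        return vecs[:, ::-1][:, :n_components]       # top directions

    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 10))
    W = udp_like_projection(X)
    print("projection matrix shape:", W.shape)
    ```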

  1. The utility of point count surveys to predict wildlife interactions with wind energy facilities: An example focused on golden eagles

    USGS Publications Warehouse

    Sur, Maitreyi; Belthoff, James R.; Bjerre, Emily R.; Millsap, Brian A.; Katzner, Todd

    2018-01-01

    Wind energy development is rapidly expanding in North America, often accompanied by requirements to survey potential facility locations for existing wildlife. Within the USA, golden eagles (Aquila chrysaetos) are among the most high-profile species of birds that are at risk from wind turbines. To minimize golden eagle fatalities in areas proposed for wind development, modified point count surveys are usually conducted to estimate use by these birds. However, it is not always clear what drives variation in the relationship between on-site point count data and actual use by eagles of a wind energy project footprint. We used existing GPS-GSM telemetry data, collected at 15 min intervals from 13 golden eagles in 2012 and 2013, to explore the relationship between point count data and eagle use of an entire project footprint. To do this, we overlaid the telemetry data on hypothetical project footprints and simulated a variety of point count sampling strategies for those footprints. We compared the time an eagle was found in the sample plots with the time it was found in the project footprint using a metric we called “error due to sampling”. Error due to sampling for individual eagles appeared to be influenced by interactions between the size of the project footprint (20, 40, 90 or 180 km²) and the sampling type (random, systematic or stratified) and was greatest on 90 km² plots. However, use of random sampling resulted in the lowest error due to sampling within intermediate-sized plots. In addition, sampling intensity and sampling frequency both influenced the effectiveness of point count sampling. Although our work focuses on individual eagles (not the eagle populations typically surveyed in the field), our analysis shows both the utility of simulations to identify specific influences on error and also potential improvements to sampling that consider the context-specific manner that point counts are laid out on the landscape.

  2. Design and Weighting Methods for a Nationally Representative Sample of HIV-infected Adults Receiving Medical Care in the United States-Medical Monitoring Project

    PubMed Central

    Iachan, Ronaldo; H. Johnson, Christopher; L. Harding, Richard; Kyle, Tonja; Saavedra, Pedro; L. Frazier, Emma; Beer, Linda; L. Mattson, Christine; Skarbinski, Jacek

    2016-01-01

    Background: Health surveys of the general US population are inadequate for monitoring human immunodeficiency virus (HIV) infection because the relatively low prevalence of the disease (<0.5%) leads to small subpopulation sample sizes. Objective: To collect a nationally and locally representative probability sample of HIV-infected adults receiving medical care to monitor clinical and behavioral outcomes, supplementing the data in the National HIV Surveillance System. This paper describes the sample design and weighting methods for the Medical Monitoring Project (MMP) and provides estimates of the size and characteristics of this population. Methods: To develop a method for obtaining valid, representative estimates of the in-care population, we implemented a cross-sectional, three-stage design that sampled 23 jurisdictions, then 691 facilities, then 9,344 HIV patients receiving medical care, using probability-proportional-to-size methods. The data weighting process followed standard methods, accounting for the probabilities of selection at each stage and adjusting for nonresponse and multiplicity. Nonresponse adjustments accounted for differing response at both facility and patient levels. Multiplicity adjustments accounted for visits to more than one HIV care facility. Results: MMP used a multistage stratified probability sampling design that was approximately self-weighting in each of the 23 project areas and nationally. The probability sample represents the estimated 421,186 HIV-infected adults receiving medical care during January through April 2009. Methods were efficient (i.e., induced small unequal-weighting effects and small standard errors for a range of weighted estimates). Conclusion: The information collected through MMP allows monitoring trends in clinical and behavioral outcomes and informs resource allocation for treatment and prevention activities. PMID:27651851
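
    The weighting logic summarized above can be sketched as a base weight from the stage-wise selection probabilities, inflated for nonresponse and deflated for multiplicity (patients attending more than one HIV care facility); the numbers below are illustrative, not MMP values.

    ```python
    import pandas as pd

    patients = pd.DataFrame({
        "p_jurisdiction": [0.50, 0.50, 0.25],   # stage-1 selection probability
        "p_facility":     [0.10, 0.20, 0.10],   # stage-2, within jurisdiction
        "p_patient":      [0.05, 0.05, 0.02],   # stage-3, within facility
        "response_rate":  [0.80, 0.80, 0.60],   # nonresponse adjustment cell
        "n_facilities_attended": [1, 2, 1],     # multiplicity
    })

    patients["base_weight"] = 1.0 / (
        patients["p_jurisdiction"] * patients["p_facility"] * patients["p_patient"]
    )
    patients["weight"] = (
        patients["base_weight"]
        / patients["response_rate"]          # inflate for nonresponse
        / patients["n_facilities_attended"]  # deflate for multiple chances of selection
    )
    print(patients[["base_weight", "weight"]])
    ```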

  3. Capital Budgeting Decisions with Post-Audit Information

    DTIC Science & Technology

    1990-06-08

    estimates that were used during project selection. In similar fashion, this research introduces the equivalent sample size concept that permits the... equivalent sample size is extended to include the user's prior beliefs. 4. For a management tool, the concepts for Cash Flow Control Charts are... Accounting Research, vol. 7, no. 2, Autumn 1969, pp. 215-244. [9] Gaynor, Edwin W., "Use of Control Charts in Cost Control", National Association of Cost

  4. Non-Destructive Evaluation of Grain Structure Using Air-Coupled Ultrasonics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Belvin, A. D.; Burrell, R. K.; Cole, E.G.

    2009-08-01

    Cast material has a grain structure that is relatively non-uniform. There is a desire to evaluate the grain structure of this material non-destructively. Traditionally, grain size measurement is a destructive process involving the sectioning and metallographic imaging of the material. Generally, this is performed on a representative sample on a periodic basis. Sampling is inefficient and costly. Furthermore, the resulting data may not provide an accurate description of the entire part's average grain size or grain size variation. This project is designed to develop a non-destructive acoustic scanning technique, using Chirp waveforms, to quantify average grain size and grain size variation across the surface of a cast material. A Chirp is a signal in which the frequency increases or decreases over time (frequency modulation). As a Chirp passes through a material, the material's grains reduce the signal (attenuation) by absorbing the signal energy. Geophysics research has shown a direct correlation with Chirp wave attenuation and mean grain size in geological structures. The goal of this project is to demonstrate that Chirp waveform attenuation can be used to measure grain size and grain variation in cast metals (uranium and other materials of interest). An off-axis ultrasonic inspection technique using air-coupled ultrasonics has been developed to determine grain size in cast materials. The technique gives a uniform response across the volume of the component. This technique has been demonstrated to provide generalized trends of grain variation over the samples investigated.

  5. mHealth Series: mHealth project in Zhao County, rural China – Description of objectives, field site and methods

    PubMed Central

    van Velthoven, Michelle Helena; Li, Ye; Wang, Wei; Du, Xiaozhen; Wu, Qiong; Chen, Li; Majeed, Azeem; Rudan, Igor; Zhang, Yanfeng; Car, Josip

    2013-01-01

    Background We set up a collaboration between researchers in China and the UK that aimed to explore the use of mHealth in China. This is the first paper in a series of papers on a large mHealth project part of this collaboration. This paper included the aims and objectives of the mHealth project, our field site, and the detailed methods of two studies. Field site The field site for this mHealth project was Zhao County, which lies 280 km south of Beijing in Hebei Province, China. Methods We described the methodology of two studies: (i) a mixed methods study exploring factors influencing sample size calculations for mHealth–based health surveys and (ii) a cross–over study determining validity of an mHealth text messaging data collection tool. The first study used mixed methods, both quantitative and qualitative, including: (i) two surveys with caregivers of young children, (ii) interviews with caregivers, village doctors and participants of the cross–over study, and (iii) researchers’ views. We combined data from caregivers, village doctors and researchers to provide an in–depth understanding of factors influencing sample size calculations for mHealth–based health surveys. The second study, a cross–over study, used a randomised cross–over study design to compare the traditional face–to–face survey method to the new text messaging survey method. We assessed data equivalence (intrarater agreement), the amount of information in responses, reasons for giving different responses, the response rate, characteristics of non–responders, and the error rate. Conclusions This paper described the objectives, field site and methods of a large mHealth project part of a collaboration between researchers in China and the UK. The mixed methods study evaluating factors that influence sample size calculations could help future studies with estimating reliable sample sizes. The cross–over study comparing face–to–face and text message survey data collection could help future studies with developing their mHealth tools. PMID:24363919

  6. Battery condenser system particulate emission factors for cotton gins: Particle size distribution characteristics

    USDA-ARS?s Scientific Manuscript database

    This report is part of a project to characterize cotton gin emissions from the standpoint of total particulate stack sampling and particle size analyses. In 2013, the Environmental Protection Agency (EPA) published a more stringent standard for particulate matter with nominal diameter less than or e...

  7. Novel joint selection methods can reduce sample size for rheumatoid arthritis clinical trials with ultrasound endpoints.

    PubMed

    Allen, John C; Thumboo, Julian; Lye, Weng Kit; Conaghan, Philip G; Chew, Li-Ching; Tan, York Kiat

    2018-03-01

    To determine whether novel methods of selecting joints through (i) ultrasonography (individualized-ultrasound [IUS] method), or (ii) ultrasonography and clinical examination (individualized-composite-ultrasound [ICUS] method) translate into smaller rheumatoid arthritis (RA) clinical trial sample sizes when compared to existing methods utilizing predetermined joint sites for ultrasonography. Cohen's effect size (ES) was estimated (ES^) and a 95% CI (ES^_L, ES^_U) calculated on a mean change in 3-month total inflammatory score for each method. Corresponding 95% CIs [n_L(ES^_U), n_U(ES^_L)] were obtained on a post hoc sample size reflecting the uncertainty in ES^. Sample size calculations were based on a one-sample t-test as the patient numbers needed to provide 80% power at α = 0.05 to reject a null hypothesis H_0: ES = 0 versus alternative hypotheses H_1: ES = ES^, ES = ES^_L and ES = ES^_U. We aimed to provide point and interval estimates of projected sample sizes for future studies reflecting the uncertainty in our study ES^ estimates. Twenty-four treated RA patients were followed up for 3 months. Utilizing the 12-joint approach and existing methods, the post hoc sample size (95% CI) was 22 (10-245). Corresponding sample sizes using ICUS and IUS were 11 (7-40) and 11 (6-38), respectively. Utilizing a seven-joint approach, the corresponding sample sizes using ICUS and IUS methods were nine (6-24) and 11 (6-35), respectively. Our pilot study suggests that sample size for RA clinical trials with ultrasound endpoints may be reduced using the novel methods, providing justification for larger studies to confirm these observations. © 2017 Asia Pacific League of Associations for Rheumatology and John Wiley & Sons Australia, Ltd.
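
    A hedged example of the post hoc sample size logic described above, using a one-sample t-test power calculation at the estimated effect size and at the ends of its 95% CI; the effect sizes below are made up, and statsmodels is used in place of whatever software the authors employed.

    ```python
    from statsmodels.stats.power import TTestPower

    def n_for_effect(es, alpha=0.05, power=0.80):
        # Sample size for a one-sample t test at the given effect size.
        return TTestPower().solve_power(effect_size=es, alpha=alpha, power=power)

    es_hat, es_lo, es_hi = 0.62, 0.30, 0.95   # assumed ES estimate and 95% CI

    n_point = n_for_effect(es_hat)
    n_upper = n_for_effect(es_lo)   # smallest plausible effect -> largest n
    n_lower = n_for_effect(es_hi)   # largest plausible effect -> smallest n
    print(f"projected n = {n_point:.0f} (95% CI {n_lower:.0f} - {n_upper:.0f})")
    ```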

  8. Group Projects in Social Work Education: The Influence of Group Characteristics and Moderators on Undergraduate Student Outcomes

    ERIC Educational Resources Information Center

    Postlethwait, Ariana E.

    2016-01-01

    This study examined the impact of group size, group formation, group conflict, and division of labor on student outcomes in a group project for a sample of 112 BSW research seminar students at a large university in the Midwest. Students completed surveys on their experiences with the group project at the end of the semester. Multiple regression…

  9. Geometrical characteristics of sandstone with different sample sizes

    NASA Astrophysics Data System (ADS)

    Cheon, D. S.; Takahashi, M., , Dr

    2017-12-01

    In many rock engineering projects, such as CO2 underground storage and engineered geothermal systems, it is important to understand fluid flow behavior under deep geological conditions. This fluid flow is generally affected by the geometrical characteristics of the rock, especially in porous media. Furthermore, the physical properties of rock may depend on the void space it contains. Total porosity and pore size distribution can be measured by Mercury Intrusion Porosimetry, and other geometrical and spatial information about pores can be obtained through micro-focus X-ray CT. Using micro-focus X-ray CT, we obtained extracted void-space and transparent images from the original CT voxel images for different sample sizes (1 mm, 2 mm and 3 mm cubes). The test samples are Berea sandstone and Otway sandstone. The former is a well-known sandstone used as a standard sample for comparison with the results from the Otway sandstone. The Otway sandstone was obtained from the CO2CRC Otway pilot site for the CO2 geosequestration project. From the X-ray scans and the ExFACT software, we obtained information including effective pore radii, coordination number, tortuosity and effective throat/pore radius ratio. The geometrical analysis showed that, for both the Berea and Otway sandstones, there are only small differences among the sample sizes; samples with high total coordination numbers show high porosity, and the tortuosity of the Berea sandstone is higher than that of the Otway sandstone. In the future, this information will be used to estimate the permeability of the samples.

  10. Adaptive cluster sampling: An efficient method for assessing inconspicuous species

    Treesearch

    Andrea M. Silletti; Joan Walker

    2003-01-01

    Restorationists typically evaluate the success of a project by estimating the population sizes of species that have been planted or seeded. Because a total census is rarely feasible, they must rely on sampling methods for population estimates. However, traditional random sampling designs may be inefficient for species that, for one reason or another, are challenging to...

  11. Research profile of physiotherapy undergraduates in Nigeria.

    PubMed

    Adeniyi, Ade F; Ekechukwu, Nelson E; Umar, Lawan; Ogwumike, Omoyemi O

    2013-01-01

    Physiotherapy training in Nigeria is almost 50 years old with no history of appraisal of research projects produced by the physiotherapy students. Physiotherapy students complete research projects in partial fulfilment of the requirements for graduation. An appraisal will reveal areas of strength and weakness in the research requirement for students, potentially leading to better research capacity and promoting evidence-based clinical practice among graduates. This study describes issues related to the study design, scope, statistical analysis and supervision of physiotherapy undergraduates in Nigerian universities. This retrospective study analysed 864 projects undertaken by Nigerian physiotherapy students between years 2000 and 2010. A maximum of 20 projects per academic year were randomly selected from each of the seven physiotherapy institutions in Nigeria. Data were obtained using a self-designed data retrieval form and analysed using descriptive and inferential statistics. Cross-sectional surveys constituted 47.6% of the research projects with mainly non-probability sampling (57.7%) and lack of objective sample size determination in 91.6% of the projects. Most projects (56.4%) did not report any ethical approval. The particular university attended (χ2 = 109.5, P = 0.0001), type of degree offered (χ2 = 47.24, P = 0.00001) and the academic qualification of supervisors (χ2 = 21.99, P = 0.001) were significantly related to the strength of the research design executed by students. Most research projects carried out by Nigerian physiotherapy students were cross-sectional, characterised by arbitrary sample sizes, and were conducted on human subjects but most without report of ethical approval. Efforts to improve research methodology, documentation and exploration of a wider range of research areas are needed to strengthen this educational experience for students.

  12. EPICS Controlled Collimator for Controlling Beam Sizes in HIPPO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Napolitano, Arthur Soriano; Vogel, Sven C.

    2017-08-03

    Controlling the beam spot size and shape in a diffraction experiment determines the probed sample volume. The HIPPO (High-Pressure-Preferred Orientation) neutron time-of-flight diffractometer is located at the Lujan Neutron Scattering Center at Los Alamos National Laboratory. HIPPO characterizes microstructural parameters, such as phase composition, strains, grain size, or texture, of bulk (cm-sized) samples. In the current setup, the beam spot has a 10 mm diameter. Using a collimator, consisting of two pairs of neutron-absorbing boron-nitride slabs, the horizontal and vertical dimensions of a rectangular beam spot can be defined. Using the HIPPO robotic sample changer for sample motion, the collimator would enable scanning of e.g. cylindrical samples along the cylinder axis by probing slices of such samples. The project presented here describes the implementation of such a collimator, in particular the motion control software. We utilized the EPICS (Experimental Physics and Industrial Control System) software interface to integrate the collimator control into the HIPPO instrument control system. Using EPICS, commands are sent to commercial stepper motors that move the beam windows.
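
    As a purely hypothetical illustration of driving such a collimator from Python over EPICS channel access (pyepics), the process variable names below are invented for illustration and are not the actual HIPPO records.

    ```python
    from epics import caput, caget

    def set_beam_window(width_mm, height_mm):
        """Drive the horizontal and vertical slab pairs to a rectangular opening."""
        # Hypothetical PV names; substitute the real collimator records.
        caput("HIPPO:COLL:H:GAP", width_mm, wait=True)   # horizontal boron-nitride pair
        caput("HIPPO:COLL:V:GAP", height_mm, wait=True)  # vertical pair

        readback = (caget("HIPPO:COLL:H:GAP.RBV"), caget("HIPPO:COLL:V:GAP.RBV"))
        print(f"beam window set to {readback[0]} x {readback[1]} mm")

    set_beam_window(width_mm=5.0, height_mm=10.0)
    ```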

  13. Prediction of accrual closure date in multi-center clinical trials with discrete-time Poisson process models.

    PubMed

    Tang, Gong; Kong, Yuan; Chang, Chung-Chou Ho; Kong, Lan; Costantino, Joseph P

    2012-01-01

    In a phase III multi-center cancer clinical trial or a large public health study, sample size is predetermined to achieve desired power, and study participants are enrolled from tens or hundreds of participating institutions. As accrual approaches the target size, the coordinating data center needs to project the accrual closure date on the basis of the observed accrual pattern and notify the participating sites several weeks in advance. In the past, projections were simply based on some crude assessment, and conservative measures were incorporated in order to achieve the target accrual size. This approach often resulted in excessive accrual size and subsequently unnecessary financial burden on the study sponsors. Here we proposed a discrete-time Poisson process-based method to estimate the accrual rate at the time of projection and subsequently the trial closure date. To ensure that the target size would be reached with high confidence, we also proposed a conservative method for the closure date projection. The proposed method was illustrated through the analysis of the accrual data of the National Surgical Adjuvant Breast and Bowel Project trial B-38. The results showed that application of the proposed method could help to save a considerable amount of expenditure in patient management without compromising the accrual goal in multi-center clinical trials. Copyright © 2012 John Wiley & Sons, Ltd.
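
    The projection logic described above can be sketched as follows: estimate a weekly accrual rate from recent counts treated as Poisson, project the weeks to closure, and add a conservative projection based on an exact lower confidence bound on the rate. The counts and the particular bound are illustrative assumptions, not the authors' model.

    ```python
    import numpy as np
    from scipy.stats import chi2

    weekly_accrual = np.array([18, 22, 17, 25, 20, 19, 23, 21])  # recent weeks
    remaining = 160                                              # subjects still needed

    total, weeks = weekly_accrual.sum(), len(weekly_accrual)
    rate_hat = total / weeks

    # Lower bound of an exact 95% CI for a Poisson rate (chi-square method).
    rate_lower = chi2.ppf(0.025, 2 * total) / (2 * weeks)

    print(f"estimated rate: {rate_hat:.1f}/week -> "
          f"about {np.ceil(remaining / rate_hat):.0f} weeks to closure")
    print(f"conservative rate: {rate_lower:.1f}/week -> "
          f"about {np.ceil(remaining / rate_lower):.0f} weeks to closure")
    ```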

  14. Thirst distress and interdialytic weight gain: how do they relate?

    PubMed

    Jacob, Sheena; Locking-Cusolito, Heather

    2004-01-01

    Thirst is a frequent and stressful symptom experienced by hemodialysis patients. Several studies have noted a positive relationship between thirst and interdialytic weight gain (IDWG). These factors prompted us to consider ways that we could intervene to reduce thirst and IDWG through an educative, supportive nursing intervention. This paper presents the results of a pilot research project, the purpose of which was to: examine the relationship between thirst distress (the negative symptoms associated with thirst) and IDWG in a sample of our patients, describe patients' strategies for management of thirst, and establish the necessary sample size for the planned intervention study. The pilot research project results showed that in a small sample of 20, there was a mildly positive, though not statistically significant, correlation between thirst distress and IDWG (r = 0.117). Subjects shared a wide variety of thirst management strategies including: limiting salt intake, using ice chips, measuring daily allotment, performing mouth care, eating raw fruits and vegetables, sucking on hard candy and chewing gum. This pilot research project showed that given an alpha of 0.05 and a power of 80%, we will require a sample of 39 subjects to detect a 20% change in IDWG. We will employ these results to plan our intervention study, first by establishing the appropriate sample size and second by incorporating identified patient strategies into an educational pamphlet that will form the basis of our intervention.

  15. Factors Affecting the Adoption of R&D Project Selection Techniques at the Air Force Wright Aeronautical Laboratories

    DTIC Science & Technology

    1988-09-01

    tested. To measure the adequacy of the sample, the Kaiser-Meyer-Olkin measure of sampling adequacy was used. This technique is described in Factor... Given the relatively large number of variables, there was concern about the adequacy of the sample size. A Kaiser-Meyer-Olkin

  16. A Process Evaluation of Project Developmental Continuity, Interim Report VI: Recommendations for Continuing the Impact Study.

    ERIC Educational Resources Information Center

    Granville, Arthur; And Others

    This interim report re-examines data on instrument suitability, comparability of groups, and adequacy of sample size in Year III of the process evaluation of Project Developmental Continuity (PDC) and offers preliminary recommendations concerning the feasibility of continuing the impact study. PDC is a Head Start demonstration program aimed at…

  17. Particle size distribution characteristics of cotton gin battery condenser system total particulate emissions

    USDA-ARS?s Scientific Manuscript database

    This report is part of a project to characterize cotton gin emissions from the standpoint of total particulate stack sampling and particle size analyses. In 2013, EPA published a more stringent standard for particulate matter with nominal diameter less than or equal to 2.5 µm (PM2.5). This created a...

  18. Item Analysis Appropriate for Domain-Referenced Classroom Testing. (Project Technical Report Number 1).

    ERIC Educational Resources Information Center

    Nitko, Anthony J.; Hsu, Tse-chi

    Item analysis procedures appropriate for domain-referenced classroom testing are described. A conceptual framework within which item statistics can be considered and promising statistics in light of this framework are presented. The sampling fluctuations of the more promising item statistics for sample sizes comparable to the typical classroom…

  19. Fish Assemblage Structure Under Variable Environmental Conditions in the Ouachita Mountains

    Treesearch

    Christopher M. Taylor; Lance R. Williams; Riccardo A. Fiorillo; R. Brent Thomas; Melvin L. Warren

    2004-01-01

    Abstract - Spatial and temporal variability of fish assemblages in Ouachita Mountain streams, Arkansas, were examined for association with stream size and flow variability. Fishes and habitat were sampled quarterly for four years at 12 sites (144 samples) in the Ouachita Mountains Ecosystem Management Research Project, Phase III watersheds. Detrended...

  20. The effect of machine learning regression algorithms and sample size on individualized behavioral prediction with functional connectivity features.

    PubMed

    Cui, Zaixu; Gong, Gaolang

    2018-06-02

    Individualized behavioral/cognitive prediction using machine learning (ML) regression approaches is becoming increasingly applied. The specific ML regression algorithm and sample size are two key factors that non-trivially influence prediction accuracies. However, the effects of the ML regression algorithm and sample size on individualized behavioral/cognitive prediction performance have not been comprehensively assessed. To address this issue, the present study included six commonly used ML regression algorithms: ordinary least squares (OLS) regression, least absolute shrinkage and selection operator (LASSO) regression, ridge regression, elastic-net regression, linear support vector regression (LSVR), and relevance vector regression (RVR), to perform specific behavioral/cognitive predictions based on different sample sizes. Specifically, the publicly available resting-state functional MRI (rs-fMRI) dataset from the Human Connectome Project (HCP) was used, and whole-brain resting-state functional connectivity (rsFC) or rsFC strength (rsFCS) were extracted as prediction features. Twenty-five sample sizes (ranged from 20 to 700) were studied by sub-sampling from the entire HCP cohort. The analyses showed that rsFC-based LASSO regression performed remarkably worse than the other algorithms, and rsFCS-based OLS regression performed markedly worse than the other algorithms. Regardless of the algorithm and feature type, both the prediction accuracy and its stability exponentially increased with increasing sample size. The specific patterns of the observed algorithm and sample size effects were well replicated in the prediction using re-testing fMRI data, data processed by different imaging preprocessing schemes, and different behavioral/cognitive scores, thus indicating excellent robustness/generalization of the effects. The current findings provide critical insight into how the selected ML regression algorithm and sample size influence individualized predictions of behavior/cognition and offer important guidance for choosing the ML regression algorithm or sample size in relevant investigations. Copyright © 2018 Elsevier Inc. All rights reserved.
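
    An illustrative sketch of the kind of experiment summarized above: subsample a dataset at several sizes, fit a few of the regression algorithms mentioned, and track cross-validated accuracy. Synthetic data stand in for the HCP connectivity features, and relevance vector and elastic-net regression are omitted.

    ```python
    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.linear_model import LinearRegression, Lasso, Ridge
    from sklearn.svm import LinearSVR
    from sklearn.model_selection import cross_val_score

    X, y = make_regression(n_samples=700, n_features=300, noise=20.0, random_state=0)

    models = {
        "OLS": LinearRegression(),
        "LASSO": Lasso(alpha=1.0),
        "ridge": Ridge(alpha=1.0),
        "linear SVR": LinearSVR(C=1.0, max_iter=10000),
    }

    rng = np.random.default_rng(0)
    for n in (50, 200, 700):
        idx = rng.choice(len(y), size=n, replace=False)
        line = []
        for name, model in models.items():
            r2 = cross_val_score(model, X[idx], y[idx], cv=5, scoring="r2").mean()
            line.append(f"{name}: {r2:.2f}")
        print(f"n = {n:3d} -> " + ", ".join(line))
    ```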

  1. Should particle size analysis data be combined with EPA approved sampling method data in the development of AP-42 emission factors?

    USDA-ARS?s Scientific Manuscript database

    A cotton ginning industry-supported project was initiated in 2008 and completed in 2013 to collect additional data for U.S. Environmental Protection Agency’s (EPA) Compilation of Air Pollution Emission Factors (AP-42) for PM10 and PM2.5. Stack emissions were collected using particle size distributio...

  2. Study design and sampling intensity for demographic analyses of bear populations

    USGS Publications Warehouse

    Harris, R.B.; Schwartz, C.C.; Mace, R.D.; Haroldson, M.A.

    2011-01-01

    The rate of population change through time (λ) is a fundamental element of a wildlife population's conservation status, yet estimating it with acceptable precision for bears is difficult. For studies that follow known (usually marked) bears, λ can be estimated during some defined time by applying either life-table or matrix projection methods to estimates of individual vital rates. Usually, however, confidence intervals surrounding the estimate are broader than one would like. Using an estimator suggested by Doak et al. (2005), we explored the precision to be expected in λ from demographic analyses of typical grizzly (Ursus arctos) and American black (U. americanus) bear data sets. We also evaluated some trade-offs among vital rates in sampling strategies. Confidence intervals around λ were more sensitive to adding to the duration of a short (e.g., 3 yrs) than a long (e.g., 10 yrs) study, and more sensitive to adding additional bears to studies with small (e.g., 10 adult females/yr) than large (e.g., 30 adult females/yr) sample sizes. Confidence intervals of λ projected using process-only variance of vital rates were only slightly smaller than those projected using total variances of vital rates. Under sampling constraints typical of most bear studies, it may be more efficient to invest additional resources into monitoring recruitment and juvenile survival rates of females already a part of the study, than to simply increase the sample size of study females. © 2011 International Association for Bear Research and Management.
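
    For the matrix projection approach mentioned above, a minimal sketch: build a female-only stage-structured projection matrix from vital rates and take λ as its dominant eigenvalue. The stages and rates are illustrative, not estimates for any real bear population.

    ```python
    import numpy as np

    # Assumed stages: cub, yearling, subadult, adult (females only).
    survival = {"cub": 0.60, "yearling": 0.75, "subadult": 0.85, "adult": 0.92}
    recruitment = 0.30   # female cubs produced per adult female per year

    A = np.array([
        [0.0,             0.0,                  0.0,                  recruitment],
        [survival["cub"], 0.0,                  0.0,                  0.0],
        [0.0,             survival["yearling"], 0.0,                  0.0],
        [0.0,             0.0,                  survival["subadult"], survival["adult"]],
    ])

    eigenvalues = np.linalg.eigvals(A)
    lam = max(eigenvalues.real)   # dominant eigenvalue = asymptotic growth rate
    print(f"projected population growth rate lambda = {lam:.3f}")
    ```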

  3. NHEXAS PHASE I ARIZONA STUDY--STANDARD OPERATING PROCEDURE FOR FIELD USE OF THE PARTICULATE SAMPLER (UA-F-3.1)

    EPA Science Inventory

    The purpose of this SOP is to describe the in-field use of the particulate sampling system (pumping, control unit, and size selective inlet impactors) for collecting samples of particulate matter from the air during a predetermined time period during the Arizona NHEXAS project an...

  4. The Lithological Constraint To Gas Hydrate Formation: Evidence OF Grain Size Of Sediments From IODP 311 On CASCADIA Margin

    NASA Astrophysics Data System (ADS)

    Wang, J.

    2006-12-01

    A total of 614 sediment samples at intervals of about 1.5 m from all 5 sites of the Integrated Ocean Drilling Program (IODP) Expedition 311 on Cascadia Margin were analyzed using a Beckman Coulter LS-230 Particle Analyzer. The grain-size data were then plotted in depth and compared with other proxies of gas hydrate occurrence such as soupy/mousse-like structures in sediments, gas hydrate concentration (Sh) derived from LWD data using Archie's relation, IR core images (infrared images) and the recovered samples of gas hydrate-bearing sediments. A good relationship between the distribution of coarse grains in the 31-63 µm and 63-125 µm size fractions and the potential occurrence of gas hydrate was found across the entire gas hydrate stability zone. The depth distribution of grain size from Site U1326 shows clear excursions at depths of 5-8, 21-26, 50-123, 132-140, 167-180, 195-206 and 220-240 mbsf, which coincide with the potential occurrence of gas hydrate suggested by soupy/mousse-like structures, logging-derived gas hydrate concentrations (Sh) and the recovered samples of the gas hydrate-bearing sand layers. The lithology of sediments significantly affects the formation of gas hydrate. Gas hydrate forms preferentially within relatively coarse grain-size sediments above 31 µm. Key words: grain size of sediments, constraint, occurrence of gas hydrate, IODP 311. IODP Expedition 311 Scientists: Michael Riedel (Co-chief Scientist), Timothy S. Collett (Co-chief Scientist), Mitchell Malone (Expedition Project Manager/Staff Scientist), Gilles Guérin, Fumio Akiba, Marie-Madeleine Blanc-Valleron, Michelle Ellis, Yoshitaka Hashimoto, Verena Heuer, Yosuke Higashi, Melanie Holland, Peter D. Jackson, Masanori Kaneko, Miriam Kastner, Ji-Hoon Kim, Hiroko Kitajima, Philip E. Long, Alberto Malinverno, Greg Myers, Leena D. Palekar, John Pohlman, Peter Schultheiss, Barbara Teichert, Marta E. Torres, Anne M. Tréhu, Jiasheng Wang, Ulrich G. Wortmann, Hideyoshi Yoshioka. Acknowledgement: This study was supported by the IODP/JOI Alliance, IODP-China 863 Project (grant 2004AA615030) and NSFC Project (grant 40472063).

  5. Establishment and operation of the National Accident Sampling System (NASS) team within the cities of Ft. Lauderdale/Hollywood, Florida

    NASA Astrophysics Data System (ADS)

    Beddow, B.; Roberts, C.; Rankin, J.; Bloch, A.; Peizer, J.

    1981-01-01

    The National Accident Sampling System (NASS) is described. The study area discussed is one of the original ten sites selected for NASS implementation. In addition to collecting data from the field, the original ten sites address questions of feasibility of the plan, projected results of the data collection effort, and specific operational topics, e.g., team size, sampling requirements, training approaches, quality control procedures, and field techniques. Activities and results of the first three years of the project, for both major tasks (establishment and operation) are addressed. Topics include: study area documentation; team description, function and activities; problems and solutions; and recommendations.

  6. Sampling guidelines for oral fluid-based surveys of group-housed animals.

    PubMed

    Rotolo, Marisa L; Sun, Yaxuan; Wang, Chong; Giménez-Lirola, Luis; Baum, David H; Gauger, Phillip C; Harmon, Karen M; Hoogland, Marlin; Main, Rodger; Zimmerman, Jeffrey J

    2017-09-01

    Formulas and software for calculating sample size for surveys based on individual animal samples are readily available. However, sample size formulas are not available for oral fluids and other aggregate samples that are increasingly used in production settings. Therefore, the objective of this study was to develop sampling guidelines for oral fluid-based porcine reproductive and respiratory syndrome virus (PRRSV) surveys in commercial swine farms. Oral fluid samples were collected in 9 weekly samplings from all pens in 3 barns on one production site beginning shortly after placement of weaned pigs. Samples (n=972) were tested by real-time reverse-transcription PCR (RT-rtPCR) and the binary results analyzed using a piecewise exponential survival model for interval-censored, time-to-event data with misclassification. Thereafter, simulation studies were used to study the barn-level probability of PRRSV detection as a function of sample size, sample allocation (simple random sampling vs fixed spatial sampling), assay diagnostic sensitivity and specificity, and pen-level prevalence. These studies provided estimates of the probability of detection by sample size and within-barn prevalence. Detection using fixed spatial sampling was as good as, or better than, simple random sampling. Sampling multiple barns on a site increased the probability of detection with the number of barns sampled. These results are relevant to PRRSV control or elimination projects at the herd, regional, or national levels, but the results are also broadly applicable to contagious pathogens of swine for which oral fluid tests of equivalent performance are available. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
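
    A rough way to see how detection probability scales with the number of aggregate samples is the binomial sketch below: the barn-level probability of at least one true-positive result under simple random sampling of pens, a fixed pen-level prevalence, and an imperfect assay sensitivity. This is only a back-of-envelope approximation, not the authors' piecewise exponential survival model or simulation framework, and the parameter values are hypothetical.

```python
# Hypothetical sketch: probability of detecting PRRSV in a barn from n oral fluid
# samples, assuming independent pens, pen-level prevalence, and assay sensitivity.
def prob_detection(n_samples: int, pen_prevalence: float, sensitivity: float) -> float:
    """P(at least one true-positive result) among n independently sampled pens."""
    p_positive_sample = pen_prevalence * sensitivity  # P(a sampled pen tests positive)
    return 1.0 - (1.0 - p_positive_sample) ** n_samples

if __name__ == "__main__":
    for n in (2, 4, 6, 8, 10):
        print(n, round(prob_detection(n, pen_prevalence=0.2, sensitivity=0.95), 3))
```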

  7. Results of a Pilot Study to Ameliorate Psychological and Behavioral Outcomes of Minority Stress Among Young Gay and Bisexual Men.

    PubMed

    Smith, Nathan Grant; Hart, Trevor A; Kidwai, Ammaar; Vernon, Julia R G; Blais, Martin; Adam, Barry

    2017-09-01

    Project PRIDE (Promoting Resilience In Discriminatory Environments) is an 8-session small group intervention aimed at reducing negative mental and behavioral health outcomes resulting from minority stress. This study reports the results of a one-armed pilot test of Project PRIDE, which aimed to examine the feasibility and potential for efficacy of the intervention in a sample of 33 gay and bisexual men aged 18 to 25. The intervention appeared feasible to administer in two different sites and all participants who completed posttreatment (n = 22) or follow-up (n = 19) assessments reported high satisfaction with the intervention. Small to large effect sizes were observed for increases in self-esteem; small effect sizes were found for decreases in loneliness and decreases in minority stress variables; and small and medium effect sizes were found for reductions in alcohol use and number of sex partners, respectively. Overall, Project PRIDE appears to be a feasible intervention with promise of efficacy. Copyright © 2017. Published by Elsevier Ltd.

  8. A log-linear model approach to estimation of population size using the line-transect sampling method

    USGS Publications Warehouse

    Anderson, D.R.; Burnham, K.P.; Crain, B.R.

    1978-01-01

    The technique of estimating wildlife population size and density using the belt or line-transect sampling method has been used in many past projects, such as the estimation of density of waterfowl nestling sites in marshes, and is being used currently in such areas as the assessment of Pacific porpoise stocks in regions of tuna fishing activity. A mathematical framework for line-transect methodology has only emerged in the last 5 yr. In the present article, we extend this mathematical framework to a line-transect estimator based upon a log-linear model approach.
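
    For context, the sketch below implements the standard line-transect density estimator with a half-normal detection function: D = n / (2 L w), where w is the effective strip half-width estimated from the perpendicular detection distances. It is a baseline illustration of the framework the abstract refers to, not the log-linear estimator itself; the distances and line length are made up.

```python
import math

def halfnormal_line_transect_density(perp_distances, total_line_length):
    """Classic line-transect estimate D = n / (2 * L * ESW), assuming an untruncated
    half-normal detection function g(x) = exp(-x^2 / (2 sigma^2))."""
    n = len(perp_distances)
    sigma2 = sum(x * x for x in perp_distances) / n       # MLE of the half-normal scale
    esw = math.sqrt(sigma2 * math.pi / 2.0)               # effective strip half-width
    return n / (2.0 * total_line_length * esw)            # animals per unit area

# Hypothetical example: 40 detections along 10 km of transect, distances in km.
distances = [0.010, 0.020, 0.005, 0.030] * 10
print(round(halfnormal_line_transect_density(distances, total_line_length=10.0), 1))
```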

  9. Enhancement of students’ creative thinking skills on mixture separation topic using project based student worksheet

    NASA Astrophysics Data System (ADS)

    Nurisalfah, R.; Fadiawati, N.; Jalmo, T.

    2018-05-01

    The aim of this study is to describe the effectiveness of project based student worksheet in improving students' creative thinking skills. The research method is using quasi experiment with the matching only pre-test post-test control group design. The population in this research is all students of class VII SMP N 2 Belitang Madang Raya with class VII1 as control class and class VII4 as experiment class. The sample of this research is obtaining by purposive sampling technique. The effectiveness of project based student worksheet is based on significant post-test differences between the control class and the experiment class as well as the effect size. The results show that the using of project based student worksheet is effective in improving students' creative thinking skills on mixture separation topic.

  10. Eddy Covariance Measurements of the Sea-Spray Aerosol Flux

    NASA Astrophysics Data System (ADS)

    Brooks, I. M.; Norris, S. J.; Yelland, M. J.; Pascal, R. W.; Prytherch, J.

    2015-12-01

    Historically, almost all estimates of the sea-spray aerosol source flux have been inferred through various indirect methods. Direct estimates via eddy covariance have been attempted by only a handful of studies, most of which measured only the total number flux, or achieved rather coarse size segregation. Applying eddy covariance to the measurement of sea-spray fluxes is challenging: most instrumentation must be located in a laboratory space requiring long sample lines to an inlet collocated with a sonic anemometer; however, larger particles are easily lost to the walls of the sample line. Marine particle concentrations are generally low, requiring a high sample volume to achieve adequate statistics. The highly hygroscopic nature of sea salt means particles change size rapidly with fluctuations in relative humidity; this introduces an apparent bias in flux measurements if particles are sized at ambient humidity. The Compact Lightweight Aerosol Spectrometer Probe (CLASP) was developed specifically to make high rate measurements of aerosol size distributions for use in eddy covariance measurements, and the instrument and data processing and analysis techniques have been refined over the course of several projects. Here we will review some of the issues and limitations related to making eddy covariance measurements of the sea spray source flux over the open ocean, summarise some key results from the last decade, and present new results from a 3-year long ship-based measurement campaign as part of the WAGES project. Finally we will consider requirements for future progress.

  11. Overview of the Mars Sample Return Earth Entry Vehicle

    NASA Technical Reports Server (NTRS)

    Dillman, Robert; Corliss, James

    2008-01-01

    NASA's Mars Sample Return (MSR) project will bring Mars surface and atmosphere samples back to Earth for detailed examination. Langley Research Center's MSR Earth Entry Vehicle (EEV) is a core part of the mission, protecting the sample container during atmospheric entry, descent, and landing. Planetary protection requirements demand a higher reliability from the EEV than for any previous planetary entry vehicle. An overview of the EEV design and preliminary analysis is presented, with a follow-on discussion of recommended future design trade studies to be performed over the next several years in support of an MSR launch in 2018 or 2020. Planned topics include vehicle size for impact protection of a range of sample container sizes, outer mold line changes to achieve surface sterilization during re-entry, micrometeoroid protection, aerodynamic stability, thermal protection, and structural materials selection.

  12. Differential scanning calorimetry of coal

    NASA Technical Reports Server (NTRS)

    Gold, P. I.

    1978-01-01

    Differential scanning calorimetry studies performed during the first year of this project demonstrated the occurrence of exothermic reactions associated with the production of volatile matter in or near the plastic region. The temperature and magnitude of the exothermic peak were observed to be strongly affected by the heating rate, sample mass and, to a lesser extent, by sample particle size. Thermal properties also were found to be influenced by oxidation of the coal sample due to weathering effects.

  13. Identification of missing variants by combining multiple analytic pipelines.

    PubMed

    Ren, Yingxue; Reddy, Joseph S; Pottier, Cyril; Sarangi, Vivekananda; Tian, Shulan; Sinnwell, Jason P; McDonnell, Shannon K; Biernacka, Joanna M; Carrasquillo, Minerva M; Ross, Owen A; Ertekin-Taner, Nilüfer; Rademakers, Rosa; Hudson, Matthew; Mainzer, Liudmila Sergeevna; Asmann, Yan W

    2018-04-16

    After decades of identifying risk factors using array-based genome-wide association studies (GWAS), genetic research of complex diseases has shifted to sequencing-based rare variants discovery. This requires large sample sizes for statistical power and has brought up questions about whether the current variant calling practices are adequate for large cohorts. It is well-known that there are discrepancies between variants called by different pipelines, and that using a single pipeline always misses true variants exclusively identifiable by other pipelines. Nonetheless, it is common practice today to call variants by one pipeline due to computational cost and assume that false negative calls are a small percent of the total. We analyzed 10,000 exomes from the Alzheimer's Disease Sequencing Project (ADSP) using multiple analytic pipelines consisting of different read aligners and variant calling strategies. We compared variants identified by using two aligners in 50, 100, 200, 500, 1000, and 1952 samples; and compared variants identified by adding single-sample genotyping to the default multi-sample joint genotyping in 50, 100, 500, 2000, 5000, and 10,000 samples. We found that using a single pipeline missed an increasing number of high-quality variants as sample size grew. By combining two read aligners and two variant calling strategies, we rescued 30% of pass-QC variants at a sample size of 2000, and 56% at 10,000 samples. The rescued variants had higher proportions of low frequency (minor allele frequency [MAF] 1-5%) and rare (MAF < 1%) variants, which are the very type of variants of interest. In 660 Alzheimer's disease cases with earlier onset ages of ≤65, 4 out of 13 (31%) previously published rare pathogenic and protective mutations in the APP, PSEN1, and PSEN2 genes were undetected by the default one-pipeline approach but recovered by the multi-pipeline approach. Identification of the complete variant set from sequencing data is the prerequisite of genetic association analyses. The current analytic practice of calling genetic variants from sequencing data using a single bioinformatics pipeline is no longer adequate for increasingly large projects. The number and percentage of quality variants that passed quality filters but are missed by the one-pipeline approach rapidly increases with sample size.
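
    The core bookkeeping behind the multi-pipeline approach can be sketched as a set union over variant keys: combine pass-QC calls from two pipelines and count how many would have been missed had only one been run. The variant keys and call sets below are hypothetical placeholders, not ADSP data.

```python
# Sketch of combining variant calls from two pipelines; keys are (chrom, pos, ref, alt).
def combine_pipelines(calls_a, calls_b):
    union = calls_a | calls_b
    rescued_by_b = calls_b - calls_a      # variants missed if only pipeline A were run
    return union, rescued_by_b

calls_pipeline_a = {("1", 10177, "A", "AC"), ("2", 4567, "G", "T")}
calls_pipeline_b = {("1", 10177, "A", "AC"), ("7", 887, "C", "T")}

union, rescued = combine_pipelines(calls_pipeline_a, calls_pipeline_b)
print(f"total variants: {len(union)}, rescued by second pipeline: {len(rescued)}")
```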

  14. Improving the Selection, Classification, and Utilization of Army Enlisted Personnel. Project A: Research Plan

    DTIC Science & Technology

    1983-05-01

    occur. 4) It is also true that during a given time period, at a given base, not all of the people in the sample will actually be available for testing...taken sample sizes into consideration, we currently estimate that with few exceptions, we will have adequate samples to perform the analysis of simple ...Balanced Half Sample Replications (BHSA). His analyses of simple cases have shown that this method is substantially more efficient than the

  15. Estimating the Size of the Methamphetamine-Using Population in New York City Using Network Sampling Techniques.

    PubMed

    Dombrowski, Kirk; Khan, Bilal; Wendel, Travis; McLean, Katherine; Misshula, Evan; Curtis, Ric

    2012-12-01

    As part of a recent study of the dynamics of the retail market for methamphetamine use in New York City, we used network sampling methods to estimate the size of the total networked population. This process involved sampling from respondents' list of co-use contacts, which in turn became the basis for capture-recapture estimation. Recapture sampling was based on links to other respondents derived from demographic and "telefunken" matching procedures-the latter being an anonymized version of telephone number matching. This paper describes the matching process used to discover the links between the solicited contacts and project respondents, the capture-recapture calculation, the estimation of "false matches", and the development of confidence intervals for the final population estimates. A final population of 12,229 was estimated, with a range of 8,235-23,750. The techniques described here have the special virtue of deriving an estimate for a hidden population while retaining respondent anonymity and the anonymity of network alters, but likely require a larger sample size than the 132 persons interviewed here to attain acceptable confidence levels for the estimate.
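
    The capture-recapture step can be illustrated with the Chapman-corrected Lincoln-Petersen estimator, N ≈ (n1 + 1)(n2 + 1)/(m + 1) - 1, where n1 and n2 are the two sample sizes and m is the number of matches. The sketch below uses hypothetical counts and omits the study's handling of false matches and the anonymized "telefunken" matching.

```python
def chapman_estimate(n1: int, n2: int, m: int) -> float:
    """Chapman-corrected Lincoln-Petersen population size estimate.
    n1, n2: sizes of the two samples; m: number of individuals matched in both."""
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

# Hypothetical counts, not the study's data.
print(round(chapman_estimate(n1=132, n2=400, m=4)))
```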

  16. Multipinhole SPECT helical scan parameters and imaging volume

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yao, Rutao, E-mail: rutaoyao@buffalo.edu; Deng, Xiao; Wei, Qingyang

    Purpose: The authors developed SPECT imaging capability on an animal PET scanner using a multiple-pinhole collimator and step-and-shoot helical data acquisition protocols. The objective of this work was to determine the preferred helical scan parameters, i.e., the angular and axial step sizes, and the imaging volume, that provide optimal imaging performance. Methods: The authors studied nine helical scan protocols formed by permuting three rotational and three axial step sizes. These step sizes were chosen around the reference values analytically calculated from the estimated spatial resolution of the SPECT system and the Nyquist sampling theorem. The nine helical protocols were evaluated by two figures-of-merit: the sampling completeness percentage (SCP) and the root-mean-square (RMS) resolution. SCP was an analytically calculated numerical index based on projection sampling. RMS resolution was derived from the reconstructed images of a sphere-grid phantom. Results: The RMS resolution results show that (1) the start and end pinhole planes of the helical scheme determine the axial extent of the effective field of view (EFOV), and (2) the diameter of the transverse EFOV is adequately calculated from the geometry of the pinhole opening, since the peripheral region beyond the EFOV would introduce projection multiplexing and consequent effects. The RMS resolution results of the nine helical scan schemes show optimal resolution is achieved when the axial step size is half, and the angular step size is about twice, the corresponding values derived from the Nyquist theorem. The SCP results agree in general with those of RMS resolution but are less critical in assessing the effects of helical parameters and EFOV. Conclusions: The authors quantitatively validated the effective FOV of multiple-pinhole helical scan protocols and proposed a simple method to calculate optimal helical scan parameters.
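
    The relationship between resolution, Nyquist sampling, and the preferred helical steps reported above can be sketched numerically: take the Nyquist interval as half the estimated system resolution, convert it to a reference axial step and (via the transverse field-of-view radius) a reference angular step, then halve the axial step and double the angular step as the abstract suggests. The resolution and radius below are hypothetical, and the conversion is only a first-order approximation of the authors' calculation.

```python
import math

def helical_steps(resolution_mm: float, fov_radius_mm: float):
    """Reference step sizes from the Nyquist criterion, then scaled per the abstract."""
    nyquist = resolution_mm / 2.0                             # Nyquist sampling interval
    axial_ref_mm = nyquist                                    # reference axial step
    angular_ref_deg = math.degrees(nyquist / fov_radius_mm)   # reference angular step
    return {"axial_preferred_mm": 0.5 * axial_ref_mm,
            "angular_preferred_deg": 2.0 * angular_ref_deg}

print(helical_steps(resolution_mm=1.5, fov_radius_mm=15.0))   # hypothetical values
```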

  17. Trends in Selecting Undergraduate Business Majors & International Enrollment & Expected Salaries

    ERIC Educational Resources Information Center

    Ozgur, Ceyhun; Li, Yang; Rogers, Grace

    2015-01-01

    The paper begins with a brief review of the literature on how business students in the U.S. choose their major, and we list the most popular majors at U.S. universities. We also discuss the factors that influence students' choices. In our next research project, we will not only use a larger sample size but also the sample will come from a…

  18. A proposed coast-wide reference monitoring system for evaluating Wetland restoration trajectories in Louisiana

    USGS Publications Warehouse

    Steyer, G.D.; Sasser, C.E.; Visser, J.M.; Swenson, E.M.; Nyman, J.A.; Raynie, R.C.

    2003-01-01

    Wetland restoration efforts conducted in Louisiana under the Coastal Wetlands Planning, Protection and Restoration Act require monitoring the effectiveness of individual projects as well as monitoring the cumulative effects of all projects in restoring, creating, enhancing, and protecting the coastal landscape. The effectiveness of the traditional paired-reference monitoring approach in Louisiana has been limited because of difficulty in finding comparable reference sites. A multiple reference approach is proposed that uses aspects of hydrogeomorphic functional assessments and probabilistic sampling. This approach includes a suite of sites that encompass the range of ecological condition for each stratum, with projects placed on a continuum of conditions found for that stratum. Trajectories in reference sites through time are then compared with project trajectories through time. Plant community zonation complicated selection of indicators, strata, and sample size. The approach proposed could serve as a model for evaluating wetland ecosystems.

  19. A proposed coast-wide reference monitoring system for evaluating wetland restoration trajectories in Louisiana.

    PubMed

    Steyer, Gregory D; Sasser, Charles E; Visser, Jenneke M; Swenson, Erick M; Nyman, John A; Raynie, Richard C

    2003-01-01

    Wetland restoration efforts conducted in Louisiana under the Coastal Wetlands Planning, Protection and Restoration Act require monitoring the effectiveness of individual projects as well as monitoring the cumulative effects of all projects in restoring, creating, enhancing, and protecting the coastal landscape. The effectiveness of the traditional paired-reference monitoring approach in Louisiana has been limited because of difficulty in finding comparable reference sites. A multiple reference approach is proposed that uses aspects of hydrogeomorphic functional assessments and probabilistic sampling. This approach includes a suite of sites that encompass the range of ecological condition for each stratum, with projects placed on a continuum of conditions found for that stratum. Trajectories in reference sites through time are then compared with project trajectories through time. Plant community zonation complicated selection of indicators, strata, and sample size. The approach proposed could serve as a model for evaluating wetland ecosystems.

  20. Dependence of Some Properties of Groups on Group Local Number Density

    NASA Astrophysics Data System (ADS)

    Deng, Xin-Fa; Wu, Ping

    2014-09-01

    In this study we investigate how the projected size Sizesky, the rms deviation σR of projected distance in the sky from the group center, the rms velocity σV, and the virial radius RVir of groups depend on group local number density. In the volume-limited group samples, it is found that groups in high-density regions preferentially have larger Sizesky, σR, σV, and RVir than those in low-density regions.

  1. Determination of Minimum Training Sample Size for Microarray-Based Cancer Outcome Prediction–An Empirical Assessment

    PubMed Central

    Cheng, Ningtao; Wu, Leihong; Cheng, Yiyu

    2013-01-01

    The promise of microarray technology in providing prediction classifiers for cancer outcome estimation has been confirmed by a number of demonstrable successes. However, the reliability of prediction results relies heavily on the accuracy of statistical parameters involved in classifiers, which cannot be reliably estimated with only a small number of training samples. Therefore, it is of vital importance to determine the minimum number of training samples needed to ensure the clinical value of microarrays in cancer outcome prediction. We evaluated the impact of training sample size on model performance extensively based on 3 large-scale cancer microarray datasets provided by the second phase of the MicroArray Quality Control project (MAQC-II). An SSNR-based (scale of signal-to-noise ratio) protocol was proposed in this study for minimum training sample size determination. External validation results based on another 3 cancer datasets confirmed that the SSNR-based approach could not only determine the minimum number of training samples efficiently, but also provide a valuable strategy for estimating the underlying performance of classifiers in advance. Once translated into routine clinical applications, the SSNR-based protocol would provide great convenience in microarray-based cancer outcome prediction by improving classifier reliability. PMID:23861920
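
    The kind of empirical assessment described above, performance as a function of training sample size, is essentially a learning curve. The generic scikit-learn sketch below shows that idea on synthetic data; it is not the SSNR-based protocol, and the dataset, classifier, and scoring choices are arbitrary stand-ins.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve

# Synthetic stand-in for a microarray dataset: 600 samples, 200 features.
X, y = make_classification(n_samples=600, n_features=200, n_informative=20,
                           random_state=0)

sizes, _, test_scores = learning_curve(
    LogisticRegression(max_iter=1000), X, y,
    train_sizes=np.linspace(0.1, 1.0, 6), cv=5, scoring="roc_auc")

for n, score in zip(sizes, test_scores.mean(axis=1)):
    print(f"training n = {n:3d}  mean cross-validated AUC = {score:.3f}")
```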

  2. Sequencing the Earliest Stages of Active Galactic Nuclei Development Using The Youngest Radio Sources

    NASA Astrophysics Data System (ADS)

    Collier, Jordan; Filipovic, Miroslav; Norris, Ray; Chow, Kate; Huynh, Minh; Banfield, Julie; Tothill, Nick; Sirothia, Sandeep Kumar; Shabala, Stanislav

    2014-04-01

    This proposal is a continuation of an extensive project (the core of Collier's PhD) to explore the earliest stages of AGN formation, using Gigahertz-Peaked Spectrum (GPS) and Compact Steep Spectrum (CSS) sources. Both are widely believed to represent the earliest stages of radio-loud AGN evolution, with GPS sources preceding CSS sources. In this project, we plan to (a) test this hypothesis, (b) place GPS and CSS sources into an evolutionary sequence with a number of other young AGN candidates, and (c) search for evidence of the evolving accretion mode. We will do this using high-resolution radio observations, with a number of other multiwavelength age indicators, of a carefully selected complete faint sample of 80 GPS/CSS sources. Analysis of the C2730 ELAIS-S1 data shows that we have so far met our goals, resolving the jets of 10/49 sources, and measuring accurate spectral indices from 0.843-10 GHz. This particular proposal is to almost triple the sample size by observing an additional 80 GPS/CSS sources in the Chandra Deep Field South (arguably the best-studied field) and allow a turnover frequency - linear size relation to be derived at >10-sigma. Sources found to be unresolved in our final sample will subsequently be observed with VLBI. Comparing those sources resolved with ATCA to the more compact sources resolved with VLBI will give a distribution of source sizes, helping to answer the question of whether all GPS/CSS sources grow to larger sizes.

  3. Modified dough preparation for Alveograph analysis with limited flour sample size

    USDA-ARS?s Scientific Manuscript database

    Dough rheological characteristics, such as resistance-to-extension and extensibility, obtained by alveograph testing are important traits for determination of wheat and flour quality. A challenging issue that faces wheat breeding programs and some wheat-research projects is the relatively large flou...

  4. 77 FR 70780 - Agency Forms Undergoing Paperwork Reduction Act Review

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-27

    ... requirements or power calculations that justify the proposed sample size, the expected response rate, methods... notice. Proposed Project Generic Clearance for the Collection of Qualitative Feedback on Agency Service... Collection of Qualitative Feedback on Agency Service Delivery'' to OMB for approval under the Paperwork...

  5. 77 FR 27062 - Agency Forms Undergoing Paperwork Reduction Act Review

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-08

    ... calculations that justify the proposed sample size, the expected response rate, methods for assessing potential... Project NIOSH Generic Clearance for the Collection of Qualitative Feedback on Agency Service Delivery--NEW... Collection Request (Generic ICR): ``Generic Clearance for the Collection of Qualitative Feedback on Agency...

  6. Sex Work Research: Methodological and Ethical Challenges

    ERIC Educational Resources Information Center

    Shaver, Frances M.

    2005-01-01

    The challenges involved in the design of ethical, nonexploitative research projects with sex workers or any other marginalized population are significant. First, the size and boundaries of the population are unknown, making it extremely difficult to get a representative sample. Second, because membership in hidden populations often involves…

  7. Approximated affine projection algorithm for feedback cancellation in hearing aids.

    PubMed

    Lee, Sangmin; Kim, In-Young; Park, Young-Cheol

    2007-09-01

    We propose an approximated affine projection (AP) algorithm for feedback cancellation in hearing aids. It is based on the conventional approach using the Gauss-Seidel (GS) iteration, but provides more stable convergence behaviour even with small step sizes. In the proposed algorithm, a residue of the weighted error vector, instead of the current error sample, is used to provide stable convergence. A new learning rate control scheme is also applied to the proposed algorithm to prevent signal cancellation and system instability. The new scheme determines step size in proportion to the prediction factor of the input, so that adaptation is inhibited whenever tone-like signals are present in the input. Simulation results verified the efficiency of the proposed algorithm.
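
    For reference, the baseline affine projection (AP) recursion that such methods approximate can be written in a few lines: stack the last P input vectors, solve a small regularized P x P system, and update the weights. This sketch is the standard AP update on a toy system-identification problem, not the authors' Gauss-Seidel approximation or their prediction-based step-size control; all signals and sizes are made up.

```python
import numpy as np

def ap_update(w, X, d, mu=0.5, delta=1e-6):
    """One standard affine projection update.
    w: weights (L,); X: last P input vectors as columns (L, P); d: desired samples (P,)."""
    e = d - X.T @ w                                             # a-priori errors
    g = np.linalg.solve(X.T @ X + delta * np.eye(X.shape[1]), e)
    return w + mu * (X @ g)

# Toy identification of a short FIR system from white-noise input.
rng = np.random.default_rng(0)
h_true = np.array([0.6, -0.3, 0.1, 0.05])
L, P = len(h_true), 4
x = rng.standard_normal(2000)
d_full = np.convolve(x, h_true)[: len(x)]
w = np.zeros(L)
for n in range(L + P, len(x)):
    X = np.column_stack([x[n - i - np.arange(L)] for i in range(P)])
    w = ap_update(w, X, d_full[n - np.arange(P)], mu=0.5)
print(np.round(w, 3))   # should approach h_true
```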

  8. Projection x-ray topography system at 1-BM x-ray optics test beamline at the advanced photon source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stoupin, Stanislav, E-mail: sstoupin@aps.anl.gov; Liu, Zunping; Trakhtenberg, Emil

    2016-07-27

    Projection X-ray topography of single crystals is a classic technique for the evaluation of intrinsic crystal quality of large crystals. In this technique a crystal sample and an area detector (e.g., X-ray film) collecting intensity of a chosen crystallographic reflection are translated simultaneously across an X-ray beam collimated in the diffraction scattering plane (e.g., [1, 2]). A bending magnet beamline of a third-generation synchrotron source delivering an x-ray beam with a large horizontal divergence, and therefore a large horizontal beam size at the crystal sample position, offers an opportunity to obtain X-ray topographs of large crystalline samples (e.g., 6-inch wafers) in just a few exposures. Here we report a projection X-ray topography system implemented recently at the 1-BM beamline of the Advanced Photon Source. A selected X-ray topograph of a 6-inch wafer of 4H-SiC illustrates capabilities and limitations of the technique.

  9. The Effects of Popping Popcorn Under Reduced Pressure

    NASA Astrophysics Data System (ADS)

    Quinn, Paul; Cooper, Amanda

    2008-03-01

    In our experiments, we model the popping of popcorn as an adiabatic process and develop a process for improving the efficiency of popcorn production. By lowering the pressure of the popcorn during the popping process, we induce an increase in popcorn size, while decreasing the number of remaining unpopped kernels. In this project we run numerous experiments using three of the most common popping devices, a movie popcorn maker, a stove pot, and a microwave. We specifically examine the effects of varying the pressure on total sample size, flake size and waste. An empirical relationship is found between these variables and the pressure.

  10. Multiple-User Microcomputer Technology and Its Application to the Library Environment.

    ERIC Educational Resources Information Center

    McCarthy, Cathleen D.

    1987-01-01

    Demonstrates the ways in which multiuser and multitasking microcomputer systems can be used for the automation of small- to medium-sized library operations. The possibilities afforded by the IBM-PC AT microcomputer are discussed and a sample configuration with estimated cost projections is provided. (EM)

  11. A Review of ETS Differential Item Functioning Assessment Procedures: Flagging Rules, Minimum Sample Size Requirements, and Criterion Refinement. Research Report. ETS RR-12-08

    ERIC Educational Resources Information Center

    Zwick, Rebecca

    2012-01-01

    Differential item functioning (DIF) analysis is a key component in the evaluation of the fairness and validity of educational tests. The goal of this project was to review the status of ETS DIF analysis procedures, focusing on three aspects: (a) the nature and stringency of the statistical rules used to flag items, (b) the minimum sample size…

  12. A microbial survey of the International Space Station (ISS)

    PubMed Central

    Lang, Jenna M.; Coil, David A.; Neches, Russell Y.; Brown, Wendy E.; Cavalier, Darlene; Severance, Mark; Hampton-Marcell, Jarrad T.; Gilbert, Jack A.

    2017-01-01

    Background Modern advances in sequencing technology have enabled the census of microbial members of many natural ecosystems. Recently, attention is increasingly being paid to the microbial residents of human-made, built ecosystems, both private (homes) and public (subways, office buildings, and hospitals). Here, we report results of the characterization of the microbial ecology of a singular built environment, the International Space Station (ISS). This ISS sampling involved the collection and microbial analysis (via 16S rDNA PCR) of 15 surfaces sampled by swabs onboard the ISS. This sampling was a component of Project MERCCURI (Microbial Ecology Research Combining Citizen and University Researchers on ISS). Learning more about the microbial inhabitants of the “buildings” in which we travel through space will take on increasing importance, as plans for human exploration continue, with the possibility of colonization of other planets and moons. Results Sterile swabs were used to sample 15 surfaces onboard the ISS. The sites sampled were designed to be analogous to samples collected for (1) the Wildlife of Our Homes project and (2) a study of cell phones and shoes that were concurrently being collected for another component of Project MERCCURI. Sequencing of the 16S rDNA genes amplified from DNA extracted from each swab was used to produce a census of the microbes present on each surface sampled. We compared the microbes found on the ISS swabs to those from both homes on Earth and data from the Human Microbiome Project. Conclusions While significantly different from homes on Earth and the Human Microbiome Project samples analyzed here, the microbial community composition on the ISS was more similar to home surfaces than to the human microbiome samples. The ISS surfaces are species-rich with 1,036–4,294 operational taxonomic units (OTUs) per sample. There was no discernible biogeography of microbes on the 15 ISS surfaces, although this may be a reflection of the small sample size we were able to obtain. PMID:29492330

  13. A microbial survey of the International Space Station (ISS).

    PubMed

    Lang, Jenna M; Coil, David A; Neches, Russell Y; Brown, Wendy E; Cavalier, Darlene; Severance, Mark; Hampton-Marcell, Jarrad T; Gilbert, Jack A; Eisen, Jonathan A

    2017-01-01

    Modern advances in sequencing technology have enabled the census of microbial members of many natural ecosystems. Recently, attention is increasingly being paid to the microbial residents of human-made, built ecosystems, both private (homes) and public (subways, office buildings, and hospitals). Here, we report results of the characterization of the microbial ecology of a singular built environment, the International Space Station (ISS). This ISS sampling involved the collection and microbial analysis (via 16S rDNA PCR) of 15 surfaces sampled by swabs onboard the ISS. This sampling was a component of Project MERCCURI (Microbial Ecology Research Combining Citizen and University Researchers on ISS). Learning more about the microbial inhabitants of the "buildings" in which we travel through space will take on increasing importance, as plans for human exploration continue, with the possibility of colonization of other planets and moons. Sterile swabs were used to sample 15 surfaces onboard the ISS. The sites sampled were designed to be analogous to samples collected for (1) the Wildlife of Our Homes project and (2) a study of cell phones and shoes that were concurrently being collected for another component of Project MERCCURI. Sequencing of the 16S rDNA genes amplified from DNA extracted from each swab was used to produce a census of the microbes present on each surface sampled. We compared the microbes found on the ISS swabs to those from both homes on Earth and data from the Human Microbiome Project. While significantly different from homes on Earth and the Human Microbiome Project samples analyzed here, the microbial community composition on the ISS was more similar to home surfaces than to the human microbiome samples. The ISS surfaces are species-rich with 1,036-4,294 operational taxonomic units (OTUs) per sample. There was no discernible biogeography of microbes on the 15 ISS surfaces, although this may be a reflection of the small sample size we were able to obtain.

  14. Hyper-spectrum scanning laser optical tomography

    NASA Astrophysics Data System (ADS)

    Chen, Lingling; Li, Guiye; Li, Yingchao; Liu, Lina; Liu, Ang; Hu, Xuejuan; Ruan, Shuangchen

    2018-02-01

    We describe a quantitative fluorescence projection tomography technique that measures the three-dimensional fluorescence spectrum in biomedical samples up to several millimeters in size. This is achieved by acquiring a series of hyperspectral images, using a laser scanning scheme, at different projection angles. We demonstrate that this technique provides a quantitative measure of the fluorescence signal by comparing the spectrum and intensity profile of a fluorescent bead phantom, and also demonstrate its application to differentiating an extrinsic label from autofluorescence in a mouse embryo.

  15. Improving image quality in laboratory x-ray phase-contrast imaging

    NASA Astrophysics Data System (ADS)

    De Marco, F.; Marschner, M.; Birnbacher, L.; Viermetz, M.; Noël, P.; Herzen, J.; Pfeiffer, F.

    2017-03-01

    Grating-based X-ray phase-contrast (gbPC) is known to provide significant benefits for biomedical imaging. To investigate these benefits, a high-sensitivity gbPC micro-CT setup for small (≈ 5 cm) biological samples has been constructed. Unfortunately, high differential-phase sensitivity leads to an increased magnitude of data processing artifacts, limiting the quality of tomographic reconstructions. Most importantly, processing of phase-stepping data with incorrect stepping positions can introduce artifacts resembling Moiré fringes into the projections. Additionally, the focal spot size of the X-ray source limits the resolution of tomograms. Here we present a set of algorithms to minimize artifacts, increase resolution, and improve the visual impression of projections and tomograms from the examined setup. We assessed two algorithms for artifact reduction: Firstly, a correction algorithm exploiting correlations of the artifacts and differential-phase data was developed and tested. Artifacts were reliably removed without compromising image data. Secondly, we implemented a new algorithm for flat-field selection, which was shown to exclude flat-fields with strong artifacts. Both procedures successfully improved the image quality of projections and tomograms. Deconvolution of all projections of a CT scan can minimize blurring introduced by the finite size of the X-ray source focal spot. Application of the Richardson-Lucy deconvolution algorithm to gbPC-CT projections resulted in an improved resolution of phase-contrast tomograms. Additionally, we found that nearest-neighbor interpolation of projections can improve the visual impression of very small features in phase-contrast tomograms. In conclusion, we achieved an increase in image resolution and quality for the investigated setup, which may lead to an improved detection of very small sample features, thereby maximizing the setup's utility.
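
    The deconvolution step mentioned above can be sketched with a minimal Richardson-Lucy iteration: the estimate is repeatedly multiplied by the mirrored-PSF convolution of the ratio between the measured image and the re-blurred estimate. The code below is a generic implementation on a synthetic image with an assumed Gaussian PSF; it is not the processing chain of the setup described here.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=30, eps=1e-12):
    """Plain Richardson-Lucy deconvolution (no regularization, no noise handling)."""
    image = np.maximum(image, 0.0)
    estimate = np.full_like(image, image.mean())
    psf_mirror = psf[::-1, ::-1]
    for _ in range(n_iter):
        blurred = np.maximum(fftconvolve(estimate, psf, mode="same"), eps)
        estimate = estimate * fftconvolve(image / blurred, psf_mirror, mode="same")
    return estimate

# Synthetic test: a bright square blurred by an assumed Gaussian focal-spot PSF.
truth = np.zeros((64, 64)); truth[30:34, 30:34] = 1.0
g = np.exp(-0.5 * ((np.arange(9) - 4) / 1.5) ** 2)
psf = np.outer(g, g); psf /= psf.sum()
blurred = fftconvolve(truth, psf, mode="same")
restored = richardson_lucy(blurred, psf)
print(round(float(blurred.max()), 3), round(float(restored.max()), 3))  # sharpening
```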

  16. Evaluation of dredged material proposed for ocean disposal from Bronx River Project Area, New York

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gruendell, B.D.; Gardiner, W.W.; Antrim, L.D.

    1996-12-01

    The objective of the Bronx River project was to evaluate proposed dredged material from the Bronx River project area in Bronx, New York, to determine its suitability for unconfined ocean disposal at the Mud Dump Site. Bronx River was one of five waterways that the US Army Corps of Engineers-New York District (USACE-NYD) requested the Battelle Marine Sciences Laboratory (MSL) to sample and to evaluate for dredging and disposal. Sediment samples were submitted for physical and chemical analyses, chemical analyses of dredging site water and elutriate, benthic and water-column acute toxicity tests, and bioaccumulation studies. Fifteen individual sediment core samples collected from the Bronx River project area were analyzed for grain size, moisture content, and total organic carbon (TOC). One composite sediment sample, representing the entire reach of the area proposed for dredging, was analyzed for bulk density, specific gravity, metals, chlorinated pesticides, polychlorinated biphenyl (PCB) congeners, polynuclear aromatic hydrocarbons (PAH), and 1,4-dichlorobenzene. Dredging site water and elutriate water, which was prepared from the suspended-particulate phase (SPP) of the Bronx River sediment composite, were analyzed for metals, pesticides, and PCBs.

  17. Automatic three-dimensional tracking of particles with high-numerical-aperture digital lensless holographic microscopy.

    PubMed

    Restrepo, John F; Garcia-Sucerquia, Jorge

    2012-02-15

    We present an automatic procedure for 3D tracking of micrometer-sized particles with high-NA digital lensless holographic microscopy. The method uses a two-feature approach to search for the best focal planes and to distinguish particles from artifacts or other elements in the reconstructed stream of holograms. A set of reconstructed images is axially projected onto a single image. From the projected image, the centers of mass of all the reconstructed elements are identified. Starting from the centers of mass, the morphology of the profile of the maximum intensity along the reconstruction direction allows particles to be distinguished from other elements. The method is tested with modeled holograms and applied to automatically track micrometer-sized bubbles in a sample of 4 mm3 of soda.

  18. Strategic Planning for Library Multitype Cooperatives: Samples & Examples. ASCLA Changing Horizons Series #1.

    ERIC Educational Resources Information Center

    Baughman, Steven A., Ed.; Curry, Elizabeth A., Ed.

    As interlibrary cooperation has proliferated in the last several decades, multitype library organizations and systems have emerged as important forces in librarianship. The need for thoughtful and organized strategic planning is an important cornerstone for the success of organizations of all sizes. Part of a project by the Interlibrary…

  19. Evaluation of a High-Resolution Benchtop Micro-CT Scanner for Application in Porous Media Research

    NASA Astrophysics Data System (ADS)

    Tuller, M.; Vaz, C. M.; Lasso, P. O.; Kulkarni, R.; Ferre, T. A.

    2010-12-01

    Recent advances in Micro Computed Tomography (MCT) provided the motivation to thoroughly evaluate and optimize scanning, image reconstruction/segmentation and pore-space analysis capabilities of a new generation benchtop MCT scanner and associated software package. To demonstrate applicability to soil research the project was focused on determination of porosities and pore size distributions of two Brazilian Oxisols from segmented MCT-data. Effects of metal filters and various acquisition parameters (e.g. total rotation, rotation step, and radiograph frame averaging) on image quality and acquisition time are evaluated. Impacts of sample size and scanning resolution on CT-derived porosities and pore-size distributions are illustrated.

  20. Soil data from Picea mariana stands near delta junction, Alaska of different ages and soil drainage type

    USGS Publications Warehouse

    Manies, Kristen L.; Harden, Jennifer W.; Silva, Steven R.; Briggs, Paul H.; Schmid, Brian M.

    2004-01-01

    The U.S. Geological Survey project Fate of Carbon in Alaskan Landscapes (FOCAL) is studying the effect of fire and soil drainage on soil carbon storage in the boreal forest. This project has selected several study sites within central Alaska of varying ages (time since fire) and soil drainage types. This report describes the location of these sampling sites, as well as the procedures used to describe, sample, and analyze the soils. This report also contains data tables with this information, including, but not limited to, field descriptions, bulk density, particle size distribution, moisture content, carbon (C) concentration, nitrogen (N) concentration, isotopic data for C, and major, minor and trace elemental concentration.

  1. AutoGNI, the Robot Under the Aircraft Floor: An Automated System for Sampling Giant Aerosol Particles by Impaction in the Free Airstream Outside a Research Aircraft

    NASA Astrophysics Data System (ADS)

    Jensen, J. B.; Schwenz, K.; Aquino, J.; Carnes, J.; Webster, C.; Munnerlyn, J.; Wissman, T.; Lugger, T.

    2017-12-01

    Giant sea-salt aerosol particles, also called Giant Cloud Condensation Nuclei (GCCN), have been proposed as a means of rapidly forming precipitation-sized drizzle drops in warm marine clouds (e.g., Jensen and Nugent, 2017). Such rare particles are best sampled from aircraft in air below cloud base, where normal laser optical instruments have too low a sample volume to give statistically significant samples of the large-particle tail. An automated sampling system (the AutoGNI) has been built to operate from inside a pressurized aircraft. Under the aircraft floor, a pressurized vessel contains 32 custom-built polycarbonate microscope slides. Using robotics with 5 motor drives and 18 positioning switches, the AutoGNI can take slides from their holding cassettes and pass them onto a caddy in an airfoil that extends 200 mm outside the aircraft, where they are exposed in the free airstream, thus avoiding the usual problems with large particle losses in air intakes. Slides are typically exposed for 10-30 s in the marine boundary layer, giving sample volumes of about 100-300 L or more. Subsequently the slides are retracted into the pressure vessel, stored, and transported for laboratory microscope image analysis, in order to derive size-distribution histograms. While the aircraft is flying, the AutoGNI system is remotely controlled from a laptop on the ground, using an encrypted commercial satellite connection to the NSF/NCAR GV research aircraft's main server, and onto the AutoGNI microprocessor. The sampling of such GCCN is becoming increasingly important in order to provide complete input data for model calculations of aerosol-cloud interactions and their feedbacks in climate prediction. The AutoGNI has so far been sampling sea-salt GCCN in the Magellan Strait during the 2016 ORCAS project and over the NW Pacific during the 2017 ARISTO project, both from the NSF/NCAR GV research aircraft. Sea-salt particle sizes of 1.4-32 μm dry diameter have been observed.
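
    The quoted sample volumes follow from a simple geometric estimate: volume = true airspeed x exposed slide collection area x exposure time. The sketch below reproduces the order of magnitude with assumed values for airspeed and slide dimensions; these numbers are illustrative, not instrument specifications.

```python
def sample_volume_liters(airspeed_m_s, slide_width_m, slide_length_m, exposure_s):
    """Swept sample volume for a flat slide exposed broadside to the free airstream."""
    collection_area_m2 = slide_width_m * slide_length_m
    return airspeed_m_s * collection_area_m2 * exposure_s * 1000.0   # m^3 -> liters

# Assumed: ~150 m/s true airspeed, a 1 mm x 60 mm exposed strip, 20 s exposure.
print(round(sample_volume_liters(150.0, 0.001, 0.060, 20.0)))  # ~180 L, within 100-300 L
```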

  2. The Mars Orbital Catalog of Hydrated Alteration Signatures (MOCHAS) - Initial release

    NASA Astrophysics Data System (ADS)

    Carter, John; OMEGA and CRISM Teams

    2016-10-01

    Aqueous minerals have been identified from orbit at a number of localities, and their analysis allowed refining the water story of Early Mars. They are also a main science driver when selecting current and upcoming landing sites for roving missions. Available catalogs of mineral detections exhibit a number of drawbacks such as a limited sample size (a thousand sites at most), inhomogeneous sampling of the surface and of the investigation methods, and the lack of contextual information (e.g. spatial extent, morphological context). The MOCHAS project strives to address such limitations by providing a global, detailed survey of aqueous minerals on Mars based on 10 years of data from the OMEGA and CRISM imaging spectrometers. Contextual data is provided, including deposit sizes, morphology and detailed composition when available. Sampling biases are also addressed. It will be openly distributed in GIS-ready format and will be participative. For example, it will be possible for researchers to submit requests for specific mapping of regions of interest, or add/refine mineral detections. An initial release is scheduled in Fall 2016 and will feature a two orders of magnitude increase in sample size compared to previous studies.

  3. Evaluating information content of SNPs for sample-tagging in re-sequencing projects.

    PubMed

    Hu, Hao; Liu, Xiang; Jin, Wenfei; Hilger Ropers, H; Wienker, Thomas F

    2015-05-15

    Sample-tagging is designed for identification of accidental sample mix-up, which is a major issue in re-sequencing studies. In this work, we develop a model to measure the information content of SNPs, so that we can optimize a panel of SNPs that approach the maximal information for discrimination. The analysis shows that as few as 60 optimized SNPs can differentiate the individuals in a population as large as that of the present world, and only 30 optimized SNPs are in practice sufficient for labeling up to 100 thousand individuals. In the simulated populations of 100 thousand individuals, the average Hamming distances generated by the optimized set of 30 SNPs are larger than 18, and the duality frequency is lower than 1 in 10 thousand. This strategy of sample discrimination proves robust for large sample sizes and across different datasets. The optimized sets of SNPs are designed for Whole Exome Sequencing, and a program is provided for SNP selection, allowing for customized SNP numbers and genes of interest. The sample-tagging plan based on this framework will improve re-sequencing projects in terms of reliability and cost-effectiveness.
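
    The discrimination idea can be made concrete with a small simulation: give each sample a barcode of biallelic SNP calls and check the minimum pairwise Hamming distance across the cohort, which is what makes accidental swaps detectable. The panel below is random rather than the optimized SNP set, and the 0/1 genotype coding is a simplification.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_snps = 500, 30
genotypes = rng.integers(0, 2, size=(n_samples, n_snps))   # simulated 0/1 calls

# Minimum pairwise Hamming distance across all sample pairs.
min_d = min(
    int(np.count_nonzero(genotypes[i] != genotypes[j]))
    for i in range(n_samples) for j in range(i + 1, n_samples)
)
print("minimum pairwise Hamming distance:", min_d)
```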

  4. The functional spectrum of low-frequency coding variation.

    PubMed

    Marth, Gabor T; Yu, Fuli; Indap, Amit R; Garimella, Kiran; Gravel, Simon; Leong, Wen Fung; Tyler-Smith, Chris; Bainbridge, Matthew; Blackwell, Tom; Zheng-Bradley, Xiangqun; Chen, Yuan; Challis, Danny; Clarke, Laura; Ball, Edward V; Cibulskis, Kristian; Cooper, David N; Fulton, Bob; Hartl, Chris; Koboldt, Dan; Muzny, Donna; Smith, Richard; Sougnez, Carrie; Stewart, Chip; Ward, Alistair; Yu, Jin; Xue, Yali; Altshuler, David; Bustamante, Carlos D; Clark, Andrew G; Daly, Mark; DePristo, Mark; Flicek, Paul; Gabriel, Stacey; Mardis, Elaine; Palotie, Aarno; Gibbs, Richard

    2011-09-14

    Rare coding variants constitute an important class of human genetic variation, but are underrepresented in current databases that are based on small population samples. Recent studies show that variants altering amino acid sequence and protein function are enriched at low variant allele frequency, 2 to 5%, but because of insufficient sample size it is not clear if the same trend holds for rare variants below 1% allele frequency. The 1000 Genomes Exon Pilot Project has collected deep-coverage exon-capture data in roughly 1,000 human genes, for nearly 700 samples. Although medical whole-exome projects are currently afoot, this is still the deepest reported sampling of a large number of human genes with next-generation technologies. According to the goals of the 1000 Genomes Project, we created effective informatics pipelines to process and analyze the data, and discovered 12,758 exonic SNPs, 70% of them novel, and 74% below 1% allele frequency in the seven population samples we examined. Our analysis confirms that coding variants below 1% allele frequency show increased population-specificity and are enriched for functional variants. This study represents a large step toward detecting and interpreting low frequency coding variation, clearly lays out technical steps for effective analysis of DNA capture data, and articulates functional and population properties of this important class of genetic variation.

  5. Soil Data from a Moderately Well and Somewhat Poorly Drained Fire Chronosequence near Thompson, Manitoba, Canada

    USGS Publications Warehouse

    Manies, K.L.; Harden, J.W.; Veldhuis, Hugo; Trumbore, Sue

    2006-01-01

    The U.S. Geological Survey project Fate of Carbon in Alaskan Landscapes (FOCAL) is studying the effect of fire and soil drainage on soil carbon storage in the boreal forest. As such, this group was invited to be a part of an NSF-funded project (Fire, Ecosystem and Succession - Experiment Boreal, or FIRES-ExB) to study the carbon balance of sites that varied in age (time since fire) and soil drainage in the Thompson, Manitoba, Canada region. This report describes the location of our FIRES-ExB sampling sites as well as the procedures used to describe, sample, and analyze the soils. This report also contains data tables with sample-related information including, but not limited to, field descriptions, bulk density, particle size distribution, moisture content, carbon (C) concentration, nitrogen (N) concentration, isotopic data for C, and major, minor and trace elemental concentration.

  6. Lipid Vesicle Shape Analysis from Populations Using Light Video Microscopy and Computer Vision

    PubMed Central

    Zupanc, Jernej; Drašler, Barbara; Boljte, Sabina; Kralj-Iglič, Veronika; Iglič, Aleš; Erdogmus, Deniz; Drobne, Damjana

    2014-01-01

    We present a method for giant lipid vesicle shape analysis that combines manually guided large-scale video microscopy and computer vision algorithms to enable analyzing vesicle populations. The method retains the benefits of light microscopy and enables non-destructive analysis of vesicles from suspensions containing up to several thousand lipid vesicles (1–50 µm in diameter). For each sample, image analysis was employed to extract data on vesicle quantity and size distributions of their projected diameters and isoperimetric quotients (a measure of contour roundness). This process enables a comparison of samples from the same population over time, or the comparison of a treated population to a control. Although vesicles in suspensions are heterogeneous in sizes and shapes and have a distinctively non-homogeneous distribution throughout the suspension, this method allows for the capture and analysis of repeatable vesicle samples that are representative of the population inspected. PMID:25426933
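
    The two per-vesicle descriptors named above, projected diameter and isoperimetric quotient Q = 4πA/P² (Q = 1 for a circle), are straightforward to compute from a segmented contour; the sketch below does so for a synthetic ellipse using the shoelace area and polygon perimeter. The contour is made up, not output of the authors' computer-vision pipeline.

```python
import numpy as np

def shape_descriptors(x, y):
    """Equivalent-area diameter and isoperimetric quotient of a closed 2-D contour."""
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))  # shoelace
    dx, dy = np.diff(np.r_[x, x[0]]), np.diff(np.r_[y, y[0]])
    perimeter = np.sum(np.hypot(dx, dy))
    eq_diameter = 2.0 * np.sqrt(area / np.pi)          # diameter of equal-area circle
    q = 4.0 * np.pi * area / perimeter ** 2            # 1.0 for a perfect circle
    return eq_diameter, q

t = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
x, y = 10.0 * np.cos(t), 8.0 * np.sin(t)               # a 10 x 8 (e.g. micron) ellipse
d, q = shape_descriptors(x, y)
print(round(d, 2), round(q, 3))
```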

  7. Mineralogy and grain size of surficial sediment from the Big Lost River drainage and vicinity, with chemical and physical characteristics of geologic materials from selected sites at the Idaho National Engineering Laboratory, Idaho

    USGS Publications Warehouse

    Bartholomay, R.C.; Knobel, L.L.; Davis, L.C.

    1989-01-01

    The U.S. Geological Survey's Idaho National Engineering Laboratory project office, in cooperation with the U.S. Department of Energy, collected 35 samples of surficial sediments from the Big Lost River drainage and vicinity from July 1987 through August 1988 for analysis of grain-size distribution, bulk mineralogy, and clay mineralogy. Samples were collected from 11 sites in the channel and 5 sites in overbank deposits of the Big Lost River, 6 sites in the spreading areas that receive excess flow from the Big Lost River during peak flow conditions, 7 sites in the natural sinks and playas of the Big Lost River, 1 site in the Little Lost River Sink, and 5 sites from other small, isolated closed basins. Eleven samples from the Big Lost River channel deposits had a mean of 1.9 and median of 0.8 weight percent in the less than 0.062 mm fraction. The other 24 samples had a mean of 63.3 and median of 63.7 weight percent for the same size fraction. Mineralogy data are consistent with grain-size data. The Big Lost River channel deposits had mean and median percent mineral abundances of total clays and detrital mica of 10 and 10%, respectively, whereas the remaining 24 samples had mean and median values of 24% and 22.5%, respectively. (USGS)

  8. History and progress of the North American Soil Geochemical Landscapes Project, 2001-2010

    USGS Publications Warehouse

    Smith, David B.; Cannon, William F.; Woodruff, Laurel G.; Rivera, Francisco Moreira; Rencz, Andrew N.; Garrett, Robert G.

    2012-01-01

    In 2007, the U.S. Geological Survey, the Geological Survey of Canada, and the Mexican Geological Survey initiated a low-density (1 site per 1600 km2, 13,323 sites) geochemical and mineralogical survey of North American soils (North American Soil Geochemical Landscapes Project). Sampling and analytical protocols were developed at a series of workshops in 2003-2004 and pilot studies were conducted from 2004 to 2007. The ideal sampling protocol at each site includes a sample from 0-5 cm depth, a composite of the soil A horizon, and a sample from the soil C horizon. The <2-mm fraction of each sample is analyzed for major and trace elements following near-total digestion in HNO3, HClO4, and HF. Separate methods are used for As, Hg, Se, and total C on this same size fraction. The major mineralogical components are determined by a quantitative X-ray diffraction method. Sampling in the conterminous U.S. was completed in 2010 (c. 4800 sites) with chemical and mineralogical analysis currently underway. In Mexico, approximately 66% of the sampling (871 sites) had been done by the end of 2010, with completion expected in 2012. After completing sampling in the Maritime provinces and portions of other provinces (472 sites, 7.6% of the total), Canada withdrew from the project in 2010. Preliminary results for a swath from the central U.S. to Florida clearly show the effects of soil parent material and climate on the chemical and mineralogical composition of soils. A sample archive will be established and made available for future investigations.

  9. Bayesian evaluation of effect size after replicating an original study

    PubMed Central

    van Aert, Robbie C. M.; van Assen, Marcel A. L. M.

    2017-01-01

    The vast majority of published results in the literature is statistically significant, which raises concerns about their reliability. The Reproducibility Project: Psychology (RPP) and the Experimental Economics Replication Project (EE-RP) both replicated a large number of published studies in psychology and economics. Both the original study and the replication were statistically significant for 36.1% of study pairs in RPP and 68.8% in EE-RP, suggesting many null effects among the replicated studies. However, evidence in favor of the null hypothesis cannot be examined with null hypothesis significance testing. We developed a Bayesian meta-analysis method called snapshot hybrid that is easy to use and understand and quantifies the amount of evidence in favor of a zero, small, medium, and large effect. The method computes posterior model probabilities for a zero, small, medium, and large effect and adjusts for publication bias by taking into account that the original study is statistically significant. We first analytically approximate the method's performance, and demonstrate the necessity of controlling for the original study’s significance to enable the accumulation of evidence for a true zero effect. We then applied the method to the data of RPP and EE-RP, showing that the underlying effect sizes of the studies included in EE-RP are generally larger than in RPP, but that the sample sizes, especially of the studies included in RPP, are often too small to draw definite conclusions about the true effect size. We also illustrate how snapshot hybrid can be used to determine the required sample size of a replication, akin to power analysis in null hypothesis significance testing, and present an easy-to-use web application (https://rvanaert.shinyapps.io/snapshot/) and R code for applying the method. PMID:28388646
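
    Stripped of the publication-bias adjustment that defines the actual snapshot hybrid, the "snapshot" of evidence is just a set of posterior model probabilities over a few candidate effect sizes given the replication's estimate and standard error. The sketch below computes that simplified version with equal prior weights and normal likelihoods; the effect-size grid and the example numbers are illustrative only.

```python
import math

def snapshot_probs(est, se, effects=(0.0, 0.2, 0.5, 0.8)):
    """Posterior probabilities of candidate true effects, equal priors, normal likelihood.
    NOTE: omits the original-study significance correction used by snapshot hybrid."""
    likelihoods = [math.exp(-0.5 * ((est - d) / se) ** 2) for d in effects]
    total = sum(likelihoods)
    return {d: round(l / total, 3) for d, l in zip(effects, likelihoods)}

print(snapshot_probs(est=0.12, se=0.15))   # hypothetical replication result
```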

  10. Ceramic Technology for Advanced Heat Engines Project Semiannual Progress Report for Period October 1985 Through March 1986

    DTIC Science & Technology

    1986-08-01

    materials (2.2 w/o and 3.0 w/o MgO). The other two batches (2.8 w/o and 3.1 w/o MgO), of higher purity, were made using E-10 zirconia powder from...(CID) powders. Two methods have been used for the coprecipitation of doped zirconia powders from solutions of chemical precursors. (4) Method I, for...of powder, approximate sample size 3.2 kg (6.4 kg for zirconia powder); 3. Random selection of sample; 4. Partial drying of sample to reduce caking

  11. A Comparison of For-Profit and Traditional Universities' Student Persistence, Graduation Rate, and Job Placement

    ERIC Educational Resources Information Center

    Sandlin, Deborah L.

    2017-01-01

    This research project is a study comparing for-profit schools and traditional universities with respect to student persistence, graduation rate, and job placement. The results, based on a sample size of 92 students, indicate no significant difference in persistence, graduation rate, or successful job placement between the two types of institutions. There…

  12. Randomness, Sample Size, Imagination and Metacognition: Making Judgments about Differences in Data Sets

    ERIC Educational Resources Information Center

    Stack, Sue; Watson, Jane

    2013-01-01

    There is considerable research on the difficulties students have in conceptualising individual concepts of probability and statistics (see for example, Bryant & Nunes, 2012; Jones, 2005). The unit of work developed for the action research project described in this article is specifically designed to address some of these in order to help…

  13. Barriers to Application of E-Learning in Training Activities of SMEs

    ERIC Educational Resources Information Center

    Anderson, Randy J.; Wielicki, Tomasz; Anderson, Lydia E.

    2010-01-01

    This paper reports on an ongoing study of Small and Mid-Size Enterprises (SMEs) in Central California concerning their use of Information and Communication Technology (ICT). This research project analyzed data from a sample of 161 SMEs. Specifically, this part of the study investigates the major barriers to applications of e-learning…

  14. Battery condenser system PM10 emission factors and rates for cotton gins: Method 201A PM10 sizing cyclones

    USDA-ARS?s Scientific Manuscript database

    This manuscript is part of a series of manuscripts that characterize cotton gin emissions from the standpoint of stack sampling. The impetus behind this project was the urgent need to collect additional cotton gin emissions data to address current regulatory issues. A key component of this study ...

  15. Global Particle Size Distributions: Measurements during the Atmospheric Tomography (ATom) Project

    NASA Astrophysics Data System (ADS)

    Brock, C. A.; Williamson, C.; Kupc, A.; Froyd, K. D.; Richardson, M.; Weinzierl, B.; Dollner, M.; Schuh, H.; Erdesz, F.

    2016-12-01

    The Atmospheric Tomography (ATom) project is a three-year NASA-sponsored program to map the spatial and temporal distribution of greenhouse gases, reactive species, and aerosol particles from the Arctic to the Antarctic. In situ measurements are being made on the NASA DC-8 research aircraft, which will make four global circumnavigations of the Earth over the mid-Pacific and mid-Atlantic Oceans while continuously profiling between 0.2 and 13 km altitude. In situ microphysical measurements will provide a unique and unprecedented dataset of aerosol particle size distributions between 0.004 and 50 µm diameter. This unbiased, representative dataset allows investigation of new particle formation in the remote troposphere, placing strong observational constraints on the chemical and physical mechanisms that govern particle formation and growth to cloud-active sizes. Particles from 0.004 to 0.055 µm are measured with 10 condensation particle counters. Particles with diameters from 0.06 to 1.0 µm are measured with one-second resolution using two ultra-high sensitivity aerosol size spectrometers (UHSASes). A laser aerosol spectrometer (LAS) measures particle size distributions between 0.12 and 10 µm in diameter. Finally, a cloud, aerosol and precipitation spectrometer (CAPS) underwing optical spectrometer probe sizes ambient particles with diameters from 0.5 to 50 µm and images and sizes precipitation-sized particles. Additional particle instruments on the payload include a high-resolution time-of-flight aerosol mass spectrometer and a single particle laser-ablation aerosol mass spectrometer. The instruments are calibrated in the laboratory and on the aircraft. Calibrations are checked in flight by introducing four sizes of polystyrene latex (PSL) microspheres into the sampling inlet. The CAPS probe is calibrated using PSL and glass microspheres that are aspirated into the sample volume. Comparisons between the instruments and checks with the calibration aerosol indicate flight performance within uncertainties expected from laboratory calibrations. Analysis of data from the first ATom circuit in August 2016 shows high concentrations of newly formed particles in the tropical middle and upper troposphere and Arctic lower troposphere.

  16. Reduction of 30-Day Preventable Pediatric Readmission Rates With Postdischarge Phone Calls Utilizing a Patient- and Family-Centered Care Approach.

    PubMed

    Flippo, Renee; NeSmith, Elizabeth; Stark, Nancy; Joshua, Thomas; Hoehn, Michelle

    2015-01-01

    The purpose of this project was to evaluate the effectiveness of postdischarge phone calls on 30-day preventable readmission rates within the pediatric hospital setting. Because the unit of care identified was patients and their families, a patient- and family-centered care approach was used. The project used an exploratory design and was conducted at a 154-bed pediatric hospital facility. A sample of 15 patients meeting project inclusion criteria was selected before and after the intervention, and medical records were reviewed to identify if a 30-day preventable readmission had occurred. Medical record review revealed four preintervention readmissions, providing an overall preintervention readmission rate of 26%. Only one readmission was discovered after the intervention, yielding an overall postintervention readmission rate of 6%. The sample size was not large enough to show statistical significance, but clinical significance was seen, with readmission rates for the project target population decreasing below the rates recorded in 2012. Copyright © 2015 National Association of Pediatric Nurse Practitioners. Published by Elsevier Inc. All rights reserved.
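
    To illustrate why 4 of 15 versus 1 of 15 readmissions does not reach statistical significance, one can run a Fisher's exact test on the corresponding 2x2 table; the counts come from the abstract above, but the choice of test is an assumption, not necessarily the analysis used in the project.

```python
# Hypothetical check: are 4/15 pre-intervention vs. 1/15 post-intervention
# readmissions statistically distinguishable? Counts taken from the abstract.
from scipy.stats import fisher_exact

table = [[4, 11],   # pre-intervention:  readmitted, not readmitted
         [1, 14]]   # post-intervention: readmitted, not readmitted
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.2f}")  # p is well above 0.05
```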

  17. Streamflow and suspended-sediment transport in Garvin Brook, Winona County, southeastern Minnesota: Hydrologic data for 1982

    USGS Publications Warehouse

    Payne, G.A.

    1983-01-01

    Streamflow and suspended-sediment-transport data were collected in Garvin Brook watershed in Winona County, southeastern Minnesota, during 1982. The data collection was part of a study to determine the effectiveness of agricultural best-management practices designed to improve rural water quality. The study is part of a Rural Clean Water Program demonstration project undertaken by the U.S. Department of Agriculture. Continuous streamflow data were collected at three gaging stations during March through September 1982. Suspended-sediment samples were collected at two of the gaging stations. Samples were collected manually at weekly intervals. During periods of rapidly changing stage, samples were collected at 30-minute to 12-hour intervals by stage-activated automatic samplers. The samples were analyzed for suspended-sediment concentration and particle-size distribution. Particle-size distributions were also determined for one set of bed-material samples collected at each sediment-sampling site. The streamflow and suspended-sediment-concentration data were used to compute records of mean-daily flow, mean-daily suspended-sediment concentration, and daily suspended-sediment discharge. The daily records are documented and results of analyses for particle-size distribution and of vertical sampling in the stream cross sections are given.

  18. Detector-unit-dependent calibration for polychromatic projections of rock core CT.

    PubMed

    Li, Mengfei; Zhao, Yunsong; Zhang, Peng

    2017-01-01

    Computed tomography (CT) plays an important role in digital rock analysis, a promising new technique for the oil and gas industry. However, artifacts in CT images influence the accuracy of the digital rock model. In this study, we proposed and demonstrated a novel method to restore detector-unit-dependent functions for polychromatic projection calibration by scanning simple-shaped reference samples. As long as the attenuation coefficients of the reference samples are similar to those of the scanned object, their size and position need not be known exactly. Both simulated and real data were used to verify the proposed method. The results showed that the new method effectively reduced both beam-hardening artifacts and ring artifacts. Moreover, the method appeared to be quite robust.
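
    The abstract does not give the calibration functions themselves. The sketch below shows a generic per-detector "linearization" in the same spirit: for each detector unit, a polynomial is fitted that maps measured polychromatic projection values to ideal values known from a reference scan. The polynomial order, the synthetic data, and the way ideal values are obtained are assumptions for illustration only, not the authors' method.

```python
# Hypothetical per-detector-unit linearization: for every detector column, fit a
# polynomial mapping measured polychromatic projections to ideal projections
# computed from a reference sample of known attenuation.
import numpy as np

def fit_detector_calibration(measured, ideal, order=3):
    """measured, ideal: arrays of shape (n_views, n_detectors)."""
    n_det = measured.shape[1]
    coeffs = np.empty((n_det, order + 1))
    for d in range(n_det):
        coeffs[d] = np.polyfit(measured[:, d], ideal[:, d], order)
    return coeffs

def apply_calibration(projection, coeffs):
    """Correct one set of projections detector-by-detector."""
    return np.array([np.polyval(coeffs[d], projection[:, d])
                     for d in range(projection.shape[1])]).T

# Usage sketch with synthetic data standing in for a reference scan.
rng = np.random.default_rng(0)
ideal = rng.uniform(0, 4, size=(200, 64))                  # ideal line integrals
measured = 0.9 * ideal - 0.05 * ideal**2 + 0.01 * rng.normal(size=ideal.shape)
coeffs = fit_detector_calibration(measured, ideal)
corrected = apply_calibration(measured, coeffs)
print(np.abs(corrected - ideal).mean())                    # small residual error
```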

  19. Comparison. US P-61 and Delft sediment samplers

    USGS Publications Warehouse

    Beverage, Joseph P.; Williams, David T.

    1990-01-01

    The Delft Bottle (DB) is a flow-through device designed by the Delft Hydraulic Laboratory (DHL), The Netherlands, to sample sand-sized sediment suspended in streams. The US P-61 sampler was designed by the Federal Interagency Sedimentation Project (FISP) at the St. Anthony Falls Hydraulic Laboratory, Minneapolis, Minnesota, to collect suspended sediment from deep, swift rivers. The results of two point-sampling tests in the United States, the Mississippi River near Vicksburg, Mississippi, in 1983 and the Colorado River near Blythe, California, in 1984, are provided in this report. These studies compare sand-transport rates, rather than total sediment-transport rates, because fine material washes through the DB sampler. In the United States, the commonly used limits for sand-sized material are 0.062 mm to 2.00 mm (Vanoni 1975).

  20. The Impact of Accelerating Faster than Exponential Population Growth on Genetic Variation

    PubMed Central

    Reppell, Mark; Boehnke, Michael; Zöllner, Sebastian

    2014-01-01

    Current human sequencing projects observe an abundance of extremely rare genetic variation, suggesting recent acceleration of population growth. To better understand the impact of such accelerating growth on the quantity and nature of genetic variation, we present a new class of models capable of incorporating faster than exponential growth in a coalescent framework. Our work shows that such accelerated growth affects only the population size in the recent past and thus large samples are required to detect the models’ effects on patterns of variation. When we compare models with fixed initial growth rate, models with accelerating growth achieve very large current population sizes and large samples from these populations contain more variation than samples from populations with constant growth. This increase is driven almost entirely by an increase in singleton variation. Moreover, linkage disequilibrium decays faster in populations with accelerating growth. When we instead condition on current population size, models with accelerating growth result in less overall variation and slower linkage disequilibrium decay compared to models with exponential growth. We also find that pairwise linkage disequilibrium of very rare variants contains information about growth rates in the recent past. Finally, we demonstrate that models of accelerating growth may substantially change estimates of present-day effective population sizes and growth times. PMID:24381333
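
    The following toy calculation (all parameter values are hypothetical and not taken from the paper) illustrates the qualitative point that, for a fixed initial growth rate, accelerating growth mainly changes population size close to the present.

```python
# Illustrative comparison of exponential vs. accelerating growth with the same
# initial growth rate, from the onset of growth to the present. All parameter
# values are hypothetical.
import numpy as np

N0 = 10_000          # population size at the onset of growth
T = 400              # generations of growth before the present
r = 0.02             # initial per-generation growth rate

gens = np.arange(T + 1)
N_exp = N0 * np.exp(r * gens)                         # constant growth rate r
N_acc = N0 * np.exp(r * gens + r * gens**2 / (2 * T)) # rate r*(1 + t/T), accelerating

for g in (0, T // 2, T):
    print(f"gen {g:3d}: exponential {N_exp[g]:>14.0f}  accelerating {N_acc[g]:>14.0f}")
# The relative gap between the trajectories is modest early on and grows rapidly
# toward the present, where the accelerating model reaches a far larger size.
```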

  1. The impact of accelerating faster than exponential population growth on genetic variation.

    PubMed

    Reppell, Mark; Boehnke, Michael; Zöllner, Sebastian

    2014-03-01

    Current human sequencing projects observe an abundance of extremely rare genetic variation, suggesting recent acceleration of population growth. To better understand the impact of such accelerating growth on the quantity and nature of genetic variation, we present a new class of models capable of incorporating faster than exponential growth in a coalescent framework. Our work shows that such accelerated growth affects only the population size in the recent past and thus large samples are required to detect the models' effects on patterns of variation. When we compare models with fixed initial growth rate, models with accelerating growth achieve very large current population sizes and large samples from these populations contain more variation than samples from populations with constant growth. This increase is driven almost entirely by an increase in singleton variation. Moreover, linkage disequilibrium decays faster in populations with accelerating growth. When we instead condition on current population size, models with accelerating growth result in less overall variation and slower linkage disequilibrium decay compared to models with exponential growth. We also find that pairwise linkage disequilibrium of very rare variants contains information about growth rates in the recent past. Finally, we demonstrate that models of accelerating growth may substantially change estimates of present-day effective population sizes and growth times.

  2. Low-dose 4D cardiac imaging in small animals using dual source micro-CT

    NASA Astrophysics Data System (ADS)

    Holbrook, M.; Clark, D. P.; Badea, C. T.

    2018-01-01

    Micro-CT is widely used in preclinical studies, generating substantial interest in extending its capabilities in functional imaging applications such as blood perfusion and cardiac function. However, imaging cardiac structure and function in mice is challenging due to their small size and rapid heart rate. To overcome these challenges, we propose and compare improvements on two strategies for cardiac gating in dual-source, preclinical micro-CT: fast prospective gating (PG) and uncorrelated retrospective gating (RG). These sampling strategies combined with a sophisticated iterative image reconstruction algorithm provide faster acquisitions and high image quality in low-dose 4D (i.e. 3D  +  Time) cardiac micro-CT. Fast PG is performed under continuous subject rotation which results in interleaved projection angles between cardiac phases. Thus, fast PG provides a well-sampled temporal average image for use as a prior in iterative reconstruction. Uncorrelated RG incorporates random delays during sampling to prevent correlations between heart rate and sampling rate. We have performed both simulations and animal studies to validate these new sampling protocols. Sampling times for 1000 projections using fast PG and RG were 2 and 3 min, respectively, and the total dose was 170 mGy each. Reconstructions were performed using a 4D iterative reconstruction technique based on the split Bregman method. To examine undersampling robustness, subsets of 500 and 250 projections were also used for reconstruction. Both sampling strategies in conjunction with our iterative reconstruction method are capable of resolving cardiac phases and provide high image quality. In general, for equal numbers of projections, fast PG shows fewer errors than RG and is more robust to undersampling. Our results indicate that only 1000-projection based reconstruction with fast PG satisfies a 5% error criterion in left ventricular volume estimation. These methods promise low-dose imaging with a wide range of preclinical applications in cardiac imaging.
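
    A small sketch of why continuous rotation with fast prospective gating interleaves projection angles across cardiac phases: each phase is triggered at a fixed point of the cardiac cycle, but the gantry keeps rotating between heartbeats, so repeated acquisitions of the same phase land at different angles. All timing parameters below are hypothetical.

```python
# Hypothetical illustration of fast prospective gating under continuous rotation:
# projections for each cardiac phase fall on angles that interleave with the other
# phases, so pooling all phases yields a well-sampled temporal-average dataset.
import numpy as np

rotation_speed = 3.0     # gantry rotation in degrees per second (hypothetical)
heart_period = 0.1       # seconds per cardiac cycle (roughly a mouse heart rate)
n_phases = 4             # cardiac phases to reconstruct
n_beats = 8              # heartbeats shown here (a real scan uses many more)

angles = {p: [] for p in range(n_phases)}
for beat in range(n_beats):
    for phase in range(n_phases):
        t = beat * heart_period + phase * heart_period / n_phases
        angles[phase].append((rotation_speed * t) % 360.0)

for phase, a in angles.items():
    print(f"phase {phase}: first angles {np.round(a[:4], 3)} deg")
# Each phase is acquired at its own slowly advancing set of angles, offset from
# the other phases; the union of all phases covers the angular range densely.
```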

  3. PSP toxin levels and plankton community composition and abundance in size-fractionated vertical profiles during spring/summer blooms of the toxic dinoflagellate Alexandrium fundyense in the Gulf of Maine and on Georges Bank, 2007, 2008, and 2010: 2. Plankton community composition and abundance.

    PubMed

    Petitpas, Christian M; Turner, Jefferson T; Deeds, Jonathan R; Keafer, Bruce A; McGillicuddy, Dennis J; Milligan, Peter J; Shue, Vangie; White, Kevin D; Anderson, Donald M

    2014-05-01

    As part of the Gulf of Maine Toxicity (GOMTOX) project, we determined Alexandrium fundyense abundance, paralytic shellfish poisoning (PSP) toxin levels in various plankton size fractions, and the community composition of potential grazers of A. fundyense in plankton size fractions during blooms of this toxic dinoflagellate in the coastal Gulf of Maine and on Georges Bank in spring and summer of 2007, 2008, and 2010. PSP toxins and A. fundyense cells were found throughout the sampled water column (down to 50 m) in the 20-64 μm size fractions. While PSP toxins were widespread throughout all size classes of the zooplankton grazing community, the majority of the toxin was measured in the 20-64 μm size fraction. A. fundyense cellular toxin content estimated from field samples was significantly higher in the coastal Gulf of Maine than on Georges Bank. Most samples containing PSP toxins in the present study had diverse assemblages of grazers. However, some samples clearly suggested PSP toxin accumulation in several different grazer taxa including tintinnids, heterotrophic dinoflagellates of the genus Protoperidinium , barnacle nauplii, the harpacticoid copepod Microsetella norvegica , the calanoid copepods Calanus finmarchicus and Pseudocalanus spp., the marine cladoceran Evadne nordmanni , and hydroids of the genus Clytia . Thus, a diverse assemblage of zooplankton grazers accumulated PSP toxins through food-web interactions. This raises the question of whether PSP toxins pose a potential human health risk not only from nearshore bivalve shellfish, but also potentially from fish and other upper-level consumers in zooplankton-based pelagic food webs.

  4. Size characterization of airborne SiO2 nanoparticles with on-line and off-line measurement techniques: an interlaboratory comparison study

    NASA Astrophysics Data System (ADS)

    Motzkus, C.; Macé, T.; Gaie-Levrel, F.; Ducourtieux, S.; Delvallee, A.; Dirscherl, K.; Hodoroaba, V.-D.; Popov, I.; Popov, O.; Kuselman, I.; Takahata, K.; Ehara, K.; Ausset, P.; Maillé, M.; Michielsen, N.; Bondiguel, S.; Gensdarmes, F.; Morawska, L.; Johnson, G. R.; Faghihi, E. M.; Kim, C. S.; Kim, Y. H.; Chu, M. C.; Guardado, J. A.; Salas, A.; Capannelli, G.; Costa, C.; Bostrom, T.; Jämting, Å. K.; Lawn, M. A.; Adlem, L.; Vaslin-Reimann, S.

    2013-10-01

    Results of an interlaboratory comparison on size characterization of SiO2 airborne nanoparticles using on-line and off-line measurement techniques are discussed. This study was performed in the framework of Technical Working Area (TWA) 34—"Properties of Nanoparticle Populations" of the Versailles Project on Advanced Materials and Standards (VAMAS) in the project no. 3 "Techniques for characterizing size distribution of airborne nanoparticles". Two types of nano-aerosols, consisting of (1) one population of nanoparticles with a mean diameter between 30.3 and 39.0 nm and (2) two populations of non-agglomerated nanoparticles with mean diameters between, respectively, 36.2-46.6 nm and 80.2-89.8 nm, were generated for characterization measurements. Scanning mobility particle size spectrometers (SMPS) were used for on-line measurements of size distributions of the produced nano-aerosols. Transmission electron microscopy, scanning electron microscopy, and atomic force microscopy were used as off-line measurement techniques for nanoparticles characterization. Samples were deposited on appropriate supports such as grids, filters, and mica plates by electrostatic precipitation and a filtration technique using SMPS controlled generation upstream. The results of the main size distribution parameters (mean and mode diameters), obtained from several laboratories, were compared based on metrological approaches including metrological traceability, calibration, and evaluation of the measurement uncertainty. Internationally harmonized measurement procedures for airborne SiO2 nanoparticles characterization are proposed.

  5. SparseBeads data: benchmarking sparsity-regularized computed tomography

    NASA Astrophysics Data System (ADS)

    Jørgensen, Jakob S.; Coban, Sophia B.; Lionheart, William R. B.; McDonald, Samuel A.; Withers, Philip J.

    2017-12-01

    Sparsity regularization (SR) such as total variation (TV) minimization allows accurate image reconstruction in x-ray computed tomography (CT) from fewer projections than analytical methods. Exactly how few projections suffice and how this number may depend on the image remain poorly understood. Compressive sensing connects the critical number of projections to the image sparsity, but does not cover CT; however, empirical results suggest a similar connection. The present work establishes for real CT data a connection between gradient sparsity and the sufficient number of projections for accurate TV-regularized reconstruction. A collection of 48 x-ray CT datasets called SparseBeads was designed for benchmarking SR reconstruction algorithms. Beadpacks comprising glass beads of five different sizes as well as mixtures were scanned in a micro-CT scanner to provide structured datasets with variable image sparsity levels, number of projections and noise levels to allow the systematic assessment of parameters affecting performance of SR reconstruction algorithms. Using the SparseBeads data, TV-regularized reconstruction quality was assessed as a function of numbers of projections and gradient sparsity. The critical number of projections for satisfactory TV-regularized reconstruction increased almost linearly with the gradient sparsity. This establishes a quantitative guideline from which one may predict how few projections to acquire based on expected sample sparsity level as an aid in planning of dose- or time-critical experiments. The results are expected to hold for samples of similar characteristics, i.e. consisting of few, distinct phases with relatively simple structure. Such cases are plentiful in porous media, composite materials, foams, as well as non-destructive testing and metrology. For samples of other characteristics the proposed methodology may be used to investigate similar relations.
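
    The reported guideline ties the critical number of projections to gradient sparsity. A minimal way to measure gradient sparsity of an image might look like the sketch below; the noise threshold used to call a gradient pixel nonzero is an assumption.

```python
# Hypothetical measurement of gradient sparsity: the number (and fraction) of
# pixels with a non-negligible gradient magnitude. The threshold is an assumption.
import numpy as np

def gradient_sparsity(image, threshold=1e-3):
    gy, gx = np.gradient(image.astype(float))
    grad_mag = np.hypot(gx, gy)
    nonzero = np.count_nonzero(grad_mag > threshold)
    return nonzero, nonzero / image.size

# Toy example: a piecewise-constant "bead-like" image has few nonzero gradients.
img = np.zeros((256, 256))
yy, xx = np.mgrid[:256, :256]
img[(yy - 128) ** 2 + (xx - 128) ** 2 < 60 ** 2] = 1.0     # one disc ("bead")
count, fraction = gradient_sparsity(img)
print(count, f"{fraction:.3%}")     # only pixels near the disc boundary contribute
# Under the reported near-linear relation, a sparser gradient implies that fewer
# projections suffice for accurate TV-regularized reconstruction.
```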

  6. Galaxy interactions and strength of nuclear activity

    NASA Technical Reports Server (NTRS)

    Simkin, S. M.

    1990-01-01

    Analysis of data in the literature for differential velocities and projected separations of nearby Seyfert galaxies with possible companions shows a clear difference in projected separations between type 1's and type 2's. This kinematic difference between the two activity classes reinforces other independent evidence that their different nuclear characteristics are related to a non-nuclear physical distinction between the two classes. The differential velocities and projected separations of the galaxy pairs in this sample yield mean galaxy masses, sizes, and mass to light ratios which are consistent with those found by the statistical methods of Karachentsev. Although the galaxy sample discussed here is too small and too poorly defined to provide robust support for these conclusions, the results strongly suggest that nuclear activity in Seyfert galaxies is associated with gravitational perturbations from companion galaxies, and that there are physical distinctions between the host companions of Seyfert 1 and Seyfert 2 nuclei which may depend both on the environment and the structure of the host galaxy itself.

  7. Using an integral projection model to assess the effect of temperature on the growth of gilthead seabream Sparus aurata.

    PubMed

    Heather, F J; Childs, D Z; Darnaude, A M; Blanchard, J L

    2018-01-01

    Accurate information on the growth rates of fish is crucial for fisheries stock assessment and management. Empirical life history parameters (von Bertalanffy growth) are widely fitted to cross-sectional size-at-age data sampled from fish populations. This method often assumes that environmental factors affecting growth remain constant over time. The current study utilized longitudinal life history information contained in otoliths from 412 juveniles and adults of gilthead seabream, Sparus aurata, a commercially important species fished and farmed throughout the Mediterranean. Historical annual growth rates over 11 consecutive years (2002-2012) in the Gulf of Lions (NW Mediterranean) were reconstructed to investigate the effect of temperature variations on the annual growth of this fish. S. aurata growth was modelled linearly as the relationship between otolith size at year t and otolith size at the previous year t-1. The effect of temperature on growth was modelled with linear mixed effects models and a simplified linear model to be implemented in a cohort Integral Projection Model (cIPM). The cIPM was used to project S. aurata growth, year to year, under different temperature scenarios. Our results indicate that currently increasing summer temperatures have a negative effect on S. aurata annual growth in the Gulf of Lions. They suggest that global warming has already had, and will continue to have, a significant impact on S. aurata size-at-age, with important implications for age-structured stock assessments and reference points used in fisheries.
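
    A schematic cohort integral projection model built around the linear year-to-year growth relationship described above is sketched below. The regression coefficients, residual spread, and temperature effect are placeholders, not the values fitted in the study.

```python
# Hypothetical cohort IPM sketch: otolith size at year t is a linear function of
# size at year t-1 plus a temperature effect, and a cohort's size distribution is
# projected forward with a Gaussian growth kernel. All coefficients are placeholders.
import numpy as np

a, b = 0.35, 0.85       # intercept and slope of size(t) ~ size(t-1)    (placeholder)
c = -0.02               # effect of summer temperature anomaly on growth (placeholder)
sigma = 0.05            # residual standard deviation of the growth model
sizes = np.linspace(0.5, 4.0, 200)        # otolith size mesh (mm)
dz = sizes[1] - sizes[0]

def growth_kernel(temp_anomaly):
    """K[i, j]: probability of growing from sizes[j] to sizes[i] in one year."""
    mean_next = a + b * sizes + c * temp_anomaly
    diff = sizes[:, None] - mean_next[None, :]
    K = np.exp(-0.5 * (diff / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return K * dz

# Project a juvenile cohort forward five years under two temperature scenarios.
cohort0 = np.exp(-0.5 * ((sizes - 1.0) / 0.1) ** 2)       # initial size distribution
for label, anomaly in [("baseline", 0.0), ("warm summers", 2.0)]:
    cohort = cohort0.copy()
    for _ in range(5):
        cohort = growth_kernel(anomaly) @ cohort
    mean_size = (sizes * cohort).sum() / cohort.sum()
    print(f"{label}: mean otolith size after 5 years ~ {mean_size:.2f} mm")
```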

  8. Two sympatric new species of woodlizards (Hoplocercinae, Enyalioides) from Cordillera Azul National Park in northeastern Peru

    PubMed Central

    Venegas, Pablo J.; Torres-Carvajal, Omar; Duran, Vilma; de Queiroz, Kevin

    2013-01-01

    Abstract We report the discovery of two sympatric new species of Enyalioides from a montane rainforest of the Río Huallaga basin in northeastern Peru. Among other characters, the first new species is distinguishable from other Enyalioides by the combination of the following characters: strongly keeled ventral scales, more than 37 longitudinal rows of dorsals in a transverse line between the dorsolateral crests at midbody, low vertebral crest on the neck with vertebrals on neck similar in size to those between hind limbs, projecting scales on body or limbs absent, 96 mm maximum SVL in both sexes, and caudals increasing in size posteriorly within each autotomic segment. The second new species differs from other species of Enyalioides in having strongly keeled ventral scales, scales posterior to the superciliaries forming a longitudinal row of strongly projecting scales across the lateral edge of the skull roof in adults of both sexes, 31 or fewer longitudinal rows of strongly keeled dorsals in a transverse line between the dorsolateral crests at midbody, vertebrals on neck more than five times the size of vertebrals between hind limbs in adult males, projecting scales on body or limbs absent, and caudals increasing in size posteriorly within each autotomic segment. We also present an updated molecular phylogenetic tree of hoplocercines including new samples of Enyalioides rudolfarndti, Enyalioides rubrigularis, both species described in this paper, as well as an updated identification key for species of Hoplocercinae. PMID:23794824

  9. Fast optical transillumination tomography with large-size projection acquisition.

    PubMed

    Huang, Hsuan-Ming; Xia, Jinjun; Haidekker, Mark A

    2008-10-01

    Techniques such as optical coherence tomography and diffuse optical tomography have been shown to effectively image highly scattering samples such as tissue. An additional modality has received much less attention: Optical transillumination (OT) tomography, a modality that promises very high acquisition speed for volumetric scans. With the motivation to image tissue-engineered blood vessels for possible biomechanical testing, we have developed a fast OT device using a collimated, noncoherent beam with a large diameter together with a large-size CMOS camera that has the ability to acquire 3D projections in a single revolution of the sample. In addition, we used accelerated iterative reconstruction techniques to improve image reconstruction speed, while at the same time obtaining better image quality than through filtered backprojection. The device was tested using ink-filled polytetrafluorethylene tubes to determine geometric reconstruction accuracy and recovery of absorbance. Even in the presence of minor refractive index mismatch, the weighted error of the measured radius was <5% in all cases, and a high linear correlation of ink absorbance determined with a photospectrometer of R(2) = 0.99 was found, although the OT device systematically underestimated absorbance. Reconstruction time was improved from several hours (standard arithmetic reconstruction) to 90 s per slice with our optimized algorithm. Composed of only a light source, two spatial filters, a sample bath, and a CMOS camera, this device was extremely simple and cost-efficient to build.

  10. Measuring solids concentration in stormwater runoff: comparison of analytical methods.

    PubMed

    Clark, Shirley E; Siu, Christina Y S

    2008-01-15

    Stormwater suspended solids typically are quantified using one of two methods: aliquot/subsample analysis (total suspended solids [TSS]) or whole-sample analysis (suspended solids concentration [SSC]). Interproject comparisons are difficult because of inconsistencies in the methods and in their application. To address this concern, the suspended solids content has been measured using both methodologies in many current projects, but the question remains about how to compare these values with historical water-quality data where the analytical methodology is unknown. This research was undertaken to determine the effect of analytical methodology on the relationship between these two methods of determination of the suspended solids concentration, including the effect of aliquot selection/collection method and of particle size distribution (PSD). The results showed that SSC was best able to represent the known sample concentration and that the results were independent of the sample's PSD. Correlations between the results and the known sample concentration could be established for TSS samples, but they were highly dependent on the sample's PSD and on the aliquot collection technique. These results emphasize the need to report not only the analytical method but also the particle size information on the solids in stormwater runoff.

  11. Signal Sampling for Efficient Sparse Representation of Resting State FMRI Data

    PubMed Central

    Ge, Bao; Makkie, Milad; Wang, Jin; Zhao, Shijie; Jiang, Xi; Li, Xiang; Lv, Jinglei; Zhang, Shu; Zhang, Wei; Han, Junwei; Guo, Lei; Liu, Tianming

    2015-01-01

    As the size of brain imaging data such as fMRI grows explosively, it provides us with unprecedented and abundant information about the brain. How to reduce the size of fMRI data but not lose much information becomes a more and more pressing issue. Recent studies tried to deal with this by dictionary learning and sparse representation methods; however, their computational complexity is still high, which hampers the wider application of sparse representation methods to large-scale fMRI datasets. To effectively address this problem, this work proposes to represent resting state fMRI (rs-fMRI) signals of a whole brain via a statistical sampling based sparse representation. First, we sampled the whole brain’s signals via different sampling methods; then the sampled signals were aggregated into an input data matrix to learn a dictionary; finally, this dictionary was used to sparsely represent the whole brain’s signals and identify the resting state networks. Comparative experiments demonstrate that the proposed signal sampling framework can achieve a roughly ten-fold speed-up in reconstructing concurrent brain networks without losing much information. The experiments on the 1000 Functional Connectomes Project further demonstrate its effectiveness and superiority. PMID:26646924
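
    A bare-bones sketch of the described pipeline (sample a subset of voxel time series, learn a dictionary from the sample, then sparse-code all signals with it) is shown below. The matrix shapes, the random-sampling scheme, and the scikit-learn routines are illustrative choices rather than the paper's implementation.

```python
# Hypothetical sketch of the sampling-then-sparse-representation pipeline: learn a
# temporal dictionary from a sampled subset of voxel signals, then sparse-code all
# voxel signals with it. Shapes and parameters are illustrative only.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode

rng = np.random.default_rng(0)
n_voxels, n_timepoints = 5_000, 200
signals = rng.standard_normal((n_voxels, n_timepoints))   # stand-in for rs-fMRI data

# 1) Sample a fraction of voxels (here: simple random sampling).
sample_idx = rng.choice(n_voxels, size=n_voxels // 10, replace=False)
sampled = signals[sample_idx]

# 2) Learn the dictionary from the sampled signals only.
dico = MiniBatchDictionaryLearning(n_components=50, alpha=1.0, random_state=0)
dico.fit(sampled)

# 3) Sparse-code every voxel's time series with the learned dictionary; the
#    coefficient matrix is what gets mapped back to brain networks.
codes = sparse_encode(signals, dico.components_, algorithm="lasso_lars", alpha=1.0)
print(codes.shape)    # (n_voxels, n_components)
```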

  12. A Dictionary Learning Approach for Signal Sampling in Task-Based fMRI for Reduction of Big Data

    PubMed Central

    Ge, Bao; Li, Xiang; Jiang, Xi; Sun, Yifei; Liu, Tianming

    2018-01-01

    The exponential growth of fMRI big data offers researchers an unprecedented opportunity to explore functional brain networks. However, this opportunity has not been fully explored yet due to the lack of effective and efficient tools for handling such fMRI big data. One major challenge is that computing capabilities still lag behind the growth of large-scale fMRI databases, e.g., it takes many days to perform dictionary learning and sparse coding of whole-brain fMRI data for an fMRI database of average size. Therefore, how to reduce the data size but without losing important information becomes a more and more pressing issue. To address this problem, we propose a signal sampling approach for significant fMRI data reduction before performing structurally-guided dictionary learning and sparse coding of whole brain's fMRI data. We compared the proposed structurally guided sampling method with no sampling, random sampling and uniform sampling schemes, and experiments on the Human Connectome Project (HCP) task fMRI data demonstrated that the proposed method can achieve more than 15 times speed-up without sacrificing the accuracy in identifying task-evoked functional brain networks. PMID:29706880

  13. A Dictionary Learning Approach for Signal Sampling in Task-Based fMRI for Reduction of Big Data.

    PubMed

    Ge, Bao; Li, Xiang; Jiang, Xi; Sun, Yifei; Liu, Tianming

    2018-01-01

    The exponential growth of fMRI big data offers researchers an unprecedented opportunity to explore functional brain networks. However, this opportunity has not been fully explored yet due to the lack of effective and efficient tools for handling such fMRI big data. One major challenge is that computing capabilities still lag behind the growth of large-scale fMRI databases, e.g., it takes many days to perform dictionary learning and sparse coding of whole-brain fMRI data for an fMRI database of average size. Therefore, how to reduce the data size but without losing important information becomes a more and more pressing issue. To address this problem, we propose a signal sampling approach for significant fMRI data reduction before performing structurally-guided dictionary learning and sparse coding of whole brain's fMRI data. We compared the proposed structurally guided sampling method with no sampling, random sampling and uniform sampling schemes, and experiments on the Human Connectome Project (HCP) task fMRI data demonstrated that the proposed method can achieve more than 15 times speed-up without sacrificing the accuracy in identifying task-evoked functional brain networks.

  14. The projection of a test genome onto a reference population and applications to humans and archaic hominins.

    PubMed

    Yang, Melinda A; Harris, Kelley; Slatkin, Montgomery

    2014-12-01

    We introduce a method for comparing a test genome with numerous genomes from a reference population. Sites in the test genome are given a weight, w, that depends on the allele frequency, x, in the reference population. The projection of the test genome onto the reference population is the average weight for each x, [Formula: see text]. The weight is assigned in such a way that, if the test genome is a random sample from the reference population, then [Formula: see text]. Using analytic theory, numerical analysis, and simulations, we show how the projection depends on the time of population splitting, the history of admixture, and changes in past population size. The projection is sensitive to small amounts of past admixture, the direction of admixture, and admixture from a population not sampled (a ghost population). We compute the projections of several human and two archaic genomes onto three reference populations from the 1000 Genomes Project (Europeans, Han Chinese, and Yoruba) and discuss the consistency of our analysis with previously published results for European and Yoruba demographic history. Including higher amounts of admixture between Europeans and Yoruba soon after their separation and low amounts of admixture more recently can resolve discrepancies between the projections and demographic inferences from some previous studies. Copyright © 2014 by the Genetics Society of America.
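
    The abstract does not spell out the weighting function. The sketch below uses one simple choice that satisfies the stated property that a random draw from the reference population projects to 1 at every frequency: within each reference-frequency bin x, the fraction of sites at which a haploid test genome carries the derived allele, divided by x. The actual weights and the diploid handling in the paper may differ.

```python
# Hypothetical projection sketch: within each reference allele-frequency bin x,
# average a per-site weight over sites in the bin. The weight used here (carrier
# indicator divided by x, for a haploid test genome) is ONE choice that makes the
# expected projection equal 1 for a random member of the reference population; it
# is not necessarily the weighting used in the paper.
import numpy as np

def projection(test_alleles, ref_freqs, n_bins=20):
    """test_alleles: 0/1 per site; ref_freqs: derived-allele frequency per site."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    centers, values = [], []
    for lo, hi in zip(bins[:-1], bins[1:]):
        in_bin = (ref_freqs > lo) & (ref_freqs <= hi)
        if not in_bin.any():
            continue
        x = ref_freqs[in_bin].mean()
        centers.append(x)
        values.append(test_alleles[in_bin].mean() / x)   # average weight in the bin
    return np.array(centers), np.array(values)

# Sanity check: a genome drawn at random from the reference projects to ~1 everywhere.
rng = np.random.default_rng(1)
freqs = rng.uniform(0.02, 0.98, size=200_000)
random_member = (rng.random(freqs.size) < freqs).astype(int)  # carries allele w.p. x
x_vals, proj = projection(random_member, freqs)
print(np.round(proj, 2))     # all values close to 1
```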

  15. Upper Mississippi River System Environmental Management Program, Definite Project Report (R-6F) with Integrated Environmental Assessment (R-6F), Peoria Lake Enhancement, Peoria Pool, Illinois Waterway, River Miles 178.5 to 181, State of Illinois. Technical Appendices

    DTIC Science & Technology

    1990-06-01

    pioneer bioengineering work has been conducted by Hollis H. Allen at WES in Corps reservoirs and on Corps projects on coastal shorelines, and by...several test locations to determine stability, growth of plants, effectiveness as a temporary breakwater, longevity, and ability to withstand ice and...Sampling to Characterize Size Demography and Density of Freshwater Mussel Communities." Bulletin of the American Malacological Union, Inc, 6: 49-54.

  16. PSP toxin levels and plankton community composition and abundance in size-fractionated vertical profiles during spring/summer blooms of the toxic dinoflagellate Alexandrium fundyense in the Gulf of Maine and on Georges Bank, 2007, 2008, and 2010: 1. Toxin levels.

    PubMed

    Deeds, Jonathan R; Petitpas, Christian M; Shue, Vangie; White, Kevin D; Keafer, Bruce A; McGillicuddy, Dennis J; Milligan, Peter J; Anderson, Donald M; Turner, Jefferson T

    2014-05-01

    As part of the NOAA ECOHAB funded Gulf of Maine Toxicity (GOMTOX) project, we determined Alexandrium fundyense abundance, paralytic shellfish poisoning (PSP) toxin composition, and concentration in quantitatively-sampled size-fractionated (20-64, 64-100, 100-200, 200-500, and > 500 μm) particulate water samples, and the community composition of potential grazers of A. fundyense in these size fractions, at multiple depths (typically 1, 10, 20 m, and near-bottom) during 10 large-scale sampling cruises during the A. fundyense bloom season (May-August) in the coastal Gulf of Maine and on Georges Bank in 2007, 2008, and 2010. Our findings were as follows: (1) when all sampling stations and all depths were summed by year, the majority (94% ± 4%) of total PSP toxicity was contained in the 20-64 μm size fraction; (2) when further analyzed by depth, the 20-64 μm size fraction was the primary source of toxin for 97% of the stations and depths sampled over three years; (3) overall PSP toxin profiles were fairly consistent during the three seasons of sampling with gonyautoxins (1, 2, 3, and 4) dominating (90.7% ± 5.5%), followed by the carbamate toxins saxitoxin (STX) and neosaxitoxin (NEO) (7.7% ± 4.5%), followed by n-sulfocarbamoyl toxins (C1 and 2, GTX5) (1.3% ± 0.6%), followed by all decarbamoyl toxins (dcSTX, dcNEO, dcGTX2&3) (< 1%), although differences were noted between PSP toxin compositions for nearshore coastal Gulf of Maine sampling stations compared to offshore Georges Bank sampling stations for 2 out of 3 years; (4) surface cell counts of A. fundyense were a fairly reliable predictor of the presence of toxins throughout the water column; and (5) nearshore surface cell counts of A. fundyense in the coastal Gulf of Maine were not a reliable predictor of A. fundyense populations offshore on Georges Bank for 2 out of the 3 years sampled.

  17. PSP toxin levels and plankton community composition and abundance in size-fractionated vertical profiles during spring/summer blooms of the toxic dinoflagellate Alexandrium fundyense in the Gulf of Maine and on Georges Bank, 2007, 2008, and 2010: 1. Toxin levels

    PubMed Central

    Deeds, Jonathan R.; Petitpas, Christian M.; Shue, Vangie; White, Kevin D.; Keafer, Bruce A.; McGillicuddy, Dennis J.; Milligan, Peter J.; Anderson, Donald M.; Turner, Jefferson T.

    2014-01-01

    As part of the NOAA ECOHAB funded Gulf of Maine Toxicity (GOMTOX) project, we determined Alexandrium fundyense abundance, paralytic shellfish poisoning (PSP) toxin composition, and concentration in quantitatively-sampled size-fractionated (20–64, 64–100, 100–200, 200–500, and > 500 μm) particulate water samples, and the community composition of potential grazers of A. fundyense in these size fractions, at multiple depths (typically 1, 10, 20 m, and near-bottom) during 10 large-scale sampling cruises during the A. fundyense bloom season (May–August) in the coastal Gulf of Maine and on Georges Bank in 2007, 2008, and 2010. Our findings were as follows: (1) when all sampling stations and all depths were summed by year, the majority (94% ± 4%) of total PSP toxicity was contained in the 20–64 μm size fraction; (2) when further analyzed by depth, the 20–64 μm size fraction was the primary source of toxin for 97% of the stations and depths sampled over three years; (3) overall PSP toxin profiles were fairly consistent during the three seasons of sampling with gonyautoxins (1, 2, 3, and 4) dominating (90.7% ± 5.5%), followed by the carbamate toxins saxitoxin (STX) and neosaxitoxin (NEO) (7.7% ± 4.5%), followed by n-sulfocarbamoyl toxins (C1 and 2, GTX5) (1.3% ± 0.6%), followed by all decarbamoyl toxins (dcSTX, dcNEO, dcGTX2&3) (< 1%), although differences were noted between PSP toxin compositions for nearshore coastal Gulf of Maine sampling stations compared to offshore Georges Bank sampling stations for 2 out of 3 years; (4) surface cell counts of A. fundyense were a fairly reliable predictor of the presence of toxins throughout the water column; and (5) nearshore surface cell counts of A. fundyense in the coastal Gulf of Maine were not a reliable predictor of A. fundyense populations offshore on Georges Bank for 2 out of the 3 years sampled. PMID:25076816

  18. Notes for Brazil sampling frame evaluation trip

    NASA Technical Reports Server (NTRS)

    Horvath, R. (Principal Investigator); Hicks, D. R. (Compiler)

    1981-01-01

    Field notes describing a trip conducted in Brazil are presented. This trip was conducted for the purpose of evaluating a sample frame developed using LANDSAT full frame images by the USDA Economic and Statistics Service for the eventual purpose of cropland production estimation with LANDSAT by the Foreign Commodity Production Forecasting Project of the AgRISTARS program. Six areas were analyzed on the basis of land use, crop land in corn and soybean, field size and soil type. The analysis indicated generally successful use of LANDSAT images for purposes of remote large area land use stratification.

  19. VizieR Online Data Catalog: Galaxies and QSOs FIR size and surface brightness (Lutz+, 2016)

    NASA Astrophysics Data System (ADS)

    Lutz, D.; Berta, S.; Contursi, A.; Forster Schreiber, N. M.; Genzel, R.; Gracia-Carpio, J.; Herrera-Camus, R.; Netzer, H.; Sturm, E.; Tacconi, L. J.; Tadaki, K.; Veilleux, S.

    2016-08-01

    We use 70, 100, and 160 µm images from scan maps obtained with PACS on board Herschel, collecting archival data from various projects. In order to cover a wide range of galaxy properties, we first obtain an IR-selected local sample ranging from normal galaxies up to (ultra)luminous infrared galaxies. For that purpose, we searched the Herschel archive for all cz >= 2000 km/s objects from the IRAS Revised Bright Galaxy Sample (RBGS, Sanders et al., 2003, Cat. J/AJ/126/1607). (1 data file).

  20. Characterization of sediments from the Gulf of Mexico and Atlantic shorelines, Texas to Florida

    USGS Publications Warehouse

    Lisle, John T.; Comer, Norris N.

    2011-01-01

    In response to the Deepwater Horizon oil spill, sediment samples that were projected to have a high probability of being impacted by the oil were collected from shoreline zones of Texas, Louisiana, Mississippi, Alabama, and Florida. Sixty-one sites were sampled and analyzed for hydraulic conductivity, porosity, and grain-size distribution. The objective of this effort was to provide a set of baseline data on sediment characteristics known to directly influence (1) the penetration of oil into coastal sediments and (2) the efficacy of chemical and (or) bioremediation.

  1. Cranial base topology and basic trends in the facial evolution of Homo.

    PubMed

    Bastir, Markus; Rosas, Antonio

    2016-02-01

    Facial prognathism and projection are important characteristics in human evolution but their three-dimensional (3D) architectonic relationships to basicranial morphology are not clear. We used geometric morphometrics and measured 51 3D-landmarks in a comparative sample of modern humans (N = 78) and fossil Pleistocene hominins (N = 10) to investigate the spatial features of covariation between basicranial and facial elements. The study reveals complex morphological integration patterns in craniofacial evolution of Middle and Late Pleistocene hominins. A downwards-orientated cranial base correlates with alveolar maxillary prognathism, relatively larger faces, and relatively larger distances between the anterior cranial base and the frontal bone (projection). This upper facial projection correlates with increased overall relative size of the maxillary alveolar process. Vertical facial height is associated with tall nasal cavities and is accommodated by an elevated anterior cranial base, possibly because of relations between the cribriform and the nasal cavity in relation to body size and energetics. Variation in upper- and mid-facial projection can further be produced by basicranial topology in which the midline base and nasal cavity are shifted anteriorly relative to retracted lateral parts of the base and the face. The zygomatics and the middle cranial fossae act together as bilateral vertical systems that are either projected or retracted relative to the midline facial elements, causing either midfacial flatness or midfacial projection correspondingly. We propose that facial flatness and facial projection reflect classical principles of craniofacial growth counterparts, while facial orientation relative to the basicranium as well as facial proportions reflect the complex interplay of head-body integration in the light of encephalization and body size decrease in Middle to Late Pleistocene hominin evolution. Developmental and evolutionary patterns of integration may only partially overlap morphologically, and traditional concepts taken from research on two-dimensional (2D) lateral X-rays and sections have led to oversimplified and overly mechanistic models of basicranial evolution. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. Particle size analysis of amalgam powder and handpiece generated specimens.

    PubMed

    Drummond, J L; Hathorn, R M; Cailas, M D; Karuhn, R

    2001-07-01

    The increasing interest in the elimination of amalgam particles from the dental waste (DW) stream requires efficient devices to remove these particles. The major objective of this project was to perform a comparative evaluation of five basic methods of particle size analysis in terms of the instrument's ability to quantify the size distribution of the various components within the DW stream. The analytical techniques chosen were image analysis via scanning electron microscopy, standard wire mesh sieves, X-ray sedigraphy, laser diffraction, and electrozone analysis. The DW particle stream components were represented by amalgam powders and handpiece/diamond bur generated specimens of enamel, dentin, whole tooth, and condensed amalgam. Each analytical method quantified the examined DW particle stream components. However, X-ray sedigraphy, electrozone, and laser diffraction particle analyses provided similar results for determining particle distributions of DW samples. These three methods were able to more clearly quantify the properties of the examined powder and condensed amalgam samples. Furthermore, these methods indicated that a significant fraction of the DW stream contains particles less than 20 µm. The findings of this study indicated that the electrozone method is likely to be the most effective technique for quantifying the particle size distribution in the DW particle stream. This method required a relatively small volume of sample, was not affected by density, shape factors or optical properties, and measured a sufficient number of particles to provide a reliable representation of the particle size distribution curve.

  3. An empirical identification and categorisation of training best practices for ERP implementation projects

    NASA Astrophysics Data System (ADS)

    Esteves, Jose Manuel

    2014-11-01

    Although training is one of the most cited critical success factors in Enterprise Resource Planning (ERP) systems implementations, few empirical studies have attempted to examine the characteristics of management of the training process within ERP implementation projects. Based on the data gathered from a sample of 158 respondents across four stakeholder groups involved in ERP implementation projects, and using a mixed method design, we have assembled a derived set of training best practices. Results suggest that the categorised list of ERP training best practices can be used to better understand training activities in ERP implementation projects. Furthermore, the results reveal that the company size and location have an impact on the relevance of training best practices. This empirical study also highlights the need to investigate the role of informal workplace trainers in ERP training activities.

  4. Grain Growth and Precipitation Behavior of Iridium Alloy DOP-26 During Long Term Aging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pierce, Dean T.; Muralidharan, Govindarajan; Fox, Ethan E.

    The influence of long term aging on grain growth and precipitate sizes and spatial distribution in iridium alloy DOP-26 was studied. Samples of DOP-26 were fabricated using the new process, recrystallized for 1 hour (h) at 1375 °C, then aged at either 1300, 1400, or 1500 °C for times ranging from 50 to 10,000 h. Grain size measurements (vertical and horizontal mean linear intercept and horizontal and vertical projection) and analyses of iridium-thorium precipitates (size and spacing) were made on the longitudinal, transverse, and rolling surfaces of the as-recrystallized and aged specimens, from which the two-dimensional spatial distribution and mean sizes of the precipitates were obtained. The results obtained from this study are intended to provide input to grain growth models.

  5. Impacts on seafloor geology of drilling disturbance in shallow waters.

    PubMed

    Corrêa, Iran C S; Toldo, Elírio E; Toledo, Felipe A L

    2010-08-01

    This paper describes the effects of drilling disturbance on the seafloor of the upper continental slope of the Campos Basin, Brazil, as a result of the project Environmental Monitoring of Offshore Drilling for Petroleum Exploration--MAPEM. Field sampling was carried out surrounding wells, operated by the company PETROBRAS, to compare sediment properties of the seafloor, including grain-size distribution, total organic carbon, and clay mineral composition, prior to drilling with samples obtained 3 and 22 months after drilling. The sampling grid used had 74 stations, 68 of which were located along 7 radials from the well up to a distance of 500 m. The other 6 stations were used as reference, and were located 2,500 m from the well. The results show no significant sedimentological variation in the area affected by drilling activity. The observed sedimentological changes include a fining of grain size, increase in total organic carbon, an increase in gibbsite, illite, and smectite, and a decrease in kaolinite after drilling took place.

  6. Summary and Synthesis: How to Present a Research Proposal.

    PubMed

    Setia, Maninder Singh; Panda, Saumya

    2017-01-01

    This concluding module attempts to synthesize the key learning points discussed during the course of the previous ten sets of modules on methodology and biostatistics. The objective of this module is to discuss how to present a model research proposal, based on whatever was discussed in the preceding modules. The lynchpin of a research proposal is the protocol, and the key component of a protocol is the study design. However, one must not neglect the other areas, be it the project summary through which one catches the eyes of the reviewer of the proposal, or the background and the literature review, or the aims and objectives of the study. Two critical areas in the "methods" section that cannot be emphasized more are the sampling strategy and a formal estimation of sample size. Without a legitimate sample size, none of the conclusions based on the statistical analysis would be valid. Finally, the ethical parameters of the study should be well understood by the researchers, and that should get reflected in the proposal.
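
    As a concrete illustration of the formal sample size estimation mentioned above, the standard formula for comparing two means with a two-sided test, n per group = 2(z_{1-α/2} + z_{1-β})² σ²/δ², can be computed as in the sketch below; the effect size, significance level, and power in the example are arbitrary placeholders.

```python
# Illustrative sample-size calculation for comparing two means (two-sided test):
#   n per group = 2 * (z_{1-alpha/2} + z_{1-beta})^2 * sigma^2 / delta^2
# The numbers below are placeholders, not recommendations for any real study.
import math
from scipy.stats import norm

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2)

# e.g. detecting a 5-unit mean difference with SD 10 at 5% significance, 80% power:
print(n_per_group(delta=5, sigma=10))   # about 63 participants per group
```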

  7. Summary and Synthesis: How to Present a Research Proposal

    PubMed Central

    Setia, Maninder Singh; Panda, Saumya

    2017-01-01

    This concluding module attempts to synthesize the key learning points discussed during the course of the previous ten sets of modules on methodology and biostatistics. The objective of this module is to discuss how to present a model research proposal, based on whatever was discussed in the preceding modules. The lynchpin of a research proposal is the protocol, and the key component of a protocol is the study design. However, one must not neglect the other areas, be it the project summary through which one catches the eyes of the reviewer of the proposal, or the background and the literature review, or the aims and objectives of the study. Two critical areas in the “methods” section that cannot be emphasized more are the sampling strategy and a formal estimation of sample size. Without a legitimate sample size, none of the conclusions based on the statistical analysis would be valid. Finally, the ethical parameters of the study should be well understood by the researchers, and that should get reflected in the proposal. PMID:28979004

  8. 78 FR 28146 - Fisheries of the Caribbean, Gulf of Mexico, and South Atlantic; Shrimp Fishery of the Gulf of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-14

    ... major period of emigration of these shrimp from Texas estuaries to the Gulf of Mexico (Gulf) so the... to project when brown shrimp in Texas bays and estuaries will reach a mean size of 3.54 in (90 mm), and begin strong emigrations out of the bays and estuaries during maximum duration ebb tides. Sampling...

  9. Towards a plot size for Canada's national forest inventory

    Treesearch

    Steen Magnussen; P. Boudewyn; M. Gillis

    2000-01-01

    A proposed national forest inventory for Canada is to report on the state and trends of resource attributes gathered mainly from aerial photos of sample plots located on a national grid. A pilot project in New Brunswick indicates it takes about 2,800 square 400-ha plots (10 percent inventoried) to achieve a relative standard error of 10 percent or less on 14 out of 17...

  10. Directions for new developments on statistical design and analysis of small population group trials.

    PubMed

    Hilgers, Ralf-Dieter; Roes, Kit; Stallard, Nigel

    2016-06-14

    Most statistical design and analysis methods for clinical trials have been developed and evaluated where at least several hundreds of patients could be recruited. These methods may not be suitable to evaluate therapies if the sample size is unavoidably small, a situation usually termed small populations. The specific sample size cut-off, where the standard methods fail, needs to be investigated. In this paper, the authors present their view on new developments for design and analysis of clinical trials in small population groups, where conventional statistical methods may be inappropriate, e.g., because of lack of power or poor adherence to asymptotic approximations due to sample size restrictions. Following the EMA/CHMP guideline on clinical trials in small populations, we consider directions for new developments in the area of statistical methodology for design and analysis of small population clinical trials. We relate the findings to the research activities of three projects, Asterix, IDeAl, and InSPiRe, which have received funding since 2013 within the FP7-HEALTH-2013-INNOVATION-1 framework of the EU. As not all aspects of the wide research area of small population clinical trials can be addressed, we focus on areas where we feel advances are needed and feasible. The general framework of the EMA/CHMP guideline on small population clinical trials stimulates a number of research areas. These serve as the basis for the three projects, Asterix, IDeAl, and InSPiRe, which use various approaches to develop new statistical methodology for design and analysis of small population clinical trials. Small population clinical trials refer to trials with a limited number of patients. Small populations may result from rare diseases or specific subtypes of more common diseases. New statistical methodology needs to be tailored to these specific situations. The main results from the three projects will constitute a useful toolbox for improved design and analysis of small population clinical trials. They address various challenges presented by the EMA/CHMP guideline as well as recent discussions about extrapolation. There is a need for involvement of the patients' perspective in the planning and conduct of small population clinical trials for a successful therapy evaluation.

  11. A pilot evaluation of the Arts for Life project in end-of-life care.

    PubMed

    Gallagher, Ann

    To explore and evaluate the experience of the 'Arts for Life' project among patients or residents with terminal illness in nursing homes and the community, their relatives and practitioners. Semi-structured qualitative interviews were conducted with five patients/residents and two relatives. Five practitioners including a music therapist and a digital artist were involved in the study. The evaluation discusses the perceived benefits and challenges of the Arts for Life project from the perspectives of a small sample of patients/residents, relatives, nurses and arts facilitators. The findings suggested that the Arts for Life project provided opportunities for participants to express their creativity and individuality. A range of benefits was identified. Participants described their involvement in the project as providing a means of escapism, relief from pain and anxiety, and as helping them to engage with loss. The findings should be interpreted with caution as a result of the small sample size. The evaluation suggests that people with different diagnoses benefit in different ways from participation in the arts. An understanding of the role of the arts enables nurses to appreciate different responses to end-of-life care. Larger scale research is required with focused evaluation objectives to explore further the issues raised.

  12. Numerical sedimentation particle-size analysis using the Discrete Element Method

    NASA Astrophysics Data System (ADS)

    Bravo, R.; Pérez-Aparicio, J. L.; Gómez-Hernández, J. J.

    2015-12-01

    Sedimentation tests are widely used to determine the particle size distribution of a granular sample. In this work, the Discrete Element Method interacts with the simulation of flow using the well-known one-way-coupling method, a computationally affordable approach for the time-consuming numerical simulation of the hydrometer, buoyancy and pipette sedimentation tests. These tests are used in the laboratory to determine the particle-size distribution of fine-grained aggregates. Five samples with different particle-size distributions are modeled by about six million rigid spheres projected onto two dimensions, with diameters ranging from 2.5 × 10⁻⁶ m to 70 × 10⁻⁶ m, forming a water suspension in a sedimentation cylinder. DEM simulates the particles' movement considering laminar flow interactions of buoyant, drag and lubrication forces. The simulation provides the temporal/spatial distributions of densities and concentrations of the suspension. The numerical simulations cannot replace the laboratory tests since they need the final granulometry as initial data, but, as the results show, these simulations can identify the strong and weak points of each method, eventually recommend useful variations, and draw conclusions on their validity, aspects very difficult to achieve in the laboratory.
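
    As a hedged aside (not the authors' DEM code), the physics that hydrometer and pipette tests rely on is the link between particle diameter and settling speed; the sketch below evaluates the Stokes terminal settling velocity for the diameter range quoted in the abstract, with assumed particle and fluid properties.

```python
# Stokes terminal settling velocity of a sphere in laminar flow:
# v = (rho_p - rho_f) * g * d^2 / (18 * mu)
def stokes_velocity(d, rho_p=2650.0, rho_f=1000.0, mu=1.0e-3, g=9.81):
    """Settling velocity (m/s) for diameter d (m); densities in kg/m^3, viscosity in Pa*s."""
    return (rho_p - rho_f) * g * d**2 / (18.0 * mu)

for d in (2.5e-6, 70e-6):   # diameter range quoted in the abstract
    print(f"d = {d:.1e} m  ->  v = {stokes_velocity(d):.2e} m/s")
```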

  13. Small Class Size and Its Effects.

    ERIC Educational Resources Information Center

    Biddle, Bruce J.; Berliner, David C.

    2002-01-01

    Describes several prominent early grades small-class-size projects and their effects on student achievement: Indiana's Project Prime Time, Tennessee's Project STAR (Student/Teacher Achievement Ratio), Wisconsin's SAGE (Student Achievement Guarantee in Education) Program, and the California class-size-reduction program. Lists several conclusions,…

  14. Effect of milling atmosphere on structural and magnetic properties of Ni-Zn ferrite nanocrystalline

    NASA Astrophysics Data System (ADS)

    Hajalilou, Abdollah; Hashim, Mansor; Ebrahimi-Kahrizsangi, Reza; Masoudi Mohamad, Taghi

    2015-04-01

    Powder mixtures of Zn, NiO, and Fe2O3 are mechanically alloyed by high-energy ball milling to produce Ni-Zn ferrite with a nominal composition of Ni0.36Zn0.64Fe2O4. The effects of milling atmospheres (argon, air, and oxygen), milling time (from 0 to 30 h) and heat treatment are studied. The products are characterized using x-ray diffractometry, field emission scanning electron microscopy equipped with energy-dispersive x-ray spectroscopy, and transmission electron microscopy. The results indicate that the desired ferrite is not produced during milling in the samples milled under either air or oxygen atmosphere. In the samples milled under argon, however, Zn/NiO/Fe2O3 reacts via a solid-state diffusion mode to produce nanocrystalline Ni-Zn ferrite with a crystallite size of 8 nm after 30 h of milling. The average crystallite sizes decrease to 9 nm and 10 nm in the samples milled for 30 h under air and oxygen atmospheres, respectively. Annealing the 30-h-milled samples at 600 °C for 2 h leads to the formation of a single phase of Ni-Zn ferrite, an increase of crystallite size, and a reduction of internal lattice strain. Finally, the effects of the milling atmosphere and heating temperature on the magnetic properties of the 30-h-milled samples are investigated. Project supported by the University Putra Malaysia Graduate Research Fellowship Section.
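
    Crystallite sizes of the kind quoted above are commonly estimated from XRD peak broadening with the Scherrer equation; the paper does not spell out its exact procedure, so the sketch below is a generic illustration with an assumed shape factor, Cu Kα wavelength and peak values.

```python
# Scherrer estimate of crystallite size: D = K * lambda / (beta * cos(theta))
import math

def scherrer_size(beta_deg, two_theta_deg, wavelength_nm=0.15406, K=0.9):
    """Crystallite size (nm) from the FWHM beta of a peak at the given 2-theta (degrees)."""
    beta = math.radians(beta_deg)                # FWHM converted to radians
    theta = math.radians(two_theta_deg / 2.0)    # Bragg angle
    return K * wavelength_nm / (beta * math.cos(theta))

# A broad ~1 degree peak near 2-theta = 35 degrees gives a size of roughly 8 nm
print(f"{scherrer_size(beta_deg=1.0, two_theta_deg=35.0):.1f} nm")
```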

  15. Local reconstruction in computed tomography of diffraction enhanced imaging

    NASA Astrophysics Data System (ADS)

    Huang, Zhi-Feng; Zhang, Li; Kang, Ke-Jun; Chen, Zhi-Qiang; Zhu, Pei-Ping; Yuan, Qing-Xi; Huang, Wan-Xia

    2007-07-01

    Computed tomography of diffraction enhanced imaging (DEI-CT) based on a synchrotron radiation source has extremely high sensitivity to weakly absorbing low-Z samples in medical and biological fields. The authors propose a modified backprojection filtration (BPF)-type algorithm based on PI-line segments to reconstruct regions of interest from truncated refraction-angle projection data in DEI-CT. The distribution of refractive index decrement in the sample can be directly estimated from its reconstruction images, which has been proved by experiments at the Beijing Synchrotron Radiation Facility. The algorithm paves the way for local reconstruction of large-size samples by the use of DEI-CT with a small field of view based on a synchrotron radiation source.

  16. Fringe projection application for surface variation analysis on helical shaped silicon breast

    NASA Astrophysics Data System (ADS)

    Vairavan, R.; Ong, N. R.; Sauli, Z.; Shahimin, M. M.; Kirtsaeng, S.; Sakuntasathien, S.; Alcain, J. B.; Paitong, P.; Retnasamy, V.

    2017-09-01

    Breast carcinoma is rated as the second leading cause of cancer-associated death among adult females. Detection of the disease at an early stage would enhance the chance of survival. Established detection methods such as mammography, ultrasound and MRI are classified as non-invasive breast cancer detection modalities; however, they are not entirely non-contact, as physical contact with the breast still occurs. Thus the need for a completely non-invasive and non-contact method is evident. Therefore, in this work, a novel application of digital fringe projection for early detection of breast cancer based on breast surface analysis is reported. A phase-shift fringe projection technique and a pixel tracing method were utilized to analyze the breast surface change due to the presence of a breast lump. Results have shown that digital fringe projection is capable of detecting the existence of a 1 cm-sized lump within the breast sample.
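
    A hedged sketch of the standard four-step phase-shifting calculation that underlies this kind of fringe projection analysis follows; the authors' full pipeline (pixel tracing, unwrapping, surface reconstruction) is not reproduced, and the synthetic fringe images below are purely illustrative.

```python
# Wrapped phase from four fringe images shifted by 90 degrees each:
# phi = atan2(I4 - I2, I1 - I3)
import numpy as np

def four_step_phase(I1, I2, I3, I4):
    """Wrapped phase map recovered from four phase-shifted fringe images."""
    return np.arctan2(I4 - I2, I1 - I3)

# Synthetic check: a known phase ramp is recovered exactly from four shifted patterns
phi_true = np.linspace(0.0, np.pi / 2, 100)
imgs = [100 + 50 * np.cos(phi_true + k * np.pi / 2) for k in range(4)]
print(np.allclose(four_step_phase(*imgs), phi_true))   # True
```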

  17. Ground truth crop proportion summaries for US segments, 1976-1979

    NASA Technical Reports Server (NTRS)

    Horvath, R. (Principal Investigator); Rice, D.; Wessling, T.

    1981-01-01

    The original ground truth data were collected, digitized, and registered to LANDSAT data for use in the LACIE and AgRISTARS projects. The numerous ground truth categories were consolidated into fewer classes of crops or crop conditions, and occurrences of these classes were counted for each segment. Tables are presented in which the individual entries are the percentage of total segment area assigned to a given class. The ground truth summaries were prepared from a 20% sample of the scene. An analysis indicates that this sample size provides sufficient accuracy for use of the data in initial segment screening.

  18. NASA's Atmospheric Effects of Aviation Project

    NASA Technical Reports Server (NTRS)

    Cofer, W. Randy, III; Anderson, Bruce E.; Connors, V. S.; Wey, C. C.; Sanders, T.; Winstead, E. L.; Pui, C.; Chen, Da-ren; Hagen, D. E.; Whitefield, P.

    2001-01-01

    During August 1-14, 1999, NASA's Atmospheric Effects of Aviation Project (AEAP) convened a workshop at the NASA Langley Research Center to try to determine why such a wide variation in aerosol emissions indices and chemical and physical properties has been reported by various independent AEAP-supported research teams trying to characterize the exhaust emissions of subsonic commercial aircraft. This workshop was divided into two phases, a laboratory phase and a field phase. The laboratory phase consisted of supplying known particle number densities (concentrations) and particle size distributions to a common manifold for the participating research teams to sample and analyze. The field phase was conducted on an aircraft run-up pad. Participating teams actually sampled aircraft exhaust generated by a Langley T-38 Talon aircraft at 1 and 9 m behind the engine at engine powers ranging from 48 to 100 percent. Results from the laboratory phase of this intercomparison workshop are reported in this paper.

  19. Recruitment and retention of participants in randomised controlled trials: a review of trials funded and published by the United Kingdom Health Technology Assessment Programme.

    PubMed

    Walters, Stephen J; Bonacho Dos Anjos Henriques-Cadby, Inês; Bortolami, Oscar; Flight, Laura; Hind, Daniel; Jacques, Richard M; Knox, Christopher; Nadin, Ben; Rothwell, Joanne; Surtees, Michael; Julious, Steven A

    2017-03-20

    Substantial amounts of public funds are invested in health research worldwide. Publicly funded randomised controlled trials (RCTs) often recruit participants at a slower than anticipated rate. Many trials fail to reach their planned sample size within the envisaged trial timescale and trial funding envelope. To review the consent, recruitment and retention rates for single and multicentre randomised controlled trials funded and published by the UK's National Institute for Health Research (NIHR) Health Technology Assessment (HTA) Programme. HTA reports of individually randomised single or multicentre RCTs published from the start of 2004 to the end of April 2016 were reviewed. Information relating to the trial characteristics, sample size, recruitment and retention was extracted by two independent reviewers. Target sample size and whether it was achieved; recruitment rates (number of participants recruited per centre per month) and retention rates (randomised participants retained and assessed with valid primary outcome data). This review identified 151 individually randomised RCTs from 787 NIHR HTA reports. The final recruitment target sample size was achieved in 56% (85/151) of the RCTs and more than 80% of the final target sample size was achieved for 79% of the RCTs (119/151). The median recruitment rate (participants per centre per month) was found to be 0.92 (IQR 0.43-2.79) and the median retention rate (proportion of participants with valid primary outcome data at follow-up) was estimated at 89% (IQR 79-97%). There is considerable variation in the consent, recruitment and retention rates in publicly funded RCTs. Investigators should bear this in mind at the planning stage of their study and not be overly optimistic about their recruitment projections. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
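
    For illustration of the summary statistics reported above (median and interquartile range of per-centre monthly recruitment rates and of retention percentages), the snippet below computes them with NumPy; the values are invented examples, not data from the reviewed HTA trials.

```python
import numpy as np

# Invented per-centre monthly recruitment rates (participants/centre/month)
rates = np.array([0.3, 0.5, 0.9, 1.1, 1.4, 2.8, 3.5])
q1, med, q3 = np.percentile(rates, [25, 50, 75])
print(f"median recruitment rate = {med:.2f} (IQR {q1:.2f}-{q3:.2f})")

# Invented per-trial retention percentages
retention = np.array([72, 81, 85, 89, 93, 96, 98])
q1, med, q3 = np.percentile(retention, [25, 50, 75])
print(f"median retention = {med:.0f}% (IQR {q1:.0f}-{q3:.0f}%)")
```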

  20. Recruitment and retention of participants in randomised controlled trials: a review of trials funded and published by the United Kingdom Health Technology Assessment Programme

    PubMed Central

    Bonacho dos Anjos Henriques-Cadby, Inês; Bortolami, Oscar; Flight, Laura; Hind, Daniel; Knox, Christopher; Nadin, Ben; Rothwell, Joanne; Surtees, Michael; Julious, Steven A

    2017-01-01

    Background Substantial amounts of public funds are invested in health research worldwide. Publicly funded randomised controlled trials (RCTs) often recruit participants at a slower than anticipated rate. Many trials fail to reach their planned sample size within the envisaged trial timescale and trial funding envelope. Objectives To review the consent, recruitment and retention rates for single and multicentre randomised controlled trials funded and published by the UK's National Institute for Health Research (NIHR) Health Technology Assessment (HTA) Programme. Data sources and study selection HTA reports of individually randomised single or multicentre RCTs published from the start of 2004 to the end of April 2016 were reviewed. Data extraction Information relating to the trial characteristics, sample size, recruitment and retention was extracted by two independent reviewers. Main outcome measures Target sample size and whether it was achieved; recruitment rates (number of participants recruited per centre per month) and retention rates (randomised participants retained and assessed with valid primary outcome data). Results This review identified 151 individually randomised RCTs from 787 NIHR HTA reports. The final recruitment target sample size was achieved in 56% (85/151) of the RCTs and more than 80% of the final target sample size was achieved for 79% of the RCTs (119/151). The median recruitment rate (participants per centre per month) was found to be 0.92 (IQR 0.43–2.79) and the median retention rate (proportion of participants with valid primary outcome data at follow-up) was estimated at 89% (IQR 79–97%). Conclusions There is considerable variation in the consent, recruitment and retention rates in publicly funded RCTs. Investigators should bear this in mind at the planning stage of their study and not be overly optimistic about their recruitment projections. PMID:28320800

  1. Automated payload and instruments for astrobiology research developed and studied by German medium-sized space industry in cooperation with European academia

    NASA Astrophysics Data System (ADS)

    Schulte, Wolfgang; Hofer, Stefan; Hofmann, Peter; Thiele, Hans; von Heise-Rotenburg, Ralf; Toporski, Jan; Rettberg, Petra

    2007-06-01

    For more than a decade Kayser-Threde, a medium-sized enterprise of the German space industry, has been involved in astrobiology research in partnership with a variety of scientific institutes from all over Europe. Previous projects include exobiology research platforms in low Earth orbit on retrievable carriers and onboard the Space Station. More recently, exobiology payloads for in situ experimentation on Mars have been studied by Kayser-Threde under ESA contracts, specifically the ExoMars Pasteur Payload. These studies included work on a sample preparation and distribution systems for Martian rock/regolith samples, instrument concepts such as Raman spectroscopy and a Life Marker Chip, advanced microscope systems as well as robotic tools for astrobiology missions. The status of the funded technical studies and major results are presented. The reported industrial work was funded by ESA and the German Aerospace Center (DLR).

  2. The PATH project in eight European countries: an evaluation.

    PubMed

    Veillard, Jeremy Henri Maurice; Schiøtz, Michaela Louise; Guisset, Ann-Lise; Brown, Adalsteinn Davidson; Klazinga, Niek S

    2013-01-01

    This paper's aim is to evaluate the perceived impact and the enabling factors and barriers experienced by hospital staff participating in an international hospital performance measurement project focused on internal quality improvement. Semi-structured interviews were conducted with coordinators of the international hospital performance measurement project, covering 140 hospitals from eight European countries (Belgium, Estonia, France, Germany, Hungary, Poland, Slovakia and Slovenia). The interview transcripts were analyzed inductively using the grounded theory approach. Even in the absence of public reporting, the project was perceived as having stimulated performance measurement and quality improvement initiatives in participating hospitals. Attention should be paid to leadership/ownership, context, content (project-intrinsic features) and supporting process elements. Generalization of the findings is limited by the study's small sample size. Possible implications for the WHO European Regional Office and for participating hospitals would be to assess hospital preparedness to participate in the PATH project, depending on context, process and structural elements; and to enhance performance and practice benchmarking through suggested approaches. This research gathered rich and unique material related to an international performance measurement project. It derived actionable findings.

  3. Recreation Carrying Capacity Facts and Considerations. Report 11. Surry Mountain Lake Project Area.

    DTIC Science & Technology

    1980-07-01

    contributions of practical experience and knowledge, along with their assistance in arranging schedules, have made this carrying capacity research effort... This survey obtained six responses from boaters and water-skiers... User characteristics: Table 17 indicates the characteristics of the boaters and water-skiers surveyed at Surry. The small sample size at

  4. Algorithms that Defy the Gravity of Learning Curve

    DTIC Science & Technology

    2017-04-28

    three nearest neighbour-based anomaly detectors, i.e., an ensemble of nearest neighbours, a recent nearest neighbour-based ensemble method called iNNE...streams. Note that the change in sample size does not alter the geometrical data characteristics discussed here. 3.1 Experimental Methodology ...need to be answered. 3.6 Comparison with conventional ensemble methods Given the theoretical results, the third aim of this project (i.e., identify the

  5. The design and assembly of aluminum mirrors of a three-mirror-anastigmat telescope

    NASA Astrophysics Data System (ADS)

    Chang, Shenq-Tsong; Lin, Yu-Chuan; Wu, Kun-Huan; Lien, Chun-Chieh; Huang, Ting-Ming; Tsay, Ho-Lin; Chan, Chia-Yen

    2017-09-01

    Better ground sampling distance (GSD) has been a trend for earth observation satellites. A long-focal-length telescope is accordingly required from a system point of view. On the other hand, there is a size constraint for such a long-focal-length telescope, especially in space projects. The three-mirror anastigmat (TMA) design has been proven to offer excellent aberration correction, a wide spectral range and a shorter physical length [1-3].

  6. System Design for Navy Occupational Standards Development

    DTIC Science & Technology

    2014-07-01

    including Mr. Thomas Crain, Deputy Director, Workforce Classifications Department, LCDR Juan Carrasco, Michele Jackson, and Johnny Powell. David... and Carrasco, Juan; Navy Job Analysis Management Project Description, NAVMAC, January 2010. Lists of validated tasks, sorted by Functional... <rsweb:ReportViewer ID="ReportViewerSample" runat="server" Font-Names="Verdana" Font-Size=Ŝpt

  7. Battery condenser system PM2.5 emission factors and rates for cotton gins: Method 201A combination PM10 and PM2.5 sizing cyclones

    USDA-ARS?s Scientific Manuscript database

    This report is part of a project to characterize cotton gin emissions from the standpoint of stack sampling. In 2006, EPA finalized and published a more stringent standard for particulate matter with nominal diameter less than or equal to 2.5 µm (PM2.5). This created an urgent need to collect additi...

  8. The Science of Climate Responsibility

    NASA Astrophysics Data System (ADS)

    Mitchell, D.; Frumhoff, P. C.; Sparrow, S.; Allen, M. R.

    2015-12-01

    Extreme events linked with human-induced climate change have now been reported around the globe. Among the most troublesome impacts are increased wildfires, failed crop yields, extreme flooding and increased human mortality (Hansen and Cramer, 2015). Many of these impacts are predicted to increase into the future. Non-industrialised communities around the world will be the least capable of adapting, while the industrial communities, who are often responsible for historical carbon emissions, will find adaptation easier. Such a situation lends itself to the issue of responsibility. In order to assess responsibility, it must first be established where the major carbon and methane emissions originate. It must then be estimated how these emissions project onto localised climate, which is often the primary indicator behind impacts on society. In this study, we address this question using a 25 km regional climate model capable of simulating the climate thousands of times under the Weather@home citizen science project. The use of this framework allows us to generate huge data sample sizes, which can be put in the context of the very low sample sizes of observational data. We concentrate on the 2003 heat wave over Europe, but show how this method could be applied to less data-rich regions, including the Middle East and the Horn of Africa.

  9. Sample Design, Sample Augmentation, and Estimation for Wave 2 of the NSHAP

    PubMed Central

    English, Ned; Pedlow, Steven; Kwok, Peter K.

    2014-01-01

    Objectives. The sample for the second wave (2010) of the National Social Life, Health, and Aging Project (NSHAP) was designed to increase the scientific value of the Wave 1 (2005) data set by revisiting sample members 5 years after their initial interviews and augmenting this sample where possible. Method. There were 2 important innovations. First, the scope of the study was expanded by collecting data from coresident spouses or romantic partners. Second, to maximize the representativeness of the Wave 2 data, nonrespondents from Wave 1 were again approached for interview in the Wave 2 sample. Results. The overall unconditional response rate for the Wave 2 panel was 74%; the conditional response rate of Wave 1 respondents was 89%; the conditional response rate of partners was 84%; and the conversion rate for Wave 1 nonrespondents was 26%. Discussion. The inclusion of coresident partners enhanced the study by allowing the examination of how intimate, household relationships are related to health trajectories and by augmenting the NSHAP sample size for this and future waves. The uncommon strategy of returning to Wave 1 nonrespondents reduced potential bias by ensuring that, to the extent possible, the whole of the original sample forms the basis for the field effort. NSHAP Wave 2 achieved its field objectives of consolidating the panel, recruiting their resident spouses or romantic partners, and converting a significant proportion of Wave 1 nonrespondents. PMID:25360016
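
    A hedged numerical sketch of how conditional and unconditional response rates of the kind reported above are computed follows; the counts are invented purely to make the arithmetic concrete and are not NSHAP figures.

```python
# Assumed (illustrative) Wave 1 outcome counts and Wave 2 interview counts
wave1_respondents = 3000
wave1_nonrespondents = 1000
w2_from_respondents = 2670      # Wave 1 respondents re-interviewed at Wave 2
w2_from_nonrespondents = 260    # Wave 1 nonrespondents converted at Wave 2

conditional_rr = w2_from_respondents / wave1_respondents         # 0.89
conversion_rate = w2_from_nonrespondents / wave1_nonrespondents  # 0.26
unconditional_rr = (w2_from_respondents + w2_from_nonrespondents) / (
    wave1_respondents + wave1_nonrespondents)                     # ~0.73

print(conditional_rr, conversion_rate, round(unconditional_rr, 2))
```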

  10. Generating Mosaics of Astronomical Images

    NASA Technical Reports Server (NTRS)

    Bergou, Attila; Berriman, Bruce; Good, John; Jacob, Joseph; Katz, Daniel; Laity, Anastasia; Prince, Thomas; Williams, Roy

    2005-01-01

    "Montage" is the name of a service of the National Virtual Observatory (NVO), and of software being developed to implement the service via the World Wide Web. Montage generates science-grade custom mosaics of astronomical images on demand from input files that comply with the Flexible Image Transport System (FITS) standard and contain image data registered on projections that comply with the World Coordinate System (WCS) standards. "Science-grade" in this context signifies that terrestrial and instrumental features are removed from images in a way that can be described quantitatively. "Custom" refers to user-specified parameters of projection, coordinates, size, rotation, and spatial sampling. The greatest value of Montage is expected to lie in its ability to analyze images at multiple wavelengths, delivering them on a common projection, coordinate system, and spatial sampling, and thereby enabling further analysis as though they were part of a single, multi-wavelength image. Montage will be deployed as a computation-intensive service through existing astronomy portals and other Web sites. It will be integrated into the emerging NVO architecture and will be executed on the TeraGrid. The Montage software will also be portable and publicly available.

  11. Fast Constrained Spectral Clustering and Cluster Ensemble with Random Projection

    PubMed Central

    Liu, Wenfen

    2017-01-01

    The constrained spectral clustering (CSC) method can greatly improve clustering accuracy by incorporating constraint information into spectral clustering and has therefore received wide academic attention. In this paper, we propose a fast CSC algorithm via encoding landmark-based graph construction into a new CSC model and applying random sampling to decrease the data size after spectral embedding. Compared with the original model, the new algorithm yields asymptotically similar results as its model size increases; compared with the most efficient CSC algorithm known, the new algorithm runs faster and suits a wider range of data sets. Meanwhile, a scalable semisupervised cluster ensemble algorithm is also proposed via the combination of our fast CSC algorithm and dimensionality reduction with random projection in the process of spectral ensemble clustering. We demonstrate by presenting theoretical analysis and empirical results that the new cluster ensemble algorithm has advantages in terms of efficiency and effectiveness. Furthermore, the approximate preservation of clustering accuracy under random projection, proved in the consensus clustering stage, also holds for weighted k-means clustering and thus gives a theoretical guarantee for this special kind of k-means clustering in which each point has its own weight. PMID:29312447
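
    As a minimal, self-contained illustration of the dimensionality-reduction step mentioned above (not the authors' CSC or ensemble code), the sketch below applies a Gaussian random projection, which approximately preserves pairwise distances in the Johnson-Lindenstrauss sense; the data and target dimension are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2000))             # 500 points in 2000 dimensions (synthetic)
k = 200                                      # target dimension (assumed)
R = rng.normal(size=(2000, k)) / np.sqrt(k)  # Gaussian random projection matrix
X_proj = X @ R                               # projected data, 500 x 200

# Pairwise distances are approximately preserved
d_orig = np.linalg.norm(X[0] - X[1])
d_proj = np.linalg.norm(X_proj[0] - X_proj[1])
print(round(d_proj / d_orig, 2))             # close to 1.0
```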

  12. Influences of spark plasma sintering temperature on the microstructures and thermoelectric properties of (Sr0.95Gd0.05)TiO3 ceramics

    NASA Astrophysics Data System (ADS)

    Li, Liang-Liang; Qin, Xiao-Ying; Liu, Yong-Fei; Liu, Quan-Zhen

    2015-06-01

    (Sr0.95Gd0.05)TiO3 (SGTO) ceramics are successfully prepared via spark plasma sintering (SPS) respectively at 1548, 1648, and 1748 K by using submicron-sized SGTO powders synthesized from a sol-gel method. The densities, microstructures, and thermoelectric properties of the SGTO ceramics are studied. Though the Seebeck coefficient shows no obvious difference in the case that SPS temperatures range from 1548 K to 1648 K, the electrical conductivity and the thermal conductivity increase remarkably due to the increase in grain size and density. The sample has a density higher than 98% theoretical density as the sintering temperature increases up to 1648 K and shows average grain sizes increasing from ˜ 0.7 μm to 7 μm until 1748 K. As a result, the maximum of the dimensionless figure of merit of ˜ 0.24 is achieved at ˜ 1000 K for the samples sintered at 1648 K and 1748 K, which was ˜ 71% larger than that (0.14 at ˜ 1000 K) for the sample sintered at 1548 K due to the enhancement of the power factor. Project supported by the National Natural Science Foundation of China (Grant Nos. 11174292, 51101150, and 11374306).
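
    The dimensionless figure of merit quoted above is conventionally ZT = S²σT/κ (Seebeck coefficient S, electrical conductivity σ, absolute temperature T, thermal conductivity κ); the sketch below evaluates it with placeholder property values of magnitudes typical of doped SrTiO3 ceramics, not the measured SGTO data.

```python
# Thermoelectric figure of merit: ZT = S^2 * sigma * T / kappa
def figure_of_merit(seebeck_V_per_K, sigma_S_per_m, kappa_W_per_mK, T_K):
    return seebeck_V_per_K**2 * sigma_S_per_m * T_K / kappa_W_per_mK

# Placeholder values: S = 200 uV/K, sigma = 2e4 S/m, kappa = 3.3 W/(m.K), T = 1000 K
print(round(figure_of_merit(2.0e-4, 2.0e4, 3.3, 1000.0), 2))   # about 0.24
```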

  13. Shipborne measurements of aerosol number size distribution and hygroscopicity over the North Pacific Ocean

    NASA Astrophysics Data System (ADS)

    Royalty, T. M.; Phillips, B.; Dawson, K. W.; Reed, R. E.; Meskhidze, N.

    2016-12-01

    We report aerosol number size distribution and hygroscopicity data collected over the Pacific Ocean near the Hawaii Ocean Time-series (HOT) Station ALOHA (centered near 22°N, 158°W). From June 25 to July 3, 2016 our hygroscopicity tandem differential mobility analyzer (HTDMA)/scanning mobility particle sizer (SMPS) system was deployed onboard NOAA Ship Hi'ialakai, which participated in mooring operations associated with the Woods Hole Oceanographic Institution WHOTS project. The ambient aerosol data were collected during the ship's planned operations. The inlet was located at the bow of the ship and the air samples were drawn (using 3/8 inch stainless steel tubing) inside a dry, air-conditioned lab. The region north of Oahu was very clean, with total particle numbers of approximately 200 cm⁻³, occasionally dropping below 100 cm⁻³. We compare our particle number size distribution and hygroscopicity data with previously reported estimates. Our measurements contribute to process-level understanding of the role of sea spray aerosol in the marine boundary layer cloud condensation nuclei (CCN) budget and provide crucial information to the community interested in studying and projecting climate change using Earth System Models.

  14. THE zCOSMOS-SINFONI PROJECT. I. SAMPLE SELECTION AND NATURAL-SEEING OBSERVATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mancini, C.; Renzini, A.; Foerster Schreiber, N. M.

    2011-12-10

    The zCOSMOS-SINFONI project is aimed at studying the physical and kinematical properties of a sample of massive z ≈ 1.4-2.5 star-forming galaxies, through SINFONI near-infrared integral field spectroscopy (IFS), combined with the multiwavelength information from the zCOSMOS (COSMOS) survey. The project is based on one hour of natural-seeing observations per target, and adaptive optics (AO) follow-up for a major part of the sample, which includes 30 galaxies selected from the zCOSMOS/VIMOS spectroscopic survey. This first paper presents the sample selection, and the global physical characterization of the target galaxies from multicolor photometry, i.e., star formation rate (SFR), stellar mass, age, etc. The Hα integrated properties, such as flux, velocity dispersion, and size, are derived from the natural-seeing observations, while the follow-up AO observations will be presented in the next paper of this series. Our sample appears to be well representative of star-forming galaxies at z ≈ 2, covering a wide range in mass and SFR. The Hα integrated properties of the 25 Hα-detected galaxies are similar to those of other IFS samples at the same redshifts. Good agreement is found among the SFRs derived from Hα luminosity and other diagnostic methods, provided the extinction affecting the Hα luminosity is about twice that affecting the continuum. A preliminary kinematic analysis, based on the maximum observed velocity difference across the source and on the integrated velocity dispersion, indicates that the sample splits nearly 50-50 into rotation-dominated and velocity-dispersion-dominated galaxies, in good agreement with previous surveys.

  15. The Antaeus Project - An orbital quarantine facility for analysis of planetary return samples

    NASA Technical Reports Server (NTRS)

    Sweet, H. C.; Bagby, J. R.; Devincenzi, D. L.

    1983-01-01

    A design is presented for an earth-orbiting facility for the analysis of planetary return samples under conditions of maximum protection against contamination but minimal damage to the sample. The design is keyed to a Mars sample return mission profile, returning 1 kg of documented subsamples, to be analyzed in low earth orbit by a small crew aided by automated procedures, tissue culture and microassay. The facility itself would consist of Spacelab shells, formed into five modules of different sizes with purposes of power supply, habitation, supplies and waste storage, the linking of the facility, and both quarantine and investigation of the samples. Three barriers are envisioned to protect the biosphere from any putative extraterrestrial organisms: sealed biological containment cabinets within the Laboratory Module, the Laboratory Module itself, and the conditions of space surrounding the facility.

  16. [Distribution, population parameters, and diet of Astropecten marginatus (Asteroidea: Astropectinidae) in the Venezuelan Atlantic coast].

    PubMed

    Ortega, Ileana; Martín, Alberto; Díaz, Yusbelly

    2011-03-01

    Astropecten marginatus is a sea star widely distributed in Northern and Eastern South America, found on sandy and muddy bottoms, in shallow and deep waters. To describe some of its ecological characteristics, we determined its spatial-temporal distribution, population parameters (based on size and weight) and diet in the Orinoco Delta ecoregion (Venezuela). The ecoregion was divided in three sections: Golfo de Paria, Boca de Serpiente and Plataforma Deltana. Samples for the rainy and dry seasons came from megabenthos surveys of the "Línea Base Ambiental Plataforma Deltana (LBAPD)" and "Corocoro Fase I (CFI)" projects. The collected sea stars were measured, weighed and dissected from the oral side to extract their stomachs and identify the prey consumed. A total of 570 sea stars were collected in the LBAPD project and 306 in the CFI one. The highest densities were found during the dry season in almost all sections. In the LBAPD project the highest density was in the "Plataforma Deltana" section (0.007 +/- 0.022 ind/m2 in the dry season and 0.014 +/- 0.06 ind/m2 in the rainy season) and in the CFI project the densities in the "Golfo de Paria" section were 0.705 +/- 0.829 ind/m2 in the rainy season and 1.027 +/- 1.107 ind/m2 in the dry season. The most frequent size range was 3.1-4.6 cm. The highest biomass was found in the "Golfo de Paria" section (7.581 +/- 0.018 mg/m2 in the dry season and 0.005 +/- 6.542 x 10(-06) mg/m2 in the rainy season for 2004-2005, and 3.979 +/- 4.024 mg/m2 in the dry season and 3.117 +/- 3.137 mg/m2 in the rainy season for 2006). A linear relationship was found between sea star size and weight, but no relationship was observed between size and the depth at which the sea stars were collected. Mollusks are dominant in the sea star diet (47.4% in abundance). Multivariate ordination (MDS) and SIMPER analysis showed the diet to be heterogeneous across sections, seasons, projects and size classes, and there was no difference in the number of prey or food items that a sea star can eat. Although A. marginatus has been described as a predator, scavenger and detritivorous habits were also inferred in this study.

  17. Environmental Particle Emissions due to Automated Drilling of Polypropylene Composites and Nanocomposites Reinforced with Talc, Montmorillonite and Wollastonite

    NASA Astrophysics Data System (ADS)

    Starost, K.; Frijns, E.; Laer, J. V.; Faisal, N.; Egizabal, A.; Elizextea, C.; Nelissen, I.; Blazquez, M.; Njuguna, J.

    2017-05-01

    In this study, the effect on nanoparticle emissions of drilling into polypropylene (PP) reinforced with 20% talc, 5% montmorillonite (MMT) and 5% wollastonite (WO) is investigated. The study is the first to explore nanoparticle release from WO- and talc-reinforced composites and compares the results to previously researched MMT. With 5% WO, equivalent tensile properties with a 10% weight reduction were obtained relative to the reference 20% talc sample. The materials were fabricated through injection moulding. The nanorelease studies were undertaken using the controlled drilling methodology for nanoparticle exposure assessment developed within the European Commission funded SIRENA Life 11 ENV/ES/506 project. Measurements were taken using CPC and DMS50 equipment for real-time characterization and measurements. The particle number concentration (of particles <1000 nm) and particle size distribution (4.87 nm-562.34 nm) of the particles emitted during drilling were evaluated to investigate the effect of the silicate fillers on the particles released. The nano-filled samples exhibited a 33% decrease (MMT sample) or a 30% increase (WO sample) in the average particle number concentration released in comparison with the neat polypropylene sample. The size distribution data showed that a substantial percentage of the particles released from the PP, PP/WO and PP/MMT samples were between 5-20 nm, whereas the PP/talc sample emitted larger particle diameters.

  18. Comparison of projection neurons in the pontine nuclei and the nucleus reticularis tegmenti pontis of the rat.

    PubMed

    Schwarz, C; Thier, P

    1996-12-16

    Dendritic features of identified projection neurons in two precerebellar nuclei, the pontine nuclei (PN) and the nucleus reticularis tegmenti pontis (NRTP) were established by using a combination of retrograde tracing (injection of fluorogold or rhodamine labelled latex micro-spheres into the cerebellum) with subsequent intracellular filling (lucifer yellow) in fixed slices of pontine brainstem. A multivariate analysis revealed that parameters selected to characterize the dendritic tree such as size of dendritic field, number of branching points, and length of terminal dendrites did not deviate significantly between different regions of the PN and the NRTP. On the other hand, projection neurons in ventral regions of the PN were characterized by an irregular coverage of their distal dendrites by appendages while those in the dorsal PN and the NRTP were virtually devoid of them. The NRTP, dorsal, and medial PN tended to display larger somata and more primary dendrites than ventral regions of the PN. These differences, however, do not allow the differentiation of projection neurons within the PN from those in the NRTP. They rather reflect a dorso-ventral gradient ignoring the border between the nuclei. Accordingly, a cluster analysis did not differentiate distinct types of projection neurons within the total sample. In both nuclei, multiple linear regression analysis revealed that the size of dendritic fields was strongly correlated with the length of terminal dendrites while it did not depend on other parameters of the dendritic field. Thus, larger dendritic fields seem not to be accompanied by a higher complexity but rather may be used to extend the reach of a projection neuron within the arrangement of afferent terminals. We suggest that these similarities within dendritic properties in PN and NRTP projection neurons reflect similar processing of afferent information in both precerebellar nuclei.

  19. An Adaptation of the Distance Driven Projection Method for Single Pinhole Collimators in SPECT Imaging

    NASA Astrophysics Data System (ADS)

    Ihsani, Alvin; Farncombe, Troy

    2016-02-01

    The modelling of the projection operator in tomographic imaging is of critical importance, especially when working with algebraic methods of image reconstruction. This paper proposes a distance-driven projection method which is targeted at single-pinhole single-photon emission computed tomography (SPECT) imaging, since it accounts for the finite size of the pinhole and the possible tilting of the detector surface, in addition to other collimator-specific factors such as geometric sensitivity. The accuracy and execution time of the proposed method are evaluated by comparing to a ray-driven approach where the pinhole is sub-sampled with various sampling schemes. A point-source phantom whose projections were generated using OpenGATE was first used to compare the resolution of reconstructed images with each method using the full width at half maximum (FWHM). Furthermore, a high-activity Mini Deluxe Phantom (Data Spectrum Corp., Durham, NC, USA) SPECT resolution phantom was scanned using a Gamma Medica X-SPECT system and the signal-to-noise ratio (SNR) and structural similarity of reconstructed images were compared at various projection counts. Based on the reconstructed point-source phantom, the proposed distance-driven approach results in a lower FWHM than the ray-driven approach even when using a smaller detector resolution. Furthermore, based on the Mini Deluxe Phantom, it is shown that the distance-driven approach has consistently higher SNR and structural similarity compared to the ray-driven approach as the counts in the measured projections deteriorate.
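
    As a hedged aside on the resolution metric used above, the sketch below estimates the FWHM of a reconstructed point-source profile by linear interpolation around the half-maximum crossings; the Gaussian profile is synthetic and stands in for an actual reconstructed slice.

```python
import numpy as np

def fwhm(x, y):
    """FWHM of a single-peaked profile y(x), via interpolated half-maximum crossings."""
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    i, j = above[0], above[-1]
    # interpolate the crossing position on each side of the peak
    left = np.interp(half, [y[i - 1], y[i]], [x[i - 1], x[i]])
    right = np.interp(half, [y[j + 1], y[j]], [x[j + 1], x[j]])
    return right - left

x = np.linspace(-10, 10, 201)           # position (mm)
y = np.exp(-x**2 / (2 * 1.5**2))        # synthetic point response, sigma = 1.5 mm
print(round(fwhm(x, y), 2))             # ~3.53 mm, i.e. 2.355 * sigma
```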

  20. A rational approach to legacy data validation when transitioning between electronic health record systems.

    PubMed

    Pageler, Natalie M; Grazier G'Sell, Max Jacob; Chandler, Warren; Mailes, Emily; Yang, Christine; Longhurst, Christopher A

    2016-09-01

    The objective of this project was to use statistical techniques to determine the completeness and accuracy of data migrated during an electronic health record conversion. Data validation during migration consists of mapped record testing and validation of a sample of the data for completeness and accuracy. We statistically determined a randomized sample size for each data type based on the desired confidence level and error limits. The only error identified in the post go-live period was a failure to migrate some clinical notes, which was unrelated to the validation process. No errors in the migrated data were found during the 12-month post-implementation period. Compared to the typical industry approach, we have demonstrated that a statistical approach to sample size for data validation can ensure consistent confidence levels while maximizing the efficiency of the validation process during a major electronic health record conversion. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
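
    The paper does not give its exact formula, but one standard way to size a validation sample for a desired confidence level and error limit is the normal approximation for a proportion with a finite-population correction, sketched below; the population size, confidence and margin are assumed example values.

```python
import math
from scipy.stats import norm

def validation_sample_size(N, confidence=0.95, error=0.05, p=0.5):
    """Records to check out of N migrated records for a given margin of error."""
    z = norm.ppf(1 - (1 - confidence) / 2)
    n0 = z**2 * p * (1 - p) / error**2           # infinite-population size (~385)
    return math.ceil(n0 / (1 + (n0 - 1) / N))    # finite-population correction

print(validation_sample_size(N=100_000))   # about 383 records per data type
```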

  1. VizieR Online Data Catalog: EBHIS spectra and HI column density maps (Winkel+, 2016)

    NASA Astrophysics Data System (ADS)

    Winkel, B.; Kerp, J.; Floeer, L.; Kalberla, P. M. W.; Ben Bekhti, N.; Keller, R.; Lenz, D.

    2015-11-01

    The EBHIS 1st data release comprises 21-cm neutral atomic hydrogen data of the Milky Way (-600km/s

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, Thomas Martin; Patton, Bruce W.; Weber, Charles F.

    The primary goal of this project is to evaluate x-ray spectra generated within a scanning electron microscope (SEM) to determine elemental composition of small samples. This will be accomplished by performing Monte Carlo simulations of the electron and photon interactions in the sample and in the x-ray detector. The elemental inventories will be determined by an inverse process that progressively reduces the difference between the measured and simulated x-ray spectra by iteratively adjusting composition and geometric variables in the computational model. The intended benefit of this work will be to develop a method to perform quantitative analysis on substandard samples (heterogeneous phases, rough surfaces, small sizes, etc.) without involving standard elemental samples or empirical matrix corrections (i.e., true standardless quantitative analysis).

  3. Aerosol Measurements of the Fine and Ultrafine Particle Content of Lunar Regolith

    NASA Technical Reports Server (NTRS)

    Greenberg, Paul S.; Chen, Da-Ren; Smith, Sally A.

    2007-01-01

    We report the first quantitative measurements of the ultrafine (20 to 100 nm) and fine (100 nm to 20 μm) particulate components of Lunar surface regolith. The measurements were performed by gas-phase dispersal of the samples and analysis using aerosol diagnostic techniques. This approach makes no a priori assumptions about the particle size distribution function, as required by ensemble optical scattering methods, and is independent of refractive index and density. The method provides direct evaluation of effective transport diameters, in contrast to indirect scattering techniques or size information derived from two-dimensional projections of high-magnification images. The results demonstrate considerable populations in these size regimes. In light of the numerous difficulties attributed to dust exposure during the Apollo program, this outcome is of significant importance to the design of mitigation technologies for future Lunar exploration.

  4. Comparison of VRX CT scanners geometries

    NASA Astrophysics Data System (ADS)

    DiBianca, Frank A.; Melnyk, Roman; Duckworth, Christopher N.; Russ, Stephan; Jordan, Lawrence M.; Laughter, Joseph S.

    2001-06-01

    A technique called Variable-Resolution X-ray (VRX) detection greatly increases the spatial resolution in computed tomography (CT) and digital radiography (DR) as the field size decreases. The technique is based on a principle called 'projective compression' that allows both the resolution element and the sampling distance of a CT detector to scale with the subject or field size. For very large (40 - 50 cm) field sizes, resolution exceeding 2 cy/mm is possible and for very small fields, microscopy is attainable with resolution exceeding 100 cy/mm. This paper compares the benefits obtainable with two different VRX detector geometries: the single-arm geometry and the dual-arm geometry. The analysis is based on Monte Carlo simulations and direct calculations. The results of this study indicate that the dual-arm system appears to have more advantages than the single-arm technique.

  5. The Challenges of Analyzing Behavioral Response Study Data: An Overview of the MOCHA (Multi-study OCean Acoustics Human Effects Analysis) Project.

    PubMed

    Harris, Catriona M; Thomas, Len; Sadykova, Dina; DeRuiter, Stacy L; Tyack, Peter L; Southall, Brandon L; Read, Andrew J; Miller, Patrick J O

    2016-01-01

    This paper describes the MOCHA project which aims to develop novel approaches for the analysis of data collected during Behavioral Response Studies (BRSs). BRSs are experiments aimed at directly quantifying the effects of controlled dosages of natural or anthropogenic stimuli (typically sound) on marine mammal behavior. These experiments typically result in low sample size, relative to variability, and so we are looking at a number of studies in combination to maximize the gain from each one. We describe a suite of analytical tools applied to BRS data on beaked whales, including a simulation study aimed at informing future experimental design.

  6. Project W-320, 241-C-106 sluicing HVAC calculations, Volume 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bailey, J.W.

    1998-08-07

    This supporting document has been prepared to make the FDNW calculations for Project W-320 readily retrievable. The report contains the following calculations: exhaust airflow sizing for Tank 241-C-106; equipment sizing and selection of the recirculation fan; sizing of the high-efficiency mist eliminator; sizing of the electric heating coil; equipment sizing and selection of the recirculation condenser; chiller skid system sizing and selection; high-efficiency metal filter shielding input and flushing frequency; and exhaust skid stack sizing and fan sizing.

  7. Using a Blender to Assess the Microbial Density of Encapsulated Organisms

    NASA Technical Reports Server (NTRS)

    Benardini, James N.; Koukol, Robert C.; Kazarians, Gayane A.; Schubert, Wayne W.; Morales, Fabian

    2013-01-01

    There are specific NASA requirements for source-specific encapsulated microbial density for encapsulated organisms in non-metallic materials. Projects such as the Mars Science Laboratory (MSL) that use large volumes of non-metallic materials of planetary protection concern pose a challenge to their bioburden budget. An optimized and adapted destructive hardware technology employing a commercial blender was developed to assess the embedded bioburden of thermal paint for the MSL project. The main objective of this optimization was to blend the painted foil pieces in the smallest sizes possible without excessive heating. The small size increased the surface area of the paint and enabled the release of the maximum number of encapsulated microbes. During a trial run, a piece of foil was placed into a blender for 10 minutes. The outside of the blender was very hot to the touch. Thus, the grinding was reduced to five 2-minute periods with 2-minute cooling periods between cycles. However, almost 20% of the foil fraction was larger (>2 mm). Thus, the largest fractions were then put into the blender and reground, resulting in a 71% increase in particles less than 1 mm in size, and a 76% decrease in particles greater than 2 mm in size. Because a repeatable process had been developed, a painted sample was processed with over 80% of the particles being <2 mm. It was not perceived that the properties (i.e. weight and rubber-like nature) of the painted/foil pieces would allow for a finer size distribution. With these constraints, each section would be ground for a total of 10 minutes with five cycles of a 2-minute pulse followed by a 2-minute pause. It was observed on several occasions that a larger blade affected the recovery of seeded spores by approximately half an order of magnitude. In the standard approach, each piece of painted foil was aseptically removed from the bag and placed onto a sterile tray where they were sized, cut, and cleaned. Each section was then weighed and placed into a sterile Waring Laboratory Blender. Samples were processed on low speed. The ground-up samples were then transferred to a 500-mL bottle using a sterile 1-in. (.2.5-cm) trim brush. To each of the bottles sterile planetary protection rinse solution was added and a modified NASA Standard Assay (NASA HBK 6022) was performed. Both vegetative and spore plates were analyzed.

  8. The relation between statistical power and inference in fMRI

    PubMed Central

    Wager, Tor D.; Yarkoni, Tal

    2017-01-01

    Statistically underpowered studies can result in experimental failure even when all other experimental considerations have been addressed impeccably. In fMRI, the combination of a large number of dependent variables, a relatively small number of observations (subjects), and a need to correct for multiple comparisons can decrease statistical power dramatically. This problem has been clearly addressed yet remains controversial, especially in regard to the expected effect sizes in fMRI, and especially for between-subjects effects such as group comparisons and brain-behavior correlations. We aimed to clarify the power problem by considering and contrasting two simulated scenarios of such possible brain-behavior correlations: weak diffuse effects and strong localized effects. Sampling from these scenarios shows that, particularly in the weak diffuse scenario, common sample sizes (n = 20–30) display extremely low statistical power, poorly represent the actual effects in the full sample, and show large variation on subsequent replications. Empirical data from the Human Connectome Project resemble the weak diffuse scenario much more than the localized strong scenario, which underscores the extent of the power problem for many studies. Possible solutions to the power problem include increasing the sample size, using less stringent thresholds, or focusing on a region of interest. However, these approaches are not always feasible and some have major drawbacks. The most prominent solutions that may help address the power problem include model-based (multivariate) prediction methods and meta-analyses with related synthesis-oriented approaches. PMID:29155843
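
    To make the low-power point above concrete, the hedged sketch below computes the approximate power for detecting a brain-behavior correlation with the Fisher z approximation at a stringent, voxelwise-style threshold; the correlation strength and alpha are assumptions chosen for illustration, not values from the paper.

```python
import numpy as np
from scipy.stats import norm

def correlation_power(r, n, alpha=0.001):
    """Approximate power of a two-sided test of H0: rho = 0 at sample size n."""
    z_crit = norm.ppf(1 - alpha / 2)
    ncp = np.arctanh(r) * np.sqrt(n - 3)    # noncentrality under the alternative
    return 1 - norm.cdf(z_crit - ncp) + norm.cdf(-z_crit - ncp)

for n in (20, 30, 100):
    print(n, round(correlation_power(r=0.3, n=n), 3))   # power stays very low at n = 20-30
```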

  9. Comparison of extended field-of-view reconstructions in C-arm flat-detector CT using patient size, shape or attenuation information.

    PubMed

    Kolditz, Daniel; Meyer, Michael; Kyriakou, Yiannis; Kalender, Willi A

    2011-01-07

    In C-arm-based flat-detector computed tomography (FDCT) it frequently happens that the patient exceeds the scan field of view (SFOV) in the transaxial direction because of the limited detector size. This results in data truncation and CT image artefacts. In this work three truncation correction approaches for extended field-of-view (EFOV) reconstructions have been implemented and evaluated. An FDCT-based method estimates the patient size and shape from the truncated projections by fitting an elliptical model to the raw data in order to apply an extrapolation. In a camera-based approach the patient is sampled with an optical tracking system and this information is used to apply an extrapolation. In a CT-based method the projections are completed by artificial projection data obtained from the CT data acquired in an earlier exam. For all methods the extended projections are filtered and backprojected with a standard Feldkamp-type algorithm. Quantitative evaluations have been performed by simulations of voxelized phantoms on the basis of the root mean square deviation and a quality factor Q (Q = 1 represents the ideal correction). Measurements with a C-arm FDCT system have been used to validate the simulations and to investigate the practical applicability using anthropomorphic phantoms which caused truncation in all projections. The proposed approaches enlarged the FOV to cover wider patient cross-sections. Thus, image quality inside and outside the SFOV has been improved. Best results have been obtained using the CT-based method, followed by the camera-based and the FDCT-based truncation correction. For simulations, quality factors up to 0.98 have been achieved. Truncation-induced cupping artefacts have been reduced, e.g., from 218% to less than 1% for the measurements. The proposed truncation correction approaches for EFOV reconstructions are an effective way to ensure accurate CT values inside the SFOV and to recover peripheral information outside the SFOV.

  10. A simple device to convert a small-animal PET scanner into a multi-sample tissue and injection syringe counter.

    PubMed

    Green, Michael V; Seidel, Jurgen; Choyke, Peter L; Jagoda, Elaine M

    2017-10-01

    We describe a simple fixture that can be added to the imaging bed of a small-animal PET scanner that allows for automated counting of multiple organ or tissue samples from mouse-sized animals and counting of injection syringes prior to administration of the radiotracer. The combination of imaging and counting capabilities in the same machine offers advantages in certain experimental settings. A polyethylene block of plastic, sculpted to mate with the animal imaging bed of a small-animal PET scanner, is machined to receive twelve 5-ml containers, each capable of holding an entire organ from a mouse-sized animal. In addition, a triangular cross-section slot is machined down the centerline of the block to secure injection syringes from 1-ml to 3-ml in size. The sample holder is scanned in PET whole-body mode to image all samples or in one bed position to image a filled injection syringe. Total radioactivity in each sample or syringe is determined from the reconstructed images of these objects using volume re-projection of the coronal images and a single region-of-interest for each. We tested the accuracy of this method by comparing PET estimates of sample and syringe activity with well counter and dose calibrator estimates of these same activities. PET and well counting of the same samples gave near identical results (in MBq, R² = 0.99, slope = 0.99, intercept = 0.00 MBq). PET syringe and dose calibrator measurements of syringe activity in MBq were also similar (R² = 0.99, slope = 0.99, intercept = -0.22 MBq). A small-animal PET scanner can be easily converted into a multi-sample and syringe counting device by the addition of a sample block constructed for that purpose. This capability, combined with live animal imaging, can improve efficiency and flexibility in certain experimental settings. Copyright © 2017 Elsevier Inc. All rights reserved.
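
    A hedged sketch of the agreement analysis reported above (regressing PET-derived activities against reference well-counter values to obtain slope, intercept and R²) follows; the activity values are fabricated for illustration and are not the study's data.

```python
import numpy as np
from scipy.stats import linregress

well = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])   # reference activities (MBq), invented
pet = 0.99 * well + np.random.default_rng(1).normal(0.0, 0.05, well.size)

fit = linregress(well, pet)   # ordinary least-squares fit of PET vs. reference
print(f"slope = {fit.slope:.2f}, intercept = {fit.intercept:.2f} MBq, R^2 = {fit.rvalue**2:.2f}")
```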

  11. The application of large numbers of pleasure boats to collect synoptic sea-truth for ERTS-1 overpasses

    NASA Technical Reports Server (NTRS)

    Klemas, V. (Principal Investigator); Davis, G.; Philpot, W.

    1974-01-01

    The author has identified the following significant results. In order to interpret and annotate current circulation and suspended sediment concentration maps derived from ERTS-1 digital tapes, the University of Delaware has been collecting water samples and other data from boats and helicopters. In order to increase the number of samples at the exact time of the ERTS-1 pass over Delaware Bay, pleasure craft were organized to obtain samples of the entire test site. On the ERTS-1 pass of July second, scientists were stationed at three public boat launches along the Bay to hand out sampling packets to interested boaters. The packets contained two litre sampling bottles, a map, data card, and a pen. The boaters were asked to fill the two bottles between 11 and 11:15 a.m., mark their location on the map, and fill out the data card. Forty-nine packets were handed out of which 40 were returned (82%). Only four of the 40 were not in the alloted time range. This gave 36 real time data points covering approximately 30 nautical miles. The samples are being analyzed for sediment concentration, particle size, and salinity. Participating boaters will receive a copy of an ERTS image of the Delaware Bay and a summary report of the project. Because of the success of the project, future use of pleasure boaters is being planned.

  12. Fully automatic characterization and data collection from crystals of biological macromolecules.

    PubMed

    Svensson, Olof; Malbet-Monaco, Stéphanie; Popov, Alexander; Nurizzo, Didier; Bowler, Matthew W

    2015-08-01

    Considerable effort is dedicated to evaluating macromolecular crystals at synchrotron sources, even for well established and robust systems. Much of this work is repetitive, and the time spent could be better invested in the interpretation of the results. In order to decrease the need for manual intervention in the most repetitive steps of structural biology projects, initial screening and data collection, a fully automatic system has been developed to mount, locate, centre to the optimal diffraction volume, characterize and, if possible, collect data from multiple cryocooled crystals. Using the capabilities of pixel-array detectors, the system is as fast as a human operator, taking an average of 6 min per sample depending on the sample size and the level of characterization required. Using a fast X-ray-based routine, samples are located and centred systematically at the position of highest diffraction signal and important parameters for sample characterization, such as flux, beam size and crystal volume, are automatically taken into account, ensuring the calculation of optimal data-collection strategies. The system is now in operation at the new ESRF beamline MASSIF-1 and has been used by both industrial and academic users for many different sample types, including crystals of less than 20 µm in the smallest dimension. To date, over 8000 samples have been evaluated on MASSIF-1 without any human intervention.

  13. Scalable screen-size enlargement by multi-channel viewing-zone scanning holography.

    PubMed

    Takaki, Yasuhiro; Nakaoka, Mitsuki

    2016-08-08

    Viewing-zone scanning holographic displays can enlarge both the screen size and the viewing zone. However, limitations exist in the screen size enlargement process even if the viewing zone is effectively enlarged. This study proposes a multi-channel viewing-zone scanning holographic display comprising multiple projection systems and a planar scanner to enable the scalable enlargement of the screen size. Each projection system produces an enlarged image of the screen of a MEMS spatial light modulator. The multiple enlarged images produced by the multiple projection systems are seamlessly tiled on the planar scanner. This screen size enlargement process reduces the viewing zones of the projection systems, which are horizontally scanned by the planar scanner comprising a rotating off-axis lens and a vertical diffuser to enlarge the viewing zone. A screen size of 7.4 in. and a viewing-zone angle of 43.0° are demonstrated.

  14. Effect of participating in Taiwan Quality Indicator Project on hospital efficiency in Taiwan.

    PubMed

    Chu, Hsuan-Lien; Wang, Chen-Chin; Shiu, Shu Fen

    2009-01-01

    To examine the effect of participating in the Taiwan Quality Indicator Project (TQIP) on hospital efficiency and to investigate why hospitals participate in TQIP. Our sample consists of 417 private not-for-profit hospitals in Taiwan during the 2001-2007 period. A simultaneous-equation model was used to examine whether hospitals that participated in TQIP were more efficient than hospitals that did not, and to investigate which variables affected the probability of hospitals' participation in the project. Our findings indicate that participating hospitals are more efficient than hospitals not participating in TQIP. In addition, hospital efficiency, hospital size, teaching status, and hospital age are positively related to participation in the project. These empirical results can be used as supporting evidence of success in improving performance through creating quality for hospitals that have participated in the project, and they offer insights into the value and strengths of the project. In addition, in recent years, reimbursement systems worldwide have partly moved payment methods to a pay-for-performance mechanism. In an attempt to control costs and improve quality, policy makers should consider participation in a Quality Indicator Project (QIP) as one of the criteria for performance-based reimbursement.

  15. The structure of Turkish trait-descriptive adjectives.

    PubMed

    Somer, O; Goldberg, L R

    1999-03-01

    This description of the Turkish lexical project reports some initial findings on the structure of Turkish personality-related variables. In addition, it provides evidence on the effects of target evaluative homogeneity vs. heterogeneity (e.g., samples of well-liked target individuals vs. samples of both liked and disliked targets) on the resulting factor structures, and thus it provides a first test of the conclusions reached by D. Peabody and L. R. Goldberg (1989) using English trait terms. In 2 separate studies, and in 2 types of data sets, clear versions of the Big Five factor structure were found. And both studies replicated and extended the findings of Peabody and Goldberg; virtually orthogonal factors of relatively equal size were found in the homogeneous samples, and a more highly correlated set of factors with relatively large Agreeableness and Conscientiousness dimensions was found in the heterogeneous samples.

  16. Semantic size does not matter: "bigger" words are not recognized faster.

    PubMed

    Kang, Sean H K; Yap, Melvin J; Tse, Chi-Shing; Kurby, Christopher A

    2011-06-01

    Sereno, O'Donnell, and Sereno (2009) reported that words are recognized faster in a lexical decision task when their referents are physically large than when they are small, suggesting that "semantic size" might be an important variable that should be considered in visual word recognition research and modelling. We sought to replicate their size effect, but failed to find a significant latency advantage in lexical decision for "big" words (cf. "small" words), even though we used the same word stimuli as Sereno et al. and had almost three times as many subjects. We also examined existing data from visual word recognition megastudies (e.g., English Lexicon Project) and found that semantic size is not a significant predictor of lexical decision performance after controlling for the standard lexical variables. In summary, the null results from our lab experiment--despite a much larger subject sample size than Sereno et al.--converged with our analysis of megastudy lexical decision performance, leading us to conclude that semantic size does not matter for word recognition. Discussion focuses on why semantic size (unlike some other semantic variables) is unlikely to play a role in lexical decision.

  17. Anthropometric Sizing, Fit-Testing and Evaluation of the MBU-12/P Oral-Nasal Oxygen Mask

    DTIC Science & Technology

    1979-08-01

    Engineering Company, Sierra Madre, California, has a low profile single-unit facepiece in which a deformable silicone rubber face form is bonded to a...Churchill & Truett, 1957; Hertzberg et al., 1954). This comparison indicated that the 1967 sample was, on the average, older (2.64 years), taller (1.78...Support Special Projects Office, Wright-Patterson Air Force Base, Ohio, which, in turn, contracted with Sierra Engineering Company, Sierra Madre, California

  18. 2005 Earth-Mars Round Trip

    NASA Technical Reports Server (NTRS)

    2000-01-01

    This paper presents, in viewgraph form, the 2005 Earth-Mars Round Trip. The contents include: 1) Lander; 2) Mars Sample Return Project; 3) Rover; 4) Rover Size Comparison; 5) Mars Ascent Vehicle; 6) Return Orbiter; 7) A New Mars Surveyor Program Architecture; 8) Definition Study Summary Result; 9) Mars Surveyor Proposed Architecture 2003, 2005 Opportunities; 10) Mars Micromissions Using Ariane 5; 11) Potential International Partnerships; 12) Proposed Integrated Architecture; and 13) Mars Exploration Program Report of the Architecture Team.

  19. Optical spectral imaging of degeneration of articular cartilage

    NASA Astrophysics Data System (ADS)

    Kinnunen, Jussi; Jurvelin, Jukka S.; Mäkitalo, Jaana; Hauta-Kasari, Markku; Vahimaa, Pasi; Saarakkala, Simo

    2010-07-01

    Osteoarthritis (OA) is a common musculoskeletal disorder often diagnosed during arthroscopy. In OA, visual color changes of the articular cartilage surface are typically observed. We demonstrate in vitro the potential of visible light spectral imaging (420 to 720 nm) to quantify these color changes. Intact bovine articular cartilage samples (n=26) are degraded both enzymatically, using collagenase, and mechanically, using emery paper (P60 grit, 269 μm particle size). Spectral images are analyzed using standard CIELAB color coordinates and principal component analysis (PCA). After collagenase digestion, changes in the CIELAB coordinates and in the projection of the spectra onto the PCA eigenvector are statistically significant (p<0.05). After mechanical degradation, the grinding tracks could not be visualized in the RGB presentation, i.e., in the visual appearance of the sample to the naked eye under the D65 illumination. However, after projecting onto the chosen eigenvector, the grinding tracks are revealed. The tracks can also be seen using only one wavelength (469 nm); however, the contrast in the projection image is 1.6 to 2.5 times higher. Our results support the idea that spectral imaging can be used for evaluation of the integrity of the cartilage surface.
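
    The analysis described above amounts to projecting each pixel spectrum onto a principal-component eigenvector and inspecting the resulting scalar image. A minimal sketch of that projection step, on a synthetic hyperspectral cube, is given below; the cube shape, band count and variable names are illustrative assumptions, not the authors' data or code.

```python
# Projecting pixel spectra onto a PCA eigenvector derived from reference spectra.
import numpy as np

def project_onto_eigenvector(cube, eigenvector, mean_spectrum):
    rows, cols, bands = cube.shape
    spectra = cube.reshape(-1, bands) - mean_spectrum
    projection = spectra @ eigenvector          # one scalar per pixel
    return projection.reshape(rows, cols)

# Hypothetical reference set: spectra from intact samples define the PCA basis.
rng = np.random.default_rng(1)
reference_spectra = rng.normal(0.5, 0.05, size=(5000, 31))   # 31 bands, synthetic
mean_spectrum = reference_spectra.mean(axis=0)
centered = reference_spectra - mean_spectrum
_, _, vt = np.linalg.svd(centered, full_matrices=False)
eigvec = vt[0]                                   # first principal component

cube = rng.normal(0.5, 0.05, size=(64, 64, 31))  # stand-in spectral image
score_image = project_onto_eigenvector(cube, eigvec, mean_spectrum)
print(score_image.shape)                         # (64, 64)
```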

  20. Object Classification With Joint Projection and Low-Rank Dictionary Learning.

    PubMed

    Foroughi, Homa; Ray, Nilanjan; Hong Zhang

    2018-02-01

    For an object classification system, the most critical obstacles toward real-world applications are often caused by large intra-class variability, arising from different lightings, occlusion, and corruption, in limited sample sets. Most methods in the literature would fail when the training samples are heavily occluded, corrupted or have significant illumination or viewpoint variations. Moreover, most existing methods, and especially deep learning-based methods, need large training sets to achieve a satisfactory recognition performance. Although pre-training a network on a generic large-scale data set and fine-tuning it on the small-sized target data set is a widely used technique, this does not help when the content of the base and target data sets is very different. To address these issues simultaneously, we propose a joint projection and low-rank dictionary learning method using dual graph constraints. Specifically, a structured class-specific dictionary is learned in the low-dimensional space, and the discrimination is further improved by imposing a graph constraint on the coding coefficients that maximizes intra-class compactness and inter-class separability. We enforce structural incoherence and low-rank constraints on sub-dictionaries to reduce the redundancy among them, and also make them robust to variations and outliers. To preserve the intrinsic structure of data, we introduce a supervised neighborhood graph into the framework to make the proposed method robust to small-sized and high-dimensional data sets. Experimental results on several benchmark data sets verify the superior performance of our method for object classification of small-sized data sets, which include a considerable amount of different kinds of variation, and may have high-dimensional feature vectors.

  1. Design and verification of the miniature optical system for small object surface profile fast scanning

    NASA Astrophysics Data System (ADS)

    Chi, Sheng; Lee, Shu-Sheng; Huang, Jen-Yu; Lai, Ti-Yu; Jan, Chia-Ming; Hu, Po-Chi

    2016-04-01

    With the progress of optical technologies, various commercial 3D surface contour scanners are now on the market. Most of them are used for reconstructing the surface profile of molds or mechanical objects larger than 50 mm×50 mm×50 mm, and the scanning systems themselves measure roughly 300 mm×300 mm×100 mm. Few optical systems have been commercialized for fast surface-profile scanning of small objects less than 10 mm×10 mm×10 mm in size. Therefore, a miniature optical system has been designed and developed in this work for that purpose. Since the most common scanning method for such systems is line-scan technology, we developed a pseudo-phase-shifting digital projection technique based on projected fringes and phase reconstruction. A projector was used to project digital fringe patterns onto the object, and fringe intensity images of the reference plane and of the sample object were recorded by a CMOS camera. The phase difference between the plane and the object can be calculated from the fringe images, and the surface profile of the object was reconstructed from these phase differences. The traditional phase-shifting method relies on a PZT actuator or a precisely controlled motor to adjust the light source or grating, which is one of the limitations for high-speed scanning. Compared with the traditional optical setup, we used a micro projector to project the digital fringe patterns onto the sample. This shortened the phase-shifting processing time and made the controlled phase differences between the shifted patterns more precise. In addition, an optical path designed around a portable scanning device was used to minimize the size and reduce the number of system components. A screwdriver section of about 7 mm×5 mm×5 mm was scanned and its surface profile successfully restored. The experimental results showed that the measurement area of our system can be smaller than 10 mm×10 mm, the precision reached ±10 μm, and the scanning time for each surface of an object was less than 15 seconds. This demonstrates that our system has the potential to serve as a fast scanner for small-object surface profiles.
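
    The core computation in fringe-projection profilometry of this kind is recovering a wrapped phase map from phase-shifted fringe images and converting the object-minus-reference phase difference into height. The sketch below shows a standard four-step phase-shifting calculation on synthetic fringes; it is a generic illustration under assumed geometry (the height_per_radian factor is a placeholder), not the system described in the paper.

```python
# Four-step phase-shifting fringe analysis on synthetic data.
import numpy as np

def wrapped_phase(i0, i1, i2, i3):
    """Four intensity images with 0, 90, 180 and 270 degree fringe shifts."""
    return np.arctan2(i3 - i1, i0 - i2)

def height_map(obj_images, ref_images, height_per_radian=0.01):
    # Unwrap row-wise then column-wise; adequate for smooth synthetic surfaces.
    phi_obj = np.unwrap(np.unwrap(wrapped_phase(*obj_images), axis=0), axis=1)
    phi_ref = np.unwrap(np.unwrap(wrapped_phase(*ref_images), axis=0), axis=1)
    return (phi_obj - phi_ref) * height_per_radian   # e.g. mm per radian (placeholder)

# Synthetic demonstration on a gently sloped surface.
y, x = np.mgrid[0:128, 0:128]
carrier = 2 * np.pi * x / 16
surface_phase = 0.05 * y                              # fake object-induced phase
ref = [1 + np.cos(carrier + k * np.pi / 2) for k in range(4)]
obj = [1 + np.cos(carrier + surface_phase + k * np.pi / 2) for k in range(4)]
print(height_map(obj, ref).mean())
```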

  2. Truck size and weight enforcement technologies : state of the practice

    DOT National Transportation Integrated Search

    2009-05-01

    This report is a deliverable of Task 2 of FHWA's Truck Size and Weight Enforcement Technology Project. The primary project objective was to recommend strategies to encourage the deployment of roadside technologies to improve truck size and weight e...

  3. Recommendations for the use of mist nets for inventory and monitoring of bird populations

    USGS Publications Warehouse

    Ralph, C. John; Dunn, Erica H.; Peach, Will J.; Handel, Colleen M.; Ralph, C. John; Dunn, Erica H.

    2004-01-01

    We provide recommendations on the best practices for mist netting for the purposes of monitoring population parameters such as abundance and demography. Studies should be carefully thought out before nets are set up, to ensure that sampling design and estimated sample size will allow study objectives to be met. Station location, number of nets, type of nets, net placement, and schedule of operation should be determined by the goals of the particular project, and we provide guidelines for typical mist-net studies. In the absence of study-specific requirements for novel protocols, commonly used protocols should be used to enable comparison of results among studies. Regardless of the equipment, net layout, or netting schedule selected, it is important for all studies that operations be strictly standardized, and a well-written operation protocol will help in attaining this goal. We provide recommendations for data to be collected on captured birds, and emphasize the need for good training of project personnel.

  4. NASA's Atmospheric Effects of Aviation Project: Results of the August 1999 Aerosol Measurement Intercomparison Workshop, Laboratory Phase

    NASA Technical Reports Server (NTRS)

    Cofer, W. Randy, III; Anderson, Bruce E.; Connors, V. S.; Wey, C. C.; Sanders, T.; Twohy, C.; Brock, C. A.; Winstead, E. L.; Pui, D.; Chen, Da-Ren

    2001-01-01

    During August 1-14, 1999, NASA's Atmospheric Effects of Aviation Project (AEAP) convened a workshop at the NASA Langley Research Center to try to determine why such a wide variation in aerosol emissions indices and chemical and physical properties has been reported by various independent AEAP-supported research teams trying to characterize the exhaust emissions of subsonic commercial aircraft. This workshop was divided into two phases, a laboratory phase and a field phase. The laboratory phase consisted of supplying known particle number densities (concentrations) and particle size distributions to a common manifold for the participating research teams to sample and analyze. The field phase was conducted on an aircraft run-up pad. Participating teams actually sampled aircraft exhaust generated by a Langley T-38 Talon aircraft at 1 and 9 m behind the engine at engine powers ranging from 48 to 100 percent. Results from the laboratory phase of this intercomparison workshop are reported in this paper.

  5. Exome sequencing of extreme phenotypes identifies DCTN4 as a modifier of chronic Pseudomonas aeruginosa infection in cystic fibrosis.

    PubMed

    Emond, Mary J; Louie, Tin; Emerson, Julia; Zhao, Wei; Mathias, Rasika A; Knowles, Michael R; Wright, Fred A; Rieder, Mark J; Tabor, Holly K; Nickerson, Deborah A; Barnes, Kathleen C; Gibson, Ronald L; Bamshad, Michael J

    2012-07-08

    Exome sequencing has become a powerful and effective strategy for the discovery of genes underlying Mendelian disorders. However, use of exome sequencing to identify variants associated with complex traits has been more challenging, partly because the sample sizes needed for adequate power may be very large. One strategy to increase efficiency is to sequence individuals who are at both ends of a phenotype distribution (those with extreme phenotypes). Because the frequency of alleles that contribute to the trait is enriched in one or both phenotype extremes, a modest sample size can potentially be used to identify novel candidate genes and/or alleles. As part of the National Heart, Lung, and Blood Institute (NHLBI) Exome Sequencing Project (ESP), we used an extreme phenotype study design to discover that variants in DCTN4, encoding a dynactin protein, are associated with time to first P. aeruginosa airway infection, chronic P. aeruginosa infection and mucoid P. aeruginosa in individuals with cystic fibrosis.

  6. A practical guide to value of information analysis.

    PubMed

    Wilson, Edward C F

    2015-02-01

    Value of information analysis is a quantitative method to estimate the return on investment in proposed research projects. It can be used in a number of ways. Funders of research may find it useful to rank projects in terms of the expected return on investment from a variety of competing projects. Alternatively, trialists can use the principles to identify the efficient sample size of a proposed study as an alternative to traditional power calculations, and finally, a value of information analysis can be conducted alongside an economic evaluation as a quantitative adjunct to the 'future research' or 'next steps' section of a study write up. The purpose of this paper is to present a brief introduction to the methods, a step-by-step guide to calculation and a discussion of issues that arise in their application to healthcare decision making. Worked examples are provided in the accompanying online appendices as Microsoft Excel spreadsheets.
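
    The central quantity in a value of information analysis is the expected value of perfect information (EVPI): the difference between the expected net benefit of deciding with perfect knowledge and the expected net benefit of the best decision under current uncertainty. The Monte Carlo sketch below illustrates the calculation with made-up numbers; it is not taken from the paper's worked examples.

```python
# Toy EVPI calculation: E[max over options] - max over options of E[net benefit].
import numpy as np

rng = np.random.default_rng(42)
n_sim = 100_000

# Hypothetical net benefit of two treatments (monetary units), sampled from
# the decision model's parameter uncertainty.
nb_standard = rng.normal(10_000, 1_500, n_sim)
nb_new = rng.normal(10_400, 2_000, n_sim)
net_benefit = np.column_stack([nb_standard, nb_new])

expected_nb_per_option = net_benefit.mean(axis=0)
value_current_info = expected_nb_per_option.max()       # choose the best option on average
value_perfect_info = net_benefit.max(axis=1).mean()     # choose the best option per simulation

evpi_per_patient = value_perfect_info - value_current_info
print(f"EVPI per patient: {evpi_per_patient:.0f}")
print(f"Population EVPI (10,000 patients, assumed): {10_000 * evpi_per_patient:,.0f}")
```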

  7. Discriminant locality preserving projections based on L1-norm maximization.

    PubMed

    Zhong, Fujin; Zhang, Jiashu; Li, Defang

    2014-11-01

    Conventional discriminant locality preserving projection (DLPP) is a dimensionality reduction technique based on manifold learning, which has demonstrated good performance in pattern recognition. However, because its objective function is based on the distance criterion using L2-norm, conventional DLPP is not robust to outliers which are present in many applications. This paper proposes an effective and robust DLPP version based on L1-norm maximization, which learns a set of local optimal projection vectors by maximizing the ratio of the L1-norm-based locality preserving between-class dispersion and the L1-norm-based locality preserving within-class dispersion. The proposed method is proven to be feasible and also robust to outliers while overcoming the small sample size problem. The experimental results on artificial datasets, Binary Alphadigits dataset, FERET face dataset and PolyU palmprint dataset have demonstrated the effectiveness of the proposed method.

  8. Forensic characterization of camcorded movies: digital cinema vs. celluloid film prints

    NASA Astrophysics Data System (ADS)

    Rolland-Nevière, Xavier; Chupeau, Bertrand; Doërr, Gwenaël; Blondé, Laurent

    2012-03-01

    Digital camcording in the premises of cinema theaters is the main source of pirate copies of newly released movies. To trace such recordings, watermarking systems are exploited in order for each projection to be unique and thus identifiable. The forensic analysis to recover these marks is different for digital and legacy cinemas. To avoid running both detectors, a reliable oracle discriminating between cams originating from analog or digital projections is required. This article details a classification framework relying on three complementary features: the spatial uniformity of the screen illumination, the vertical (in)stability of the projected image, and the luminance artifacts due to the interplay between the display and acquisition devices. The system has been tuned with cams captured in a controlled environment and benchmarked against a medium-sized dataset (61 samples) composed of real-life pirate cams. Reported experimental results demonstrate that such a framework yields over 80% classification accuracy.

  9. Assessment and management of burn pain at the Komfo Anokye Teaching Hospital: a best practice implementation project.

    PubMed

    Bayuo, Jonathan; Munn, Zachary; Campbell, Jared

    2017-09-01

    Pain management is a significant issue in health facilities in Ghana. For burn patients, this is even more challenging as burn pain has varied facets. Despite the existence of pharmacological agents for pain management, complaints of pain still persist. The aim of this project was to identify pain management practices in the burns units of Komfo Anokye Teaching Hospital, compare these approaches to best practice, and implement strategies to enhance compliance with standards. Ten evidence-based audit criteria were developed from evidence summaries. Using the Joanna Briggs Institute Practical Application of Clinical Evidence Software (PACES), a baseline audit was undertaken on a convenience sample of ten patients from the day of admission to the seventh day. Thereafter, the Getting Research into Practice (GRiP) component of PACES was used to identify barriers, strategies, resources and outcomes. After implementation of the strategies, a follow-up audit was undertaken using the same sample size and audit criteria. The baseline results showed poor adherence to best practice. However, following implementation of strategies, including ongoing professional education and provision of assessment tools and protocols, compliance rates improved significantly. Although the success of this project was almost disrupted by an industrial action, collaboration with external bodies enabled the successful completion of the project. Pain management practices in the burns unit improved at the end of the project, which reflects the importance of an audit process, education, providing feedback, group efforts and effective collaboration.

  10. Testing Software Development Project Productivity Model

    NASA Astrophysics Data System (ADS)

    Lipkin, Ilya

    Software development is an increasingly influential factor in today's business environment, and a major issue affecting software development is how an organization estimates projects. If the organization underestimates cost, schedule, and quality requirements, the end results will not meet customer needs. On the other hand, if the organization overestimates these criteria, resources that could have been used more profitably will be wasted. There is no accurate model or measure available that can guide an organization in a quest for software development, with existing estimation models often underestimating software development efforts by as much as 500 to 600 percent. To address this issue, existing models are usually calibrated using local data with a small sample size, but the resulting estimates do not offer improved cost analysis. This study presents a conceptual model for accurately estimating software development, grounded in an extensive literature review and a theoretical analysis based on Sociotechnical Systems (STS) theory. The conceptual model serves as a solution to bridge organizational and technological factors and is validated using an empirical dataset provided by the DoD. Practical implications of this study allow practitioners to concentrate on specific constructs of interest that provide the best value for the least amount of time. This study outlines key contributing constructs that are unique to Software Size E-SLOC, Man-hours Spent, and Quality of the Product, those constructs having the largest contribution to project productivity. This study discusses customer characteristics and provides a framework for a simplified project analysis for source selection evaluation and audit task reviews for customers and suppliers. Theoretical contributions of this study provide an initial theory-based hypothesized project productivity model that can be used as a generic overall model across several application domains such as IT, Command and Control, and Simulation. This research validates findings from previous work concerning software project productivity and leverages those results in this study. The hypothesized project productivity model provides statistical support and validation of expert opinions used by practitioners in the field of software project estimation.

  11. Preliminary findings of chemistry and bioaccessibility in base metal smelter slags.

    PubMed

    Morrison, Anthony L; Gulson, Brian L

    2007-08-15

    Leaching of toxic metals from slag waste produced during smelting of Pb-Zn ores is generally considered to be negligible. A 1.4 million tonne stockpile of slag containing up to 2.5% Pb and other contaminants has accumulated on a smelter site at North Lake Macquarie, New South Wales, Australia, and it has also been freely used within the community for landscaping and drainage projects. It had been suggested that Pb in fine particles derived from the slags may be a potential contributor to the blood Pb of some children in this community, although there is conflicting evidence in the literature for such a hypothesis. Bioaccessibility of lead and selected metals derived from nine slag samples collected from areas of public open space was examined using a relatively simple in vitro gastric dissolution technique. Size analyses of the slag samples demonstrate that finely sized material, which could be ingested, especially by children, was present in the slags. The finer-sized particles contain high levels of Pb (6,490-41,400 ppm), along with Cd and As. Pb bioaccessibility of the slags was high, averaging 45% for -250 microm material and 75% for particles in the size range -53+32 microm. Both bioaccessibility and Pb concentration increased as particle size decreased. Almost 100% of Pb would be bioaccessible in the smallest slag particles (<20 microm), which also contained very high Pb levels ranging from 50,000 to 80,000 ppm and thus constitute a potential health hazard for children.

  12. The accuracy of matrix population model projections for coniferous trees in the Sierra Nevada, California

    USGS Publications Warehouse

    van Mantgem, P.J.; Stephenson, N.L.

    2005-01-01

    1. We assess the use of simple, size-based matrix population models for projecting population trends for six coniferous tree species in the Sierra Nevada, California. We used demographic data from 16 673 trees in 15 permanent plots to create 17 separate time-invariant, density-independent population projection models, and determined differences between trends projected from initial surveys with a 5-year interval and observed data during two subsequent 5-year time steps. 2. We detected departures from the assumptions of the matrix modelling approach in terms of strong growth autocorrelations. We also found evidence of observation errors for measurements of tree growth and, to a more limited degree, recruitment. Loglinear analysis provided evidence of significant temporal variation in demographic rates for only two of the 17 populations. 3. Total population sizes were strongly predicted by model projections, although population dynamics were dominated by carryover from the previous 5-year time step (i.e. there were few cases of recruitment or death). Fractional changes to overall population sizes were less well predicted. Compared with a null model and a simple demographic model lacking size structure, matrix model projections were better able to predict total population sizes, although the differences were not statistically significant. Matrix model projections were also able to predict short-term rates of survival, growth and recruitment. Mortality frequencies were not well predicted. 4. Our results suggest that simple size-structured models can accurately project future short-term changes for some tree populations. However, not all populations were well predicted and these simple models would probably become more inaccurate over longer projection intervals. The predictive ability of these models would also be limited by disturbance or other events that destabilize demographic rates. © 2005 British Ecological Society.
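
    The projection machinery being evaluated is the standard linear model n(t+1) = A n(t), where A is a size-classified transition matrix. The sketch below iterates such a projection over two 5-year time steps with invented matrix entries; it only illustrates the mechanics, not the study's fitted models.

```python
# Size-structured matrix population projection with made-up transition rates.
import numpy as np

# Three size classes: small, medium, large.
A = np.array([
    [0.92, 0.00, 0.08],   # stasis of small trees + recruitment from large trees
    [0.05, 0.95, 0.00],   # growth small -> medium, stasis of medium
    [0.00, 0.03, 0.97],   # growth medium -> large, stasis of large
])

n = np.array([500.0, 300.0, 200.0])   # initial counts per size class (hypothetical)
for step in range(2):                 # two 5-year time steps, mirroring the study design
    n = A @ n
    print(f"step {step + 1}: total = {n.sum():.0f}, by class = {np.round(n, 1)}")
```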

  13. Individualized statistical learning from medical image databases: application to identification of brain lesions.

    PubMed

    Erus, Guray; Zacharaki, Evangelia I; Davatzikos, Christos

    2014-04-01

    This paper presents a method for capturing statistical variation of normal imaging phenotypes, with emphasis on brain structure. The method aims to estimate the statistical variation of a normative set of images from healthy individuals, and identify abnormalities as deviations from normality. A direct estimation of the statistical variation of the entire volumetric image is challenged by the high-dimensionality of images relative to smaller sample sizes. To overcome this limitation, we iteratively sample a large number of lower dimensional subspaces that capture image characteristics ranging from fine and localized to coarser and more global. Within each subspace, a "target-specific" feature selection strategy is applied to further reduce the dimensionality, by considering only imaging characteristics present in a test subject's images. Marginal probability density functions of selected features are estimated through PCA models, in conjunction with an "estimability" criterion that limits the dimensionality of estimated probability densities according to available sample size and underlying anatomy variation. A test sample is iteratively projected to the subspaces of these marginals as determined by PCA models, and its trajectory delineates potential abnormalities. The method is applied to segmentation of various brain lesion types, and to simulated data on which superiority of the iterative method over straight PCA is demonstrated. Copyright © 2014 Elsevier B.V. All rights reserved.
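
    Stripped to its simplest form, the detection principle described above is: learn a PCA model of normal variation, project a test sample onto that subspace, and treat large reconstruction residuals as candidate abnormalities. The sketch below shows that reduced version on synthetic feature vectors; the iterative subspace sampling, target-specific feature selection and estimability criterion of the full method are omitted, and all names and dimensions are assumptions.

```python
# PCA model of "normal" variation and residual-based abnormality scoring.
import numpy as np

rng = np.random.default_rng(7)
normals = rng.normal(0, 1, size=(200, 50))        # 200 healthy samples, 50 features (toy)

mean = normals.mean(axis=0)
centered = normals - mean
_, s, vt = np.linalg.svd(centered, full_matrices=False)
k = 10                                            # retained components (assumption)
components = vt[:k]

test = rng.normal(0, 1, size=50)
test[20:25] += 6.0                                # injected "lesion-like" deviation

coeffs = (test - mean) @ components.T             # projection onto the normal subspace
reconstruction = mean + coeffs @ components
residual = np.abs(test - reconstruction)
print("most abnormal features:", np.argsort(residual)[-5:])
```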

  14. Individualized Statistical Learning from Medical Image Databases: Application to Identification of Brain Lesions

    PubMed Central

    Erus, Guray; Zacharaki, Evangelia I.; Davatzikos, Christos

    2014-01-01

    This paper presents a method for capturing statistical variation of normal imaging phenotypes, with emphasis on brain structure. The method aims to estimate the statistical variation of a normative set of images from healthy individuals, and identify abnormalities as deviations from normality. A direct estimation of the statistical variation of the entire volumetric image is challenged by the high-dimensionality of images relative to smaller sample sizes. To overcome this limitation, we iteratively sample a large number of lower dimensional subspaces that capture image characteristics ranging from fine and localized to coarser and more global. Within each subspace, a “target-specific” feature selection strategy is applied to further reduce the dimensionality, by considering only imaging characteristics present in a test subject’s images. Marginal probability density functions of selected features are estimated through PCA models, in conjunction with an “estimability” criterion that limits the dimensionality of estimated probability densities according to available sample size and underlying anatomy variation. A test sample is iteratively projected to the subspaces of these marginals as determined by PCA models, and its trajectory delineates potential abnormalities. The method is applied to segmentation of various brain lesion types, and to simulated data on which superiority of the iterative method over straight PCA is demonstrated. PMID:24607564

  15. Knowledge and use of information and communication technology by health sciences students of the University of Ghana.

    PubMed

    Dery, Samuel; Vroom, Frances da-Costa; Godi, Anthony; Afagbedzi, Seth; Dwomoh, Duah

    2016-09-01

    Studies have shown that ICT adoption contributes to productivity and economic growth. It is therefore important that health workers have knowledge in ICT to ensure adoption and uptake of ICT tools to enable efficient health delivery. To determine the knowledge and use of ICT among students of the College of Health Sciences at the University of Ghana. This was a cross-sectional study conducted among students in all the five Schools of the College of Health Sciences at the University of Ghana. A total of 773 students were sampled from the Schools. Sampling proportionate to size was then used to determine the sample sizes required for each school, academic programme and level of programme. Simple random sampling was subsequently used to select students from each stratum. Computer knowledge was high among students at almost 99%. About 83% owned computers (p < 0.001) and self-rated computer knowledge was also 87% (p < 0.001). Usage was mostly for studying at 93% (p < 0.001). This study shows students have adequate knowledge and use of computers. It presents an opportunity to introduce ICT in healthcare delivery to them. This will ensure their adequate preparedness to embrace new ways of delivering care to improve service delivery. Africa Build Project, Grant Number: FP7-266474.
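
    The two-stage sampling scheme described above (allocation proportional to stratum size, then simple random sampling within each stratum) can be sketched as follows; the stratum sizes are invented for illustration and are not the study's actual enrolment figures.

```python
# Proportional-to-size allocation followed by simple random sampling per stratum.
import numpy as np

rng = np.random.default_rng(3)
stratum_sizes = {"SchoolA": 1200, "SchoolB": 800, "SchoolC": 450,
                 "SchoolD": 300, "SchoolE": 250}   # hypothetical enrolments
total_sample = 773

population = sum(stratum_sizes.values())
# Rounded allocations may need a +/-1 adjustment to hit the exact total.
allocation = {name: round(total_sample * n / population)
              for name, n in stratum_sizes.items()}

# Simple random sample of student indices within each stratum.
selected = {name: rng.choice(n, size=allocation[name], replace=False)
            for name, n in stratum_sizes.items()}
print(allocation, sum(allocation.values()))
```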

  16. Interim analysis: A rational approach of decision making in clinical trial.

    PubMed

    Kumar, Amal; Chakraborty, Bhaswat S

    2016-01-01

    Interim analysis, especially of sizeable trials, keeps the decision process free of conflict of interest while considering cost, resources, and the meaningfulness of the project. Whenever necessary, such interim analysis can also call for potential termination or appropriate modification of sample size, study design, and even an early declaration of success. Given the extraordinary size and complexity of trials today, this rational approach helps to analyze and predict the outcomes of a clinical trial by incorporating what is learned during the course of a study or a clinical development program. Such an approach can also fill the gap by directing resources toward relevant and optimized clinical trials between unmet medical needs and interventions being tested currently, rather than fulfilling only business and profit goals.

  17. The ATLAS3D project - I. A volume-limited sample of 260 nearby early-type galaxies: science goals and selection criteria

    NASA Astrophysics Data System (ADS)

    Cappellari, Michele; Emsellem, Eric; Krajnović, Davor; McDermid, Richard M.; Scott, Nicholas; Verdoes Kleijn, G. A.; Young, Lisa M.; Alatalo, Katherine; Bacon, R.; Blitz, Leo; Bois, Maxime; Bournaud, Frédéric; Bureau, M.; Davies, Roger L.; Davis, Timothy A.; de Zeeuw, P. T.; Duc, Pierre-Alain; Khochfar, Sadegh; Kuntschner, Harald; Lablanche, Pierre-Yves; Morganti, Raffaella; Naab, Thorsten; Oosterloo, Tom; Sarzi, Marc; Serra, Paolo; Weijmans, Anne-Marie

    2011-05-01

    The ATLAS3D project is a multiwavelength survey combined with a theoretical modelling effort. The observations span from the radio to the millimetre and optical, and provide multicolour imaging, two-dimensional kinematics of the atomic (H I), molecular (CO) and ionized gas (Hβ, [O III] and [N I]), together with the kinematics and population of the stars (Hβ, Fe5015 and Mg b), for a carefully selected, volume-limited (1.16 × 10^5 Mpc^3) sample of 260 early-type (elliptical E and lenticular S0) galaxies (ETGs). The models include semi-analytic, N-body binary mergers and cosmological simulations of galaxy formation. Here we present the science goals for the project and introduce the galaxy sample and the selection criteria. The sample consists of nearby (D < 42 Mpc, |δ - 29°| < 35°, |b| > 15°) morphologically selected ETGs extracted from a parent sample of 871 galaxies (8 per cent E, 22 per cent S0 and 70 per cent spirals) brighter than M_K = -21.5 mag (stellar mass M★ ≳ 6 × 10^9 M⊙). We analyse possible selection biases and we conclude that the parent sample is essentially complete and statistically representative of the nearby galaxy population. We present the size-luminosity relation for the spirals and ETGs and show that the ETGs in the ATLAS3D sample define a tight red sequence in a colour-magnitude diagram, with few objects in the transition from the blue cloud. We describe the strategy of the SAURON integral field observations and the extraction of the stellar kinematics with the pPXF method. We find typical 1σ errors of ΔV ≈ 6 km s^-1, Δσ ≈ 7 km s^-1, Δh_3 ≈ Δh_4 ≈ 0.03 in the mean velocity, the velocity dispersion and Gauss-Hermite (GH) moments for galaxies with effective dispersion σ_e ≳ 120 km s^-1. For galaxies with lower σ_e (≈40 per cent of the sample) the GH moments are gradually penalized by pPXF towards zero to suppress the noise produced by the spectral undersampling and only V and σ can be measured. We give an overview of the characteristics of the other main data sets already available for our sample and of the ongoing modelling projects.

  18. Chi-Squared Test of Fit and Sample Size-A Comparison between a Random Sample Approach and a Chi-Square Value Adjustment Method.

    PubMed

    Bergh, Daniel

    2015-01-01

    Chi-square statistics are commonly used for tests of fit of measurement models. Chi-square is also sensitive to sample size, which is why several approaches to handle large samples in test of fit analysis have been developed. One strategy to handle the sample size problem may be to adjust the sample size in the analysis of fit. An alternative is to adopt a random sample approach. The purpose of this study was to analyze and to compare these two strategies using simulated data. Given an original sample size of 21,000, for reductions of sample sizes down to the order of 5,000 the adjusted sample size function works as well as the random sample approach. In contrast, when applying adjustments to sample sizes of lower order, the adjustment function is less effective at approximating the chi-square value for an actual random sample of the relevant size. Hence, the fit is exaggerated and misfit underestimated using the adjusted sample size function. Although there are big differences in chi-square values between the two approaches at lower sample sizes, the inferences based on the p-values may be the same.
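
    The two strategies being compared can be illustrated with a toy frequency table: one rescales the chi-square statistic computed on the full sample to a smaller nominal sample size (assumed here to be the proportional rescaling by (n_adjusted - 1)/(n_original - 1)), the other actually draws a random subsample and recomputes the statistic. The sketch below is a simplified stand-in using a one-way chi-square rather than a measurement model.

```python
# Adjusted-sample-size rescaling vs. an actual random subsample, on toy counts.
import numpy as np
from scipy.stats import chisquare

rng = np.random.default_rng(11)
full_sample = rng.choice(4, size=21_000, p=[0.26, 0.25, 0.25, 0.24])

def one_way_chi2(values):
    counts = np.bincount(values, minlength=4)
    return chisquare(counts).statistic      # uniform expected frequencies

chi2_full = one_way_chi2(full_sample)

n_adjusted = 5_000
chi2_adjusted = chi2_full * (n_adjusted - 1) / (len(full_sample) - 1)

subsample = rng.choice(full_sample, size=n_adjusted, replace=False)
chi2_subsample = one_way_chi2(subsample)

print(f"full n: chi2 = {chi2_full:.2f}")
print(f"adjusted to n = 5,000: chi2 = {chi2_adjusted:.2f}")
print(f"random subsample n = 5,000: chi2 = {chi2_subsample:.2f}")
```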

  19. Advanced High Brilliance X-Ray Source

    NASA Technical Reports Server (NTRS)

    Gibson, Walter M.

    1998-01-01

    The possibility to dramatically increase the efficiency of laboratory based protein structure measurements through the use of polycapillary X-ray optics was investigated. This project was initiated April 1, 1993 and concluded December 31, 1996 (including a no-cost extension from June 31, 1996). This is a final report of the project. The basis for the project is the ability to collect X-rays from divergent electron bombardment laboratory X-ray sources and redirect them into quasiparallel or convergent (focused) beams. For example, a 0.1 radian (approx. 6 deg) portion of a divergent beam collected by a polycapillary collimator and transformed into a quasiparallel beam of 3 milliradian (0.2 deg) could give a gain of 6^2/0.2^2 x T for the intensity of a diffracted beam from a crystal with a 0.2 deg diffraction width. T is the transmission efficiency of the polycapillary diffraction optic, and for T = 0.5, the gain would be 36/0.04 x 0.5 = 450. In practice, the effective collection angle will depend on the source spot size, the input focal length of the optic (usually limited by the source spot-to-window distance on the x-ray tube) and the size of the crystal relative to the output diameter of the optic. The transmission efficiency, T, depends on the characteristics (fractional open area, surface roughness, shape and channel diameter) of the polycapillary optic and is typically in the range 0.2-0.4. These effects could substantially reduce the expected efficiency gain. During the course of this study, the possibility to use a weakly focused beam (0.5 deg convergence) was suggested, which could give an additional 10-20x efficiency gain for small samples. Weakly focused beams from double focusing mirrors are frequently used for macromolecular crystallography studies. Furthermore, the crystals are typically oscillated by as much as 2 deg during each X-ray exposure in order to increase the reciprocal space (number of crystal planes) sampled, and use of a slightly convergent beam could, in principle, provide a similar sampling benefit without oscillation. Although more problematic, because of complications in analyzing the diffraction patterns, it was also suggested that even more extreme beam convergence might be used to give another order of magnitude intensity gain and an even smaller focused spot size, which could make it possible to study smaller protein crystals than can be studied using standard laboratory based X-ray diffraction systems. This project represents the first systematic investigation of these possibilities. As initially proposed, the contract included requirements for design, purchase, evaluation and delivery of three polycapillary lenses to the Laboratory for Structural Biology at MSFC and demonstration of such optics at MSFC for selected protein crystal diffraction applications.
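
    For readers who want to reproduce the gain arithmetic quoted above, the calculation is simply the square of the ratio of collected divergence to output divergence (matched here to the crystal's diffraction width), scaled by the optic's transmission; the values below are the ones from the text.

```python
# Gain estimate for a polycapillary collimator feeding a diffracting crystal:
# gain ~ (collected divergence / output divergence)^2 * T, values from the text.
collected_divergence_deg = 6.0    # ~0.1 rad accepted by the optic
output_divergence_deg = 0.2       # quasiparallel output, equal to the diffraction width
transmission = 0.5                # T

gain = (collected_divergence_deg / output_divergence_deg) ** 2 * transmission
print(gain)   # 450.0
```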

  20. A General Exponential Framework for Dimensionality Reduction.

    PubMed

    Wang, Su-Jing; Yan, Shuicheng; Yang, Jian; Zhou, Chun-Guang; Fu, Xiaolan

    2014-02-01

    As a general framework, Laplacian embedding, based on a pairwise similarity matrix, infers low dimensional representations from high dimensional data. However, it generally suffers from three issues: 1) algorithmic performance is sensitive to the size of neighbors; 2) the algorithm encounters the well-known small sample size (SSS) problem; and 3) the algorithm de-emphasizes small distance pairs. To address these issues, here we propose exponential embedding using matrix exponential and provide a general framework for dimensionality reduction. In the framework, the matrix exponential can be roughly interpreted by the random walk over the feature similarity matrix, and thus is more robust. The positive definite property of matrix exponential deals with the SSS problem. The behavior of the decay function of exponential embedding is more significant in emphasizing small distance pairs. Under this framework, we apply matrix exponential to extend many popular Laplacian embedding algorithms, e.g., locality preserving projections, unsupervised discriminant projections, and marginal Fisher analysis. Experiments conducted on the synthesized data, UCI, and the Georgia Tech face database show that the proposed new framework can well address the issues mentioned above.
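
    One concrete instance of the idea described above is a Fisher-style discriminant in which the within- and between-class scatter matrices are replaced by their matrix exponentials; because a matrix exponential is always positive definite, the generalized eigenproblem remains well posed even when there are more dimensions than samples. The sketch below, on synthetic data, is a simplified illustration of that construction rather than the authors' full framework; normalising the scatters before exponentiation is a practical assumption.

```python
# Discriminant direction from matrix exponentials of the scatter matrices.
import numpy as np
from scipy.linalg import expm, eigh

rng = np.random.default_rng(5)
d, n_per_class = 100, 20                      # more dimensions than samples (SSS setting)
X1 = rng.normal(0.0, 1.0, size=(n_per_class, d))
X2 = rng.normal(0.5, 1.0, size=(n_per_class, d))
X = np.vstack([X1, X2])
mean_all = X.mean(axis=0)

Sw = np.zeros((d, d))                         # within-class scatter (singular here)
Sb = np.zeros((d, d))                         # between-class scatter
for Xc in (X1, X2):
    mc = Xc.mean(axis=0)
    Sw += (Xc - mc).T @ (Xc - mc)
    diff = (mc - mean_all)[:, None]
    Sb += Xc.shape[0] * (diff @ diff.T)

# Normalise before exponentiating to keep the exponentials numerically tame.
Sw /= np.linalg.norm(Sw)
Sb /= np.linalg.norm(Sb)

# Generalized eigenproblem on the matrix exponentials instead of the raw scatters.
evals, evecs = eigh(expm(Sb), expm(Sw))
projection_direction = evecs[:, -1]           # direction with the largest criterion value
print(projection_direction.shape)             # (100,)
```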

  1. Optimizing and Validating a Brief Assessment for Identifying Children of Service Members at Risk for Psychological Health Problems Following Parent Deployment

    DTIC Science & Technology

    2017-09-01

    these groups. In the 2014/2015 year, efforts focused on securing a commitment from the United States Marine Corps to host the study. In Winter 2014...we can reach an adjusted sample size target in the 2017/2018 project year by expanding our recruitment to incorporate deploying infantry groups ...Vocabulary Test Revised. Circle Pines, MN: American Guidance Service. George, C. & Solomon, J. (2008). The caregiving system: A behavioral systems approach

  2. Validity of a Battery of Experimental Tests in Predicting Performance of Navy Project 100,000 Personnel

    DTIC Science & Technology

    1980-09-01

    divided into four batteries, each of which was then administered to samples ranging in size from 5,000 to 12,000 recruits. The men tested were...reject them or to accept only as many as required to fill quotas. Opponents of this policy have argued that men low on the aptitude scale could and...ratings, since such ratings represent a high level of achievement for low-scoring men. Length of Service and Attrition Characteristics To permit

  3. Manned and Unmanned Aircraft Effectiveness in Fast Attack Craft / Fast Inshore Attack Craft ASUW Kill Chain Execution

    DTIC Science & Technology

    2016-09-01

    par. 4) Based on a RED projected size of 22.16 m, a sample calculation for the unadjusted single shot probability of kill for HELLFIRE missiles is...framework based on intelligent objects (SIMIO) environment to model a fast attack craft/fast inshore attack craft anti-surface warfare expanded kill chain...concept of operation efficiency. Based on the operational environment, low cost and less capable unmanned aircraft provide an alternative to the

  4. The Outer Solar System Origins Survey (OSSOS): a status update

    NASA Astrophysics Data System (ADS)

    Kavelaars, J. J.; Bannister, Michele T.; Gladman, Brett; Petit, Jean-Marc; Gwyn, S.; Chen, Ying-Tung Charles; Alexandersen, Mike; Volk, Kat

    2015-11-01

    OSSOS is a 560 hour imaging survey using MegaPrime on the CFHT designed to produce a well characterized sample of Kuiper belt objects whose orbital and physical properties will provide useful constraints on the evolutionary history of the outer solar system. Started in 2013, this 4-year project has now entered the final year of survey operation. With half (84 square degrees) of the observations fully analyzed, OSSOS has detected and tracked 219 TNOs brighter than our typical flux limit of r' ~ 24.5. This is 30% more detections than the entire Canada-France Ecliptic Plane Survey (CFEPS), a precursor project. Based on the first quarter of the survey, the OSSOS project confirms the CFEPS-L7 model of the orbital structure of the TNO population (Petit et al., 2011) and has provided additional evidence of complex structure in the size distribution of scattering TNOs (Shankman et al., 2015). A number of the OSSOS science teams are presenting results at this meeting: Bannister et al., Benecchi et al., Fraser et al. and Volk et al. report on a variety of aspects of the orbital and physical properties of the OSSOS-detected samples. Here we present a summary of the current status of the survey: field locations, basic characterization, detection rates and some global detection statistics. More details on the OSSOS project are available from our web site: www.ossos-survey.org

  5. Evaluation of current Australian health service accreditation processes (ACCREDIT-CAP): protocol for a mixed-method research project.

    PubMed

    Hinchcliff, Reece; Greenfield, David; Moldovan, Max; Pawsey, Marjorie; Mumford, Virginia; Westbrook, Johanna Irene; Braithwaite, Jeffrey

    2012-01-01

    Accreditation programmes aim to improve the quality and safety of health services, and have been widely implemented. However, there is conflicting evidence regarding the outcomes of existing programmes. The Accreditation Collaborative for the Conduct of Research, Evaluation and Designated Investigations through Teamwork-Current Accreditation Processes (ACCREDIT-CAP) project is designed to address key gaps in the literature by evaluating the current processes of three accreditation programmes used across Australian acute, primary and aged care services. The project comprises three mixed-method studies involving documentary analyses, surveys, focus groups and individual interviews. Study samples will comprise stakeholders from across the Australian healthcare system: accreditation agencies; federal and state government departments; consumer advocates; professional colleges and associations; and staff of acute, primary and aged care services. Sample sizes have been determined to ensure results allow robust conclusions. Qualitative information will be thematically analysed, supported by the use of textual grouping software. Quantitative data will be subjected to a variety of analytical procedures, including descriptive and comparative statistics. The results are designed to inform health system policy and planning decisions in Australia and internationally. The project has been approved by the University of New South Wales Human Research Ethics Committee (approval number HREC 10274). Results will be reported to partner organisations, healthcare consumers and other stakeholders via peer-reviewed publications, conference and seminar presentations, and a publicly accessible website.

  6. A microprocessor based anti-aliasing filter for a PCM system

    NASA Technical Reports Server (NTRS)

    Morrow, D. C.; Sandlin, D. R.

    1984-01-01

    Described is the design and evaluation of a microprocessor-based digital filter. The filter was made to investigate the feasibility of a digital replacement for the analog pre-sampling filters used in telemetry systems at the NASA Ames-Dryden Flight Research Facility (DFRF). The digital filter will utilize an Intel 2920 Analog Signal Processor (ASP) chip. Testing includes measurements of (1) the filter frequency response and (2) the filter signal resolution. The evaluation of the digital filter was made on the basis of circuit size, projected environmental stability and filter resolution. The 2920-based digital filter was found to meet or exceed the pre-sampling filter specifications for limited signal resolution applications.
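
    Whether implemented in analog hardware or on a signal-processor chip, the job of a pre-sampling filter is to attenuate spectral content above the Nyquist frequency before digitization. The sketch below designs and checks a generic low-pass anti-aliasing filter in software; the sampling rate, cutoff and order are assumptions for illustration, not the 2920-based design evaluated in the report.

```python
# Generic anti-aliasing low-pass filter design and frequency-response check.
import numpy as np
from scipy import signal

fs = 800.0            # PCM channel sampling rate, Hz (assumption)
cutoff = 100.0        # passband edge, Hz (assumption)
order = 4

sos = signal.butter(order, cutoff, btype="low", fs=fs, output="sos")
freqs, response = signal.sosfreqz(sos, worN=2048, fs=fs)

# Attenuation at the last computed frequency just below Nyquist.
idx = np.searchsorted(freqs, fs / 2) - 1
nyquist_attenuation_db = 20 * np.log10(np.abs(response[idx]))
print(f"attenuation near Nyquist ({fs / 2:.0f} Hz): {nyquist_attenuation_db:.1f} dB")
```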

  7. Efficient Sample Tracking With OpenLabFramework

    PubMed Central

    List, Markus; Schmidt, Steffen; Trojnar, Jakub; Thomas, Jochen; Thomassen, Mads; Kruse, Torben A.; Tan, Qihua; Baumbach, Jan; Mollenhauer, Jan

    2014-01-01

    The advance of new technologies in biomedical research has led to a dramatic growth in experimental throughput. Projects therefore steadily grow in size and involve a larger number of researchers. Spreadsheets traditionally used are thus no longer suitable for keeping track of the vast amounts of samples created and need to be replaced with state-of-the-art laboratory information management systems. Such systems have been developed in large numbers, but they are often limited to specific research domains and types of data. One domain so far neglected is the management of libraries of vector clones and genetically engineered cell lines. OpenLabFramework is a newly developed web-application for sample tracking, particularly laid out to fill this gap, but with an open architecture allowing it to be extended for other biological materials and functional data. Its sample tracking mechanism is fully customizable and aids productivity further through support for mobile devices and barcoded labels. PMID:24589879

  8. Experimental light scattering by small particles: system design and calibration

    NASA Astrophysics Data System (ADS)

    Maconi, Göran; Kassamakov, Ivan; Penttilä, Antti; Gritsevich, Maria; Hæggström, Edward; Muinonen, Karri

    2017-06-01

    We describe a setup for precise multi-angular measurements of light scattered by mm- to μm-sized samples. We present a calibration procedure that ensures accurate measurements. Calibration is done using a spherical sample (d = 5 mm, n = 1.517) fixed on a static holder. The ultimate goal of the project is to allow accurate multi-wavelength measurements (the full Mueller matrix) of single-particle samples which are levitated ultrasonically. The system comprises a tunable multimode Argon-krypton laser, with 12 wavelengths ranging from 465 to 676 nm, a linear polarizer, a reference photomultiplier tube (PMT) monitoring beam intensity, and several PMTs mounted radially towards the sample at an adjustable radius. The current 150 mm radius allows measuring all azimuthal angles except for ±4° around the backward scattering direction. The measurement angle is controlled by a motor-driven rotational stage with an accuracy of 15'.

  9. Case studies of transportation investment to identify the impacts on the local and state economy.

    DOT National Transportation Integrated Search

    2013-01-01

    This project provides case studies of the impact of transportation investments on local economies. We use multiple approaches to measure impacts since the effects of transportation projects can vary according to the size of a project and the size...

  10. 46 CFR 160.040-2 - Type and size.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ...: SPECIFICATIONS AND APPROVAL LIFESAVING EQUIPMENT Line-Throwing Appliance, Impulse-Projected Rocket Type (and Equipment) § 160.040-2 Type and size. (a) Impulse-projected rocket type line-throwing appliances required by... and hand directed, or suitably supported and hand directed. (b) Impulse-projected rocket type line...

  11. 46 CFR 160.040-2 - Type and size.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ...: SPECIFICATIONS AND APPROVAL LIFESAVING EQUIPMENT Line-Throwing Appliance, Impulse-Projected Rocket Type (and Equipment) § 160.040-2 Type and size. (a) Impulse-projected rocket type line-throwing appliances required by... and hand directed, or suitably supported and hand directed. (b) Impulse-projected rocket type line...

  12. 46 CFR 160.040-2 - Type and size.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...: SPECIFICATIONS AND APPROVAL LIFESAVING EQUIPMENT Line-Throwing Appliance, Impulse-Projected Rocket Type (and Equipment) § 160.040-2 Type and size. (a) Impulse-projected rocket type line-throwing appliances required by... and hand directed, or suitably supported and hand directed. (b) Impulse-projected rocket type line...

  13. 46 CFR 160.040-2 - Type and size.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ...: SPECIFICATIONS AND APPROVAL LIFESAVING EQUIPMENT Line-Throwing Appliance, Impulse-Projected Rocket Type (and Equipment) § 160.040-2 Type and size. (a) Impulse-projected rocket type line-throwing appliances required by... and hand directed, or suitably supported and hand directed. (b) Impulse-projected rocket type line...

  14. 46 CFR 160.040-2 - Type and size.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ...: SPECIFICATIONS AND APPROVAL LIFESAVING EQUIPMENT Line-Throwing Appliance, Impulse-Projected Rocket Type (and Equipment) § 160.040-2 Type and size. (a) Impulse-projected rocket type line-throwing appliances required by... and hand directed, or suitably supported and hand directed. (b) Impulse-projected rocket type line...

  15. The characterization of four gene expression analysis in circulating tumor cells made by Multiplex-PCR from the AdnaTest kit on the lab-on-a-chip Agilent DNA 1000 platform.

    PubMed

    Škereňová, Markéta; Mikulová, Veronika; Čapoun, Otakar; Zima, Tomáš

    2016-01-01

    Nowadays, on-a-chip capillary electrophoresis is a routine method for the detection of PCR fragments. The Agilent 2100 Bioanalyzer was one of the first commercial devices in this field. Our project was designed to study the characteristics of the Agilent DNA 1000 kit in PCR fragment analysis as part of a circulating tumour cell (CTC) detection technique. Despite the common use of this kit, a complex analysis of the results from a long-term project is still missing. A commercially available Agilent DNA 1000 kit was used as the final step in CTC detection (AdnaTest) to determine the presence of PCR fragments generated by Multiplex PCR. Data from 30 prostate cancer patients obtained during two years of research were analyzed to determine the trueness and precision of the PCR fragment size determination. Additional experiments were performed to demonstrate the precision (repeatability, reproducibility) and robustness of PCR fragment concentration determination. The trueness and precision of the size determination were below 3% and 2%, respectively. The repeatability of the concentration determination was below 15%. The difference in concentration determination increases when a Multiplex-PCR/storage step is added between the two measurements of one sample. The characteristics established in our study are in concordance with the manufacturer's specifications established for a ladder as a sample. However, the concentration determination may vary depending on chip preparation, sample storage and concentration. The 15% variation in concentration determination repeatability was shown to be partly proportional and can be suppressed by proper normalization.

  16. Modeling motor vehicle crashes using Poisson-gamma models: examining the effects of low sample mean values and small sample size on the estimation of the fixed dispersion parameter.

    PubMed

    Lord, Dominique

    2006-07-01

    There has been considerable research conducted on the development of statistical models for predicting crashes on highway facilities. Despite numerous advancements made for improving the estimation tools of statistical models, the most common probabilistic structure used for modeling motor vehicle crashes remains the traditional Poisson and Poisson-gamma (or Negative Binomial) distribution; when crash data exhibit over-dispersion, the Poisson-gamma model is usually the model of choice most favored by transportation safety modelers. Crash data collected for safety studies often have the unusual attributes of being characterized by low sample mean values. Studies have shown that the goodness-of-fit of statistical models produced from such datasets can be significantly affected. This issue has been defined as the "low mean problem" (LMP). Despite recent developments on methods to circumvent the LMP and test the goodness-of-fit of models developed using such datasets, no work has so far examined how the LMP affects the fixed dispersion parameter of Poisson-gamma models used for modeling motor vehicle crashes. The dispersion parameter plays an important role in many types of safety studies and should, therefore, be reliably estimated. The primary objective of this research project was to verify whether the LMP affects the estimation of the dispersion parameter and, if so, to determine the magnitude of the problem. The secondary objective consisted of determining the effects of an unreliably estimated dispersion parameter on common analyses performed in highway safety studies. To accomplish the objectives of the study, a series of Poisson-gamma distributions were simulated using different values describing the mean, the dispersion parameter, and the sample size. Three estimators commonly used by transportation safety modelers for estimating the dispersion parameter of Poisson-gamma models were evaluated: the method of moments, the weighted regression, and the maximum likelihood method. In an attempt to complement the outcome of the simulation study, Poisson-gamma models were fitted to crash data collected in Toronto, Ont., characterized by a low sample mean and small sample size. The study shows that a low sample mean combined with a small sample size can seriously affect the estimation of the dispersion parameter, no matter which estimator is used within the estimation process. The probability that the dispersion parameter is unreliably estimated increases significantly as the sample mean and sample size decrease. Consequently, the results show that an unreliably estimated dispersion parameter can significantly undermine empirical Bayes (EB) estimates as well as the estimation of confidence intervals for the gamma mean and predicted response. The paper ends with recommendations about minimizing the likelihood of producing Poisson-gamma models with an unreliable dispersion parameter for modeling motor vehicle crashes.
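
    A minimal sketch of the kind of simulation described above, assuming the NB2 parameterization (variance = mu + alpha*mu^2) and, for simplicity, a mean that is treated as known; the study's actual estimators (method of moments, weighted regression, maximum likelihood on fitted models) are more involved. The function names and parameter values here are illustrative only.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(42)

def simulate_poisson_gamma(mu, alpha, n_sites, rng):
    """Crash-like counts: lambda_i ~ Gamma(1/alpha, scale=mu*alpha), y_i ~ Poisson(lambda_i)."""
    lam = rng.gamma(shape=1.0 / alpha, scale=mu * alpha, size=n_sites)
    return rng.poisson(lam)

def mle_dispersion(y, mu):
    """Maximum-likelihood estimate of the NB2 dispersion alpha, with the mean treated as known."""
    def neg_loglik(alpha):
        r = 1.0 / alpha              # NB 'size' parameter
        p = r / (r + mu)             # NB success probability
        return -np.sum(stats.nbinom.logpmf(y, r, p))
    return minimize_scalar(neg_loglik, bounds=(1e-4, 50.0), method="bounded").x

true_alpha = 0.5
for mu, n_sites in [(0.5, 50), (5.0, 50), (5.0, 1000)]:   # low mean / small sample vs. larger settings
    estimates = [mle_dispersion(simulate_poisson_gamma(mu, true_alpha, n_sites, rng), mu)
                 for _ in range(200)]
    print(f"mu={mu:4.1f}, n={n_sites:4d}: mean alpha_hat={np.mean(estimates):.2f}, sd={np.std(estimates):.2f}")
```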

  17. The Power of Low Back Pain Trials: A Systematic Review of Power, Sample Size, and Reporting of Sample Size Calculations Over Time, in Trials Published Between 1980 and 2012.

    PubMed

    Froud, Robert; Rajendran, Dévan; Patel, Shilpa; Bright, Philip; Bjørkli, Tom; Eldridge, Sandra; Buchbinder, Rachelle; Underwood, Martin

    2017-06-01

    A systematic review of nonspecific low back pain trials published between 1980 and 2012. To explore what proportion of trials have been powered to detect different bands of effect size; whether there is evidence that sample size in low back pain trials has been increasing; what proportion of trial reports include a sample size calculation; and whether the likelihood of reporting sample size calculations has increased. Clinical trials should have a sample size sufficient to detect a minimally important difference for a given power and type I error rate. An underpowered trial is one in which the probability of a type II error is too high. Meta-analyses do not mitigate underpowered trials. Reviewers independently abstracted data on sample size at point of analysis, whether a sample size calculation was reported, and year of publication. Descriptive analyses were used to explore ability to detect effect sizes, and regression analyses to explore the relationship between sample size, or reporting sample size calculations, and time. We included 383 trials. One-third were powered to detect a standardized mean difference of less than 0.5, and 5% were powered to detect less than 0.3. The average sample size was 153 people, which increased only slightly (∼4 people/yr) from 1980 to 2000, and declined slightly (∼4.5 people/yr) from 2005 to 2011 (P < 0.00005). Sample size calculations were reported in 41% of trials. The odds of reporting a sample size calculation (compared to not reporting one) increased until 2005 and then declined (the equation is included in the full-text article). Sample sizes in back pain trials and the reporting of sample size calculations may need to be increased. It may be justifiable to power a trial to detect only large effects in the case of novel interventions. Level of evidence: 3.
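
    For orientation, a normal-approximation calculation (not taken from the review itself) shows why an average sample size of 153 limits detectable effects: roughly 63 participants per arm are needed for 80% power at a standardized mean difference of 0.5, and roughly 175 per arm at 0.3.

```python
import numpy as np
from scipy.stats import norm

def n_per_group(d, alpha=0.05, power=0.80):
    """Normal-approximation sample size per arm for a two-sample comparison of means
    at standardized mean difference d, two-sided significance level alpha."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return int(np.ceil(2 * (z / d) ** 2))

for d in (0.5, 0.3):
    n = n_per_group(d)
    print(f"SMD = {d}: about {n} per arm ({2 * n} total) for 80% power")
```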

  18. Preparing the Production of a New Product in Small and Medium-Sized Enterprises by Using the Method of Projects Management

    NASA Astrophysics Data System (ADS)

    Bijańska, Jolanta; Wodarski, Krzysztof; Wójcik, Janusz

    2016-06-01

    Efficient and effective preparation of the production of new products is an important requirement for the functioning and development of small and medium-sized enterprises. One of the methods that supports the fulfilment of this condition is project management. This publication presents the results of considerations aimed at developing a project management model for preparing the production of a new product, adapted to the specificity of small and medium-sized enterprises.

  19. Under the Scope: Bringing Zooplankton Research into the K-12 Classroom

    NASA Astrophysics Data System (ADS)

    Cohen, J.; Petrone, C.; Wickline, A.

    2016-02-01

    Despite their small size, zooplankton are dynamic and engaging animals when viewed by researchers, teachers, and students alike. Recognizing this, we are working with K-12 teachers to develop web-based resources for using zooplankton in the classroom. This outreach effort is part of a Delaware Sea Grant-funded research project studying seasonal dynamics of zooplankton in Delaware Bay. The research team, in collaboration with a marine education specialist, initially created a website (www.underthescope.udel.edu) containing: background information on zooplankton and the research project, a magnification tool, an identification tool, and education modules that facilitate directed use of the website content and tools. Local teachers (elementary through high school) were then hosted for a workshop to engage in zooplankton sampling using methods employed in the research project, including zooplankton tows and semi-autonomous identification using a ZooScan imaging system. Teachers then explored the website, evaluating its design, content, and usability for their particular grade level. Specific suggestions from the evaluation were incorporated into the website, with additional implementation planned over the next year. This teacher-researcher partnership was successful in developing the digital resource itself, in building excitement and capacity among a cohort of teachers, and in establishing relationships among teachers and researchers to facilitate adding new dimensions to the collaboration. The latter will include zooplankton sampling by school groups, researcher optical scanning of samples with ZooScan, and subsequent student analysis and reporting on their data.

  20. A dataset describing brooding in three species of South African brittle stars, comprising seven high-resolution, micro X-ray computed tomography scans.

    PubMed

    Landschoff, Jannes; Du Plessis, Anton; Griffiths, Charles L

    2015-01-01

    Brooding brittle stars have a special mode of reproduction whereby they retain their eggs and juveniles inside respiratory body sacs called bursae. In the past, studying this phenomenon required disturbance of the sample by dissecting the adult. This caused irreversible damage and made the sample unsuitable for future studies. Micro X-ray computed tomography (μCT) is a promising technique, not only to visualise juveniles inside the bursae, but also to keep the sample intact and make the dataset of the scan available for future reference. Seven μCT scans of five freshly fixed (70 % ethanol) individuals, representing three differently sized brittle star species, provided adequate image quality to determine the numbers, sizes and postures of internally brooded young, as well as anatomy and morphology of adults. No staining agents were necessary to achieve high-resolution, high-contrast images, which permitted visualisations of both calcified and soft tissue. The raw data (projection and reconstruction images) are publicly available for download from GigaDB. Brittle stars of all sizes are suitable candidates for μCT imaging. This explicitly adds a new technique to the suite of tools available for studying the development of internally brooded young. The purpose of applying the technique was to visualise juveniles inside the adult, but because of the universally good quality of the dataset, the images can also be used for anatomical or comparative morphology-related studies of adult structures.

  1. Using pilot data to size a two-arm randomized trial to find a nearly optimal personalized treatment strategy.

    PubMed

    Laber, Eric B; Zhao, Ying-Qi; Regh, Todd; Davidian, Marie; Tsiatis, Anastasios; Stanford, Joseph B; Zeng, Donglin; Song, Rui; Kosorok, Michael R

    2016-04-15

    A personalized treatment strategy formalizes evidence-based treatment selection by mapping patient information to a recommended treatment. Personalized treatment strategies can produce better patient outcomes while reducing cost and treatment burden. Thus, among clinical and intervention scientists, there is a growing interest in conducting randomized clinical trials when one of the primary aims is estimation of a personalized treatment strategy. However, at present, there are no appropriate sample size formulae to assist in the design of such a trial. Furthermore, because the sampling distribution of the estimated outcome under an estimated optimal treatment strategy can be highly sensitive to small perturbations in the underlying generative model, sample size calculations based on standard (uncorrected) asymptotic approximations or computer simulations may not be reliable. We offer a simple and robust method for powering a single stage, two-armed randomized clinical trial when the primary aim is estimating the optimal single stage personalized treatment strategy. The proposed method is based on inverting a plugin projection confidence interval and is thereby regular and robust to small perturbations of the underlying generative model. The proposed method requires elicitation of two clinically meaningful parameters from clinical scientists and uses data from a small pilot study to estimate nuisance parameters, which are not easily elicited. The method performs well in simulated experiments and is illustrated using data from a pilot study of time to conception and fertility awareness. Copyright © 2015 John Wiley & Sons, Ltd.

  2. Offshore Wind Plant Balance-of-Station Cost Drivers and Sensitivities (Poster)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saur, G.; Maples, B.; Meadows, B.

    2012-09-01

    With Balance of System (BOS) costs contributing up to 70% of the installed capital cost, it is fundamental to understand the BOS costs for offshore wind projects as well as potential cost trends for larger offshore turbines. NREL developed a BOS model using project cost estimates developed by GL Garrad Hassan. Aspects of BOS covered include engineering and permitting, ports and staging, transportation and installation, vessels, foundations, and electrical. The data introduce new scaling relationships for each BOS component to estimate cost as a function of turbine parameters and size, project parameters and size, and soil type. Based on the new BOS model, an analysis to understand the non-turbine costs associated with offshore turbine sizes ranging from 3 MW to 6 MW and offshore wind plant sizes ranging from 100 MW to 1000 MW has been conducted. This analysis establishes a more robust baseline cost estimate, identifies the largest cost components of offshore wind project BOS, and explores the sensitivity of the levelized cost of energy to permutations in each BOS cost element. This presentation shows results from the model that illustrate the potential impact of turbine size and project size on the cost of energy from US offshore wind plants.

  3. Bon-EV: an improved multiple testing procedure for controlling false discovery rates.

    PubMed

    Li, Dongmei; Xie, Zidian; Zand, Martin; Fogg, Thomas; Dye, Timothy

    2017-01-03

    Stability of multiple testing procedures, defined as the standard deviation of the total number of discoveries, can be used as an indicator of variability of multiple testing procedures. Improving stability of multiple testing procedures can help to increase the consistency of findings from replicated experiments. Benjamini-Hochberg's and Storey's q-value procedures are two commonly used multiple testing procedures for controlling false discoveries in genomic studies. Storey's q-value procedure has higher power and lower stability than Benjamini-Hochberg's procedure. To improve upon the stability of Storey's q-value procedure and maintain its high power in genomic data analysis, we propose a new multiple testing procedure, named Bon-EV, to control the false discovery rate (FDR) based on Bonferroni's approach. Simulation studies show that our proposed Bon-EV procedure can maintain the high power of Storey's q-value procedure and also result in better FDR control and higher stability than Storey's q-value procedure for samples of large size (30 in each group) and medium size (15 in each group) for either independent, somewhat correlated, or highly correlated test statistics. When sample size is small (5 in each group), our proposed Bon-EV procedure has performance between the Benjamini-Hochberg procedure and Storey's q-value procedure. Examples using RNA-Seq data show that the Bon-EV procedure has higher stability than Storey's q-value procedure while maintaining equivalent power, and higher power than the Benjamini-Hochberg procedure. For medium or large sample sizes, the Bon-EV procedure has improved FDR control and stability compared with Storey's q-value procedure and improved power compared with the Benjamini-Hochberg procedure. The Bon-EV multiple testing procedure is available as the BonEV package in R for download at https://CRAN.R-project.org/package=BonEV.
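
    The Bon-EV procedure itself is distributed as the BonEV R package; the sketch below only illustrates the baseline Benjamini-Hochberg step-up procedure and the stability metric used above (standard deviation of the number of discoveries) on simulated two-group data. All simulation settings are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def benjamini_hochberg(pvals, q=0.05):
    """Boolean rejection mask from the Benjamini-Hochberg step-up procedure at FDR level q."""
    p = np.asarray(pvals)
    m = p.size
    order = np.argsort(p)
    thresh = q * np.arange(1, m + 1) / m
    below = p[order] <= thresh
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])      # largest rank meeting its threshold
        reject[order[: k + 1]] = True
    return reject

def simulate_discoveries(n_per_group, n_tests=2000, frac_true=0.1, shift=1.0):
    """Two-group t-tests on mostly-null features; returns the number of BH discoveries."""
    n_true = int(frac_true * n_tests)
    means = np.concatenate([np.full(n_true, shift), np.zeros(n_tests - n_true)])
    x = rng.normal(0.0, 1.0, size=(n_tests, n_per_group))
    y = rng.normal(means[:, None], 1.0, size=(n_tests, n_per_group))
    _, p = stats.ttest_ind(x, y, axis=1)
    return benjamini_hochberg(p).sum()

for n in (5, 15, 30):                          # small, medium, large group sizes as in the abstract
    counts = [simulate_discoveries(n) for _ in range(100)]
    print(f"n per group = {n:2d}: mean discoveries = {np.mean(counts):6.1f}, stability (SD) = {np.std(counts):5.1f}")
```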

  4. Particulate matter emission by a vehicle running on unpaved road

    NASA Astrophysics Data System (ADS)

    Williams, David Scott; Shukla, Manoj K.; Ross, Jim

    2008-05-01

    The particulate matter (PM) emission from unpaved roads starts with the pulverization of surface material by the force of the vehicle, uplifting and subsequent exposure of the road to strong air currents behind the wheels. The objectives of the project were to: demonstrate the utility of a simple technique for collecting suspended airborne PM emitted by a vehicle running on an unpaved road, determine the mass balance of airborne PM at different heights, and determine the particle size and elemental composition of PM. We collected dust samples on sticky tapes using a rotorod sampler mounted on a tower across an unpaved road located at the Leyendecker Plant Sciences Research Center, Las Cruces, NM, USA. Dust samples were collected at 1.5, 4.5 and 6 m height above the ground surface on the east and west side of the road. One rotorod sampler was also installed at the centre of the road at 6 m height. Dust samples from the unpaved road were mostly (70%) silt and clay-sized particles and were collected at all heights. The height and width of the PM plume and the amount of clay-sized particles captured on both sides of the road increased with speed, and particles captured ranged from 0.05 to 159 μm. Dust particles between PM10 and PM2.5 did not correlate with vehicle speed but particles ⩽PM2.5 did. Emission factors estimated for the total suspended PM were 10,147 g km⁻¹ at 48 km h⁻¹ and 11,062 g km⁻¹ at 64 km h⁻¹, respectively. The predominant elements detected in PM were carbon, aluminum and silica at all heights. Overall, the sticky tape method coupled with electron microscopy was a useful technique for rapid particle size and elemental characterization of airborne PM.

  5. Publication Bias in Psychology: A Diagnosis Based on the Correlation between Effect Size and Sample Size

    PubMed Central

    Kühberger, Anton; Fritz, Astrid; Scherndl, Thomas

    2014-01-01

    Background: The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent from sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. Methods: We investigate whether effect size is independent from sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, and calculated the correlation between effect size and sample size, and investigated the distribution of p values. Results: We found a negative correlation of r = −.45 [95% CI: −.53; −.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. Conclusion: The negative correlation between effect size and sample size, and the biased distribution of p values indicate pervasive publication bias in the entire field of psychology. PMID:25192357

  6. Publication bias in psychology: a diagnosis based on the correlation between effect size and sample size.

    PubMed

    Kühberger, Anton; Fritz, Astrid; Scherndl, Thomas

    2014-01-01

    The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent from sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. We investigate whether effect size is independent from sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, and calculated the correlation between effect size and sample size, and investigated the distribution of p values. We found a negative correlation of r = -.45 [95% CI: -.53; -.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. The negative correlation between effect size and sample size, and the biased distribution of p values indicate pervasive publication bias in the entire field of psychology.
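
    A toy simulation (not the authors' data) of the mechanism they diagnose: if only significant results reach the literature, small studies are published only when their effects are overestimated, which by itself induces a negative correlation between published effect size and sample size. All settings below are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def simulate_literature(n_studies=1000, true_d=0.2, alpha=0.05):
    """Simulate two-arm studies and keep only those with p < alpha (publication filter)."""
    ns, ds = [], []
    while len(ns) < n_studies:
        n = rng.integers(10, 200)                       # per-group sample size
        x = rng.normal(0.0, 1.0, n)
        y = rng.normal(true_d, 1.0, n)
        _, p = stats.ttest_ind(x, y)
        if p < alpha:                                   # only significant studies get "published"
            pooled_sd = np.sqrt((x.var(ddof=1) + y.var(ddof=1)) / 2)
            ns.append(2 * n)
            ds.append(abs(y.mean() - x.mean()) / pooled_sd)
    return np.array(ns), np.array(ds)

n_total, d_obs = simulate_literature()
r, _ = stats.pearsonr(d_obs, n_total)
print(f"Correlation between published effect size and sample size: r = {r:.2f}")
```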

  7. SU-F-J-23: Field-Of-View Expansion in Cone-Beam CT Reconstruction by Use of Prior Information

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haga, A; Magome, T; Nakano, M

    Purpose: Cone-beam CT (CBCT) has become an integral part of online patient setup in image-guided radiation therapy (IGRT). In addition, the utility of CBCT for dose calculation has actively been investigated. However, the limited field-of-view (FOV) size and the resulting CBCT image, which lacks the peripheral area of the patient body, reduce the reliability of dose calculation. In this study, we aim to develop an FOV-expanded CBCT in an IGRT system to allow dose calculation. Methods: Three lung cancer patients were selected in this study. We collected the cone-beam projection images in the CBCT-based IGRT system (X-ray volume imaging unit, ELEKTA), where the FOV size of the CBCT provided with these projections was 410 × 410 mm² (normal FOV). Using these projections, CBCT with a size of 728 × 728 mm² was reconstructed by an a posteriori estimation algorithm including prior image constrained compressed sensing (PICCS). The treatment planning CT was used as a prior image. To assess the effectiveness of FOV expansion, a dose calculation was performed on the expanded CBCT image with a region-of-interest (ROI) density mapping method, and it was compared with that of the treatment planning CT as well as that of CBCT reconstructed by a filtered back projection (FBP) algorithm. Results: The a posteriori estimation algorithm with PICCS clearly visualized the area outside the normal FOV, whereas the FBP algorithm yielded severe streak artifacts outside the normal FOV due to under-sampling. The dose calculation result using the expanded CBCT agreed with that using the treatment planning CT very well; the maximum dose difference was 1.3% for gross tumor volumes. Conclusion: With an a posteriori estimation algorithm, the FOV in CBCT can be expanded. Dose comparison results suggested that the use of expanded CBCTs is acceptable for dose calculation in adaptive radiation therapy. This study has been supported by KAKENHI (15K08691).

  8. Recent Developments and Adaptations in Diamond Wireline Core Drilling Technology

    NASA Astrophysics Data System (ADS)

    Thomas, D. M.; Nielson, D. L.; Howell, B. B.; Pardey, M.

    2001-05-01

    Scientific drilling using diamond wireline technology is presently undergoing a significant expansion and extension of activities that has allowed us to recover geologic samples that have heretofore been technically or financially unattainable. Under the direction and management of DOSECC, a high-capacity hybrid core drilling system was designed and fabricated for the Hawaii Scientific Drilling Project (HSDP) in 1998. This system, the DOSECC Hybrid Coring System (DHCS), has the capacity to recover H-sized core from depths of more than 6 km. In 1999, the DHCS completed the first phase of the HSDP to a depth of 3100 m at a substantially lower cost per foot than any previous scientific borehole to comparable depths and, in the process, established a new depth record for recovery of H-sized wireline core. This system has been offered for use in the Unzen Scientific Drilling Project, the Chicxulub (impact crater) Scientific Drilling Project, and the Geysers Deep Geothermal Reservoir Project. More recently, DOSECC has developed a smaller barge-mounted wireline core drilling system, the GLAD800, that is capable of recovering P-sized sediment core to depths of up to 800 m. The GLAD800 has been successfully deployed on Great Salt Lake and Bear Lake in Utah and is presently being mobilized to Lake Titicaca in South America for an extensive core recovery effort there. The coring capabilities of the GLAD800 system will be available to the global lakes drilling community for acquisition of sediment cores from many of the world's deep lakes for use in calibrating and refining global climate models. Presently under development by DOSECC is a heave-compensation system that will allow us to expand the capabilities of the moderate depth coring system to allow us to collect sediment and bottom core from the shallow marine environment. The design and capabilities of these coring systems will be presented along with a discussion of their potential applications for addressing a range of earth sciences questions.

  9. Anthrax Sampling and Decontamination: Technology Trade-Offs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Price, Phillip N.; Hamachi, Kristina; McWilliams, Jennifer

    2008-09-12

    The goal of this project was to answer the following questions concerning response to a future anthrax release (or suspected release) in a building: 1. Based on past experience, what rules of thumb can be determined concerning: (a) the amount of sampling that may be needed to determine the extent of contamination within a given building; (b) what portions of a building should be sampled; (c) the cost per square foot to decontaminate a given type of building using a given method; (d) the time required to prepare for, and perform, decontamination; (e) the effectiveness of a given decontamination method in a given type of building? 2. Based on past experience, what resources will be spent on evaluating the extent of contamination, performing decontamination, and assessing the effectiveness of the decontamination in a building of a given type and size? 3. What are the trade-offs between cost, time, and effectiveness for the various sampling plans, sampling methods, and decontamination methods that have been used in the past?

  10. Concept Study For A Near-term Mars Surface Sample Return Mission

    NASA Astrophysics Data System (ADS)

    Smith, M. F.; Thatcher, J.; Sallaberger, C.; Reedman, T.; Pillinger, C. T.; Sims, M. R.

    The return of samples from the surface of Mars is a challenging problem. Present mission planning is for complex missions to return large, focused samples sometime in the next decade. There is, however, much scientific merit in returning a small sample of Martian regolith before the end of this decade at a fraction of the cost of the more ambitious missions. This paper sets out the key elements of this concept that builds on the work of the Beagle 2 project and space robotics work in Canada. The paper will expand the science case for returning a regolith sample that is only in the range of 50-250g but would nevertheless include plenty of interesting material as the regolith comprises soil grains from a wide variety of locations i.e. nearby rocks, sedimentary formations and materials moved by fluids, winds and impacts. It is possible that a fine core sample could also be extracted and returned. The mission concept is to send a lander sized at around 130kg on the 2007 or 2009 opportunity, immediately collect the sample from the surface, launch it to Mars orbit, collect it by the lander parent craft and make an immediate Earth return. Return to Earth orbit is envisaged rather than direct Earth re-entry. The lander concept is essentially a twice-size Beagle 2 carrying the sample collection and return capsule loading equipment plus the ascent vehicle. The return capsule is envisaged as no more than 1kg. An overall description of the mission along with methods for sample acquisition, orbital rendezvous and capsule return will be outlined and the overall systems budgets presented. To demonstrate the near term feasibility of the mission, the use of existing Canadian and European technologies will be highlighted.

  11. Optimum sample size allocation to minimize cost or maximize power for the two-sample trimmed mean test.

    PubMed

    Guo, Jiin-Huarng; Luh, Wei-Ming

    2009-05-01

    When planning a study, sample size determination is one of the most important tasks facing the researcher. The size will depend on the purpose of the study, the cost limitations, and the nature of the data. By specifying the standard deviation ratio and/or the sample size ratio, the present study considers the problem of heterogeneous variances and non-normality for Yuen's two-group test and develops sample size formulas to minimize the total cost or maximize the power of the test. For a given power, the sample size allocation ratio can be manipulated so that the proposed formulas can minimize the total cost, the total sample size, or the sum of total sample size and total cost. On the other hand, for a given total cost, the optimum sample size allocation ratio can maximize the statistical power of the test. After the sample size is determined, the present simulation applies Yuen's test to the sample generated, and then the procedure is validated in terms of Type I errors and power. Simulation results show that the proposed formulas can control Type I errors and achieve the desired power under the various conditions specified. Finally, the implications for determining sample sizes in experimental studies and future research are discussed.
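
    A simplified illustration, assuming ordinary (untrimmed) normal-theory means rather than Yuen's trimmed-mean statistic: with per-subject costs c1 and c2 and standard deviations sigma1 and sigma2, total cost for a fixed power is minimized at the allocation ratio n2/n1 = (sigma2/sigma1)·sqrt(c1/c2). All numbers below are hypothetical and not taken from the study.

```python
import numpy as np
from scipy.stats import norm

def optimal_allocation(sigma1, sigma2, cost1, cost2, delta, alpha=0.05, power=0.80):
    """Cost-optimal allocation for a two-group comparison of means (normal-theory analogue):
    choose the ratio r = n2/n1 that minimizes total cost, then size both groups to detect
    a difference delta with the requested power."""
    r = (sigma2 / sigma1) * np.sqrt(cost1 / cost2)      # cost-optimal allocation ratio
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    n1 = int(np.ceil(z ** 2 * (sigma1 ** 2 + sigma2 ** 2 / r) / delta ** 2))
    n2 = int(np.ceil(r * n1))
    return n1, n2, cost1 * n1 + cost2 * n2

# Group 2 has twice the spread but costs half as much per subject (hypothetical numbers)
n1, n2, total_cost = optimal_allocation(sigma1=1.0, sigma2=2.0, cost1=20.0, cost2=10.0, delta=0.5)
print(f"n1 = {n1}, n2 = {n2}, total cost = {total_cost:.0f}")
```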

  12. Laboratory Class Project: Using a Cichlid Fish Display Tank to Teach Students about Complex Behavioral Systems.

    PubMed

    Nolan, Brian C

    2010-01-01

    Laboratory activities serve several important functions in undergraduate science education. For neuroscience majors, an important and sometimes underemphasized tool is the use of behavioral observations to help inform us about the consequences of changes that are occurring on a neuronal level. To help address this concern, the following laboratory exercise is presented. The current project tested the prediction that the most dominant fish in a tank of cichlids will have gained the most benefits of its position resulting in the greatest growth and hence, become the largest fish. More specifically: (1) is there evidence that a social hierarchy exists among the fish in our tank based on the number of aggressive acts among the four largest fish; (2) if so, does the apparent rank correspond to the size of the fish as predicted by previous studies? Focal sampling and behavior sampling of aggressive acts between fish were utilized in the data collection. Collectively, the data suggest a social dominance hierarchy may be in place with the following rank order from highest to lowest: Fish A > Fish B > Fish D > Fish C. While the largest (Fish A) seems to be at the top, Fish C ended up being ranked lower than Fish D despite the fact that Fish C is larger. Overall, the project was considered a success by the instructor and students. The students offered several suggestions that could improve future versions of this type of project, in particular concerning the process of constructing a poster about the project. The implications of the data and student learning outcomes are discussed.

  13. Acceptability of the Urban Family Medicine Project among Physicians: A Cross-Sectional Study of Medical Offices, Iran.

    PubMed

    Kor, Elham Movahed; Rashidian, Arash; Hosseini, Mostafa; Azar, Farbod Ebadi Fard; Arab, Mohammad

    2016-10-01

    It is essential to organize private physicians in urban areas by developing urban family medicine in Iran. Acceptance of this project is currently low among physicians. The present research determined the factors affecting acceptability of the Urban Family Medicine Project among physicians working in the private sector of Mazandaran and Fars provinces in Iran. This descriptive-analytical and cross-sectional study was conducted in Mazandaran and Fars provinces. The target population was all physicians working in private offices in these regions. The sample size was calculated to be 860. The instrument contained 70 items that were modified in accordance with feedback from eight healthcare managers and a pilot sample of 50 physicians. Data was analyzed using the LISREL 8.80. The response rate was 82.21% and acceptability was almost 50% for all domains. The fit indices of the structural model were the chi-square to degree-of-freedom (2.79), normalized fit index (0.98), non-normalized fit index (0.99), comparative fit index (0.99), and root mean square error of approximation (0.05). Training facilities had no significant direct effect on acceptability; however, workload had a direct negative effect on acceptability. Other factors had direct positive effects on acceptability. Specification of the factors relating to acceptance of the project among private physicians is required to develop the project in urban areas. It is essential to upgrade the payment system, remedy cultural barriers, decrease the workload, improve the scope of practice and working conditions, and improve collaboration between healthcare professionals.

  14. Influence of volunteer and project characteristics on data quality of biological surveys.

    PubMed

    Lewandowski, Eva; Specht, Hannah

    2015-06-01

    Volunteer involvement in biological surveys is becoming common in conservation and ecology, prompting questions on the quality of data collected in such surveys. In a systematic review of the peer-reviewed literature on the quality of data collected by volunteers, we examined the characteristics of volunteers (e.g., age, prior knowledge) and projects (e.g., systematic vs. opportunistic monitoring schemes) that affect data quality with regards to standardization of sampling, accuracy and precision of data collection, spatial and temporal representation of data, and sample size. Most studies (70%, n = 71) focused on the act of data collection. The majority of assessments of volunteer characteristics (58%, n = 93) examined the effect of prior knowledge and experience on quality of the data collected, often by comparing volunteers with experts or professionals, who were usually assumed to collect higher quality data. However, when both groups' data were compared with the same accuracy standard, professional data were more accurate in only 4 of 7 cases. The few studies that measured precision of volunteer and professional data did not conclusively show that professional data were less variable than volunteer data. To improve data quality, studies recommended changes to survey protocols, volunteer training, statistical analyses, and project structure (e.g., volunteer recruitment and retention). © 2015, Society for Conservation Biology.

  15. Ecosystem size structure response to 21st century climate projection: large fish abundance decreases in the central North Pacific and increases in the California Current.

    PubMed

    Woodworth-Jefcoats, Phoebe A; Polovina, Jeffrey J; Dunne, John P; Blanchard, Julia L

    2013-03-01

    Output from an earth system model is paired with a size-based food web model to investigate the effects of climate change on the abundance of large fish over the 21st century. The earth system model, forced by the Intergovernmental Panel on Climate Change (IPCC) Special report on emission scenario A2, combines a coupled climate model with a biogeochemical model including major nutrients, three phytoplankton functional groups, and zooplankton grazing. The size-based food web model includes linkages between two size-structured pelagic communities: primary producers and consumers. Our investigation focuses on seven sites in the North Pacific, each highlighting a specific aspect of projected climate change, and includes top-down ecosystem depletion through fishing. We project declines in large fish abundance ranging from 0 to 75.8% in the central North Pacific and increases of up to 43.0% in the California Current (CC) region over the 21st century in response to change in phytoplankton size structure and direct physiological effects. We find that fish abundance is especially sensitive to projected changes in large phytoplankton density and our model projects changes in the abundance of large fish being of the same order of magnitude as changes in the abundance of large phytoplankton. Thus, studies that address only climate-induced impacts to primary production without including changes to phytoplankton size structure may not adequately project ecosystem responses. © 2012 Blackwell Publishing Ltd.

  16. Neandertal talus bones from El Sidrón site (Asturias, Spain): A 3D geometric morphometrics analysis.

    PubMed

    Rosas, Antonio; Ferrando, Anabel; Bastir, Markus; García-Tabernero, Antonio; Estalrrich, Almudena; Huguet, Rosa; García-Martínez, Daniel; Pastor, Juan Francisco; de la Rasilla, Marco

    2017-10-01

    The El Sidrón tali sample is assessed in an evolutionary framework. We aim to explore the relationship between Neandertal talus morphology and body size/shape. We test the hypothesis 1: talar Neandertal traits are influenced by body size, and the hypothesis 2: shape variables independent of body size correspond to inherited primitive features. We quantify 35 landmarks through 3D geometric morphometrics techniques to describe H. neanderthalensis-H. sapiens shape variation, by Mean Shape Comparisons, Principal Component, Phenetic Clusters, Minimum spanning tree analyses and partial least square and regression of talus shape on body variables. Shape variation correlated to body size is compared to Neandertals-Modern Humans (MH) evolutionary shape variation. The Neandertal sample is compared to early hominins. Neandertal talus presents trochlear hypertrophy, a larger equality of trochlear rims, a shorter neck, a more expanded head, curvature and an anterior location of the medial malleolar facet, an expanded and projected lateral malleolar facet and laterally expanded posterior calcaneal facet compared to MH. The Neandertal talocrural joint morphology is influenced by body size. The other Neandertal talus traits do not co-vary with it or not follow the same co-variation pattern as MH. Besides, the trochlear hypertrophy, the trochlear rims equality and the short neck could be inherited primitive features; the medial malleolar facet morphology could be an inherited primitive feature or a secondarily primitive trait; and the calcaneal posterior facet would be an autapomorphic feature of the Neandertal lineage. © 2017 Wiley Periodicals, Inc.

  17. Textural analysis of marine sediments at the USGS Woods Hole Science Center; methodology and data on DVD

    USGS Publications Warehouse

    Poppe, Lawrence J.; Williams, S. Jeffress; Paskevich, Valerie F.

    2006-01-01

    Marine sediments off the eastern United States vary markedly in texture (i.e., the size, shape, composition, and arrangement of their grains) due to a complex geologic history. For descriptive purposes, however, it is typically most useful to classify these sediments according to their grain-size distributions. In 1962, the U.S. Geological Survey began a program to study the marine geology of the continental margin off the Atlantic coast of the United States. As part of this program and numerous subsequent projects, thousands of sediment grab samples and cores were collected and analyzed for grain size at the Woods Hole Science Center. USGS Open-File Report 2005-1001 (Poppe et al., 2005), available on DVD and online, describes the field methods used to collect marine sediment samples as well as the laboratory methods used to determine and characterize grain-size distributions, and presents these data in several formats that can be readily employed by interested parties. The report is divided into three sections. The first section discusses procedures and contains pictures of the equipment, analytical flow diagrams, video clips with voice commentary, classification schemes, useful forms and compiled and uncompiled versions of the data-acquisition and data-processing software with documentation. The second section contains the grain-size data for more than 23,000 analyses in two “flat-file” formats, a data dictionary, and color-coded browse maps. The third section provides a GIS data catalog of the available point, interpretive, and baseline data layers, with FGDC-compliant metadata to help users visualize the textural information in a geographic context.

  18. Use of aerial photograph, channel-type interpretations to predict habitat availability in small streams. Restoration project 95505b. Exxon Valdez oil spill restoration project final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olson, R.A.

    1995-05-01

    In-stream habitats were quantified and qualified for nine stream channel-types. The channel types were identified using interpretations from stereo pairs of color and infrared aerial photographs. A total of 70 sites were sampled for streams located on the northwest portion of the Kenai Peninsula, in south-central Alaska. Channel-types were a significant predictor (P < 0.05) of the area (sq m) for 9 of 13 habitat types. Channel-types that had similar habitat composition differed in size and depth of those habitats. Spawning habitat also appeared to be correlated to channel-type; however, the within channel-type variability caused the differences to test non-significant at the P < 0.05 level.

  19. Accurate characterisation of hole size and location by projected fringe profilometry

    NASA Astrophysics Data System (ADS)

    Wu, Yuxiang; Dantanarayana, Harshana G.; Yue, Huimin; Huntley, Jonathan M.

    2018-06-01

    The ability to accurately estimate the location and geometry of holes is often required in the field of quality control and automated assembly. Projected fringe profilometry is a potentially attractive technique on account of being non-contacting, of lower cost, and orders of magnitude faster than the traditional coordinate measuring machine. However, we demonstrate in this paper that fringe projection is susceptible to significant (hundreds of µm) measurement artefacts in the neighbourhood of hole edges, which give rise to errors of a similar magnitude in the estimated hole geometry. A mechanism for the phenomenon is identified based on the finite size of the imaging system’s point spread function and the resulting bias produced near to sample discontinuities in geometry and reflectivity. A mathematical model is proposed, from which a post-processing compensation algorithm is developed to suppress such errors around the holes. The algorithm includes a robust and accurate sub-pixel edge detection method based on a Fourier descriptor of the hole contour. The proposed algorithm was found to reduce significantly the measurement artefacts near the hole edges. As a result, the errors in estimated hole radius were reduced by up to one order of magnitude, to a few tens of µm for hole radii in the range 2–15 mm, compared to those from the uncompensated measurements.

  20. An overview of Experimental Condensed Matter Physics in Argentina by 2014, and Oxides for Non Volatile Memory Devices: The MeMOSat Project

    NASA Astrophysics Data System (ADS)

    Levy, Pablo

    2015-03-01

    In the first part of my talk, I will describe the status of the experimental research in Condensed Matter Physics in Argentina, biased towards developments related to micro and nanotechnology. In the second part, I will describe the MeMOSat Project, a consortium aimed at producing non-volatile memory devices to work in aggressive environments, like those found in the aerospace and nuclear industries. Our devices rely on the Resistive Switching mechanism, which produces a permanent but reversible change in the electrical resistance across a metal-insulator-metal structure by means of a pulsed protocol of electrical stimuli. Our project is devoted to the study of Memory Mechanisms in Oxides (MeMO) in order to establish a technological platform that tests the Resistive RAM (ReRAM) technology for aerospace applications. A review of MeMOSat's activities is presented, covering the initial Proof of Concept in ceramic millimeter-sized samples; the study of different oxide-metal couples including (LaPr)2/3Ca1/3MnO, La2/3Ca1/3MnO3, YBa2Cu3O7, TiO2, HfO2, MgO and CuO; and recent miniaturized arrays of micrometer-sized devices controlled by in-house designed electronics, which were launched with the BugSat01 satellite in June 2014 by the Argentinian company Satellogic.

  1. A fast least-squares algorithm for population inference

    PubMed Central

    2013-01-01

    Background Population inference is an important problem in genetics used to remove population stratification in genome-wide association studies and to detect migration patterns or shared ancestry. An individual’s genotype can be modeled as a probabilistic function of ancestral population memberships, Q, and the allele frequencies in those populations, P. The parameters, P and Q, of this binomial likelihood model can be inferred using slow sampling methods such as Markov Chain Monte Carlo methods or faster gradient based approaches such as sequential quadratic programming. This paper proposes a least-squares simplification of the binomial likelihood model motivated by a Euclidean interpretation of the genotype feature space. This results in a faster algorithm that easily incorporates the degree of admixture within the sample of individuals and improves estimates without requiring trial-and-error tuning. Results We show that the expected value of the least-squares solution across all possible genotype datasets is equal to the true solution when part of the problem has been solved, and that the variance of the solution approaches zero as its size increases. The Least-squares algorithm performs nearly as well as Admixture for these theoretical scenarios. We compare least-squares, Admixture, and FRAPPE for a variety of problem sizes and difficulties. For particularly hard problems with a large number of populations, small number of samples, or greater degree of admixture, least-squares performs better than the other methods. On simulated mixtures of real population allele frequencies from the HapMap project, Admixture estimates sparsely mixed individuals better than Least-squares. The least-squares approach, however, performs within 1.5% of the Admixture error. On individual genotypes from the HapMap project, Admixture and least-squares perform qualitatively similarly and within 1.2% of each other. Significantly, the least-squares approach nearly always converges 1.5- to 6-times faster. Conclusions The computational advantage of the least-squares approach along with its good estimation performance warrants further research, especially for very large datasets. As problem sizes increase, the difference in estimation performance between all algorithms decreases. In addition, when prior information is known, the least-squares approach easily incorporates the expected degree of admixture to improve the estimate. PMID:23343408

  2. A fast least-squares algorithm for population inference.

    PubMed

    Parry, R Mitchell; Wang, May D

    2013-01-23

    Population inference is an important problem in genetics used to remove population stratification in genome-wide association studies and to detect migration patterns or shared ancestry. An individual's genotype can be modeled as a probabilistic function of ancestral population memberships, Q, and the allele frequencies in those populations, P. The parameters, P and Q, of this binomial likelihood model can be inferred using slow sampling methods such as Markov Chain Monte Carlo methods or faster gradient based approaches such as sequential quadratic programming. This paper proposes a least-squares simplification of the binomial likelihood model motivated by a Euclidean interpretation of the genotype feature space. This results in a faster algorithm that easily incorporates the degree of admixture within the sample of individuals and improves estimates without requiring trial-and-error tuning. We show that the expected value of the least-squares solution across all possible genotype datasets is equal to the true solution when part of the problem has been solved, and that the variance of the solution approaches zero as its size increases. The Least-squares algorithm performs nearly as well as Admixture for these theoretical scenarios. We compare least-squares, Admixture, and FRAPPE for a variety of problem sizes and difficulties. For particularly hard problems with a large number of populations, small number of samples, or greater degree of admixture, least-squares performs better than the other methods. On simulated mixtures of real population allele frequencies from the HapMap project, Admixture estimates sparsely mixed individuals better than Least-squares. The least-squares approach, however, performs within 1.5% of the Admixture error. On individual genotypes from the HapMap project, Admixture and least-squares perform qualitatively similarly and within 1.2% of each other. Significantly, the least-squares approach nearly always converges 1.5- to 6-times faster. The computational advantage of the least-squares approach along with its good estimation performance warrants further research, especially for very large datasets. As problem sizes increase, the difference in estimation performance between all algorithms decreases. In addition, when prior information is known, the least-squares approach easily incorporates the expected degree of admixture to improve the estimate.
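
    An illustrative reduction of the least-squares idea, assuming the allele-frequency matrix P is known and only one individual's ancestry vector Q is sought; the paper's algorithm estimates P and Q jointly and at scale, which is not shown here. All data and settings below are simulated for illustration.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)

def estimate_admixture(genotype, P):
    """Least-squares ancestry estimate for one individual:
    minimize || genotype/2 - q @ P ||^2  subject to  q >= 0 and sum(q) = 1.
    genotype: (M,) allele counts in {0,1,2};  P: (K, M) population allele frequencies."""
    K = P.shape[0]
    target = genotype / 2.0
    objective = lambda q: np.sum((target - q @ P) ** 2)
    constraints = [{"type": "eq", "fun": lambda q: q.sum() - 1.0}]
    res = minimize(objective, np.full(K, 1.0 / K), method="SLSQP",
                   bounds=[(0.0, 1.0)] * K, constraints=constraints)
    return res.x

# Two ancestral populations, 500 SNPs, one 70/30 admixed individual (all simulated)
K, M = 2, 500
P = rng.uniform(0.05, 0.95, size=(K, M))
q_true = np.array([0.7, 0.3])
genotype = rng.binomial(2, q_true @ P)        # expected allele frequency per SNP
q_hat = estimate_admixture(genotype, P)
print("true q:", q_true, " estimated q:", np.round(q_hat, 3))
```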

  3. VizieR Online Data Catalog: HI4PI spectra and column density maps (HI4PI team+, 2016)

    NASA Astrophysics Data System (ADS)

    Hi4PI Collaboration; Ben Bekhti, N.; Floeer, L.; Keller, R.; Kerp, J.; Lenz, D.; Winkel, B.; Bailin, J.; Calabretta, M. R.; Dedes, L.; Ford, H. A.; Gibson, B. K.; Haud, U.; Janowiecki, S.; Kalberla, P. M. W.; Lockman, F. J.; McClure-Griffiths, N. M.; Murphy, T.; Nakanishi, H.; Pisano, D. J.; Staveley-Smith, L.

    2016-09-01

    The HI4PI data release comprises 21-cm neutral atomic hydrogen data of the Milky Way (-600km/s0°; -470km/s

  4. A Fast Projection-Based Algorithm for Clustering Big Data.

    PubMed

    Wu, Yun; He, Zhiquan; Lin, Hao; Zheng, Yufei; Zhang, Jingfen; Xu, Dong

    2018-06-07

    With the fast development of various techniques, more and more data have been accumulated with the unique properties of large size (tall) and high dimension (wide). The era of big data is coming. How to understand and discover new knowledge from these data has attracted more and more scholars' attention and has become the most important task in data mining. As one of the most important techniques in data mining, clustering analysis, a kind of unsupervised learning, can group a set of data into objects (clusters) that are meaningful, useful, or both. Thus, the technique has played a very important role in knowledge discovery in big data. However, when facing large-sized and high-dimensional data, most current clustering methods exhibit poor computational efficiency and high demands on computational resources, which prevents us from clarifying the intrinsic properties and discovering the new knowledge behind the data. Based on this consideration, we developed a powerful clustering method, called MUFOLD-CL. The principle of the method is to project the data points to the centroid, and then to measure the similarity between any two points by calculating their projections on the centroid. The proposed method could achieve linear time complexity with respect to the sample size. Comparison with the K-Means method on very large data showed that our method could produce better accuracy and require less computational time, demonstrating that MUFOLD-CL can serve as a valuable tool, or at least play a complementary role to other existing methods, for big data clustering. Further comparisons with state-of-the-art clustering methods on smaller datasets showed that our method was fastest and achieved comparable accuracy. For the convenience of most scholars, a free software package was constructed.
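
    A sketch of the projection idea only (project each point onto the centroid direction and group the resulting one-dimensional values); the gap-based cutting rule and all settings are assumptions for illustration and are not the MUFOLD-CL implementation.

```python
import numpy as np

rng = np.random.default_rng(3)

def projection_cluster(X, k):
    """Illustrative projection-based clustering:
    1) project every point onto the direction of the global centroid,
    2) sort the 1-D projections and cut at the k-1 largest gaps."""
    centroid = X.mean(axis=0)
    direction = centroid / np.linalg.norm(centroid)
    proj = X @ direction                                   # scalar projection per point
    order = np.argsort(proj)
    gaps = np.diff(proj[order])
    cut_positions = np.sort(np.argsort(gaps)[-(k - 1):])   # positions of the largest gaps
    labels = np.zeros(len(X), dtype=int)
    for c, pos in enumerate(cut_positions, start=1):
        labels[order[pos + 1:]] = c                        # everything past the cut moves to cluster c
    return labels

# Three well-separated blobs shifted away from the origin so the centroid direction is informative
X = np.vstack([rng.normal(loc, 0.3, size=(200, 50)) for loc in (2.0, 5.0, 8.0)])
labels = projection_cluster(X, k=3)
print("cluster sizes:", np.bincount(labels))
```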

  5. FR-type radio sources in COSMOS: relation of radio structure to size, accretion modes and large-scale environment

    NASA Astrophysics Data System (ADS)

    Vardoulaki, Eleni; Faustino Jimenez Andrade, Eric; Delvecchio, Ivan; Karim, Alexander; Smolčić, Vernesa; Magnelli, Benjamin; Bertoldi, Frank; Schinnener, Eva; Sargent, Mark; Finoguenov, Alexis; VLA COSMOS Team

    2018-01-01

    The radio sources associated with active galactic nuclei (AGN) can exhibit a variety of radio structures, from simple to more complex, giving rise to a variety of classification schemes. The question which still remains open, given deeper surveys revealing new populations of radio sources, is whether this plethora of radio structures can be attributed to the physical properties of the host or to the environment. Here we present an analysis of the radio structure of radio-selected AGN from the VLA-COSMOS Large Project at 3 GHz (JVLA-COSMOS; Smolčić et al.) in relation to: 1) their linear projected size, 2) the Eddington ratio, and 3) the environment their hosts lie within. We classify these as FRI (jet-like) and FRII (lobe-like) based on the FR-type classification scheme, and compare them to a sample of jet-less radio AGN in JVLA-COSMOS. We measure their linear projected sizes using a semi-automatic machine learning technique. Their Eddington ratios are calculated from X-ray data available for COSMOS. As environmental probes we take the X-ray groups (hundreds of kpc) and the density fields (~Mpc-scale) in COSMOS. We find that FRII radio sources are on average larger than FRIs, which agrees with the literature. But contrary to past studies, we find no dichotomy in FR objects in JVLA-COSMOS given their Eddington ratios, as on average they exhibit similar values. Furthermore, our results show that the large-scale environment does not explain the observed dichotomy in lobe- and jet-like FR-type objects, as both types are found in similar environments, but it does affect the shape of the radio structure, introducing bends for objects closer to the centre of an X-ray group.

  6. THE EFFECT OF PROJECTION ON DERIVED MASS-SIZE AND LINEWIDTH-SIZE RELATIONSHIPS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shetty, Rahul; Kauffmann, Jens; Goodman, Alyssa A.

    2010-04-01

    Power-law mass-size and linewidth-size correlations, two of 'Larson's laws', are often studied to assess the dynamical state of clumps within molecular clouds. Using the result of a hydrodynamic simulation of a molecular cloud, we investigate how geometric projection may affect the derived Larson relationships. We find that large-scale structures in the column density map have similar masses and sizes to those in the three-dimensional simulation (position-position-position, PPP). Smaller scale clumps in the column density map are measured to be more massive than the PPP clumps, due to the projection of all emitting gas along lines of sight. Further, due to projection effects, structures in a synthetic spectral observation (position-position-velocity, PPV) may not necessarily correlate with physical structures in the simulation. In considering the turbulent velocities only, the linewidth-size relationship in the PPV cube is appreciably different from that measured from the simulation. Including thermal pressure in the simulated line widths imposes a minimum line width, which results in a better agreement in the slopes of the linewidth-size relationships, though there are still discrepancies in the offsets, as well as considerable scatter. Employing commonly used assumptions in a virial analysis, we find similarities in the computed virial parameters of the structures in the PPV and PPP cubes. However, due to the discrepancies in the linewidth-size and mass-size relationships in the PPP and PPV cubes, we caution that applying a virial analysis to observed clouds may be misleading due to geometric projection effects. We speculate that consideration of physical processes beyond kinetic and gravitational pressure would be required for accurately assessing whether complex clouds, such as those with highly filamentary structure, are bound.

  7. Methods to achieve accurate projection of regional and global raster databases

    USGS Publications Warehouse

    Usery, E.L.; Seong, J.C.; Steinwand, D.R.; Finn, M.P.

    2002-01-01

    This research aims at building a decision support system (DSS) for selecting an optimum projection considering various factors, such as pixel size, areal extent, number of categories, spatial pattern of categories, resampling methods, and error correction methods. Specifically, this research will investigate three goals theoretically and empirically and, using the already developed empirical base of knowledge with these results, develop an expert system for map projection of raster data for regional and global database modeling. The three theoretical goals are as follows: (1) The development of a dynamic projection that adjusts projection formulas for latitude on the basis of raster cell size to maintain equal-sized cells. (2) The investigation of the relationships between the raster representation and the distortion of features, number of categories, and spatial pattern. (3) The development of an error correction and resampling procedure that is based on error analysis of raster projection.
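
    A simplified spherical-Earth illustration of why a "dynamic" adjustment with latitude is needed: the east-west ground extent of a fixed longitude step shrinks by cos(latitude), so the longitude step must grow toward the poles to keep cells of roughly equal ground size. The constants and function below are illustrative and are not the authors' projection formulas.

```python
import numpy as np

EARTH_RADIUS_KM = 6371.0   # mean spherical radius, an approximation

def lon_step_for_equal_cells(lat_deg, target_cell_km):
    """Longitude step (degrees) whose east-west ground extent is about target_cell_km
    at the given latitude; the step grows toward the poles."""
    km_per_deg_lon = (np.pi / 180.0) * EARTH_RADIUS_KM * np.cos(np.radians(lat_deg))
    return target_cell_km / km_per_deg_lon

for lat in (0, 30, 60, 80):
    print(f"lat {lat:2d} deg: lon step ~ {lon_step_for_equal_cells(lat, 1.0):.4f} deg per 1 km cell")
```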

  8. An Exploratory Analysis of Projection-Standard Variables (Screen Size, Image Size and Image Contrast) in Terms of Their Effects on the Speed and Accuracy of Discrimination. Final Report.

    ERIC Educational Resources Information Center

    Metcalf, Richard M.

    Although there has been previous research concerned with image size, brightness, and contrast in projection standards, the work has lacked careful conceptualization. In this study, size was measured in terms of the visual angle subtended by the material, brightness was stated in foot-lamberts, and contrast was defined as the ratio of the…

  9. Settling characteristics of fine-grained sediments used in Louisiana coastal land building and restoration projects

    NASA Astrophysics Data System (ADS)

    Ghose Hajra, M.

    2016-02-01

    Coastal property development, sea level rise, geologic subsidence, loss of barrier islands, increasing number and intensity of coastal storms and other factors have resulted in water quality degradation, wetlands loss, reduced storm and surge protection, ground settlement, and other challenges in coastal areas throughout the world. One of the goals towards reestablishing a healthy coastal ecosystem is to rebuild wetlands with river diversion or sediment conveyance projects that optimally manage and allocate sediments, minimally impact native flora and fauna, and positively affect the water quality. Engineering properties and material characteristics of the dredged material and foundation soils are input parameters in several mathematical models used to predict the long term behavior of the dredged material and foundation soil. Therefore, proper characterization of the dredged material and foundation soils is of utmost importance in the correct design of a coastal restoration and land reclamation project. The sedimentation and consolidation characteristics of the dredged material as well as their effects on the time rate of settlement of the suspended solid particles and underlying foundation soil depend, among other factors, on the (a) grain size distribution of the dredged material, (b) salinity (fresh, brackish, or saltwater environment) of the composite slurry, and (c) concentration of the solid particles in the slurry. This paper will present the results from column settling tests and self-weight consolidation tests performed on dredged samples obtained from actual restoration projects in Louisiana. The effects of salinity, grain size distribution, and initial particle concentration on the sedimentation and consolidation parameters of the dredged material will also be discussed.

  10. Determining global distribution of microplastics by combining citizen science and in-depth case studies.

    PubMed

    Bosker, Thijs; Behrens, Paul; Vijver, Martina G

    2017-05-01

    Microplastics (<5 mm) are contaminants of emerging global concern. They have received considerable attention in scientific research, resulting in an increased awareness of the issue among politicians and the general public. However, there has been significant variation in sampling and extraction procedures used to quantify microplastics levels. The difference in extraction procedures can especially impact study outcomes, making it difficult, and sometimes impossible, to directly compare results among studies. To address this, we recently developed a standard operating procedure (SOP) for sampling microplastics on beaches. We are now assessing regional and global variations in beach microplastics using this standardized approach for 2 research projects. Our first project involves the general public through citizen science. Participants collect sand samples from beaches using a basic protocol, and we subsequently extract and quantify microplastics in a central laboratory using the SOP. Presently, we have 80+ samples from around the world and expect this number to further increase. Second, we are conducting two in-depth regional case studies: one along the Dutch coast (close to major rivers, a known source of microplastic input into marine systems), and the other on the Lesser Antilles in the Caribbean (in the proximity of a hotspot of plastics in the North Atlantic Ocean). In both projects, we use our new SOP to determine regional variation in microplastics, including differences in physicochemical characteristics such as size, shape, and polymer type. Our research will provide, for the first time, a systematic comparison of levels of microplastics on beaches at both a regional and global scale. Integr Environ Assess Manag 2017;13:536-541. © 2017 SETAC.

  11. Stardust@home: A Massively Distributed Public Search for Interstellar Dust in the Stardust Interstellar Dust Collector

    NASA Technical Reports Server (NTRS)

    Westphal, Andrew J.; Butterworth, Anna L.; Snead, Christopher J.; Craig, Nahide; Anderson, David; Jones, Steven M.; Brownlee, Donald E.; Farnsworth, Richard; Zolensky, Michael E.

    2005-01-01

    In January 2006, the Stardust mission will return the first samples from a solid solar system body beyond the Moon. Stardust was in the news in January 2004, when it encountered comet Wild 2 and captured a sample of cometary dust. But Stardust carries an equally important payload: the first samples of contemporary interstellar dust ever collected. Although it is known that interstellar (IS) dust penetrates into the inner solar system [2, 3], to date not even a single contemporary interstellar dust particle has been captured and analyzed in the laboratory. Stardust uses aerogel collectors to capture dust samples. Identification of interstellar dust impacts in the Stardust Interstellar Dust Collector probably cannot be automated, but will require the expertise of the human eye. However, the labor required for visual scanning of the entire collector would exceed the resources of any reasonably-sized research group. We are developing a project to recruit the public in the search for interstellar dust, based in part on the wildly popular SETI@home project, which has five million subscribers. We call the project Stardust@home. Using sophisticated chemical separation techniques, certain types of refractory ancient IS particles (so-called presolar grains) have been isolated from primitive meteorites (e.g., [4]). Recently, presolar grains have been identified in Interplanetary Dust Particles [6]. Because these grains are not isolated chemically, but are recognized only by their unusual isotopic compositions, they are probably less biased than presolar grains isolated from meteorites. However, it is entirely possible that the typical interstellar dust particle is isotopically solar in composition. The Stardust collection of interstellar dust will be the first truly unbiased one.

  12. Assessment of Contribution of Contemporary Carbon Sources to Size-Fractionated Particulate Matter and Time-Resolved Bulk Particulate Matter Using the Measurement of Radiocarbon

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hwang, H M; Young, T M; Buchholz, B A

    2009-04-16

    This study was motivated by a desire to improve understanding of the sources contributing to the carbon that is an important component of airborne particulate matter (PM). The ultimate goal of this project was to lay the groundwork for future tools that might be easily implemented with archived or routinely collected samples. A key feature of this study was the application of radiocarbon measurement, which can be interpreted to indicate the relative contributions of fossil and non-fossil carbon sources to atmospheric PM. Size-resolved PM and time-resolved PM10 collected from a site in Sacramento, CA in November 2007 (Phase I) and March 2008 (Phase II) were analyzed for radiocarbon and source markers such as levoglucosan, cholesterol, and elemental carbon. The radiocarbon data indicate that the contributions of non-fossil carbon sources were much greater than those from fossil carbon sources in all samples. Radiocarbon and source marker measurements confirm that the greater contribution of non-fossil carbon sources in Phase I samples was highly likely due to residential wood combustion. The present study demonstrates that measurement of radiocarbon and source markers can be readily applied to archived or routinely collected samples for better characterization of PM sources. More accurate source apportionment will support ARB in developing more efficient control strategies.

  13. Short hypervariable microhaplotypes: A novel set of very short high discriminating power loci without stutter artefacts.

    PubMed

    van der Gaag, Kristiaan J; de Leeuw, Rick H; Laros, Jeroen F J; den Dunnen, Johan T; de Knijff, Peter

    2018-07-01

    For two decades, short tandem repeats (STRs) have been the preferred markers for human identification, routinely analysed by fragment length analysis. Here we present a novel set of short hypervariable autosomal microhaplotypes (MH) that have four or more SNPs in a span of less than 70 nucleotides (nt). These MHs display a discriminating power approaching that of STRs and provide a powerful alternative for the analysis of forensic samples that are problematic when the STR fragment size range exceeds the integrity range of severely degraded DNA or when multiple donors contribute to an evidentiary stain and STR stutter artefacts complicate profile interpretation. MH typing was developed using the power of massively parallel sequencing (MPS), enabling new powerful, fast and efficient SNP-based approaches. MH candidates were obtained from queries of data from the 1000 Genomes and Genome of the Netherlands (GoNL) projects. Wet-lab analysis of 276 globally dispersed samples and 97 samples from nine large CEPH families assisted locus selection and corroboration of informative value. We infer that MHs represent an alternative marker type with good discriminating power per locus (allowing the use of a limited number of loci), small amplicon sizes and absence of stutter artefacts that can be especially helpful when unbalanced mixed samples are submitted for human identification. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.

  14. Predicting discovery rates of genomic features.

    PubMed

    Gravel, Simon

    2014-06-01

    Successful sequencing experiments require judicious sample selection. However, this selection must often be performed on the basis of limited preliminary data. Predicting the statistical properties of the final sample based on preliminary data can be challenging, because numerous uncertain model assumptions may be involved. Here, we ask whether we can predict "omics" variation across many samples by sequencing only a fraction of them. In the infinite-genome limit, we find that a pilot study sequencing 5% of a population is sufficient to predict the number of genetic variants in the entire population within 6% of the correct value, using an estimator agnostic to demography, selection, or population structure. To reach similar accuracy in a finite genome with millions of polymorphisms, the pilot study would require ∼15% of the population. We present computationally efficient jackknife and linear programming methods that exhibit substantially less bias than the state of the art when applied to simulated data and subsampled 1000 Genomes Project data. Extrapolating based on the National Heart, Lung, and Blood Institute Exome Sequencing Project data, we predict that 7.2% of sites in the capture region would be variable in a sample of 50,000 African Americans and 8.8% in a European sample of equal size. Finally, we show how the linear programming method can also predict discovery rates of various genomic features, such as the number of transcription factor binding sites across different cell types. Copyright © 2014 by the Genetics Society of America.

  15. An ensemble predictive modeling framework for breast cancer classification.

    PubMed

    Nagarajan, Radhakrishnan; Upreti, Meenakshi

    2017-12-01

    Molecular changes often precede clinical presentation of diseases and can be useful surrogates with potential to assist in informed clinical decision making. Recent studies have demonstrated the usefulness of modeling approaches such as classification that can predict clinical outcomes from molecular expression profiles. While useful, a majority of these approaches implicitly use all molecular markers as features in the classification process, often resulting in a sparse high-dimensional projection of the samples whose dimensionality is comparable to the sample size. In this study, a variant of the recently proposed ensemble classification approach is used for predicting good- and poor-prognosis breast cancer samples from their molecular expression profiles. In contrast to traditional single and ensemble classifiers, the proposed approach uses multiple base classifiers with varying feature sets obtained from two-dimensional projections of the samples, in conjunction with a majority voting strategy for predicting the class labels. In contrast to our earlier implementation, base classifiers in the ensembles are chosen for maximal sensitivity and minimal redundancy by retaining only those with low average cosine distance. The resulting ensemble sets are subsequently modeled as undirected graphs. The performance of four different classification algorithms is shown to be better within the proposed ensemble framework than when they are used as traditional single-classifier systems. The significance of a subset of genes with high degree centrality in the network abstractions across the poor-prognosis samples is also discussed. Copyright © 2017 Elsevier Inc. All rights reserved.
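
    A stripped-down sketch of the general idea described above (many base classifiers, each trained on a two-feature projection, combined by majority vote) is given below using scikit-learn. The sensitivity/redundancy filtering via cosine distance and the graph modeling from the paper are omitted, and the dataset, feature subset, and classifier choice are illustrative assumptions.

```python
import numpy as np
from itertools import combinations
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

# Each base classifier sees only a two-dimensional projection of the samples;
# predictions are combined by a simple majority vote over all base classifiers.
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

pairs = list(combinations(range(10), 2))  # restrict to the first 10 features for brevity
base = [LogisticRegression(max_iter=2000).fit(X_tr[:, list(p)], y_tr) for p in pairs]

votes = np.array([clf.predict(X_te[:, list(p)]) for clf, p in zip(base, pairs)])
majority = (votes.mean(axis=0) >= 0.5).astype(int)
print("ensemble accuracy:", (majority == y_te).mean())
```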

  16. Ghost imaging based on Pearson correlation coefficients

    NASA Astrophysics Data System (ADS)

    Yu, Wen-Kai; Yao, Xu-Ri; Liu, Xue-Feng; Li, Long-Zhen; Zhai, Guang-Jie

    2015-05-01

    Correspondence imaging is a new modality of ghost imaging, which can retrieve a positive/negative image by simple conditional averaging of the reference frames that correspond to relatively large/small values of the total intensity measured at the bucket detector. Here we propose and experimentally demonstrate a more rigorous and general approach in which a ghost image is retrieved by calculating a Pearson correlation coefficient between the bucket detector intensity and the brightness at a given pixel of the reference frames, and at the next pixel, and so on. Furthermore, we theoretically provide a statistical interpretation of these two imaging phenomena, and explain how the error depends on the sample size and what kind of distribution the error obeys. According to our analysis, the image signal-to-noise ratio can be greatly improved and the sampling number reduced by means of our new method. Project supported by the National Key Scientific Instrument and Equipment Development Project of China (Grant No. 2013YQ030595) and the National High Technology Research and Development Program of China (Grant No. 2013AA122902).
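
    The per-pixel Pearson-correlation reconstruction described above can be sketched numerically: correlate the bucket-detector intensity sequence with the intensity time series of each reference-frame pixel. The toy example below (random speckle patterns, a binary transmissive object, numpy only) illustrates the statistic and is not a model of the actual experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ghost-imaging setup: the object is observed only through the total
# ("bucket") intensity it transmits for each random illumination pattern.
obj = np.zeros((32, 32))
obj[8:24, 12:20] = 1.0                       # simple transmissive object

n_frames = 5000
frames = rng.random((n_frames, 32, 32))      # random reference speckle patterns
bucket = (frames * obj).sum(axis=(1, 2))     # single-pixel (bucket) measurements

# Pearson correlation between the bucket signal and each reference pixel's time series.
f_centered = frames - frames.mean(axis=0)
b_centered = bucket - bucket.mean()
cov = (f_centered * b_centered[:, None, None]).mean(axis=0)
image = cov / (frames.std(axis=0) * bucket.std())

print("mean correlation at object pixels:", image[obj == 1].mean())
print("mean correlation at background pixels:", image[obj == 0].mean())
```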

  17. Elucidating the ensemble of functionally-relevant transitions in protein systems with a robotics-inspired method.

    PubMed

    Molloy, Kevin; Shehu, Amarda

    2013-01-01

    Many proteins tune their biological function by transitioning between different functional states, effectively acting as dynamic molecular machines. Detailed structural characterization of transition trajectories is central to understanding the relationship between protein dynamics and function. Computational approaches that build on the Molecular Dynamics framework are in principle able to model transition trajectories in great detail, but at considerable computational cost. Methods that delay consideration of dynamics and focus instead on elucidating energetically credible conformational paths connecting two functionally relevant structures provide a complementary approach. Effective sampling-based path planning methods originating in robotics have recently been proposed to produce conformational paths. These methods largely model short peptides or address large proteins by simplifying conformational space. We propose a robotics-inspired method that connects two given structures of a protein by sampling conformational paths. The method focuses on small- to medium-size proteins, efficiently modeling structural deformations through the use of the molecular fragment replacement technique. In particular, the method grows a tree in conformational space rooted at the start structure, steering the tree to a goal region defined around the goal structure. We investigate various bias schemes over a progress coordinate to balance coverage of conformational space against progress towards the goal. A geometric projection layer promotes path diversity. A reactive temperature scheme allows sampling of rare paths that cross energy barriers. Experiments are conducted on small- to medium-size proteins of up to 214 amino acids with multiple known functionally relevant states, some of which are more than 13 Å apart from each other. Analysis reveals that the method effectively obtains conformational paths connecting structural states that are significantly different. A detailed analysis of the depth and breadth of the tree suggests that a soft global bias over the progress coordinate enhances sampling and results in higher path diversity. The explicit geometric projection layer that biases the exploration away from over-sampled regions further increases coverage, often improving proximity to the goal by forcing the exploration to find new paths. The reactive temperature scheme is shown to be effective in increasing path diversity, particularly in difficult structural transitions with known high-energy barriers.

  18. Field and laboratory procedures used in a soil chronosequence study

    USGS Publications Warehouse

    Singer, Michael J.; Janitzky, Peter

    1986-01-01

    In 1978, the late Denis Marchand initiated a research project entitled "Soil Correlation and Dating at the U.S. Geological Survey" to determine the usefulness of soils in solving geologic problems. Marchand proposed to establish soil chronosequences that could be dated independently of soil development by using radiometric and other numeric dating methods. In addition, by comparing dated chronosequences in different environments, rates of soil development could be studied and compared among varying climates and mineralogical conditions. The project was fundamental in documenting the value of soils in studies of mapping, correlating, and dating late Cenozoic deposits and in studying soil genesis. All published reports by members of the project are included in the bibliography. The project demanded that methods be adapted or developed to ensure comparability over a wide variation in soil types. Emphasis was placed on obtaining professional expertise and on establishing consistent techniques, especially for the field, laboratory, and data-compilation methods. Since 1978, twelve chronosequences have been sampled and analyzed by members of this project, and methods have been established and used consistently for analysis of the samples. The goals of this report are to: (1) document the methods used for the study of soil chronosequences, (2) present the results of tests that were run for precision, accuracy, and effectiveness, and (3) discuss our modifications to standard procedures. Many of the methods presented herein are standard and have been reported elsewhere. However, we assume less prior analytical knowledge in our descriptions; thus, the manual should be easy to follow for the inexperienced analyst. Each chapter presents one or more references for the basic principle, an equipment and reagents list, and the detailed procedure. In some chapters this is followed by additional remarks or example calculations. The flow diagram in figure 1 outlines the step-by-step procedures used to obtain and analyze soil samples for this study. The soils analyzed had a wide range of characteristics (such as clay content, mineralogy, salinity, and acidity). Initially, a major task was to test and select methods that could be applied and interpreted similarly for the various types of soils. Tests were conducted to establish the effectiveness and comparability of analytical techniques, and the data for such tests are included in figures, tables, and discussions. In addition, many replicate analyses of samples have established a "standard error" or "coefficient of variation" which indicates the average reproducibility of each laboratory procedure. These averaged errors are reported as a percentage of a given value. For example, in particle-size determination, a 3 percent error for 10 percent clay content equals 10 ± 0.3 percent clay. The error sources were examined to determine, for example, whether the error in particle-size determination was dependent on clay content. No such biases were found, and data are reported as percent error in the text and in tables of reproducibility.

  19. ICESCAPE Mission

    NASA Image and Video Library

    2010-07-03

    Benny Hopson from the Barrow (Alaska) Arctic Science Consortium drills a core sample from sea ice in the Chukchi Sea on July 4, 2010. The core is sliced up into puck-sized sections and stored onboard the U.S. Coast Guard Healy for analysis in the ship's lab. Impacts of Climate change on the Eco-Systems and Chemistry of the Arctic Pacific Environment (ICESCAPE) is a multi-year NASA shipborne project. The bulk of the research will take place in the Beaufort and Chukchi Seas in the summer of 2010 and fall of 2011. Photo Credit: (NASA/Kathryn Hansen)

  20. An analysis of adaptive design variations on the sequential parallel comparison design for clinical trials.

    PubMed

    Mi, Michael Y; Betensky, Rebecca A

    2013-04-01

    Currently, a growing placebo response rate has been observed in clinical trials for antidepressant drugs, a phenomenon that has made it increasingly difficult to demonstrate efficacy. The sequential parallel comparison design (SPCD) is a clinical trial design that was proposed to address this issue. The SPCD theoretically has the potential to reduce the sample-size requirement for a clinical trial and to simultaneously enrich the study population to be less responsive to the placebo. Because the basic SPCD already reduces the placebo response by removing placebo responders between the first and second phases of a trial, the purpose of this study was to examine whether we can further improve the efficiency of the basic SPCD and whether we can do so when the projected underlying drug and placebo response rates differ considerably from the actual ones. Three adaptive designs that used interim analyses to readjust the length of study duration for individual patients were tested to reduce the sample-size requirement or increase the statistical power of the SPCD. Various simulations of clinical trials using the SPCD with interim analyses were conducted to test these designs through calculations of empirical power. From the simulations, we found that the adaptive designs can recover unnecessary resources spent in the traditional SPCD trial format with overestimated initial sample sizes and provide moderate gains in power. Under the first design, results showed up to a 25% reduction in person-days, with most power losses below 5%. In the second design, results showed up to an 8% reduction in person-days with negligible loss of power. In the third design using sample-size re-estimation, up to 25% power was recovered from underestimated sample-size scenarios. Given the numerous possible test parameters that could have been chosen for the simulations, the study's results are limited to situations described by the parameters that were used and may not generalize to all possible scenarios. Furthermore, dropout of patients is not considered in this study. It is possible to make an already complex design such as the SPCD adaptive, and thus more efficient, potentially overcoming the problem of placebo response at lower cost. Ultimately, such a design may expedite the approval of future effective treatments.

  1. An analysis of adaptive design variations on the sequential parallel comparison design for clinical trials

    PubMed Central

    Mi, Michael Y.; Betensky, Rebecca A.

    2013-01-01

    Background Currently, a growing placebo response rate has been observed in clinical trials for antidepressant drugs, a phenomenon that has made it increasingly difficult to demonstrate efficacy. The sequential parallel comparison design (SPCD) is a clinical trial design that was proposed to address this issue. The SPCD theoretically has the potential to reduce the sample size requirement for a clinical trial and to simultaneously enrich the study population to be less responsive to the placebo. Purpose Because the basic SPCD already reduces the placebo response by removing placebo responders between the first and second phases of a trial, the purpose of this study was to examine whether we can further improve the efficiency of the basic SPCD and whether we can do so when the projected underlying drug and placebo response rates differ considerably from the actual ones. Methods Three adaptive designs that used interim analyses to readjust the length of study duration for individual patients were tested to reduce the sample size requirement or increase the statistical power of the SPCD. Various simulations of clinical trials using the SPCD with interim analyses were conducted to test these designs through calculations of empirical power. Results From the simulations, we found that the adaptive designs can recover unnecessary resources spent in the traditional SPCD trial format with overestimated initial sample sizes and provide moderate gains in power. Under the first design, results showed up to a 25% reduction in person-days, with most power losses below 5%. In the second design, results showed up to an 8% reduction in person-days with negligible loss of power. In the third design using sample size re-estimation, up to 25% power was recovered from underestimated sample size scenarios. Limitations Given the numerous possible test parameters that could have been chosen for the simulations, the study's results are limited to situations described by the parameters that were used, and may not generalize to all possible scenarios. Furthermore, drop-out of patients is not considered in this study. Conclusions It is possible to make an already complex design such as the SPCD adaptive, and thus more efficient, potentially overcoming the problem of placebo response at lower cost. Ultimately, such a design may expedite the approval of future effective treatments. PMID:23283576

  2. Optimal flexible sample size design with robust power.

    PubMed

    Zhang, Lanju; Cui, Lu; Yang, Bo

    2016-08-30

    It is well recognized that sample size determination is challenging because of the uncertainty on the treatment effect size. Several remedies are available in the literature. Group sequential designs start with a sample size based on a conservative (smaller) effect size and allow early stop at interim looks. Sample size re-estimation designs start with a sample size based on an optimistic (larger) effect size and allow sample size increase if the observed effect size is smaller than planned. Different opinions favoring one type over the other exist. We propose an optimal approach using an appropriate optimality criterion to select the best design among all the candidate designs. Our results show that (1) for the same type of designs, for example, group sequential designs, there is room for significant improvement through our optimization approach; (2) optimal promising zone designs appear to have no advantages over optimal group sequential designs; and (3) optimal designs with sample size re-estimation deliver the best adaptive performance. We conclude that to deal with the challenge of sample size determination due to effect size uncertainty, an optimal approach can help to select the best design that provides most robust power across the effect size range of interest. Copyright © 2016 John Wiley & Sons, Ltd.

  3. [Effect sizes, statistical power and sample sizes in "the Japanese Journal of Psychology"].

    PubMed

    Suzukawa, Yumi; Toyoda, Hideki

    2012-04-01

    This study analyzed the statistical power of research studies published in the "Japanese Journal of Psychology" in 2008 and 2009. Sample effect sizes and sample statistical powers were calculated for each statistical test and analyzed with respect to the analytical methods and the fields of the studies. The results show that in fields such as perception, cognition, or learning, the effect sizes were relatively large, although the sample sizes were small. At the same time, because of the small sample sizes, some meaningful effects could not be detected. In other fields, because of the large sample sizes, meaningless effects could be detected. This implies that researchers who could not obtain large enough effect sizes would use larger samples to obtain significant results.
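
    The trade-off the abstract describes, small samples detecting only large effects and large samples flagging trivial ones, can be made concrete with a standard power calculation. The sketch below uses statsmodels (assumed available) for a two-sample t-test; the effect sizes and group sizes are illustrative, not values from the journal survey.

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Achieved power for a two-sided two-sample t-test at several effect sizes
# (Cohen's d) and per-group sample sizes.
for d in (0.2, 0.5, 0.8):
    for n in (20, 80, 200):
        power = analysis.power(effect_size=d, nobs1=n, alpha=0.05, ratio=1.0)
        print(f"d={d:.1f}  n per group={n:3d}  power={power:.2f}")

# Per-group sample size needed for 80% power at a medium effect (d = 0.5).
n_needed = analysis.solve_power(effect_size=0.5, power=0.8, alpha=0.05)
print("n per group for d=0.5 at 80% power:", round(n_needed))
```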

  4. Sample Size Estimation: The Easy Way

    ERIC Educational Resources Information Center

    Weller, Susan C.

    2015-01-01

    This article presents a simple approach to making quick sample size estimates for basic hypothesis tests. Although there are many sources available for estimating sample sizes, methods are not often integrated across statistical tests, levels of measurement of variables, or effect sizes. A few parameters are required to estimate sample sizes and…

  5. Uncertainties in detecting decadal change in extractable soil elements in Northern Forests

    NASA Astrophysics Data System (ADS)

    Bartlett, O.; Bailey, S. W.; Ducey, M. J.

    2016-12-01

    Northern Forest ecosystems have been or are being impacted by land use change, forest harvesting, acid deposition, atmospheric CO2 enrichment, and climate change. Each of these has the potential to modify soil forming processes and the resulting chemical stocks. Horizontal and vertical variations in concentrations complicate determination of temporal change. This study evaluates sample design, sample size, and differences among observers as sources of uncertainty when quantifying soil temporal change over regional scales. Forty permanent northern hardwood monitoring plots were established on the White Mountain National Forest in central New Hampshire and western Maine. Soil pits were characterized and sampled by genetic horizon at plot center in 2001 and resampled in 2014, two meters on contour from the original sampling location. Each soil horizon was characterized by depth, color, texture, structure, consistency, boundaries, coarse fragments, and roots from the forest floor to the upper C horizon, the relatively unaltered glacial till parent material. Laboratory analyses included pH in 0.01 M CaCl2 solution and extractable Ca, Mg, Na, K, Al, Mn, and P in 1 M NH4OAc solution buffered at pH 4.8. Significant elemental differences identified by genetic horizon from paired t-tests (p ≤ 0.05) indicate temporal change across the study region. Power analysis at 0.9 power (α = 0.05) revealed that the sample size was appropriate within this region to detect concentration change by genetic horizon using a stratified sample design based on topographic metrics. There were no significant differences between observers' descriptions of physical properties. As physical properties would not be expected to change over a decade, this suggests spatial variation in physical properties between the pairs of sampling pits did not detract from our ability to detect temporal change. These results suggest that resampling efforts within a site, repeated across a region, to quantify elemental change by carefully described genetic horizons are an appropriate method of detecting soil temporal change in this region. Sample size and design considerations from this project will have direct implications for future monitoring programs to characterize change in soil chemistry.

  6. The Relationship between Sample Sizes and Effect Sizes in Systematic Reviews in Education

    ERIC Educational Resources Information Center

    Slavin, Robert; Smith, Dewi

    2009-01-01

    Research in fields other than education has found that studies with small sample sizes tend to have larger effect sizes than those with large samples. This article examines the relationship between sample size and effect size in education. It analyzes data from 185 studies of elementary and secondary mathematics programs that met the standards of…

  7. Microplastic distribution in global marine surface waters: results of an extensive citizen science study

    NASA Astrophysics Data System (ADS)

    Barrows, A.; Petersen, C.

    2017-12-01

    Plastic is a major pollutant throughout the world. The majority of the 322 million tons produced annually is used for single-use packaging. The features that make plastic an attractive packaging material (cheap, lightweight, and durable) also help make it a common and persistent pollutant. There is a growing body of research on microplastic, particles less than 5 mm in size. Microfibers are the most common microplastic in the marine environment. Global estimates of marine microplastic surface concentrations are based on relatively small sample sizes when compared to the vast geographic scale of the ocean. Microplastic residence time and movement along the coast and sea surface outside of the gyres are still not well researched. This five-year project utilized global citizen scientists to collect 1,628 1-liter surface grab samples in every major ocean. The Arctic and Southern Oceans contained the highest average number of particles per liter of surface water. Open-ocean samples (further than 12 nm from land, n = 686) contained a higher particle average (17 pieces L-1) than coastal samples (n = 723; 6 pieces L-1). Particles were predominantly 100 µm - 1.5 mm in length (77%), smaller than what has been captured in the majority of surface studies. Utilization of citizen scientists to collect data both in fairly accessible regions of the world and in areas hard to reach and therefore under-sampled provides us with a wider perspective of global microplastics occurrence. Our findings confirm global microplastic accumulation-zone model predictions. The open ocean and poles have sequestered and trapped plastic for over half a century, and show that not only plastics but also anthropogenic fibers are polluting the environment. Continuing to fill knowledge gaps on microplastic shape, size and color in remote ocean areas will drive more accurate oceanographic models of plastic accumulation zones. Incorporation of smaller-sized particles in these models, which has previously been lacking, will help to better understand the potential fate and transformation of microplastic and anthropogenic particles in the marine environment.

  8. The impact of HIV / AIDS on Kenya's commercial sector.

    PubMed

    Forsythe, S; Roberts, M

    1995-02-01

    AIDSCAP is undertaking a project designed to encourage Kenya's private sector to participate in HIV/AIDS prevention efforts. Part of that project involves estimating the impact of HIV/AIDS on Kenya's commercial sector. AIDSCAP and Kenyan researcher estimates are based upon financial data from a sugar estate, a transportation company, a wood processing plant, a textile factory, and a light manufacturing company, a small sample representing the diversity of industries in the country. Most are medium-sized companies with 1200-2200 employees. Preliminary results suggest that absenteeism, training costs, and HIV-related health care will cause the greatest losses to Kenyan businesses. Projections show that the HIV/AIDS epidemic could increase labor costs for some Kenyan businesses by 17% by the year 2005. Despite increasing labor costs, however, the epidemic may not cause a significant drop in profits for larger, capital-intensive Kenyan businesses. Some companies could still find their profits cut by 15-25% within the next 10 years. Study findings and implications for workplace prevention programs are discussed.

  9. Dataset of MIGRAME Project (Global Change, Altitudinal Range Shift and Colonization of Degraded Habitats in Mediterranean Mountains)

    PubMed Central

    Pérez-Luque, Antonio Jesús; Zamora, Regino; Bonet, Francisco Javier; Pérez-Pérez, Ramón

    2015-01-01

    In this data paper, we describe the dataset of the Global Change, Altitudinal Range Shift and Colonization of Degraded Habitats in Mediterranean Mountains (MIGRAME) project, which aims to assess the capacity for altitudinal migration and colonization of marginal habitats by Quercus pyrenaica Willd. forests in Sierra Nevada (southern Spain), considering two global-change drivers: temperature increase and land-use changes. The dataset includes information on the forest structure (diameter size, tree height, and abundance) of the Quercus pyrenaica ecosystem in Sierra Nevada obtained from 199 transects sampled at the treeline ecotone, mature forest, and marginal habitats (abandoned cropland and pine plantations). A total of 3839 occurrence records were collected and 5751 measurements recorded. The dataset is included in the Sierra Nevada Global-Change Observatory (OBSNEV), a long-term research project designed to compile socio-ecological information on the major ecosystem types in order to identify the impacts of global change in this mountain range. PMID:26491387

  10. Phylogenetic effective sample size.

    PubMed

    Bartoszek, Krzysztof

    2016-10-21

    In this paper I address the question of how large a phylogenetic sample is. I propose a definition of a phylogenetic effective sample size for Brownian motion and Ornstein-Uhlenbeck processes: the regression effective sample size. I discuss how mutual information can be used to define an effective sample size in the non-normal process case and compare these two definitions to an existing concept of effective sample size (the mean effective sample size). Through a simulation study I find that the AICc is robust if one corrects for the number of species or the effective number of species. Lastly, I discuss how the concept of the phylogenetic effective sample size can be useful for biodiversity quantification, identification of interesting clades and deciding on the importance of phylogenetic correlations. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Overestimation of the Projected Size of Objects on the Surface of Mirrors and Windows

    ERIC Educational Resources Information Center

    Lawson, Rebecca; Bertamini, Marco; Liu, Dan

    2007-01-01

    Four experiments investigated judgments of the size of projections of objects on the glass surface of mirrors and windows. The authors tested different ways of explaining the task to overcome the difficulty that people had in understanding what the projection was, and they varied the distance of the observer and the object to the mirror or window…

  12. Robust DNA Isolation and High-throughput Sequencing Library Construction for Herbarium Specimens.

    PubMed

    Saeidi, Saman; McKain, Michael R; Kellogg, Elizabeth A

    2018-03-08

    Herbaria are an invaluable source of plant material that can be used in a variety of biological studies. The use of herbarium specimens is associated with a number of challenges including sample preservation quality, degraded DNA, and destructive sampling of rare specimens. In order to more effectively use herbarium material in large sequencing projects, a dependable and scalable method of DNA isolation and library preparation is needed. This paper demonstrates a robust, beginning-to-end protocol for DNA isolation and high-throughput library construction from herbarium specimens that does not require modification for individual samples. This protocol is tailored for low quality dried plant material and takes advantage of existing methods by optimizing tissue grinding, modifying library size selection, and introducing an optional reamplification step for low yield libraries. Reamplification of low yield DNA libraries can rescue samples derived from irreplaceable and potentially valuable herbarium specimens, negating the need for additional destructive sampling and without introducing discernible sequencing bias for common phylogenetic applications. The protocol has been tested on hundreds of grass species, but is expected to be adaptable for use in other plant lineages after verification. This protocol can be limited by extremely degraded DNA, where fragments do not exist in the desired size range, and by secondary metabolites present in some plant material that inhibit clean DNA isolation. Overall, this protocol introduces a fast and comprehensive method that allows for DNA isolation and library preparation of 24 samples in less than 13 h, with only 8 h of active hands-on time with minimal modifications.

  13. Bayesian estimation of Karhunen–Loève expansions; A random subspace approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chowdhary, Kenny; Najm, Habib N.

    One of the most widely used statistical procedures for dimensionality reduction of high-dimensional random fields is Principal Component Analysis (PCA), which is based on the Karhunen-Loève expansion (KLE) of a stochastic process with finite variance. The KLE is analogous to a Fourier series expansion for a random process, where the goal is to find an orthogonal transformation for the data such that the projection of the data onto this orthogonal subspace is optimal in the L2 sense, i.e., one which minimizes the mean square error. In practice, this orthogonal transformation is determined by performing an SVD (Singular Value Decomposition) on the sample covariance matrix or on the data matrix itself. Sampling error is typically ignored when quantifying the principal components or, equivalently, the basis functions of the KLE. Furthermore, it is exacerbated when the sample size is much smaller than the dimension of the random field. In this paper, we introduce a Bayesian KLE procedure, allowing one to obtain a probabilistic model on the principal components, which can account for inaccuracies due to limited sample size. The probabilistic model is built via Bayesian inference, from which the posterior becomes the matrix Bingham density over the space of orthonormal matrices. We use a modified Gibbs sampling procedure to sample on this space and then build probabilistic Karhunen-Loève expansions over random subspaces to obtain a set of low-dimensional surrogates of the stochastic process. We illustrate this probabilistic procedure with a finite-dimensional stochastic process inspired by Brownian motion.
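
    The classical, non-Bayesian step described above (estimating the KLE/PCA basis by an SVD of the centered data matrix) can be sketched in a few lines of numpy; the Bayesian treatment with the matrix Bingham posterior is beyond this illustration. The Brownian-motion-like toy data and all parameter values are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Estimate the KLE/PCA basis of a random field from a small number of samples
# via an SVD of the centered data matrix (the classical step; the paper adds a
# Bayesian treatment of the resulting basis, which is not reproduced here).
n_samples, n_points = 50, 500   # few samples, high-dimensional field
# Brownian-motion-like realizations: cumulative sums of white noise.
data = np.cumsum(rng.standard_normal((n_samples, n_points)), axis=1) / np.sqrt(n_points)

centered = data - data.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)

eigvals = s**2 / (n_samples - 1)   # sample estimates of the KLE eigenvalues
modes = Vt                         # rows are the estimated orthonormal KLE modes
print("first mode length:", modes[0].shape)
print("variance explained by first 5 modes:", np.round(eigvals[:5] / eigvals.sum(), 3))
```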

  14. Bayesian estimation of Karhunen–Loève expansions; A random subspace approach

    DOE PAGES

    Chowdhary, Kenny; Najm, Habib N.

    2016-04-13

    One of the most widely used statistical procedures for dimensionality reduction of high-dimensional random fields is Principal Component Analysis (PCA), which is based on the Karhunen-Loève expansion (KLE) of a stochastic process with finite variance. The KLE is analogous to a Fourier series expansion for a random process, where the goal is to find an orthogonal transformation for the data such that the projection of the data onto this orthogonal subspace is optimal in the L2 sense, i.e., one which minimizes the mean square error. In practice, this orthogonal transformation is determined by performing an SVD (Singular Value Decomposition) on the sample covariance matrix or on the data matrix itself. Sampling error is typically ignored when quantifying the principal components or, equivalently, the basis functions of the KLE. Furthermore, it is exacerbated when the sample size is much smaller than the dimension of the random field. In this paper, we introduce a Bayesian KLE procedure, allowing one to obtain a probabilistic model on the principal components, which can account for inaccuracies due to limited sample size. The probabilistic model is built via Bayesian inference, from which the posterior becomes the matrix Bingham density over the space of orthonormal matrices. We use a modified Gibbs sampling procedure to sample on this space and then build probabilistic Karhunen-Loève expansions over random subspaces to obtain a set of low-dimensional surrogates of the stochastic process. We illustrate this probabilistic procedure with a finite-dimensional stochastic process inspired by Brownian motion.

  15. Fully automatic characterization and data collection from crystals of biological macromolecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Svensson, Olof; Malbet-Monaco, Stéphanie; Popov, Alexander

    A fully automatic system has been developed that performs X-ray centring and characterization of, and data collection from, large numbers of cryocooled crystals without human intervention. Considerable effort is dedicated to evaluating macromolecular crystals at synchrotron sources, even for well established and robust systems. Much of this work is repetitive, and the time spent could be better invested in the interpretation of the results. In order to decrease the need for manual intervention in the most repetitive steps of structural biology projects, initial screening and data collection, a fully automatic system has been developed to mount, locate, centre to the optimal diffraction volume, characterize and, if possible, collect data from multiple cryocooled crystals. Using the capabilities of pixel-array detectors, the system is as fast as a human operator, taking an average of 6 min per sample depending on the sample size and the level of characterization required. Using a fast X-ray-based routine, samples are located and centred systematically at the position of highest diffraction signal, and important parameters for sample characterization, such as flux, beam size and crystal volume, are automatically taken into account, ensuring the calculation of optimal data-collection strategies. The system is now in operation at the new ESRF beamline MASSIF-1 and has been used by both industrial and academic users for many different sample types, including crystals of less than 20 µm in the smallest dimension. To date, over 8000 samples have been evaluated on MASSIF-1 without any human intervention.

  16. Snow grain size and shape distributions in northern Canada

    NASA Astrophysics Data System (ADS)

    Langlois, A.; Royer, A.; Montpetit, B.; Roy, A.

    2016-12-01

    Pioneer snow work in the 1970s and 1980s proposed new approaches to retrieve snow depth and water equivalent from space using passive microwave brightness temperatures. Numerous studies have since shown that microwave approaches depend strongly on snow grain morphology (size and shape), which until recently was poorly parameterized, leading to strong biases in the retrieval calculations. The resulting uncertainties in space-based retrievals, and the development of complex thermodynamic multilayer snow and emission models, motivated new approaches to quantify snow grain metrics, given the lack of field measurements arising from the sampling constraints of such variables. This presentation focuses on the unknown distribution of snow grain sizes. Our group developed a new approach to the `traditional' measurement of snow grain metrics, in which micro-photographs of snow grains are taken under angular directional LED lighting. The projected shadows are digitized so that a 3D reconstruction of the snow grains is possible. This device has been used in several field campaigns, and the very large dataset collected over the years is presented in this paper. A total of 588 snow photographs from 107 snowpits were collected during the European Space Agency (ESA) Cold Regions Hydrology high-resolution Observatory (CoReH2O) mission concept field campaign in Churchill, Manitoba, Canada (January - April 2010). Each of the 588 photographs was classified as depth hoar, rounded, facets, or precipitation particles. A total of 162,516 snow grains were digitized across the 588 photographs, averaging 263 grains per photograph. Results include distribution histograms for 5 `size' metrics (projected area, perimeter, equivalent optical diameter, minimum axis and maximum axis) and 2 `shape' metrics (eccentricity, major/minor axis ratio). Different cumulative histograms are found for the different grain types, and proposed fits are presented with the kernel distribution function. Finally, a comparison with the Specific Surface Area (SSA) derived from reflectance values using the Infrared Integrating Sphere (IRIS) highlights different power-law fits for the 5 `size' metrics.

  17. Topsoil depth substantially influences the responses to drought of the foliar metabolomes of Mediterranean forests

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rivas-Ubach, Albert; Barbeta, Adrià; Sardans, Jordi

    Soils provide physical support, water, and nutrients to terrestrial plants. Upper soil layers are crucial for forest dynamics, especially under drought conditions, because many biological processes occur there. We postulated that tree size and overall plant function, as manifested in the metabolome composition (the total set of metabolites), depend on the depth of the upper soil layers and on water availability. We sampled leaves for stoichiometric and metabolomic analyses once per season from differently sized Quercus ilex trees under natural and experimental drought conditions as projected for the coming decades. Differently sized trees had different metabolomes, and plots with shallower soils had smaller trees. Soil moisture of the upper soil did not explain the tree size, and smaller trees did not show higher concentrations of biomarker metabolites related to drought stress. However, the impact of the drought treatment on metabolomes was higher in smaller trees in shallower soils. Our results suggest that tree size was more dependent on the depth of the upper soil layers, which indirectly affect the metabolomes of the trees, than on the moisture content of the upper soil layers. Metabolomic profiling of Q. ilex supported the premise that water availability in the upper soil layers is not necessarily correlated with tree size. The higher impact of drought on trees growing in shallower soils nevertheless indicates a higher vulnerability of small trees to the future increase in frequency, intensity, and duration of drought projected for the Mediterranean Basin and other areas. Metabolomics has proven to be an excellent tool for detecting significant metabolic changes among differently sized individuals of the same species, and it improves our understanding of the connection between plant metabolomes and environmental variables such as soil depth and moisture content.

  18. The endothelial sample size analysis in corneal specular microscopy clinical examinations.

    PubMed

    Abib, Fernando C; Holzchuh, Ricardo; Schaefer, Artur; Schaefer, Tania; Godois, Ronialci

    2012-05-01

    To evaluate endothelial cell sample size and statistical error in corneal specular microscopy (CSM) examinations. One hundred twenty examinations were conducted with 4 types of corneal specular microscopes: 30 each with the Bio-Optics, CSO, Konan, and Topcon instruments. All endothelial image data were analyzed by the respective instrument software and also by the Cells Analyzer software with a method developed in our lab. A reliability degree (RD) of 95% and a relative error (RE) of 0.05 were used as cut-off values to analyze images of the counted endothelial cells, called samples. The sample size mean was the number of cells evaluated on the images obtained with each device. Only examinations with RE < 0.05 were considered statistically correct and suitable for comparisons with future examinations. The Cells Analyzer software was used to calculate the RE and a customized sample size for all examinations. Bio-Optics: sample size, 97 ± 22 cells; RE, 6.52 ± 0.86; only 10% of the examinations had a sufficient endothelial cell quantity (RE < 0.05); customized sample size, 162 ± 34 cells. CSO: sample size, 110 ± 20 cells; RE, 5.98 ± 0.98; only 16.6% of the examinations had a sufficient endothelial cell quantity (RE < 0.05); customized sample size, 157 ± 45 cells. Konan: sample size, 80 ± 27 cells; RE, 10.6 ± 3.67; none of the examinations had a sufficient endothelial cell quantity (RE > 0.05); customized sample size, 336 ± 131 cells. Topcon: sample size, 87 ± 17 cells; RE, 10.1 ± 2.52; none of the examinations had a sufficient endothelial cell quantity (RE > 0.05); customized sample size, 382 ± 159 cells. A very high number of CSM examinations had sample errors based on the Cells Analyzer software. The endothelial sample size (examinations) needs to include more cells to be reliable and reproducible. The Cells Analyzer tutorial routine will be useful for CSM examination reliability and reproducibility.
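
    The abstract does not give the Cells Analyzer formula, but the relationship it relies on between the number of counted cells and the relative error of the estimate can be illustrated with the textbook normal-approximation relation RE ≈ z·CV/√n, where CV is the coefficient of variation of the measured cell parameter. The sketch below, including the CV value, is an illustrative assumption, not the Cells Analyzer algorithm.

```python
import math

Z = {0.90: 1.645, 0.95: 1.960, 0.99: 2.576}  # normal quantiles by confidence level

def achieved_relative_error(cv: float, n_cells: int, confidence: float = 0.95) -> float:
    """Relative error of the sample mean achieved with n_cells counted cells."""
    return Z[confidence] * cv / math.sqrt(n_cells)

def required_sample_size(cv: float, relative_error: float = 0.05,
                         confidence: float = 0.95) -> int:
    """Cells needed to estimate the mean within the given relative error."""
    return math.ceil((Z[confidence] * cv / relative_error) ** 2)

if __name__ == "__main__":
    # With a coefficient of variation of ~0.30, counting ~100 cells gives
    # RE ~ 0.06, above the 0.05 cut-off; ~139 cells would be needed.
    print(round(achieved_relative_error(cv=0.30, n_cells=100), 3))
    print(required_sample_size(cv=0.30, relative_error=0.05))
```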

  19. Poster — Thur Eve — 01: The effect of the number of projections on MTF and CNR in Compton scatter tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chighvinadze, T; Pistorius, S; CancerCare Manitoba, Winnipeg, MB

    2014-08-15

    Purpose: To investigate the dependence of the reconstructed image quality on the number of projections in multi-projection Compton scatter tomography (MPCST). The conventional relationship between the number of projections used for reconstruction and the reconstructed image quality in CT does not necessarily apply to MPCST, which can produce images from a single projection if the detectors have sufficiently high energy and spatial resolution. Methods: The electron density image was obtained using filtered backprojection of the scatter signal over circular arcs formed using the Compton equation. The behavior of the reconstructed image quality as a function of the number of projections was evaluated through analytical simulations and characterized by CNR and MTF. Results: Increasing the number of projections improves the contrast, with this dependence being a function of fluence. The number of projections required to approach the asymptotic maximum contrast decreases as the fluence increases. Increasing the number of projections increases the CNR but not the spatial resolution. Conclusions: For MPCST using a 500 eV energy resolution and a 2×2 mm² detector, an adequate image quality can be obtained with a small number of projections provided the incident fluence is high enough. This is conceptually different from conventional CT, where a minimum number of projections is required to obtain an adequate image quality. As the number of projections increases, even for the lowest dose value, the CNR increases even though the number of photons per projection decreases. The spatial resolution of the image is improved by increasing the sampling within a projection rather than by increasing the number of projections.

  20. Accounting for twin births in sample size calculations for randomised trials.

    PubMed

    Yelland, Lisa N; Sullivan, Thomas R; Collins, Carmel T; Price, David J; McPhee, Andrew J; Lee, Katherine J

    2018-05-04

    Including twins in randomised trials leads to non-independence or clustering in the data. Clustering has important implications for sample size calculations, yet few trials take this into account. Estimates of the intracluster correlation coefficient (ICC), or the correlation between outcomes of twins, are needed to assist with sample size planning. Our aims were to provide ICC estimates for infant outcomes, describe the information that must be specified in order to account for clustering due to twins in sample size calculations, and develop a simple tool for performing sample size calculations for trials including twins. ICCs were estimated for infant outcomes collected in four randomised trials that included twins. The information required to account for clustering due to twins in sample size calculations is described. A tool that calculates the sample size based on this information was developed in Microsoft Excel and in R as a Shiny web app. ICC estimates ranged between -0.12, indicating a weak negative relationship, and 0.98, indicating a strong positive relationship between outcomes of twins. Example calculations illustrate how the ICC estimates and sample size calculator can be used to determine the target sample size for trials including twins. Clustering among outcomes measured on twins should be taken into account in sample size calculations to obtain the desired power. Our ICC estimates and sample size calculator will be useful for designing future trials that include twins. Publication of additional ICCs is needed to further assist with sample size planning for future trials. © 2018 John Wiley & Sons Ltd.
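
    The information the authors describe (an ICC for the outcome, the proportion of infants who are twins, and the usual effect-size inputs) can be combined in a simple design-effect calculation. The sketch below uses the common approximation that clustering in twin pairs inflates the independent-data sample size by a factor of 1 + p·ICC, where p is the proportion of infants from twin pairs; it is an illustration, not the authors' Excel/Shiny calculator, and the numerical inputs are made up.

```python
import math
from scipy.stats import norm

def n_per_group_independent(delta: float, sd: float,
                            alpha: float = 0.05, power: float = 0.8) -> float:
    """Per-group sample size for comparing two means, ignoring clustering."""
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    return 2 * ((z_a + z_b) * sd / delta) ** 2

def n_per_group_with_twins(delta: float, sd: float, icc: float, prop_twins: float,
                           alpha: float = 0.05, power: float = 0.8) -> int:
    """Inflate the independent-data sample size by an approximate design effect
    for twin clusters of size two: DE = 1 + prop_twins * icc."""
    design_effect = 1 + prop_twins * icc
    return math.ceil(n_per_group_independent(delta, sd, alpha, power) * design_effect)

if __name__ == "__main__":
    # Detect a 0.4 SD difference when 20% of infants are twins and the ICC is 0.7.
    print(round(n_per_group_independent(delta=0.4, sd=1.0)))                       # ~98 per group
    print(n_per_group_with_twins(delta=0.4, sd=1.0, icc=0.7, prop_twins=0.2))      # ~112 per group
```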

  1. Customized Mobile Apps: Improving data collection methods in large-scale field works in Finnish Lapland

    NASA Astrophysics Data System (ADS)

    Kupila, Juho

    2017-04-01

    Since the 1990s, a huge amount of data related to the groundwater and soil has been collected in several regional projects in Finland. EU -funded project "The coordination of groundwater protection and aggregates industry in Finnish Lapland, phase II" started in July 2016 and it covers the last unstudied areas in these projects in Finland. Project is carried out by Geological Survey of Finland (GTK), University of Oulu and Finnish Environment Institute and the main topic is to consolidate the groundwater protection and extractable use of soil resource in Lapland area. As earlier, several kinds of studies are also carried out throughout this three-year research and development project. These include e.g. drilling with setting up of groundwater observation wells, GPR-survey and many kinds of point-type observations, like sampling and general mapping on the field. Due to size of a study area (over 80 000 km2, about one quarter of a total area of Finland), improvement of the field work methods has become essential. To the general observation on the field, GTK has developed a specific mobile applications for Android -devices. With these Apps, data can be easily collected for example from a certain groundwater area and then uploaded directly to the GTK's database. Collected information may include sampling data, photos, layer observations, groundwater data etc. and it is all linked to the current GPS-location. New data is also easily available for post-processing. In this project the benefits of these applications will be field-tested and e.g. ergonomics, economy and usability in general will be taken account and related to the other data collecting methods, like working with heavy fieldwork laptops. Although these Apps are designed for usage in GTK's projects, they are free to download from Google Play for anyone interested. Geological Survey of Finland has the main role in this project with support from national and local authorities and stakeholders. Project is funded by European Regional Development Fund with support from local communes, branch enterprises and executive quarters of the project. Implementation period is 2016-2019.

  2. Sample size determination for mediation analysis of longitudinal data.

    PubMed

    Pan, Haitao; Liu, Suyu; Miao, Danmin; Yuan, Ying

    2018-03-27

    Sample size planning for longitudinal data is crucial when designing mediation studies because sufficient statistical power is not only required in grant applications and peer-reviewed publications, but is essential to reliable research results. However, sample size determination is not straightforward for mediation analysis of longitudinal design. To facilitate planning the sample size for longitudinal mediation studies with a multilevel mediation model, this article provides the sample size required to achieve 80% power by simulations under various sizes of the mediation effect, within-subject correlations and numbers of repeated measures. The sample size calculation is based on three commonly used mediation tests: Sobel's method, distribution of product method and the bootstrap method. Among the three methods of testing the mediation effects, Sobel's method required the largest sample size to achieve 80% power. Bootstrapping and the distribution of the product method performed similarly and were more powerful than Sobel's method, as reflected by the relatively smaller sample sizes. For all three methods, the sample size required to achieve 80% power depended on the value of the ICC (i.e., within-subject correlation). A larger value of ICC typically required a larger sample size to achieve 80% power. Simulation results also illustrated the advantage of the longitudinal study design. The sample size tables for most encountered scenarios in practice have also been published for convenient use. Extensive simulations study showed that the distribution of the product method and bootstrapping method have superior performance to the Sobel's method, but the product method was recommended to use in practice in terms of less computation time load compared to the bootstrapping method. A R package has been developed for the product method of sample size determination in mediation longitudinal study design.

  3. Inexpensive and Highly Reproducible Cloud-Based Variant Calling of 2,535 Human Genomes

    PubMed Central

    Shringarpure, Suyash S.; Carroll, Andrew; De La Vega, Francisco M.; Bustamante, Carlos D.

    2015-01-01

    Population scale sequencing of whole human genomes is becoming economically feasible; however, data management and analysis remains a formidable challenge for many research groups. Large sequencing studies, like the 1000 Genomes Project, have improved our understanding of human demography and the effect of rare genetic variation in disease. Variant calling on datasets of hundreds or thousands of genomes is time-consuming, expensive, and not easily reproducible given the myriad components of a variant calling pipeline. Here, we describe a cloud-based pipeline for joint variant calling in large samples using the Real Time Genomics population caller. We deployed the population caller on the Amazon cloud with the DNAnexus platform in order to achieve low-cost variant calling. Using our pipeline, we were able to identify 68.3 million variants in 2,535 samples from Phase 3 of the 1000 Genomes Project. By performing the variant calling in a parallel manner, the data was processed within 5 days at a compute cost of $7.33 per sample (a total cost of $18,590 for completed jobs and $21,805 for all jobs). Analysis of cost dependence and running time on the data size suggests that, given near linear scalability, cloud computing can be a cheap and efficient platform for analyzing even larger sequencing studies in the future. PMID:26110529

  4. Photovoltaic Enhancement with Ferroelectric HfO2Embedded in the Structure of Solar Cells

    NASA Astrophysics Data System (ADS)

    Eskandari, Rahmatollah; Malkinski, Leszek

    Enhancing total efficiency of the solar cells is focused on the improving one or all of the three main stages of the photovoltaic effect: absorption of the light, generation of the carriers and finally separation of the carriers. Ferroelectric photovoltaic designs target the last stage with large electric forces from polarized ferroelectric films that can be larger than band gap of the material and the built-in electric fields in semiconductor bipolar junctions. In this project we have fabricated very thin ferroelectric HfO2 films ( 10nm) doped with silicon using RF sputtering method. Doped HfO2 films were capped between two TiN layers ( 20nm) and annealed at temperatures of 800ºC and 1000ºC and Si content was varied between 6-10 mol. % using different size of mounted Si chip on hafnium target. Piezoforce microscopy (PFM) method proved clear ferroelectric properties in samples with 6 mol. % of Si that were annealed at 800ºC. Ferroelectric samples were poled in opposite directions and embedded in the structure of a cell and an enhancement in photovoltaic properties were observed on the poled samples vs unpoled ones with KPFM and I-V measurements. The current work is funded by the NSF EPSCoR LA-SiGMA project under award #EPS-1003897.

  5. Public Opinion Polls, Chicken Soup and Sample Size

    ERIC Educational Resources Information Center

    Nguyen, Phung

    2005-01-01

    Cooking and tasting chicken soup in three different pots of very different size serves to demonstrate that it is the absolute sample size that matters the most in determining the accuracy of the findings of the poll, not the relative sample size, i.e. the size of the sample in relation to its population.

  6. Sample size in studies on diagnostic accuracy in ophthalmology: a literature survey.

    PubMed

    Bochmann, Frank; Johnson, Zoe; Azuara-Blanco, Augusto

    2007-07-01

    To assess the sample sizes used in studies on diagnostic accuracy in ophthalmology. Design and sources: A survey literature published in 2005. The frequency of reporting calculations of sample sizes and the samples' sizes were extracted from the published literature. A manual search of five leading clinical journals in ophthalmology with the highest impact (Investigative Ophthalmology and Visual Science, Ophthalmology, Archives of Ophthalmology, American Journal of Ophthalmology and British Journal of Ophthalmology) was conducted by two independent investigators. A total of 1698 articles were identified, of which 40 studies were on diagnostic accuracy. One study reported that sample size was calculated before initiating the study. Another study reported consideration of sample size without calculation. The mean (SD) sample size of all diagnostic studies was 172.6 (218.9). The median prevalence of the target condition was 50.5%. Only a few studies consider sample size in their methods. Inadequate sample sizes in diagnostic accuracy studies may result in misleading estimates of test accuracy. An improvement over the current standards on the design and reporting of diagnostic studies is warranted.

  7. IOTA: the array controller for a gigapixel OTCCD camera for Pan-STARRS

    NASA Astrophysics Data System (ADS)

    Onaka, Peter; Tonry, John; Luppino, Gerard; Lockhart, Charles; Lee, Aaron; Ching, Gregory; Isani, Sidik; Uyeshiro, Robin

    2004-09-01

    The PanSTARRS project has undertaken an ambitious effort to develop a completely new array controller architecture that is fundamentally driven by the large 1gigapixel, low noise, high speed OTCCD mosaic requirements as well as the size, power and weight restrictions of the PanSTARRS telescope. The result is a very small form factor next generation controller scalar building block with 1 Gigabit Ethernet interfaces that will be assembled into a system that will readout 512 outputs at ~1 Megapixel sample rates on each output. The paper will also discuss critical technology and fabrication techniques such as greater than 1MHz analog to digital converters (ADCs), multiple fast sampling and digital calculation of multiple correlated samples (DMCS), ball grid array (BGA) packaged circuits, LINUX running on embedded field programmable gate arrays (FPGAs) with hard core microprocessors for the prototype currently being developed.

  8. Caution regarding the choice of standard deviations to guide sample size calculations in clinical trials.

    PubMed

    Chen, Henian; Zhang, Nanhua; Lu, Xiaosun; Chen, Sophie

    2013-08-01

    The method used to determine choice of standard deviation (SD) is inadequately reported in clinical trials. Underestimations of the population SD may result in underpowered clinical trials. This study demonstrates how using the wrong method to determine population SD can lead to inaccurate sample sizes and underpowered studies, and offers recommendations to maximize the likelihood of achieving adequate statistical power. We review the practice of reporting sample size and its effect on the power of trials published in major journals. Simulated clinical trials were used to compare the effects of different methods of determining SD on power and sample size calculations. Prior to 1996, sample size calculations were reported in just 1%-42% of clinical trials. This proportion increased from 38% to 54% after the initial Consolidated Standards of Reporting Trials (CONSORT) was published in 1996, and from 64% to 95% after the revised CONSORT was published in 2001. Nevertheless, underpowered clinical trials are still common. Our simulated data showed that all minimal and 25th-percentile SDs fell below 44 (the population SD), regardless of sample size (from 5 to 50). For sample sizes 5 and 50, the minimum sample SDs underestimated the population SD by 90.7% and 29.3%, respectively. If only one sample was available, there was less than 50% chance that the actual power equaled or exceeded the planned power of 80% for detecting a median effect size (Cohen's d = 0.5) when using the sample SD to calculate the sample size. The proportions of studies with actual power of at least 80% were about 95%, 90%, 85%, and 80% when we used the larger SD, 80% upper confidence limit (UCL) of SD, 70% UCL of SD, and 60% UCL of SD to calculate the sample size, respectively. When more than one sample was available, the weighted average SD resulted in about 50% of trials being underpowered; the proportion of trials with power of 80% increased from 90% to 100% when the 75th percentile and the maximum SD from 10 samples were used. Greater sample size is needed to achieve a higher proportion of studies having actual power of 80%. This study only addressed sample size calculation for continuous outcome variables. We recommend using the 60% UCL of SD, maximum SD, 80th-percentile SD, and 75th-percentile SD to calculate sample size when 1 or 2 samples, 3 samples, 4-5 samples, and more than 5 samples of data are available, respectively. Using the sample SD or average SD to calculate sample size should be avoided.

  9. Repopulation of calibrations with samples from the target site: effect of the size of the calibration.

    NASA Astrophysics Data System (ADS)

    Guerrero, C.; Zornoza, R.; Gómez, I.; Mataix-Solera, J.; Navarro-Pedreño, J.; Mataix-Beneyto, J.; García-Orenes, F.

    2009-04-01

    Near infrared (NIR) reflectance spectroscopy offers important advantages because is a non-destructive technique, the pre-treatments needed in samples are minimal, and the spectrum of the sample is obtained in less than 1 minute without the needs of chemical reagents. For these reasons, NIR is a fast and cost-effective method. Moreover, NIR allows the analysis of several constituents or parameters simultaneously from the same spectrum once it is obtained. For this, a needed steep is the development of soil spectral libraries (set of samples analysed and scanned) and calibrations (using multivariate techniques). The calibrations should contain the variability of the target site soils in which the calibration is to be used. Many times this premise is not easy to fulfil, especially in libraries recently developed. A classical way to solve this problem is through the repopulation of libraries and the subsequent recalibration of the models. In this work we studied the changes in the accuracy of the predictions as a consequence of the successive addition of samples to repopulation. In general, calibrations with high number of samples and high diversity are desired. But we hypothesized that calibrations with lower quantities of samples (lower size) will absorb more easily the spectral characteristics of the target site. Thus, we suspect that the size of the calibration (model) that will be repopulated could be important. For this reason we also studied this effect in the accuracy of predictions of the repopulated models. In this study we used those spectra of our library which contained data of soil Kjeldahl Nitrogen (NKj) content (near to 1500 samples). First, those spectra from the target site were removed from the spectral library. Then, different quantities of samples of the library were selected (representing the 5, 10, 25, 50, 75 and 100% of the total library). These samples were used to develop calibrations with different sizes (%) of samples. We used partial least squares regression, and leave-one-out cross validation as methods of calibration. Two methods were used to select the different quantities (size of models) of samples: (1) Based on Characteristics of Spectra (BCS), and (2) Based on NKj Values of Samples (BVS). Both methods tried to select representative samples. Each of the calibrations (containing the 5, 10, 25, 50, 75 or 100% of the total samples of the library) was repopulated with samples from the target site and then recalibrated (by leave-one-out cross validation). This procedure was sequential. In each step, 2 samples from the target site were added to the models, and then recalibrated. This process was repeated successively 10 times, being 20 the total number of samples added. A local model was also created with the 20 samples used for repopulation. The repopulated, non-repopulated and local calibrations were used to predict the NKj content in those samples from the target site not included in repopulations. For the measurement of the accuracy of the predictions, the r2, RMSEP and slopes were calculated comparing predicted with analysed NKj values. This scheme was repeated for each of the four target sites studied. In general, scarce differences can be found between results obtained with BCS and BVS models. We observed that the repopulation of models increased the r2 of the predictions in sites 1 and 3. The repopulation caused scarce changes of the r2 of the predictions in sites 2 and 4, maybe due to the high initial values (using non-repopulated models r2 >0.90). 
As consequence of repopulation, the RMSEP decreased in all the sites except in site 2, where a very low RMESP was obtained before the repopulation (0.4 g×kg-1). The slopes trended to approximate to 1, but this value was reached only in site 4 and after the repopulation with 20 samples. In sites 3 and 4, accurate predictions were obtained using the local models. Predictions obtained with models using similar size of samples (similar %) were averaged with the aim to describe the main patterns. The r2 of predictions obtained with models of higher size were not more accurate than those obtained with models of lower size. After repopulation, the RMSEP of predictions using models with lower sizes (5, 10 and 25% of samples of the library) were lower than RMSEP obtained with higher sizes (75 and 100%), indicating that small models can easily integrate the variability of the soils from the target site. The results suggest that calibrations of small size could be repopulated and "converted" in local calibrations. According to this, we can focus most of the efforts in the obtainment of highly accurate analytical values in a reduced set of samples (including some samples from the target sites). The patterns observed here are in opposition with the idea of global models. These results could encourage the expansion of this technique, because very large data based seems not to be needed. Future studies with very different samples will help to confirm the robustness of the patterns observed. Authors acknowledge to "Bancaja-UMH" for the financial support of the project "NIRPROS".

  10. Bayesian probabilistic population projections for all countries.

    PubMed

    Raftery, Adrian E; Li, Nan; Ševčíková, Hana; Gerland, Patrick; Heilig, Gerhard K

    2012-08-28

    Projections of countries' future populations, broken down by age and sex, are widely used for planning and research. They are mostly done deterministically, but there is a widespread need for probabilistic projections. We propose a bayesian method for probabilistic population projections for all countries. The total fertility rate and female and male life expectancies at birth are projected probabilistically using bayesian hierarchical models estimated via Markov chain Monte Carlo using United Nations population data for all countries. These are then converted to age-specific rates and combined with a cohort component projection model. This yields probabilistic projections of any population quantity of interest. The method is illustrated for five countries of different demographic stages, continents and sizes. The method is validated by an out of sample experiment in which data from 1950-1990 are used for estimation, and applied to predict 1990-2010. The method appears reasonably accurate and well calibrated for this period. The results suggest that the current United Nations high and low variants greatly underestimate uncertainty about the number of oldest old from about 2050 and that they underestimate uncertainty for high fertility countries and overstate uncertainty for countries that have completed the demographic transition and whose fertility has started to recover towards replacement level, mostly in Europe. The results also indicate that the potential support ratio (persons aged 20-64 per person aged 65+) will almost certainly decline dramatically in most countries over the coming decades.

  11. U.S. Balance-of-Station Cost Drivers and Sensitivities (Presentation)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maples, B.

    2012-10-01

    With balance-of-system (BOS) costs contributing up to 70% of the installed capital cost, it is fundamental to understanding the BOS costs for offshore wind projects as well as potential cost trends for larger offshore turbines. NREL developed a BOS model using project cost estimates developed by GL Garrad Hassan. Aspects of BOS covered include engineering and permitting, ports and staging, transportation and installation, vessels, foundations, and electrical. The data introduce new scaling relationships for each BOS component to estimate cost as a function of turbine parameters and size, project parameters and size, and soil type. Based on the new BOSmore » model, an analysis to understand the non‐turbine costs has been conducted. This analysis establishes a more robust baseline cost estimate, identifies the largest cost components of offshore wind project BOS, and explores the sensitivity of the levelized cost of energy to permutations in each BOS cost element. This presentation shows results from the model that illustrates the potential impact of turbine size and project size on the cost of energy from U.S. offshore wind plants.« less

  12. Non-parametric estimation of population size changes from the site frequency spectrum.

    PubMed

    Waltoft, Berit Lindum; Hobolth, Asger

    2018-06-11

    Changes in population size is a useful quantity for understanding the evolutionary history of a species. Genetic variation within a species can be summarized by the site frequency spectrum (SFS). For a sample of size n, the SFS is a vector of length n - 1 where entry i is the number of sites where the mutant base appears i times and the ancestral base appears n - i times. We present a new method, CubSFS, for estimating the changes in population size of a panmictic population from an observed SFS. First, we provide a straightforward proof for the expression of the expected site frequency spectrum depending only on the population size. Our derivation is based on an eigenvalue decomposition of the instantaneous coalescent rate matrix. Second, we solve the inverse problem of determining the changes in population size from an observed SFS. Our solution is based on a cubic spline for the population size. The cubic spline is determined by minimizing the weighted average of two terms, namely (i) the goodness of fit to the observed SFS, and (ii) a penalty term based on the smoothness of the changes. The weight is determined by cross-validation. The new method is validated on simulated demographic histories and applied on unfolded and folded SFS from 26 different human populations from the 1000 Genomes Project.

  13. Inorganic/organic hybrid nanocomposite coating applications: Formulation, characterization, and evaluation

    NASA Astrophysics Data System (ADS)

    Eyassu, Tsehaye

    Nanotechnology applications in coatings have shown significant growth in recent years. Systematic incorporation of nano-sized inorganic materials into polymer coating enhances optical, electrical, thermal and mechanical properties significantly. The present dissertation will focus on formulation, characterization and evaluation of inorganic/organic hybrid nanocomposite coatings for heat dissipation, corrosion inhibition and ultraviolet (UV) and near infrared (NIR) cut applications. In addition, the dissertation will cover synthesis, characterization and dispersion of functional inorganic fillers. In the first project, we investigated factors that can affect the "Molecular Fan" cooling performance and efficiency. The investigated factors and conditions include types of nanomaterials, size, loading amount, coating thickness, heat sink substrate, substrate surface modification, and power input. Using the optimal factors, MF coating was formulated and applied on commercial HDUs, and cooling efficiencies up to 22% and 23% were achieved using multi-walled carbon nanotube and graphene fillers. The result suggests that molecular fan action can reduce the size and mass of heat-sink module and thus offer a low cost of LED light unit. In the second project, we report the use of thin organic/inorganic hybrid coating as a protection for corrosion and as a thermal management to dissipate heat from galvanized steel. Here, we employed the in-situ phosphatization method for corrosion inhibition and "Molecular fan" technique to dissipate heat from galvanized steel panels and sheets. Salt fog tests reveal successful completion of 72 hours corrosion protection time frame for samples coated with as low as ~0.7microm thickness. Heat dissipation measurement shows 9% and 13% temperature cooling for GI and GL panels with the same coating thickness of ~0.7microm respectively. The effect of different factors, in-situ phosphatization reagent (ISPR), cross-linkers and nanomaterial on corrosion and heat dissipation was discussed on this project. In the third project, optically transparent UV and NIR light cut coating for solar control application was studied. On separate study for UV cut coatings, we have formulated UV-shielding coatings using ZnO nanoparticles fillers that have more than 90% UV absorption and above 90% visible transparency. In a separate part of the same project, we synthesized NIR-absorbing CsxWO 3 nanorods with uniform particle size distribution in 2 hours using a solvothermal method. Aqueous dispersion of the nanorods has showed high transparency (80-90%) in the visible range with strong NIR light shielding (80-90%). Preliminary work on sol-gel coatings of CsxWO3 showed high visible light transparency with excellent NIR shielding.

  14. Dispersion and sampling of adult Dermacentor andersoni in rangeland in Western North America.

    PubMed

    Rochon, K; Scoles, G A; Lysyk, T J

    2012-03-01

    A fixed precision sampling plan was developed for off-host populations of adult Rocky Mountain wood tick, Dermacentor andersoni (Stiles) based on data collected by dragging at 13 locations in Alberta, Canada; Washington; and Oregon. In total, 222 site-date combinations were sampled. Each site-date combination was considered a sample, and each sample ranged in size from 86 to 250 10 m2 quadrats. Analysis of simulated quadrats ranging in size from 10 to 50 m2 indicated that the most precise sample unit was the 10 m2 quadrat. Samples taken when abundance < 0.04 ticks per 10 m2 were more likely to not depart significantly from statistical randomness than samples taken when abundance was greater. Data were grouped into ten abundance classes and assessed for fit to the Poisson and negative binomial distributions. The Poisson distribution fit only data in abundance classes < 0.02 ticks per 10 m2, while the negative binomial distribution fit data from all abundance classes. A negative binomial distribution with common k = 0.3742 fit data in eight of the 10 abundance classes. Both the Taylor and Iwao mean-variance relationships were fit and used to predict sample sizes for a fixed level of precision. Sample sizes predicted using the Taylor model tended to underestimate actual sample sizes, while sample sizes estimated using the Iwao model tended to overestimate actual sample sizes. Using a negative binomial with common k provided estimates of required sample sizes closest to empirically calculated sample sizes.

  15. Cosmic voids and void lensing in the Dark Energy Survey science verification data

    DOE PAGES

    Sánchez, C.; Clampitt, J.; Kovacs, A.; ...

    2016-10-26

    Galaxies and their dark matter halos populate a complicated filamentary network around large, nearly empty regions known as cosmic voids. Cosmic voids are usually identified in spectroscopic galaxy surveys, where 3D information about the large-scale structure of the Universe is available. Although an increasing amount of photometric data is being produced, its potential for void studies is limited since photometric redshifts induce line-of-sight position errors of ~50 Mpc/h or more that can render many voids undetectable. In this paper we present a new void finder designed for photometric surveys, validate it using simulations, and apply it to the high-quality photo-zmore » redMaGiC galaxy sample of the Dark Energy Survey Science Verification (DES-SV) data. The algorithm works by projecting galaxies into 2D slices and finding voids in the smoothed 2D galaxy density field of the slice. Fixing the line-of-sight size of the slices to be at least twice the photo- z scatter, the number of voids found in these projected slices of simulated spectroscopic and photometric galaxy catalogs is within 20% for all transverse void sizes, and indistinguishable for the largest voids of radius ~70 Mpc/h and larger. The positions, radii, and projected galaxy profiles of photometric voids also accurately match the spectroscopic void sample. Applying the algorithm to the DES-SV data in the redshift range 0.2 < z < 0.8 , we identify 87 voids with comoving radii spanning the range 18-120 Mpc/h, and carry out a stacked weak lensing measurement. With a significance of 4.4σ, the lensing measurement confirms the voids are truly underdense in the matter field and hence not a product of Poisson noise, tracer density effects or systematics in the data. In conclusion, it also demonstrates, for the first time in real data, the viability of void lensing studies in photometric surveys.« less

  16. Determining Late Pleistocene to Early Holocene deglaciation of the Baltic Ice Lake through sedimentological core sample analysis of IODP Site M0064

    NASA Astrophysics Data System (ADS)

    Kelly, A. L.; Passchier, S.

    2016-12-01

    This study investigates the deglaciation history of the Scandinavian Ice Sheet (SIS) within the Baltic Sea's Hanö Bay from the Late Pleistocene to the Holocene using samples from International Ocean Discovery Program (IODP) Site M0064. The research aims to understand how the speed of deglaciation influences Baltic Ice Lake (BIL) drainage patterns and relative sea level changes on a high-resolution timescale. Glacial history of the SIS has been studied through glacial till analysis, surface exposure dating, and modeling, encompassing its most recent deglaciation 20-14ka BP, and suggests ice retreated from the project site 16.7ka BP. Between 17 and 14ka BP global sea level rose 4 meters per century, accompanied by a dramatic increase in atmospheric carbon. This period of rapid sea level rise and global warming is a valuable analog for understanding the Earth's current and projected climate. This project uses particle size analysis to better understand the late-glacial depositional environment in Hanö Bay, and ICP-OES geochemical analysis for evidence pertaining to changing sediment provenance and bottom water oxygenation in the BIL. Diamicton is present between 47 and 9 mbsf in Hole M0064D. At 8 mbsf, the sediment exhibits a prominent upward transition from well-laminated cm-scale grey to more thinly laminated reddish brown rhythmites. With calculated Al/Ti ratios, we find that there is not much provenance change in the sequence, however we see fluctuations in Mn/Al ratios, implying shifts in sediment color may be chemical, possibly indicating redox changes in the water column during sediment deposition. Although we find that particle size in the varve sequence does not change, this factor may be driving chemical fluctuations in the diamicton. These results increase the understanding of ice retreat, paleocirculation and relative sea level changes in the Baltic Sea at the onset of the last deglaciation.

  17. Two takes on the ecosystem impacts of climate change and fishing: Comparing a size-based and a species-based ecosystem model in the central North Pacific

    NASA Astrophysics Data System (ADS)

    Woodworth-Jefcoats, Phoebe A.; Polovina, Jeffrey J.; Howell, Evan A.; Blanchard, Julia L.

    2015-11-01

    We compare two ecosystem model projections of 21st century climate change and fishing impacts in the central North Pacific. Both a species-based and a size-based ecosystem modeling approach are examined. While both models project a decline in biomass across all sizes in response to climate change and a decline in large fish biomass in response to increased fishing mortality, the models vary significantly in their handling of climate and fishing scenarios. For example, based on the same climate forcing the species-based model projects a 15% decline in catch by the end of the century while the size-based model projects a 30% decline. Disparities in the models' output highlight the limitations of each approach by showing the influence model structure can have on model output. The aspects of bottom-up change to which each model is most sensitive appear linked to model structure, as does the propagation of interannual variability through the food web and the relative impact of combined top-down and bottom-up change. Incorporating integrated size- and species-based ecosystem modeling approaches into future ensemble studies may help separate the influence of model structure from robust projections of ecosystem change.

  18. RnaSeqSampleSize: real data based sample size estimation for RNA sequencing.

    PubMed

    Zhao, Shilin; Li, Chung-I; Guo, Yan; Sheng, Quanhu; Shyr, Yu

    2018-05-30

    One of the most important and often neglected components of a successful RNA sequencing (RNA-Seq) experiment is sample size estimation. A few negative binomial model-based methods have been developed to estimate sample size based on the parameters of a single gene. However, thousands of genes are quantified and tested for differential expression simultaneously in RNA-Seq experiments. Thus, additional issues should be carefully addressed, including the false discovery rate for multiple statistic tests, widely distributed read counts and dispersions for different genes. To solve these issues, we developed a sample size and power estimation method named RnaSeqSampleSize, based on the distributions of gene average read counts and dispersions estimated from real RNA-seq data. Datasets from previous, similar experiments such as the Cancer Genome Atlas (TCGA) can be used as a point of reference. Read counts and their dispersions were estimated from the reference's distribution; using that information, we estimated and summarized the power and sample size. RnaSeqSampleSize is implemented in R language and can be installed from Bioconductor website. A user friendly web graphic interface is provided at http://cqs.mc.vanderbilt.edu/shiny/RnaSeqSampleSize/ . RnaSeqSampleSize provides a convenient and powerful way for power and sample size estimation for an RNAseq experiment. It is also equipped with several unique features, including estimation for interested genes or pathway, power curve visualization, and parameter optimization.

  19. Research, Recruitment, and Assessment Strategies From The Dune Undergraduate Geomorphology and Geochronology (DUGG) REU Site at the University of Wisconsin-Platteville

    NASA Astrophysics Data System (ADS)

    Rawling, J.; Presiado, R. S.; Hanson, P. R.

    2013-12-01

    The goals of the DUGG REU project included providing students with 1) significant field and laboratory training in geomorphology and geochronology, 2) an opportunity to participate in a project of regional significance to geomorphologists and Quaternary scientists and 3) cohort building opportunities resulting in relationships that will serve them throughout their graduate and/or professional STEM careers. Each cohort was provided with three opportunities to visit their chosen study sites and collect data. Students were introduced to their sites with geophysical surveying by conducting ground-penetrating radar transects. During the second and third field excursions students collected subsurface sediment samples with either a bucket auger or a portable vibracoring device. Student generated data from previous trips, including preliminary OSL data before the third trip, better informed subsequent sampling strategies. In total, the students measured the particle-size distributions from ~950 samples taken from 160 sites and dated 65 sand samples using optically-stimulated luminescence (OSL) dating. Efforts made to ensure a diverse applicant pool included the standard NSF and university websites, targeted emails, targeted recruitment at conferences, university visits, and collaborations with other undergraduate research centers. In total, approximately 25% of the participating DUGG students were members of minority groups underrepresented in the sciences (n=5), 65% were women (n=14) and one was a veteran of the Iraq conflict. The DUGG project included a Council on Undergraduate Research review during year one of the program to have external input on the project, and an aggressive internal assessment protocol that evaluated five measures related to the impact the project was having on the students. Over the three years of the project, the multiple annual program assessments were able to document increases in participants' technology literacy, perception toward geosciences, research techniques, and oral presentation skills from the beginning to the end of each DUGG cycle. The DUGG program also yielded effective gains in the student's geoscience content knowledge as measured by the assessment instruments. It is clear from the project assessments that the three years of DUGG had significant successes, and was a direct result of the careful consideration of each year's experience and evaluations. The combination of multiple visits to the research sites, rapid data turn around, diversity recruitment, and rigorous assessment ensured the successful achievement of the program goals and resulted in exceptional experiences for the DUGG students.

  20. The special case of the 2 × 2 table: asymptotic unconditional McNemar test can be used to estimate sample size even for analysis based on GEE.

    PubMed

    Borkhoff, Cornelia M; Johnston, Patrick R; Stephens, Derek; Atenafu, Eshetu

    2015-07-01

    Aligning the method used to estimate sample size with the planned analytic method ensures the sample size needed to achieve the planned power. When using generalized estimating equations (GEE) to analyze a paired binary primary outcome with no covariates, many use an exact McNemar test to calculate sample size. We reviewed the approaches to sample size estimation for paired binary data and compared the sample size estimates on the same numerical examples. We used the hypothesized sample proportions for the 2 × 2 table to calculate the correlation between the marginal proportions to estimate sample size based on GEE. We solved the inside proportions based on the correlation and the marginal proportions to estimate sample size based on exact McNemar, asymptotic unconditional McNemar, and asymptotic conditional McNemar. The asymptotic unconditional McNemar test is a good approximation of GEE method by Pan. The exact McNemar is too conservative and yields unnecessarily large sample size estimates than all other methods. In the special case of a 2 × 2 table, even when a GEE approach to binary logistic regression is the planned analytic method, the asymptotic unconditional McNemar test can be used to estimate sample size. We do not recommend using an exact McNemar test. Copyright © 2015 Elsevier Inc. All rights reserved.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michael Keane; Xiao-Chun Shi; Tong-man Ong

    The project staff partnered with Costas Sioutas from the University of Southern California to apply the VACES (Versatile Aerosol Concentration Enhancement System) to a diesel engine test facility at West Virginia University Department of Mechanical Engineering and later the NIOSH Lake Lynn Mine facility. The VACES system was able to allow diesel exhaust particulate matter (DPM) to grow to sufficient particle size to be efficiently collected with the SKC Biosampler impinger device, directly into a suspension of simulated pulmonary surfactant. At the WVU-MAE facility, the concentration of the aerosol was too high to allow efficient use of the VACES concentrationmore » enhancement, although aerosol collection was successful. Collection at the LLL was excellent with the diluted exhaust stream. In excess of 50 samples were collected at the LLL facility, along with matching filter samples, at multiple engine speed and load conditions. Replicate samples were combined and concentration increased using a centrifugal concentrator. Bioassays were negative for all tested samples, but this is believed to be due to insufficient concentration in the final assay suspensions.« less

  2. Sampling design for the Study of Cardiovascular Risks in Adolescents (ERICA).

    PubMed

    Vasconcellos, Mauricio Teixeira Leite de; Silva, Pedro Luis do Nascimento; Szklo, Moyses; Kuschnir, Maria Cristina Caetano; Klein, Carlos Henrique; Abreu, Gabriela de Azevedo; Barufaldi, Laura Augusta; Bloch, Katia Vergetti

    2015-05-01

    The Study of Cardiovascular Risk in Adolescents (ERICA) aims to estimate the prevalence of cardiovascular risk factors and metabolic syndrome in adolescents (12-17 years) enrolled in public and private schools of the 273 municipalities with over 100,000 inhabitants in Brazil. The study population was stratified into 32 geographical strata (27 capitals and five sets with other municipalities in each macro-region of the country) and a sample of 1,251 schools was selected with probability proportional to size. In each school three combinations of shift (morning and afternoon) and grade were selected, and within each of these combinations, one class was selected. All eligible students in the selected classes were included in the study. The design sampling weights were calculated by the product of the reciprocals of the inclusion probabilities in each sampling stage, and were later calibrated considering the projections of the numbers of adolescents enrolled in schools located in the geographical strata by sex and age.

  3. Attitudes about OCLC in Small and Medium-Sized Libraries. Illinois Valley Library System OCLC Experimental Project. Report No. 4.

    ERIC Educational Resources Information Center

    Bills, Linda G.; Wilford, Valerie

    A project was conducted from 1980 to 1982 to determine the costs and benefits of OCLC use in 29 small and medium-sized member libraries of the Illinois Valley Library System (IVLS). Academic, school, public, and special libraries participated in the project. Based on written attitude surveys of and interviews with library directors, staff,…

  4. The relevance of grain dissection for grain size reduction in polar ice: insights from numerical models and ice core microstructure analysis

    NASA Astrophysics Data System (ADS)

    Steinbach, Florian; Kuiper, Ernst-Jan N.; Eichler, Jan; Bons, Paul D.; Drury, Martyn R.; Griera, Albert; Pennock, Gill M.; Weikusat, Ilka

    2017-09-01

    The flow of ice depends on the properties of the aggregate of individual ice crystals, such as grain size or lattice orientation distributions. Therefore, an understanding of the processes controlling ice micro-dynamics is needed to ultimately develop a physically based macroscopic ice flow law. We investigated the relevance of the process of grain dissection as a grain-size-modifying process in natural ice. For that purpose, we performed numerical multi-process microstructure modelling and analysed microstructure and crystallographic orientation maps from natural deep ice-core samples from the North Greenland Eemian Ice Drilling (NEEM) project. Full crystallographic orientations measured by electron backscatter diffraction (EBSD) have been used together with c-axis orientations using an optical technique (Fabric Analyser). Grain dissection is a feature of strain-induced grain boundary migration. During grain dissection, grain boundaries bulge into a neighbouring grain in an area of high dislocation energy and merge with the opposite grain boundary. This splits the high dislocation-energy grain into two parts, effectively decreasing the local grain size. Currently, grain size reduction in ice is thought to be achieved by either the progressive transformation from dislocation walls into new high-angle grain boundaries, called subgrain rotation or polygonisation, or bulging nucleation that is assisted by subgrain rotation. Both our time-resolved numerical modelling and NEEM ice core samples show that grain dissection is a common mechanism during ice deformation and can provide an efficient process to reduce grain sizes and counter-act dynamic grain-growth in addition to polygonisation or bulging nucleation. Thus, our results show that solely strain-induced boundary migration, in absence of subgrain rotation, can reduce grain sizes in polar ice, in particular if strain energy gradients are high. We describe the microstructural characteristics that can be used to identify grain dissection in natural microstructures.

  5. Transition from Forward Smoldering to Flaming in Small Polyurethane Foam Samples

    NASA Technical Reports Server (NTRS)

    Bar-Ilan, A.; Putzeys, O.; Rein, G.; Fernandez-Pello, A. C.

    2004-01-01

    Experimental observations are presented of the effect of the flow velocity and oxygen concentration, and of a thermal radiant flux, on the transition from smoldering to flaming in forward smoldering of small samples of polyurethane foam with a gas/solid interface. The experiments are part of a project studying the transition from smolder to flaming under conditions encountered in spacecraft facilities, i.e., microgravity, low velocity variable oxygen concentration flows. Because the microgravity experiments are planned for the International Space Station, the foam samples had to be limited in size for safety and launch mass reasons. The feasible sample size is too small for smolder to self propagate because of heat losses to the surrounding environment. Thus, the smolder propagation and the transition to flaming had to be assisted by reducing the heat losses to the surroundings and increasing the oxygen concentration. The experiments are conducted with small parallelepiped samples vertically placed in a wind tunnel. Three of the sample lateral-sides are maintained at elevated temperature and the fourth side is exposed to an upward flow and to a radiant flux. It is found that decreasing the flow velocity and increasing its oxygen concentration, and/or increasing the radiant flux enhances the transition to flaming, and reduces the delay time to transition. Limiting external ambient conditions for the transition to flaming are reported for the present experimental set-up. The results show that smolder propagation and the transition to flaming can occur in relatively small fuel samples if the external conditions are appropriate. The results also indicate that transition to flaming occurs in the char left behind by the smolder reaction, and it has the characteristics of a gas-phase ignition induced by the smolder reaction, which acts as the source of both gaseous fuel and heat.

  6. Reporting of sample size calculations in analgesic clinical trials: ACTTION systematic review.

    PubMed

    McKeown, Andrew; Gewandter, Jennifer S; McDermott, Michael P; Pawlowski, Joseph R; Poli, Joseph J; Rothstein, Daniel; Farrar, John T; Gilron, Ian; Katz, Nathaniel P; Lin, Allison H; Rappaport, Bob A; Rowbotham, Michael C; Turk, Dennis C; Dworkin, Robert H; Smith, Shannon M

    2015-03-01

    Sample size calculations determine the number of participants required to have sufficiently high power to detect a given treatment effect. In this review, we examined the reporting quality of sample size calculations in 172 publications of double-blind randomized controlled trials of noninvasive pharmacologic or interventional (ie, invasive) pain treatments published in European Journal of Pain, Journal of Pain, and Pain from January 2006 through June 2013. Sixty-five percent of publications reported a sample size calculation but only 38% provided all elements required to replicate the calculated sample size. In publications reporting at least 1 element, 54% provided a justification for the treatment effect used to calculate sample size, and 24% of studies with continuous outcome variables justified the variability estimate. Publications of clinical pain condition trials reported a sample size calculation more frequently than experimental pain model trials (77% vs 33%, P < .001) but did not differ in the frequency of reporting all required elements. No significant differences in reporting of any or all elements were detected between publications of trials with industry and nonindustry sponsorship. Twenty-eight percent included a discrepancy between the reported number of planned and randomized participants. This study suggests that sample size calculation reporting in analgesic trial publications is usually incomplete. Investigators should provide detailed accounts of sample size calculations in publications of clinical trials of pain treatments, which is necessary for reporting transparency and communication of pre-trial design decisions. In this systematic review of analgesic clinical trials, sample size calculations and the required elements (eg, treatment effect to be detected; power level) were incompletely reported. A lack of transparency regarding sample size calculations may raise questions about the appropriateness of the calculated sample size. Copyright © 2015 American Pain Society. All rights reserved.

  7. In-Space Manufacturing Baseline Property Development

    NASA Technical Reports Server (NTRS)

    Stockman, Tom; Schneider, Judith; Prater, Tracie; Bean, Quincy; Werkheiser, Nicki

    2016-01-01

    The In-Space Manufacturing (ISM) project at NASA Marshall Space Flight Center currently operates a 3D FDM (fused deposition modeling) printer onboard the International Space Station. In order to enable utilization of this capability by designer, the project needs to establish characteristic material properties for materials produced using the process. This is difficult for additive manufacturing since standards and specifications do not yet exist for these technologies. Due to availability of crew time, there are limitations to the sample size which in turn limits the application of the traditional design allowables approaches to develop a materials property database for designers. In this study, various approaches to development of material databases were evaluated for use by designers of space systems who wish to leverage in-space manufacturing capabilities. This study focuses on alternative statistical techniques for baseline property development to support in-space manufacturing.

  8. Particle flow oriented electromagnetic calorimeter optimization for the circular electron positron collider

    NASA Astrophysics Data System (ADS)

    Zhao, H.; Fu, C.; Yu, D.; Wang, Z.; Hu, T.; Ruan, M.

    2018-03-01

    The design and optimization of the Electromagnetic Calorimeter (ECAL) are crucial for the Circular Electron Positron Collider (CEPC) project, a proposed future Higgs/Z factory. Following the reference design of the International Large Detector (ILD), a set of silicon-tungsten sampling ECAL geometries are implemented into the Geant4 simulation, whose performance is then scanned using Arbor algorithm. The photon energy response at different ECAL longitudinal structures is analyzed, and the separation performance between nearby photon showers with different ECAL transverse cell sizes is investigated and parametrized. The overall performance is characterized by a set of physics benchmarks, including νν H events where Higgs boson decays into a pair of photons (EM objects) or gluons (jets) and Z→τ+τ- events. Based on these results, we propose an optimized ECAL geometry for the CEPC project.

  9. Development of modern human subadult age and sex estimation standards using multi-slice computed tomography images from medical examiner's offices

    NASA Astrophysics Data System (ADS)

    Stock, Michala K.; Stull, Kyra E.; Garvin, Heather M.; Klales, Alexandra R.

    2016-10-01

    Forensic anthropologists are routinely asked to estimate a biological profile (i.e., age, sex, ancestry and stature) from a set of unidentified remains. In contrast to the abundance of collections and techniques associated with adult skeletons, there is a paucity of modern, documented subadult skeletal material, which limits the creation and validation of appropriate forensic standards. Many are forced to use antiquated methods derived from small sample sizes, which given documented secular changes in the growth and development of children, are not appropriate for application in the medico-legal setting. Therefore, the aim of this project is to use multi-slice computed tomography (MSCT) data from a large, diverse sample of modern subadults to develop new methods to estimate subadult age and sex for practical forensic applications. The research sample will consist of over 1,500 full-body MSCT scans of modern subadult individuals (aged birth to 20 years) obtained from two U.S. medical examiner's offices. Statistical analysis of epiphyseal union scores, long bone osteometrics, and os coxae landmark data will be used to develop modern subadult age and sex estimation standards. This project will result in a database of information gathered from the MSCT scans, as well as the creation of modern, statistically rigorous standards for skeletal age and sex estimation in subadults. Furthermore, the research and methods developed in this project will be applicable to dry bone specimens, MSCT scans, and radiographic images, thus providing both tools and continued access to data for forensic practitioners in a variety of settings.

  10. Citizen-Scientist Led Quartz Vein Investigation in the McDowell Sonoran Preserve, Scottsdale, Arizona, Resulting in Significant Geologic Discoveries and a Peer-Reviewed Report Coauthored and with Maps by Citizen-Scientists.

    NASA Astrophysics Data System (ADS)

    Gruber, D.; Gootee, B.

    2016-12-01

    Citizen-scientists of the McDowell Sonoran Conservancy Field Institute originated and led this project to study milky quartz deposits. Milky quartz veins of all sizes are visible throughout the McDowell Sonoran Preserve (Scottsdale, Arizona) and are commonly found in Arizona Proterozoic rocks. No research on milky quartz has been done locally and little is known about its formation and emplacement history. Working with Brian Gootee, research geologist with the Arizona Geological Survey (AZGS), a citizen science team identified candidate study sites with large quartz veins and then conducted aerial balloon photography followed by geologic mapping, basic data collection, photo-documentation, and sampling from two sites. Samples were analyzed with a UV lamp, Geiger counter, and x-ray fluorescence spectrometer. Petroscopic analysis and interpretation of the samples were done by Gootee. Daniel Gruber, the citizen-science project leader, and Gootee summarized methodology, sample analyses, and interpretation in a report including detailed geologic maps. Analysis of samples from one site provided evidence of several events of Proterozoic quartz formation. The other site hosted pegmatite, cumulates, graphic granite and orbicular granite in association with milky quartz, all discovered by citizen scientists. The milky quartz and surrounding pegmatites in granite at this site trace the progression of late-stage crystallization at the margin of a fractionated granite batholith, providing an exemplary opportunity for further research into batholith geochemistry and evolution. The project required 1000 hours of citizen-science time for training, field work, data organization and entry, mapping, and writing. The report by Gootee and Gruber was reviewed and published by AZGS as an Open File Report in its online document repository. The citizen scientist team leveraged the time of professional geologists to expand knowledge of an important geologic feature of the McDowell Mountains.

  11. Decompressive Surgery for the Treatment of Malignant Infarction of the Middle Cerebral Artery (DESTINY): a randomized, controlled trial.

    PubMed

    Jüttler, Eric; Schwab, Stefan; Schmiedek, Peter; Unterberg, Andreas; Hennerici, Michael; Woitzik, Johannes; Witte, Steffen; Jenetzky, Ekkehart; Hacke, Werner

    2007-09-01

    Decompressive surgery (hemicraniectomy) for life-threatening massive cerebral infarction represents a controversial issue in neurocritical care medicine. We report here the 30-day mortality and 6- and 12-month functional outcomes from the DESTINY trial. DESTINY (ISRCTN01258591) is a prospective, multicenter, randomized, controlled, clinical trial based on a sequential design that used mortality after 30 days as the first end point. When this end point was reached, patient enrollment was interrupted as per protocol until recalculation of the projected sample size was performed on the basis of the 6-month outcome (primary end point = modified Rankin Scale score, dichotomized to 0 to 3 versus 4 to 6). All analyses were based on intention to treat. A statistically significant reduction in mortality was reached after 32 patients had been included: 15 of 17 (88%) patients randomized to hemicraniectomy versus 7 of 15 (47%) patients randomized to conservative therapy survived after 30 days (P=0.02). After 6 and 12 months, 47% of patients in the surgical arm versus 27% of patients in the conservative treatment arm had a modified Rankin Scale score of 0 to 3 (P=0.23). DESTINY showed that hemicraniectomy reduces mortality in large hemispheric stroke. With 32 patients included, the primary end point failed to demonstrate statistical superiority of hemicraniectomy, and the projected sample size was calculated to be 188 patients. Despite this failure to meet the primary end point, the steering committee decided to terminate the trial in light of the results of the joint analysis of the 3 European hemicraniectomy trials.

  12. Determination of the optimal sample size for a clinical trial accounting for the population size.

    PubMed

    Stallard, Nigel; Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin

    2017-07-01

    The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach either explicitly by assuming that the population size is fixed and known, or implicitly through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size for single and two-arm clinical trials in the general case of a clinical trial with a primary endpoint with a distribution of one-parameter exponential family form that optimizes a utility function that quantifies the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or expected size, N∗ in the case of geometric discounting, becomes large, the optimal trial size is O(N^(1/2)) or O(N∗^(1/2)). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with responses with Bernoulli and Poisson distributions, showing that the asymptotic approximations can also be reasonable in relatively small sample sizes. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
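
    A rough illustration of this square-root scaling (not the paper's utility function) can be obtained from a toy decision-theoretic model: a two-arm trial with n patients per arm estimates a normally distributed treatment effect with a normal prior, the apparently better arm is then given to the remaining N - 2n patients, and n is chosen to maximize the expected total gain. The prior spread, outcome variance, and closed-form gain expression below are illustrative assumptions of this toy model only.

    ```python
    import numpy as np

    def expected_total_gain(n, N, sigma=1.0, tau=0.2):
        """Toy utility: a two-arm trial with n patients per arm estimates a
        treatment effect delta ~ N(0, tau^2) from outcomes with SD `sigma`;
        the apparently better arm is then given to the remaining N - 2n
        patients.  Under these illustrative assumptions the expected gain per
        future patient works out to tau^2 / sqrt(2*pi*(tau^2 + 2*sigma^2/n))."""
        n = np.asarray(n, dtype=float)
        per_patient = tau**2 / np.sqrt(2 * np.pi * (tau**2 + 2 * sigma**2 / n))
        return (N - 2 * n) * per_patient

    for N in (10_000, 100_000, 1_000_000):
        ns = np.arange(1, N // 2)
        n_opt = int(ns[np.argmax(expected_total_gain(ns, N))])
        print(f"N = {N:>9,d}   optimal n per arm = {n_opt:>6d}   "
              f"n_opt/sqrt(N) = {n_opt / np.sqrt(N):.2f}")
    ```

    For the values used, the printed ratio n_opt/sqrt(N) stays roughly constant as N grows, which is the behaviour the asymptotic result describes.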

  13. Creating Effective Type for the Classroom.

    ERIC Educational Resources Information Center

    Fitzsimons, Dennis

    1989-01-01

    Defines basic typographic terminology and offers two classroom projects using microcomputers to create and use type. Discusses typeface, type families, type style, type size, and type font. Gives examples of student projects, including the creation of bulletin board displays and page-size maps. (KO)

  14. Requirements for Minimum Sample Size for Sensitivity and Specificity Analysis

    PubMed Central

    Adnan, Tassha Hilda

    2016-01-01

    Sensitivity and specificity analysis is commonly used for screening and diagnostic tests. The main issue researchers face is determining sufficient sample sizes for screening and diagnostic studies. Although formulas for sample size calculation are available, the majority of researchers are not mathematicians or statisticians, so sample size calculation might not be easy for them. This review paper provides sample size tables for sensitivity and specificity analysis. These tables were derived from the sensitivity and specificity formulations using Power Analysis and Sample Size (PASS) software, based on the desired type I error, power, and effect size. Approaches on how to use the tables are also discussed. PMID:27891446
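
    For readers without access to PASS, a widely used closed-form approach (Buderer's formulas for sensitivity and specificity) gives comparable minimum sample sizes. The sketch below is illustrative only; the anticipated sensitivity, specificity, disease prevalence, and precision are assumed values, and the exact tables in the paper were generated with PASS rather than with this formula.

    ```python
    import math
    from scipy.stats import norm

    def buderer_sample_size(sens, spec, prevalence, precision=0.05, alpha=0.05):
        """Minimum total sample size so that sensitivity and specificity are
        each estimated within +/- `precision` at confidence level 1 - alpha
        (Buderer's formulas; all parameter values here are illustrative)."""
        z = norm.ppf(1 - alpha / 2)
        n_sens = z**2 * sens * (1 - sens) / precision**2 / prevalence
        n_spec = z**2 * spec * (1 - spec) / precision**2 / (1 - prevalence)
        return math.ceil(max(n_sens, n_spec))

    # Example: anticipated sensitivity 0.90, specificity 0.85, prevalence 0.20
    print(buderer_sample_size(sens=0.90, spec=0.85, prevalence=0.20))
    ```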

  15. Sample size calculations for randomized clinical trials published in anesthesiology journals: a comparison of 2010 versus 2016.

    PubMed

    Chow, Jeffrey T Y; Turkstra, Timothy P; Yim, Edmund; Jones, Philip M

    2018-06-01

    Although every randomized clinical trial (RCT) needs participants, determining the ideal number of participants that balances limited resources and the ability to detect a real effect is difficult. Focussing on two-arm, parallel group, superiority RCTs published in six general anesthesiology journals, the objective of this study was to compare the quality of sample size calculations for RCTs published in 2010 vs 2016. Each RCT's full text was searched for the presence of a sample size calculation, and the assumptions made by the investigators were compared with the actual values observed in the results. Analyses were only performed for sample size calculations that were amenable to replication, defined as using a clearly identified outcome that was continuous or binary in a standard sample size calculation procedure. The percentage of RCTs reporting all sample size calculation assumptions increased from 51% in 2010 to 84% in 2016. The difference between the values observed in the study and the expected values used for the sample size calculation for most RCTs was usually > 10% of the expected value, with negligible improvement from 2010 to 2016. While the reporting of sample size calculations improved from 2010 to 2016, the expected values in these sample size calculations often assumed effect sizes larger than those actually observed in the study. Since overly optimistic assumptions may systematically lead to underpowered RCTs, improvements in how to calculate and report sample sizes in anesthesiology research are needed.
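
    As a point of reference for the kind of calculation being audited here, a standard two-arm, continuous-outcome sample size formula, together with the power actually achieved when the true effect is smaller than assumed, can be sketched as follows. The effect size, standard deviation, and the 30%-smaller true effect are purely illustrative assumptions, not values from the surveyed trials.

    ```python
    import math
    from scipy.stats import norm

    def n_per_arm(delta, sd, alpha=0.05, power=0.80):
        """Two-sample z-approximation sample size per arm for detecting a
        mean difference `delta` with common standard deviation `sd`."""
        z_a = norm.ppf(1 - alpha / 2)
        z_b = norm.ppf(power)
        return math.ceil(2 * (z_a + z_b) ** 2 * (sd / delta) ** 2)

    def achieved_power(n, delta_true, sd, alpha=0.05):
        """Approximate power of the same test if the true difference is `delta_true`."""
        z_a = norm.ppf(1 - alpha / 2)
        return norm.cdf(delta_true / (sd * math.sqrt(2 / n)) - z_a)

    n = n_per_arm(delta=10, sd=20)                      # assumed effect: 10 units
    print(n, achieved_power(n, delta_true=7, sd=20))    # true effect 30% smaller
    ```

    With these illustrative numbers, assuming an effect of 10 units yields 63 patients per arm, but if the true effect is only 7 units the achieved power drops to roughly 50%, which is the kind of systematic underpowering the abstract warns about.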

  16. Predicting long-term graft survival in adult kidney transplant recipients.

    PubMed

    Pinsky, Brett W; Lentine, Krista L; Ercole, Patrick R; Salvalaggio, Paolo R; Burroughs, Thomas E; Schnitzler, Mark A

    2012-07-01

    The ability to accurately predict a population's long-term survival has important implications for quantifying the benefits of transplantation. To identify a model that can accurately predict a kidney transplant population's long-term graft survival, we retrospectively studied the United Network of Organ Sharing data from 13,111 kidney-only transplants completed in 1988-1989. Nineteen-year death-censored graft survival (DCGS) projections were calculated and compared with the population's actual graft survival. The projection curves were created using a two-part estimation model that (1) fits a Kaplan-Meier survival curve immediately after transplant (Part A) and (2) uses truncated observational data to model a survival function for long-term projection (Part B). Projection curves were examined using varying amounts of time to fit both parts of the model. The accuracy of the projection curve was determined by examining whether predicted survival fell within the 95% confidence interval for the 19-year Kaplan-Meier survival, and the sample size needed to detect the difference in projected versus observed survival in a clinical trial. The 19-year DCGS was 40.7% (39.8-41.6%). Excellent predictability (41.3%) can be achieved when Part A is fit for three years and Part B is projected using two additional years of data. Using less than five total years of data tended to overestimate the population's long-term survival. Accurate prediction of long-term DCGS is possible, but requires attention to the quantity of data used in the projection method.

  17. Characteristics of Successful Internal Medicine Resident Research Projects: Predictors of Journal Publication Versus Abstract Presentation.

    PubMed

    Atreya, Auras R; Stefan, Mihaela; Friderici, Jennifer L; Kleppel, Reva; Fitzgerald, Janice; Rothberg, Michael B

    2018-02-06

    To identify the characteristics of successful research projects at an internal medicine residency program with an established research curriculum. The authors collected data about all research projects initiated by or involving medicine residents from 2006 to 2013 at Baystate Medical Center, using departmental files and institutional review board applications. Resident and mentor characteristics were determined using personnel files and Medline searches. Using multivariable models, the authors identified predictors of successful completion of projects using adjusted prevalence ratios (PRs). The primary outcome was manuscript publication by resident and secondary outcome was either publication or regional/national presentation. Finally, residents were surveyed to identify barriers and/or factors contributing to project completion. Ninety-four research projects were identified: 52 (55.3%) projects achieved the primary outcome and 72 (76.5%) met the secondary outcome, with overlap between categories. Most study designs were cross-sectional (41, 43.6%) or retrospective cohort (30, 31.9%). After adjustment, utilization of the epidemiology/biostatistical core (PR = 2.09; 95% CI: 1.36, 3.21), established publication record of resident (PR = 1.54, 95% CI: 1.14, 2.07), and resident with U.S. medical education (PR = 1.39, 95% CI: 1.02, 1.90) were associated with successful completion of projects. Mentor publication record (PR = 3.13) did not retain significance due to small sample size. Most respondents (65%) cited "lack of time" as a major project barrier. Programs seeking to increase resident publications should consider an institutional epidemiology/biostatistical core available to all residency research projects, and residents should choose experienced mentors with a track record of publications.

  18. A generalized public goods game with coupling of individual ability and project benefit

    NASA Astrophysics Data System (ADS)

    Zhong, Li-Xin; Xu, Wen-Juan; He, Yun-Xin; Zhong, Chen-Yang; Chen, Rong-Da; Qiu, Tian; Shi, Yong-Dong; Ren, Fei

    2017-08-01

    Facing a heavy task, any single person can only make a limited contribution and team cooperation is needed. As one enjoys the benefit of the public goods, the potential benefits of the project are not always maximized and may be partly wasted. By incorporating individual ability and project benefit into the original public goods game, we study the coupling effect of the four parameters, the upper limit of individual contribution, the upper limit of individual benefit, the needed project cost and the upper limit of project benefit on the evolution of cooperation. Coevolving with the individual-level group size preferences, an increase in the upper limit of individual benefit promotes cooperation while an increase in the upper limit of individual contribution inhibits cooperation. The coupling of the upper limit of individual contribution and the needed project cost determines the critical point of the upper limit of project benefit, where the equilibrium frequency of cooperators reaches its highest level. Above the critical point, an increase in the upper limit of project benefit inhibits cooperation. The evolution of cooperation is closely related to the preferred group-size distribution. A functional relation between the frequency of cooperators and the dominant group size is found.

  19. Redshift differences of galaxies in nearby groups

    NASA Technical Reports Server (NTRS)

    Harrison, E. R.

    1975-01-01

    It is reported that galaxies in nearby groups exhibit anomalous nonvelocity redshifts. In this discussion, (1) four classes of nearby groups of galaxies are analyzed, and no significant nonvelocity redshift effect is found; and (2) it is pointed out that transverse velocities (i.e., velocities transverse to the line of sight of the main galaxy, or center of mass) contribute components to the redshift measurements of companion galaxies. The redshifts of galaxies in nearby groups of appreciable angular size are considerably affected by these velocity projection effects. The transverse velocity contributions average out in rich, isotropic groups, and also in large samples of irregular groups of low membership, as in the four classes referred to in (1), but can introduce apparent discrepancies in small samples (as studied by Arp) of nearby groups of low membership.

  20. Data and methods to estimate fetal dose from fluoroscopically guided prophylactic hypogastric artery balloon occlusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Solomou, G.; Stratakis, J.; Perisinakis, K.

    Purpose: To provide data for estimation of fetal radiation dose (D_F) from prophylactic hypogastric artery balloon occlusion (HABO) procedures. Methods: The Monte-Carlo-N-particle (MCNP) transport code and mathematical phantoms representing a pregnant patient at the ninth month of gestation were employed. PA, RAO 20° and LAO 20° fluoroscopy projections of left and right internal iliac arteries were simulated. Projection-specific normalized fetal dose (NFD) data were produced for various beam qualities. The effects of projection angle, x-ray field location relative to the fetus, field size, maternal body size, and fetal size on NFD were investigated. Presented NFD values were compared to corresponding values derived using a physical anthropomorphic phantom simulating pregnancy at the third trimester and thermoluminescence dosimeters. Results: NFD did not considerably vary when projection angle was altered by ±5°, whereas it was found to markedly depend on tube voltage, filtration, x-ray field location and size, and maternal body size. Differences in NFD < 7.5% were observed for naturally expected variations in fetal size. A difference of less than 13.5% was observed between NFD values estimated by MCNP and direct measurements. Conclusions: Data and methods provided allow for reliable estimation of radiation burden to the fetus from HABO.

  1. THE HUNT FOR EXOMOONS WITH KEPLER (HEK). I. DESCRIPTION OF A NEW OBSERVATIONAL PROJECT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kipping, D. M.; Bakos, G. A.; Buchhave, L.

    2012-05-10

    Two decades ago, empirical evidence concerning the existence and frequency of planets around stars, other than our own, was absent. Since that time, the detection of extrasolar planets from Jupiter-sized to, most recently, Earth-sized worlds has blossomed and we are finally able to shed light on the plurality of Earth-like, habitable planets in the cosmos. Extrasolar moons may also be frequently habitable worlds, but their detection or even systematic pursuit remains lacking in the current literature. Here, we present a description of the first systematic search for extrasolar moons as part of a new observational project called 'The Hunt for Exomoons with Kepler' (HEK). The HEK project distills the entire list of known transiting planet candidates found by Kepler (2326 at the time of writing) down to the most promising candidates for hosting a moon. Selected targets are fitted using a multimodal nested sampling algorithm coupled with a planet-with-moon light curve modeling routine. By comparing the Bayesian evidence of a planet-only model to that of a planet-with-moon, the detection process is handled in a Bayesian framework. In the case of null detections, upper limits derived from posteriors marginalized over the entire prior volume will be provided to inform the frequency of large moons around viable planetary hosts, η_☾. After discussing our methodologies for target selection, modeling, fitting, and vetting, we provide two example analyses.

  2. M2K Planet Search: Spectroscopic Screening and Transit Photometry

    NASA Astrophysics Data System (ADS)

    Mann, Andrew; Gaidos, E.; Fischer, D.; Lepine, S.

    2010-10-01

    The M2K project is a search for planets orbiting nearby early M and late K dwarfs drawn from the SUPERBLINK catalog. M and K dwarfs are highly attractive targets for finding low-mass and habitable planets because (1) close-in planets are more likely to orbit within their habitable zone, (2) planets orbiting them induce a larger Doppler signal and have deeper transits than similar planets around F, G, and early K type stars, (3) planet formation models predict they hold an abundance of super-Earth sized planets, and (4) they represent the vast majority of the stars close enough for direct imaging techniques. In spite of this, only 10% of late K and early M dwarfs are being monitored by current Doppler surveys. As part of the M2K project we have obtained low-resolution spectra for more than 2000 of our sample of 10,000 M and K dwarfs. We vet our sample by screening these stars for high metallicity and low chromospheric activity. We search for transits on targets showing a high RMS Doppler signal and on photometry candidates provided by the SuperWASP project. By using "snapshot" photometry we have been able to achieve sub-millimag photometry on numerous transit targets in the same night. With further follow-up observations we will be able to detect planets smaller than 10 Earth masses.

  3. Design of measuring system for wire diameter based on sub-pixel edge detection algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Yudong; Zhou, Wang

    2016-09-01

    Light projection method is often used in measuring systems for wire diameter; it has a relatively simple structure and low cost, but its measuring accuracy is limited by the pixel size of the CCD. Using a CCD with a small pixel size can improve the measuring accuracy, but increases the cost and manufacturing difficulty. In this paper, through the comparative analysis of a variety of sub-pixel edge detection algorithms, a polynomial fitting method is applied for data processing in the wire-diameter measuring system, to improve the measuring accuracy and enhance noise immunity. In the system design, a light projection method with an orthogonal structure is used for the optical detection part, which effectively reduces the error caused by line jitter in the measuring process. For the electrical part, an ARM Cortex-M4 microprocessor is used as the core of the circuit module, which not only drives the dual-channel linear CCD but also completes the sampling, processing, and storage of the CCD video signal. In addition, the ARM microprocessor handles the high-speed operation of the whole wire-diameter measuring system without any additional chip. The experimental results show that the sub-pixel edge detection algorithm based on polynomial fitting can compensate for the limited pixel size and significantly improve the precision of the wire-diameter measuring system, without increasing the hardware complexity of the entire system.
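
    The paper's exact fitting procedure is not reproduced here, but a typical polynomial (quadratic) sub-pixel refinement of a CCD edge position, of the general kind described, can be sketched as follows. The simulated line-scan profile, its edge positions, and the window size are all assumed values for illustration.

    ```python
    import numpy as np

    def subpixel_edge(profile, order=2, half_window=1):
        """Locate an intensity edge with sub-pixel accuracy by fitting a
        polynomial to the gradient magnitude around its coarse maximum."""
        grad = np.abs(np.gradient(profile.astype(float)))
        k = int(np.argmax(grad))                        # coarse edge pixel
        idx = np.arange(k - half_window, k + half_window + 1)
        coeffs = np.polyfit(idx, grad[idx], order)      # local polynomial fit
        # vertex of the fitted parabola gives the sub-pixel edge position
        return -coeffs[1] / (2 * coeffs[0])

    # Simulated CCD line-scan profile with two smooth (blurred) edges
    x = np.arange(512)
    true_left, true_right = 200.3, 312.7                # assumed edge positions
    profile = 1 / (1 + np.exp(-(x - true_left))) - 1 / (1 + np.exp(-(x - true_right)))

    left = subpixel_edge(profile[:256])
    right = 256 + subpixel_edge(profile[256:])
    print(f"estimated width = {right - left:.2f} px (true {true_right - true_left:.2f})")
    ```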

  4. Sample Size and Statistical Conclusions from Tests of Fit to the Rasch Model According to the Rasch Unidimensional Measurement Model (Rumm) Program in Health Outcome Measurement.

    PubMed

    Hagell, Peter; Westergren, Albert

    Sample size is a major factor in statistical null hypothesis testing, which is the basis for many approaches to testing Rasch model fit. Few sample size recommendations for testing fit to the Rasch model concern the Rasch Unidimensional Measurement Models (RUMM) software, which features chi-square and ANOVA/F-ratio based fit statistics, including Bonferroni and algebraic sample size adjustments. This paper explores the occurrence of Type I errors with RUMM fit statistics, and the effects of algebraic sample size adjustments. Data simulated to fit the Rasch model for 25-item dichotomous scales, with sample sizes ranging from N = 50 to N = 2500, were analysed with and without algebraically adjusted sample sizes. Results suggest the occurrence of Type I errors with N less than or equal to 500, and that Bonferroni correction as well as downward algebraic sample size adjustment are useful to avoid such errors, whereas upward adjustment of smaller samples falsely signals misfit. Our observations suggest that sample sizes around N = 250 to N = 500 may provide a good balance for the statistical interpretation of the RUMM fit statistics studied here with respect to Type I errors and under the assumption of Rasch model fit within the examined frame of reference (i.e., about 25 item parameters well targeted to the sample).

  5. Comprehensive characterisation of atmospheric aerosols in Budapest, Hungary: physicochemical properties of inorganic species

    NASA Astrophysics Data System (ADS)

    Salma, Imre; Maenhaut, Willy; Zemplén-Papp, Éva; Záray, Gyula

    As part of an air pollution project in Budapest, aerosol samples were collected by stacked filter units and cascade impactors at an urban background site, two downtown sites, and within a road tunnel in field campaigns conducted in 1996, 1998 and 1999. Some criteria pollutants were also measured at one of the downtown sites. The aerosol samples were analysed by one or more of the following methods: instrumental neutron activation analysis, particle-induced X-ray emission analysis, a light reflection technique, gravimetry, thermal profiling carbon analysis and capillary electrophoresis. The quantities measured or derived include atmospheric concentrations of elements (from Na to U), of particulate matter, of black and elemental carbon, and total carbonaceous fraction, of some ionic species (e.g., nitrate and sulphate) in the fine (<2 μm equivalent aerodynamic diameter, EAD) or in both coarse (10-2 μm EAD) and fine size fractions, atmospheric concentrations of NO, NO2, SO2, CO and total suspended particulate matter, and meteorological parameters. The analytical results were used for characterisation of the concentration levels, elemental composition, time trends, enrichment of and relationships among the aerosol species in coarse and fine size fractions, for studying their fine-to-coarse concentration ratios, spatial and temporal variability, for determining detailed elemental mass size distributions, and for examining the extent of chemical mass closure.

  6. Analysis of methods commonly used in biomedicine for treatment versus control comparison of very small samples.

    PubMed

    Ristić-Djurović, Jasna L; Ćirković, Saša; Mladenović, Pavle; Romčević, Nebojša; Trbovich, Alexander M

    2018-04-01

    A rough estimate indicated that use of samples of size not larger than ten is not uncommon in biomedical research and that many of such studies are limited to strong effects due to sample sizes smaller than six. For data collected from biomedical experiments it is also often unknown if mathematical requirements incorporated in the sample comparison methods are satisfied. Computer simulated experiments were used to examine performance of methods for qualitative sample comparison and its dependence on the effectiveness of exposure, effect intensity, distribution of studied parameter values in the population, and sample size. The Type I and Type II errors, their average, as well as the maximal errors were considered. The sample size 9 and the t-test method with p = 5% ensured error smaller than 5% even for weak effects. For sample sizes 6-8 the same method enabled detection of weak effects with errors smaller than 20%. If the sample sizes were 3-5, weak effects could not be detected with an acceptable error; however, the smallest maximal error in the most general case that includes weak effects is granted by the standard error of the mean method. The increase of sample size from 5 to 9 led to seven times more accurate detection of weak effects. Strong effects were detected regardless of the sample size and method used. The minimal recommended sample size for biomedical experiments is 9. Use of smaller sizes and the method of their comparison should be justified by the objective of the experiment. Copyright © 2018 Elsevier B.V. All rights reserved.
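
    A much simplified version of this kind of computer-simulated experiment (normal data, an unpaired t-test at p = 5%, and a standardized effect of one population standard deviation, all illustrative choices that will not reproduce the paper's exact thresholds) can be run as follows.

    ```python
    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(0)

    def error_rates(n, effect, n_sim=10_000, alpha=0.05):
        """Monte Carlo Type I and Type II error rates for an unpaired t-test
        comparing a control and an exposed sample of size n each."""
        type1 = type2 = 0
        for _ in range(n_sim):
            control = rng.normal(0.0, 1.0, n)
            null_exposed = rng.normal(0.0, 1.0, n)       # no real effect
            true_exposed = rng.normal(effect, 1.0, n)    # real effect present
            if ttest_ind(control, null_exposed).pvalue < alpha:
                type1 += 1
            if ttest_ind(control, true_exposed).pvalue >= alpha:
                type2 += 1
        return type1 / n_sim, type2 / n_sim

    for n in (3, 5, 6, 9):
        t1, t2 = error_rates(n, effect=1.0)
        print(f"n = {n}:  Type I ~ {t1:.3f}   Type II ~ {t2:.3f}")
    ```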

  7. [The research protocol III. Study population].

    PubMed

    Arias-Gómez, Jesús; Villasís-Keever, Miguel Ángel; Miranda-Novales, María Guadalupe

    2016-01-01

    The study population is defined as a set of cases that is determined, limited, and accessible, and that will constitute the subjects for the selection of the sample; its members must fulfill several characteristics and distinct criteria. The objectives of this manuscript are to specify each of the elements required to select the participants of a research project during the elaboration of the protocol, including the concepts of study population, sample, selection criteria, and sampling methods. After delineating the study population, the researcher must specify the criteria that each participant has to meet. The criteria specifying these characteristics are termed selection or eligibility criteria. These criteria are inclusion, exclusion, and elimination criteria, and they delineate the eligible population. The sampling methods are divided into two large groups: 1) probabilistic or random sampling and 2) non-probabilistic sampling. The difference lies in the use of statistical methods to select the subjects. In every research project, it is necessary to establish at the beginning the specific number of participants to be included in order to achieve the objectives of the study. This number is the sample size, which can be calculated or estimated with mathematical formulas and statistical software.

  8. Determining Plane-Sweep Sampling Points in Image Space Using the Cross-Ratio for Image-Based Depth Estimation

    NASA Astrophysics Data System (ADS)

    Ruf, B.; Erdnuess, B.; Weinmann, M.

    2017-08-01

    With the emergence of small consumer Unmanned Aerial Vehicles (UAVs), the importance of and interest in image-based depth estimation and model generation from aerial images have greatly increased in the photogrammetric society. In our work, we focus on algorithms that allow an online image-based dense depth estimation from video sequences, which enables the direct and live structural analysis of the depicted scene. Therefore, we use a multi-view plane-sweep algorithm with a semi-global matching (SGM) optimization which is parallelized for general purpose computation on a GPU (GPGPU), reaching sufficient performance to keep up with the key-frames of input sequences. One important aspect of reaching good performance is the way the scene space is sampled to create plane hypotheses. A small step size between consecutive planes, which is needed to reconstruct details in the near vicinity of the camera, may lead to ambiguities in distant regions due to the perspective projection of the camera. Furthermore, an equidistant sampling with a small step size produces a large number of plane hypotheses, leading to high computational effort. To overcome these problems, we present a novel methodology to directly determine the sampling points of plane-sweep algorithms in image space. The use of the perspective invariant cross-ratio allows us to derive the location of the sampling planes directly from the image data. With this, we efficiently sample the scene space, achieving higher sampling density in areas which are close to the camera and a lower density in distant regions. We evaluate our approach on a synthetic benchmark dataset for quantitative evaluation and on a real-image dataset consisting of aerial imagery. The experiments reveal that an inverse sampling achieves equal or better results than a linear sampling, with fewer sampling points and thus less runtime. Our algorithm allows an online computation of depth maps for subsequences of five frames, provided that the relative poses between all frames are given.
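
    The central idea, spacing plane hypotheses so that consecutive planes correspond to a roughly constant displacement in image space rather than a constant metric distance, is commonly realized by sampling uniformly in inverse depth. The sketch below uses an assumed pinhole/stereo disparity relation and illustrative camera values to contrast the two samplings; the paper itself derives the spacing from the cross-ratio in image space.

    ```python
    import numpy as np

    def equidistant_planes(z_near, z_far, count):
        """Plane depths with constant metric spacing."""
        return np.linspace(z_near, z_far, count)

    def inverse_depth_planes(z_near, z_far, count):
        """Plane depths with constant spacing in inverse depth, i.e. roughly
        constant pixel displacement between consecutive plane hypotheses."""
        return 1.0 / np.linspace(1.0 / z_near, 1.0 / z_far, count)

    def pixel_shift(depths, focal_px=1000.0, baseline_m=0.5):
        """Disparity (in pixels) induced by each plane for an assumed
        stereo-like configuration; used only to compare the two samplings."""
        return focal_px * baseline_m / np.asarray(depths)

    eq = equidistant_planes(2.0, 100.0, 64)
    inv = inverse_depth_planes(2.0, 100.0, 64)
    print("max disparity step, equidistant :", np.max(np.abs(np.diff(pixel_shift(eq)))))
    print("max disparity step, inverse     :", np.max(np.abs(np.diff(pixel_shift(inv)))))
    ```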

  9. Meta-analysis of studies assessing the efficacy of projective techniques in discriminating child sexual abuse.

    PubMed

    West, M M

    1998-11-01

    This meta-analysis of 12 studies assesses the efficacy of projective techniques to discriminate between sexually abused children and nonsexually abused children. A literature search was conducted to identify published studies that used projective instruments with sexually abused children. Those studies that reported statistics allowing an effect size to be calculated were then included in the meta-analysis. There were 12 studies that fit the criteria. The projectives reviewed include The Rorschach, The Hand Test, The Thematic Apperception Test (TAT), the Kinetic Family Drawings, Human Figure Drawings, Draw Your Favorite Kind of Day, The Rosebush: A Visualization Strategy, and The House-Tree-Person. The results of this analysis gave an overall effect size of d = .81, which is a large effect. Six studies included only a norm group of nondistressed, nonabused children with the sexual abuse group. The average effect size was d = .87, which is impressive. Six studies did include a clinical group of distressed nonsexually abused subjects, and the effect size lowered to d = .76, which is a medium to large effect. This indicates that projective instruments can discriminate distressed children from nondistressed subjects quite well. In the studies that included a clinical group of distressed children who were not sexually abused, the lower effect size indicates that the instruments were less able to discriminate the type of distress. This meta-analysis gives evidence that projective techniques have the ability to discriminate between children who have been sexually abused and those who were not abused sexually. However, further research that is designed to include clinical groups of distressed children is needed in order to determine how well the projectives can discriminate the type of distress.
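
    The paper's exact pooling procedure is not given above, but a standard fixed-effect (inverse-variance weighted) combination of Cohen's d values, of the sort commonly used in such meta-analyses, looks like the sketch below. The study effect sizes and group sizes are hypothetical placeholders, not data from the reviewed studies.

    ```python
    import numpy as np

    def pooled_effect(d, n1, n2):
        """Fixed-effect (inverse-variance weighted) pooled Cohen's d with a
        95% confidence interval, using the usual large-sample variance of d."""
        d, n1, n2 = map(np.asarray, (d, n1, n2))
        var = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))
        w = 1.0 / var
        d_pool = np.sum(w * d) / np.sum(w)
        se_pool = np.sqrt(1.0 / np.sum(w))
        return d_pool, (d_pool - 1.96 * se_pool, d_pool + 1.96 * se_pool)

    # Hypothetical effect sizes and group sizes from individual studies
    d = [0.95, 0.60, 1.10, 0.40, 0.85]
    n_abused = [25, 40, 18, 30, 22]
    n_comparison = [25, 35, 20, 28, 22]
    print(pooled_effect(d, n_abused, n_comparison))
    ```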

  10. Tissue Sampling Guides for Porcine Biomedical Models.

    PubMed

    Albl, Barbara; Haesner, Serena; Braun-Reichhart, Christina; Streckel, Elisabeth; Renner, Simone; Seeliger, Frank; Wolf, Eckhard; Wanke, Rüdiger; Blutke, Andreas

    2016-04-01

    This article provides guidelines for organ and tissue sampling adapted to porcine animal models in translational medical research. Detailed protocols for the determination of sampling locations and numbers as well as recommendations on the orientation, size, and trimming direction of samples from ∼50 different porcine organs and tissues are provided in the Supplementary Material. The proposed sampling protocols include the generation of samples suitable for subsequent qualitative and quantitative analyses, including cryohistology, paraffin, and plastic histology; immunohistochemistry;in situhybridization; electron microscopy; and quantitative stereology as well as molecular analyses of DNA, RNA, proteins, metabolites, and electrolytes. With regard to the planned extent of sampling efforts, time, and personnel expenses, and dependent upon the scheduled analyses, different protocols are provided. These protocols are adjusted for (I) routine screenings, as used in general toxicity studies or in analyses of gene expression patterns or histopathological organ alterations, (II) advanced analyses of single organs/tissues, and (III) large-scale sampling procedures to be applied in biobank projects. Providing a robust reference for studies of porcine models, the described protocols will ensure the efficiency of sampling, the systematic recovery of high-quality samples representing the entire organ or tissue as well as the intra-/interstudy comparability and reproducibility of results. © The Author(s) 2016.

  11. Sample size determination for estimating antibody seroconversion rate under stable malaria transmission intensity.

    PubMed

    Sepúlveda, Nuno; Drakeley, Chris

    2015-04-03

    In the last decade, several epidemiological studies have demonstrated the potential of using seroprevalence (SP) and seroconversion rate (SCR) as informative indicators of malaria burden in low transmission settings or in populations on the cusp of elimination. However, most studies are designed to control the ensuing statistical inference for parasite rates and not for these alternative malaria burden measures. SP is in essence a proportion and, thus, many methods exist for the respective sample size determination. In contrast, designing a study where SCR is the primary endpoint is not an easy task because precision and statistical power are affected by the age distribution of a given population. Two sample size calculators for SCR estimation are proposed. The first one consists of transforming the confidence interval for SP into the corresponding one for SCR given a known seroreversion rate (SRR). The second calculator extends the previous one to the most common situation where SRR is unknown. In this situation, data simulation was used together with linear regression in order to study the expected relationship between sample size and precision. The performance of the first sample size calculator was studied in terms of the coverage of the confidence intervals for SCR. The results pointed to potential problems of under- or over-coverage for sample sizes ≤250 in very low and very high malaria transmission settings (SCR ≤ 0.0036 and SCR ≥ 0.29, respectively). The correct coverage was obtained for the remaining transmission intensities with sample sizes ≥ 50. Sample size determination was then carried out for cross-sectional surveys using realistic SCRs from past sero-epidemiological studies and typical age distributions from African and non-African populations. For SCR < 0.058, African studies require a larger sample size than their non-African counterparts in order to obtain the same precision. The opposite happens for the remaining transmission intensities. With respect to the second sample size calculator, simulation revealed the likelihood of not having enough information to estimate SRR in low transmission settings (SCR ≤ 0.0108). In that case, the respective estimates tend to underestimate the true SCR. This problem is minimized by sample sizes of no less than 500 individuals. The sample sizes determined by this second method highlighted the prior expectation that, when SRR is not known, larger sample sizes are required than when SRR is known. In contrast to the first sample size calculation, African studies would now require fewer individuals than their counterparts conducted elsewhere, irrespective of the transmission intensity. Although the proposed sample size calculators can be instrumental in designing future cross-sectional surveys, the choice of a particular sample size must be seen as a much broader exercise that involves weighing statistical precision against ethical issues, available human and economic resources, and possible time constraints. Moreover, if the sample size determination is carried out on varying transmission intensities, as done here, the respective sample sizes can also be used in studies comparing sites with different malaria transmission intensities. In conclusion, the proposed sample size calculators are a step towards the design of better sero-epidemiological studies. Their basic ideas show promise for application to the planning of alternative sampling schemes that may target or oversample specific age groups.
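
    The core step of the first calculator, carrying a confidence interval for SP over to SCR when SRR is known, can be sketched under a reversible catalytic model evaluated at a single reference age. The full method accounts for the whole age distribution of the population, so the sketch below, with made-up survey values, is only an illustration of the idea.

    ```python
    import numpy as np
    from scipy.optimize import brentq
    from scipy.stats import norm

    def seroprevalence(scr, srr, age):
        """Expected seroprevalence at a given age under a reversible
        catalytic model with constant SCR and SRR."""
        rate = scr + srr
        return scr / rate * (1.0 - np.exp(-rate * age))

    def scr_from_sp(sp, srr, age):
        """Invert the catalytic model to recover SCR from a seroprevalence."""
        return brentq(lambda lam: seroprevalence(lam, srr, age) - sp, 1e-9, 10.0)

    def scr_interval(sp_hat, n, srr, age, alpha=0.05):
        """Map a Wald confidence interval for SP onto SCR (known SRR)."""
        z = norm.ppf(1 - alpha / 2)
        half = z * np.sqrt(sp_hat * (1 - sp_hat) / n)
        lo, hi = max(sp_hat - half, 1e-6), min(sp_hat + half, 1 - 1e-6)
        return scr_from_sp(lo, srr, age), scr_from_sp(hi, srr, age)

    # Illustrative inputs: observed SP of 0.30 at a reference age of 20 y, known SRR
    print(scr_interval(sp_hat=0.30, n=250, srr=0.01, age=20.0))
    ```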

  12. Using known populations of pronghorn to evaluate sampling plans and estimators

    USGS Publications Warehouse

    Kraft, K.M.; Johnson, D.H.; Samuelson, J.M.; Allen, S.H.

    1995-01-01

    Although sampling plans and estimators of abundance have good theoretical properties, their performance in real situations is rarely assessed because true population sizes are unknown. We evaluated widely used sampling plans and estimators of population size on 3 known clustered distributions of pronghorn (Antilocapra americana). Our criteria were accuracy of the estimate, coverage of 95% confidence intervals, and cost. Sampling plans were combinations of sampling intensities (16, 33, and 50%), sample selection (simple random sampling without replacement, systematic sampling, and probability proportional to size sampling with replacement), and stratification. We paired sampling plans with suitable estimators (simple, ratio, and probability proportional to size). We used area of the sampling unit as the auxiliary variable for the ratio and probability proportional to size estimators. All estimators were nearly unbiased, but precision was generally low (overall mean coefficient of variation [CV] = 29). Coverage of 95% confidence intervals was only 89% because of the highly skewed distribution of the pronghorn counts and small sample sizes, especially with stratification. Stratification combined with accurate estimates of optimal stratum sample sizes increased precision, reducing the mean CV from 33 without stratification to 25 with stratification; costs increased 23%. Precise results (mean CV = 13) but poor confidence interval coverage (83%) were obtained with simple and ratio estimators when the allocation scheme included all sampling units in the stratum containing most pronghorn. Although areas of the sampling units varied, ratio estimators and probability proportional to size sampling did not increase precision, possibly because of the clumped distribution of pronghorn. Managers should be cautious in using sampling plans and estimators to estimate abundance of aggregated populations.
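
    A small Monte Carlo check in the same spirit, comparing a simple expansion estimator with a ratio estimator that uses sampling-unit area as the auxiliary variable on an artificial clustered population, can be sketched as follows. The simulated population, its overdispersion, and the sampling intensity are assumptions for illustration, not the pronghorn data.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Artificial clustered population: 200 sampling units of varying area,
    # with animals concentrated in a handful of units (negative binomial).
    n_units = 200
    area = rng.uniform(5.0, 25.0, n_units)                      # km^2 per unit
    counts = rng.negative_binomial(n=0.3, p=0.3 / (0.3 + 0.05 * area))
    true_total = counts.sum()

    def one_survey(frac=1/3):
        n_sample = int(round(frac * n_units))
        idx = rng.choice(n_units, size=n_sample, replace=False)  # SRS w/o repl.
        y, a = counts[idx], area[idx]
        simple = n_units * y.mean()                              # expansion estimator
        ratio = area.sum() * y.sum() / a.sum()                   # ratio estimator (area)
        return simple, ratio

    reps = np.array([one_survey() for _ in range(2000)])
    for name, est in zip(("simple", "ratio"), reps.T):
        print(f"{name:>6}: mean = {est.mean():7.1f}  CV = {est.std() / est.mean():.2f}"
              f"   (true total = {true_total})")
    ```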

  13. Reconciling PM10 analyses by different sampling methods for Iron King Mine tailings dust.

    PubMed

    Li, Xu; Félix, Omar I; Gonzales, Patricia; Sáez, Avelino Eduardo; Ela, Wendell P

    2016-03-01

    The overall project objective at the Iron King Mine Superfund site is to determine the level and potential risk associated with the proximate population's exposure to heavy metals emanating from the site's tailings pile. To provide sufficient size-fractioned dust for multi-discipline research studies, a dust generator was built and is now being used to generate size-fractioned dust samples for toxicity investigations using in vitro cell culture and animal exposure experiments as well as studies on geochemical characterization and bioassay solubilization with simulated lung and gastric fluid extractants. The objective of this study is to provide a robust method for source identification by comparing the tailings sample produced by the dust generator with that collected by a MOUDI sampler. As and Pb concentrations of the PM10 fraction in the MOUDI sample were much lower than in tailings samples produced by the dust generator, indicating a dilution of Iron King tailings dust by dust from other sources. For source apportionment purposes, a single-element concentration method was used, based on the assumption that the PM10 fraction comes from a background source plus the Iron King tailings source. The method's conclusion that nearly all arsenic and lead in the PM10 dust fraction originated from the tailings substantiates our previous Pb and Sr isotope study conclusion. As and Pb showed a similar mass fraction from Iron King at all sites, suggesting that As and Pb have the same major emission source. Further validation of this simple source apportionment method is needed based on other elements and sites.
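
    Under the stated two-source assumption (ambient PM10 is a mixture of background dust and tailings dust), the fraction attributable to the tailings follows from a single element's concentrations by a simple mixing calculation. The sketch below uses hypothetical concentrations, not the study's measurements.

    ```python
    def tailings_fraction(c_ambient, c_background, c_tailings):
        """Mass fraction of ambient PM10 attributable to the tailings source,
        from one element's concentration in the ambient sample, in background
        dust, and in the tailings end member (two-source mixing model)."""
        f = (c_ambient - c_background) / (c_tailings - c_background)
        return min(max(f, 0.0), 1.0)          # clip to the physical range

    # Hypothetical arsenic concentrations (mg/kg in the PM10 fraction)
    print(tailings_fraction(c_ambient=120.0, c_background=8.0, c_tailings=1500.0))
    ```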

  14. An improved affine projection algorithm for active noise cancellation

    NASA Astrophysics Data System (ADS)

    Zhang, Congyan; Wang, Mingjiang; Han, Yufei; Sun, Yunzhuo

    2017-08-01

    The affine projection algorithm is a signal-reuse algorithm, and it has a good convergence rate compared to other traditional adaptive filtering algorithms. Two factors affect the performance of the algorithm: the step size and the projection length. In this paper, we propose a new variable step size affine projection algorithm (VSS-APA). It dynamically changes the step size according to certain rules, so that it achieves a smaller steady-state error and a faster convergence speed. Simulation results show that its performance is superior to that of the traditional affine projection algorithm and that, in active noise control (ANC) applications, the new algorithm obtains very good results.
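
    The specific step-size rule proposed in the paper is not reproduced in the abstract, so the sketch below pairs the underlying affine projection update with one simple, commonly used error-energy-based adaptation standing in for it; the filter lengths, step-size bound, and test signal are illustrative assumptions.

    ```python
    import numpy as np

    def vss_apa(x, d, taps=16, order=4, mu_max=0.5, delta=1e-3, c=1e-2):
        """Affine projection adaptive filter with a simple variable step size
        (illustrative rule: mu shrinks as the projected error energy falls)."""
        w = np.zeros(taps)
        e_out = np.zeros(len(x))
        for k in range(taps + order - 1, len(x)):
            # data matrix: the last `order` input vectors (most recent sample first)
            X = np.column_stack(
                [x[k - j - taps + 1 : k - j + 1][::-1] for j in range(order)]
            )
            d_vec = d[k - np.arange(order)]
            e_vec = d_vec - X.T @ w
            mu = mu_max * np.dot(e_vec, e_vec) / (np.dot(e_vec, e_vec) + c)
            w += mu * X @ np.linalg.solve(X.T @ X + delta * np.eye(order), e_vec)
            e_out[k] = e_vec[0]
        return w, e_out

    # Identify an unknown 16-tap filter from noisy observations
    rng = np.random.default_rng(0)
    h = rng.standard_normal(16)
    x = rng.standard_normal(5000)
    d = np.convolve(x, h)[: len(x)] + 0.01 * rng.standard_normal(len(x))
    w, e = vss_apa(x, d)
    print("final misalignment:", np.linalg.norm(w - h) / np.linalg.norm(h))
    ```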

  15. Consultant-Client Relationship and Knowledge Transfer in Small- and Medium-Sized Enterprises Change Processes.

    PubMed

    Martinez, Luis F; Ferreira, Aristides I; Can, Amina B

    2016-04-01

    Based on Szulanski's knowledge transfer model, this study examined how the communicational, motivational, and sharing of understanding variables influenced knowledge transfer and change processes in small- and medium-sized enterprises, particularly under projects developed by funded programs. The sample comprised 144 entrepreneurs, mostly male (65.3%) and mostly ages 35 to 45 years (40.3%), who filled an online questionnaire measuring the variables of "sharing of understanding," "motivation," "communication encoding competencies," "source credibility," "knowledge transfer," and "organizational change." Data were collected between 2011 and 2012 and measured the relationship between clients and consultants working in a Portuguese small- and medium-sized enterprise-oriented action learning program. To test the hypotheses, structural equation modeling was conducted to identify the antecedents of sharing of understanding, motivational, and communicational variables, which were positively correlated with the knowledge transfer between consultants and clients. This transfer was also positively correlated with organizational change. Overall, the study provides important considerations for practitioners and academicians and establishes new avenues for future studies concerning the issues of consultant-client relationship and the efficacy of Government-funded programs designed to improve performance of small- and medium-sized enterprises. © The Author(s) 2016.

  16. Sample Size Calculations for Population Size Estimation Studies Using Multiplier Methods With Respondent-Driven Sampling Surveys.

    PubMed

    Fearon, Elizabeth; Chabata, Sungai T; Thompson, Jennifer A; Cowan, Frances M; Hargreaves, James R

    2017-09-14

    While guidance exists for obtaining population size estimates using multiplier methods with respondent-driven sampling surveys, we lack specific guidance for making sample size decisions. To guide the design of multiplier method population size estimation studies using respondent-driven sampling surveys to reduce the random error around the estimate obtained. The population size estimate is obtained by dividing the number of individuals receiving a service or the number of unique objects distributed (M) by the proportion of individuals in a representative survey who report receipt of the service or object (P). We have developed an approach to sample size calculation, interpreting methods to estimate the variance around estimates obtained using multiplier methods in conjunction with research into design effects and respondent-driven sampling. We describe an application to estimate the number of female sex workers in Harare, Zimbabwe. There is high variance in estimates. Random error around the size estimate reflects uncertainty from M and P, particularly when the estimate of P in the respondent-driven sampling survey is low. As expected, sample size requirements are higher when the design effect of the survey is assumed to be greater. We suggest a method for investigating the effects of sample size on the precision of a population size estimate obtained using multiplier methods and respondent-driven sampling. Uncertainty in the size estimate is high, particularly when P is small, so balancing against other potential sources of bias, we advise researchers to consider longer service attendance reference periods and to distribute more unique objects, which is likely to result in a higher estimate of P in the respondent-driven sampling survey. ©Elizabeth Fearon, Sungai T Chabata, Jennifer A Thompson, Frances M Cowan, James R Hargreaves. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 14.09.2017.
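
    Treating the multiplier count M as fixed, the population size estimate, a delta-method confidence interval, and the survey sample size needed for a target relative precision can be sketched as below. The design effect, reported proportion, and object count are illustrative assumptions, and unlike the published approach this sketch ignores uncertainty in M.

    ```python
    import math
    from scipy.stats import norm

    def population_size(M, p_hat, n, deff=2.0, alpha=0.05):
        """Multiplier-method estimate N = M / P with a delta-method CI,
        treating M as known and inflating Var(P) by the design effect."""
        N_hat = M / p_hat
        se_p = math.sqrt(deff * p_hat * (1 - p_hat) / n)
        se_N = M * se_p / p_hat**2
        z = norm.ppf(1 - alpha / 2)
        return N_hat, (N_hat - z * se_N, N_hat + z * se_N)

    def required_n(p_hat, rel_precision=0.20, deff=2.0, alpha=0.05):
        """RDS sample size so the CI half-width is `rel_precision` * N_hat."""
        z = norm.ppf(1 - alpha / 2)
        return math.ceil(deff * (1 - p_hat) * (z / rel_precision) ** 2 / p_hat)

    # Illustration: 600 unique objects distributed, 25% of respondents report receipt
    print(population_size(M=600, p_hat=0.25, n=400))
    print(required_n(p_hat=0.25))
    ```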

  17. Relative efficiency and sample size for cluster randomized trials with variable cluster sizes.

    PubMed

    You, Zhiying; Williams, O Dale; Aban, Inmaculada; Kabagambe, Edmond Kato; Tiwari, Hemant K; Cutter, Gary

    2011-02-01

    The statistical power of cluster randomized trials depends on two sample size components, the number of clusters per group and the numbers of individuals within clusters (cluster size). Variable cluster sizes are common and this variation alone may have a significant impact on study power. Previous approaches have taken this into account by either adjusting total sample size using a designated design effect or adjusting the number of clusters according to an assessment of the relative efficiency of unequal versus equal cluster sizes. This article defines a relative efficiency of unequal versus equal cluster sizes using noncentrality parameters, investigates properties of this measure, and proposes an approach for adjusting the required sample size accordingly. We focus on comparing two groups with normally distributed outcomes using a t-test, and use the noncentrality parameter to define the relative efficiency of unequal versus equal cluster sizes and show that statistical power depends only on this parameter for a given number of clusters. We calculate the sample size required for an unequal cluster sizes trial to have the same power as one with equal cluster sizes. Relative efficiency based on the noncentrality parameter is straightforward to calculate and easy to interpret. It connects the required mean cluster size directly to the required sample size with equal cluster sizes. Consequently, our approach first determines the sample size requirements with equal cluster sizes for a pre-specified study power and then calculates the required mean cluster size while keeping the number of clusters unchanged. Our approach allows adjustment in mean cluster size alone or simultaneous adjustment in mean cluster size and number of clusters, and is a flexible alternative to and a useful complement to existing methods. Comparison indicated that the relative efficiency defined here is greater than the relative efficiency reported in the literature under some conditions; under other conditions, our measure might be less than the literature measure, underestimating the relative efficiency. The relative efficiency of unequal versus equal cluster sizes defined using the noncentrality parameter suggests a sample size approach that is a flexible alternative and a useful complement to existing methods.
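
    The paper defines relative efficiency through noncentrality parameters; as a simpler stand-in that follows the same workflow (size the trial for equal clusters, then find the mean cluster size needed with the number of clusters held fixed), the sketch below uses the common design-effect approximation for variable cluster sizes, 1 + ((cv^2 + 1)*m - 1)*ICC. The effect size, ICC, number of clusters, and cluster-size CV are illustrative assumptions.

    ```python
    import math
    from scipy.stats import norm

    def n_individual(delta, sd, alpha=0.05, power=0.80):
        """Per-arm sample size under individual randomization (z-approximation)."""
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        return 2 * (z * sd / delta) ** 2

    def design_effect(m_bar, icc, cv=0.0):
        """Design effect for (possibly unequal) cluster sizes; cv is the
        coefficient of variation of cluster size (0 = equal clusters)."""
        return 1 + ((cv**2 + 1) * m_bar - 1) * icc

    def required_mean_cluster_size(k_per_arm, delta, sd, icc, cv,
                                   alpha=0.05, power=0.80):
        """Smallest mean cluster size giving the target power with k clusters
        per arm, found by a direct search over m."""
        n_ind = n_individual(delta, sd, alpha, power)
        for m in range(1, 100_000):
            n_eff = k_per_arm * m / design_effect(m, icc, cv)
            if n_eff >= n_ind:
                return m
        return None

    # Equal clusters first, then the same number of clusters with cluster-size CV = 0.6
    print(required_mean_cluster_size(k_per_arm=15, delta=0.3, sd=1.0, icc=0.05, cv=0.0))
    print(required_mean_cluster_size(k_per_arm=15, delta=0.3, sd=1.0, icc=0.05, cv=0.6))
    ```

    With these illustrative values, keeping 15 clusters per arm, the required mean cluster size roughly doubles (from 27 to 53) when the cluster-size coefficient of variation rises from 0 to 0.6.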

  18. Addressing Challenges in Studies of Behavioral Responses of Whales to Noise.

    PubMed

    Cato, Douglas H; Dunlop, Rebecca A; Noad, Michael J; McCauley, Robert D; Kniest, Eric; Paton, David; Kavanagh, Ailbhe S

    2016-01-01

    Studying the behavioral response of whales to noise presents numerous challenges. In addition to the characteristics of the noise exposure, many factors may affect the response and these must be measured and accounted for in the analysis. An adequate sample size that includes matching controls is crucial if meaningful results are to be obtained. Field work is thus complicated, logistically difficult, and expensive. This paper discusses some of the challenges and how they are being met in a large-scale multiplatform project in which humpback whales are exposed to the noise of seismic air guns.

  19. Clan Genomics and the Complex Architecture of Human Disease

    PubMed Central

    Belmont, John W.; Boerwinkle, Eric

    2013-01-01

    Human diseases are caused by alleles that encompass the full range of variant types, from single-nucleotide changes to copy-number variants, and these variations span a broad frequency spectrum, from the very rare to the common. The picture emerging from analysis of whole-genome sequences, the 1000 Genomes Project pilot studies, and targeted genomic sequencing derived from very large sample sizes reveals an abundance of rare and private variants. One implication of this realization is that recent mutation may have a greater influence on disease susceptibility or protection than is conferred by variations that arose in distant ancestors. PMID:21962505

  20. Safety programmes in the Egyptian construction industry.

    PubMed

    Hassanein, Amr A G; Hanna, Ragaa S

    2007-12-01

    This study is aimed at exploring the nature of the safety programmes applied by large-size contractors operating in Egypt. Results revealed that safety programmes applied by those contractors were less formal than the programmes applied by their American counterparts. Only three contractors out of the surveyed sample had accident records broken down by projects, provided workers with formal safety orientation, and trained safety personnel on first-aid. The study recommended that reforms to the scheme of the employers' contribution to social insurance are necessary. This is meant to serve as a strong incentive for safety management.

  1. The Sloan Digital Sky Survey Reverberation Mapping Project: Composite Lags at z ≤ 1

    NASA Astrophysics Data System (ADS)

    Li, Jennifer; Shen, Yue; Horne, Keith; Brandt, W. N.; Greene, Jenny E.; Grier, C. J.; Ho, Luis C.; Kochanek, Chris; Schneider, Donald P.; Trump, Jonathan R.; Dawson, Kyle S.; Pan, Kaike; Bizyaev, Dmitry; Oravetz, Daniel; Simmons, Audrey; Malanushenko, Elena

    2017-09-01

    We present composite broad-line region (BLR) reverberation mapping lag measurements for Hα, Hβ, He II λ4686, and Mg II for a sample of 144, z ≲ 1 quasars from the Sloan Digital Sky Survey Reverberation Mapping (SDSS-RM) project. Using only the 32-epoch spectroscopic light curves in the first six-month season of SDSS-RM observations, we compile correlation function measurements for individual objects and then coadd them to allow the measurement of the average lags for our sample at mean redshifts of 0.4 (for Hα) and ˜0.65 (for the other lines). At similar quasar luminosities and redshifts, the sample-averaged lag decreases in the order of Mg II, Hα, Hβ, and He II. This decrease in lags is accompanied by an increase in the mean line width of the four lines, and is roughly consistent with the virialized motion for BLR gas in photoionization equilibrium. These are among the first RM measurements of stratified BLR structure at z > 0.3. Dividing our sample by luminosity, Hα shows clear evidence of increasing lags with luminosity, consistent with the expectation from the measured BLR size-luminosity relation based on Hβ. The other three lines do not show a clear luminosity trend in their average lags due to the limited dynamic range of luminosity probed and the poor average correlation signals in the divided samples, a situation that will be improved with the incorporation of additional photometric and spectroscopic data from SDSS-RM. We discuss the utility and caveats of composite lag measurements for large statistical quasar samples with reverberation mapping data.

  2. Concentrations of selected constituents in surface-water and streambed-sediment samples collected from streams in and near an area of oil and natural-gas development, south-central Texas, 2011-13

    USGS Publications Warehouse

    Opsahl, Stephen P.; Crow, Cassi L.

    2014-01-01

    During collection of streambed-sediment samples, additional samples from a subset of three sites (the SAR Elmendorf, SAR 72, and SAR McFaddin sites) were processed by using a 63-µm sieve on one aliquot and a 2-mm sieve on a second aliquot for PAH and n-alkane analyses. The purpose of analyzing PAHs and n-alkanes on a sample containing sand, silt, and clay versus a sample containing only silt and clay was to provide data that could be used to determine if these organic constituents had a greater affinity for silt- and clay-sized particles relative to sand-sized particles. The greater concentrations of PAHs in the <63-μm size-fraction samples at all three of these sites are consistent with a greater percentage of binding sites associated with fine-grained (<63 μm) sediment versus coarse-grained (<2 mm) sediment. The larger difference in total PAHs between the <2-mm and <63-μm size-fraction samples at the SAR Elmendorf site might be related to the large percentage of sand in the <2-mm size-fraction sample which was absent in the <63-μm size-fraction sample. In contrast, the <2-mm size-fraction sample collected from the SAR McFaddin site contained very little sand and was similar in particle-size composition to the <63-μm size-fraction sample.

  3. HYPERSAMP - HYPERGEOMETRIC ATTRIBUTE SAMPLING SYSTEM BASED ON RISK AND FRACTION DEFECTIVE

    NASA Technical Reports Server (NTRS)

    De, Salvo L. J.

    1994-01-01

    HYPERSAMP is a demonstration of an attribute sampling system developed to determine the minimum sample size required for any preselected value for consumer's risk and fraction of nonconforming. This statistical method can be used in place of MIL-STD-105E sampling plans when a minimum sample size is desirable, such as when tests are destructive or expensive. HYPERSAMP utilizes the Hypergeometric Distribution and can be used for any fraction nonconforming. The program employs an iterative technique that circumvents the obstacle presented by the factorial of a non-whole number. HYPERSAMP provides the required Hypergeometric sample size for any equivalent real number of nonconformances in the lot or batch under evaluation. Many currently used sampling systems, such as the MIL-STD-105E, utilize the Binomial or the Poisson equations as an estimate of the Hypergeometric when performing inspection by attributes. However, this is primarily because of the difficulty in calculation of the factorials required by the Hypergeometric. Sampling plans based on the Binomial or Poisson equations will result in the maximum sample size possible with the Hypergeometric. The difference in the sample sizes between the Poisson or Binomial and the Hypergeometric can be significant. For example, a lot size of 400 devices with an error rate of 1.0% and a confidence of 99% would require a sample size of 400 (all units would need to be inspected) for the Binomial sampling plan and only 273 for a Hypergeometric sampling plan. The Hypergeometric results in a savings of 127 units, a significant reduction in the required sample size. HYPERSAMP is a demonstration program and is limited to sampling plans with zero defectives in the sample (acceptance number of zero). Since it is only a demonstration program, the sample size determination is limited to sample sizes of 1500 or less. The Hypergeometric Attribute Sampling System demonstration code is a spreadsheet program written for IBM PC compatible computers running DOS and Lotus 1-2-3 or Quattro Pro. This program is distributed on a 5.25 inch 360K MS-DOS format diskette, and the program price includes documentation. This statistical method was developed in 1992.
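
    The zero-acceptance hypergeometric sample size can be reproduced in a few lines; the sketch below is a modern re-implementation of the idea rather than the original spreadsheet, and it recovers the worked example quoted above (a lot of 400, 1% nonconforming, 99% confidence gives a sample of 273, while the binomial approximation asks for 459, more than the whole lot, which is why a binomial plan would inspect all 400 units).

    ```python
    import math

    def hypergeometric_zero_acceptance_n(lot_size, defectives, confidence):
        """Smallest n such that a sample containing zero nonconformances rejects
        a lot holding `defectives` bad units with the stated confidence, i.e.
        P(0 defectives in the sample) <= 1 - confidence (consumer's risk)."""
        beta = 1.0 - confidence
        for n in range(1, lot_size + 1):
            p_zero = 1.0
            for i in range(defectives):
                # C(N-D, n) / C(N, n) written as a product of simple ratios
                p_zero *= (lot_size - n - i) / (lot_size - i)
            if p_zero <= beta:
                return n
        return lot_size

    def binomial_zero_acceptance_n(fraction_defective, confidence):
        """Binomial (infinite-lot) approximation used by many sampling plans."""
        return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - fraction_defective))

    print(hypergeometric_zero_acceptance_n(lot_size=400, defectives=4, confidence=0.99))
    print(binomial_zero_acceptance_n(fraction_defective=0.01, confidence=0.99))
    ```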

  4. Current information technology needs of small to medium sized apparel manufacturers and contractors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wipple, C.; Vosti, E.

    1997-11-01

    This report documents recent efforts of the American Textile Partnership (AMTEX) Demand Activated Manufacturing Architecture (DAMA) Project to address needs that are characteristic of small to medium sized apparel manufacturers and contractors. Background on the AMTEX/DAMA project and objectives for this specific effort are discussed.

  5. Study samples are too small to produce sufficiently precise reliability coefficients.

    PubMed

    Charter, Richard A

    2003-04-01

    In a survey of journal articles, test manuals, and test critique books, the author found that a mean sample size (N) of 260 participants had been used for reliability studies on 742 tests. The distribution was skewed because the median sample size for the total sample was only 90. The median sample sizes for the internal consistency, retest, and interjudge reliabilities were 182, 64, and 36, respectively. The author presented sample size statistics for the various internal consistency methods and types of tests. In general, the author found that the sample sizes that were used in the internal consistency studies were too small to produce sufficiently precise reliability coefficients, which in turn could cause imprecise estimates of examinee true-score confidence intervals. The results also suggest that larger sample sizes have been used in the last decade compared with those that were used in earlier decades.

  6. Analysis of Sample Size, Counting Time, and Plot Size from an Avian Point Count Survey on Hoosier National Forest, Indiana

    Treesearch

    Frank R. Thompson; Monica J. Schwalbach

    1995-01-01

    We report results of a point count survey of breeding birds on Hoosier National Forest in Indiana. We determined sample size requirements to detect differences in means and the effects of count duration and plot size on individual detection rates. Sample size requirements ranged from 100 to >1000 points with Type I and II error rates of <0.1 and 0.2. Sample...

  7. Climate-induced lake drying causes heterogeneous reductions in waterfowl species richness

    USGS Publications Warehouse

    Roach, Jennifer K.; Griffith, Dennis B.

    2015-01-01

    Context: Lake size has declined on breeding grounds for international populations of waterfowl. Objectives: Our objectives were to (1) model the relationship between waterfowl species richness and lake size; (2) use the model and trends in lake size to project historical, contemporary, and future richness at 2500+ lakes; (3) evaluate mechanisms for the species–area relationship (SAR); and (4) identify species most vulnerable to shrinking lakes. Methods: Monte Carlo simulations of the richness model were used to generate projections. Correlations between richness and both lake size and habitat diversity were compared to identify mechanisms for the SAR. Patterns of nestedness were used to identify vulnerable species. Results: Species richness was greatest at lakes that were larger, closer to rivers, had more wetlands along their perimeters and were within 5 km of a large lake. Average richness per lake was projected to decline by 11% from 1986 to 2050 but was heterogeneous across sub-regions and lakes. Richness in sub-regions with species-rich lakes was projected to remain stable, while richness in the sub-region with species-poor lakes was projected to decline. Lake size had a greater effect on richness than did habitat diversity, suggesting that large lakes have more species because they provide more habitat but not more habitat types. The vulnerability of species to shrinking lakes was related to species rarity rather than foraging guild. Conclusions: Our maps of projected changes in species richness and rank-ordered list of species most vulnerable to shrinking lakes can be used to identify targets for conservation or monitoring.

  8. 24 CFR 266.200 - Eligible projects.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 24 Housing and Urban Development 2 2013-04-01 2013-04-01 false Eligible projects. 266.200 Section... FINANCE AGENCY RISK-SHARING PROGRAM FOR INSURED AFFORDABLE MULTIFAMILY PROJECT LOANS Program Requirements § 266.200 Eligible projects. (a) Minimum project size. Projects insured under this part must consist of...

  9. 24 CFR 266.200 - Eligible projects.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 24 Housing and Urban Development 2 2011-04-01 2011-04-01 false Eligible projects. 266.200 Section... FINANCE AGENCY RISK-SHARING PROGRAM FOR INSURED AFFORDABLE MULTIFAMILY PROJECT LOANS Program Requirements § 266.200 Eligible projects. (a) Minimum project size. Projects insured under this part must consist of...

  10. 24 CFR 266.200 - Eligible projects.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 24 Housing and Urban Development 2 2014-04-01 2014-04-01 false Eligible projects. 266.200 Section... FINANCE AGENCY RISK-SHARING PROGRAM FOR INSURED AFFORDABLE MULTIFAMILY PROJECT LOANS Program Requirements § 266.200 Eligible projects. (a) Minimum project size. Projects insured under this part must consist of...

  11. 24 CFR 266.200 - Eligible projects.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 24 Housing and Urban Development 2 2012-04-01 2012-04-01 false Eligible projects. 266.200 Section... FINANCE AGENCY RISK-SHARING PROGRAM FOR INSURED AFFORDABLE MULTIFAMILY PROJECT LOANS Program Requirements § 266.200 Eligible projects. (a) Minimum project size. Projects insured under this part must consist of...

  12. Medical, psychological and socioeconomic aspects of aging in Poland: assumptions and objectives of the PolSenior project.

    PubMed

    Bledowski, Piotr; Mossakowska, Malgorzata; Chudek, Jerzy; Grodzicki, Tomasz; Milewicz, Andrzej; Szybalska, Aleksandra; Wieczorowska-Tobis, Katarzyna; Wiecek, Andrzej; Bartoszek, Adam; Dabrowski, Andrzej; Zdrojewski, Tomasz

    2011-12-01

    Both descriptive and longitudinal studies of aging are nowadays a subject of growing interest in different countries worldwide. However, in Poland and other Central-Eastern European countries, such comprehensive, nationally representative, multidimensional studies had never been performed in the elderly population. The present paper describes the PolSenior project, including its objectives, sample selection and structure, methods, fieldwork procedures and study flow. The aim of the project was to examine medical, psychological and socioeconomic aspects of aging in Poland. The research sample included 5695 respondents (2899 males and 2796 females) split into six equally sized age groups of elderly individuals (65-69 years, 70-74 years, 75-79 years, 80-84 years, 85-89 years, 90+ years) and one group of subjects just about to enter old age (55-59 years). Subjects were recruited using a three-stage stratified, proportional draw. The response rate was 42% and ranged from 32% to 61% between provinces. The study consisted of three visits performed by trained nurses, including a questionnaire survey, a comprehensive geriatric assessment, and blood and urine sampling. The questionnaire consisted of medical and specific socioeconomic questions. The comprehensive geriatric assessment included blood pressure and anthropometric measurements, as well as selected scales and tests routinely used in the examination of elderly subjects. Blood and urine samples were collected from 4737 and 4526 individuals, respectively. More than 50 biochemical parameters were measured, and DNA was isolated and banked. In a selected group of 1018 subjects, a medical examination by a physician was performed. Self-rated health was lower in females than in males in age groups 70-84, but similar in individuals of both sexes aged 65-69 and 85 years. Besides providing data on the health and functioning of the elderly population, the PolSenior project aims to analyze interrelationships between different elements of health and social status, and between genetics and health status in advanced age. The results of the PolSenior project will facilitate prioritizing the state's public health and social policies for the elderly population. Such a program also provides an excellent starting point for longitudinal studies and a basis for comparative analysis between Poland and other European countries or regions. Copyright © 2011 Elsevier Inc. All rights reserved.

  13. Design of a digital phantom population for myocardial perfusion SPECT imaging research.

    PubMed

    Ghaly, Michael; Du, Yong; Fung, George S K; Tsui, Benjamin M W; Links, Jonathan M; Frey, Eric

    2014-06-21

    Digital phantoms and Monte Carlo (MC) simulations have become important tools for optimizing and evaluating instrumentation, acquisition and processing methods for myocardial perfusion SPECT (MPS). In this work, we designed a new adult digital phantom population and generated corresponding Tc-99m and Tl-201 projections for use in MPS research. The population is based on the three-dimensional XCAT phantom with organ parameters sampled from the Emory PET Torso Model Database. Phantoms included three variations each in body size, heart size, and subcutaneous adipose tissue level, for a total of 27 phantoms of each gender. The SimSET MC code and angular response functions were used to model interactions in the body and the collimator-detector system, respectively. We divided each phantom into seven organs, each simulated separately, allowing use of post-simulation summing to efficiently model uptake variations. Also, we adapted and used a criterion based on the relative Poisson effective count level to determine the required number of simulated photons for each simulated organ. This technique provided a quantitative estimate of the true noise in the simulated projection data, including residual MC simulation noise. Projections were generated in 1 keV wide energy windows from 48-184 keV assuming perfect energy resolution to permit study of the effects of window width, energy resolution, and crosstalk in the context of dual isotope MPS. We have developed a comprehensive method for efficiently simulating realistic projections for a realistic population of phantoms in the context of MPS imaging. The new phantom population and realistic database of simulated projections will be useful in performing mathematical and human observer studies to evaluate various acquisition and processing methods such as optimizing the energy window width, investigating the effect of energy resolution on image quality and evaluating compensation methods for degrading factors such as crosstalk in the context of single and dual isotope MPS.
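
    As a hedged sketch of the post-simulation summing step described above (the arrays, organ count, and activity values below are placeholders rather than data from the study, whose projections come from SimSET Monte Carlo runs), organ projections simulated separately can be scaled by any desired uptake pattern, summed, normalized to a target count level, and given Poisson noise:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for per-organ, noise-free projections (counts per unit activity):
# 7 organs, each with a 64 x 64 projection bin array.
organ_projections = rng.random((7, 64, 64))

def simulate_projection(organ_projections, organ_activities, total_counts, rng):
    """Scale each organ's noise-free projection by its activity, sum over organs,
    normalize to the desired total count level, and add Poisson noise."""
    weighted = np.tensordot(organ_activities, organ_projections, axes=1)
    noise_free = weighted * (total_counts / weighted.sum())
    return rng.poisson(noise_free)

activities = np.array([1.0, 5.0, 0.5, 2.0, 1.5, 0.8, 3.0])   # arbitrary uptake pattern
noisy = simulate_projection(organ_projections, activities, total_counts=1e6, rng=rng)
```

    Because the uptake scaling happens after the simulation, many uptake patterns and count levels can be generated from a single expensive Monte Carlo run per organ, which is the efficiency the abstract refers to.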

  14. Design of a digital phantom population for myocardial perfusion SPECT imaging research

    NASA Astrophysics Data System (ADS)

    Ghaly, Michael; Du, Yong; Fung, George S. K.; Tsui, Benjamin M. W.; Links, Jonathan M.; Frey, Eric

    2014-06-01

    Digital phantoms and Monte Carlo (MC) simulations have become important tools for optimizing and evaluating instrumentation, acquisition and processing methods for myocardial perfusion SPECT (MPS). In this work, we designed a new adult digital phantom population and generated corresponding Tc-99m and Tl-201 projections for use in MPS research. The population is based on the three-dimensional XCAT phantom with organ parameters sampled from the Emory PET Torso Model Database. Phantoms included three variations each in body size, heart size, and subcutaneous adipose tissue level, for a total of 27 phantoms of each gender. The SimSET MC code and angular response functions were used to model interactions in the body and the collimator-detector system, respectively. We divided each phantom into seven organs, each simulated separately, allowing use of post-simulation summing to efficiently model uptake variations. Also, we adapted and used a criterion based on the relative Poisson effective count level to determine the required number of simulated photons for each simulated organ. This technique provided a quantitative estimate of the true noise in the simulated projection data, including residual MC simulation noise. Projections were generated in 1 keV wide energy windows from 48-184 keV assuming perfect energy resolution to permit study of the effects of window width, energy resolution, and crosstalk in the context of dual isotope MPS. We have developed a comprehensive method for efficiently simulating realistic projections for a realistic population of phantoms in the context of MPS imaging. The new phantom population and realistic database of simulated projections will be useful in performing mathematical and human observer studies to evaluate various acquisition and processing methods such as optimizing the energy window width, investigating the effect of energy resolution on image quality and evaluating compensation methods for degrading factors such as crosstalk in the context of single and dual isotope MPS.

  15. Restoring Sustainable Forests on Appalachian Mined Lands for Wood Products, Renewable Energy, Carbon Sequestration, and Other Ecosystem Services

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burger, James A

    2005-07-20

    The overall purpose of this project is to evaluate the biological and economic feasibility of restoring high-quality forests on mined land, and to measure carbon sequestration and wood production benefits that would be achieved from forest restoration procedures. We are currently estimating the acreage of lands in Virginia, West Virginia, Kentucky, Ohio, and Pennsylvania mined under SMCRA and reclaimed to non-forested post-mining land uses that are not currently under active management, and therefore can be considered as available for carbon sequestration. To determine actual sequestration under different forest management scenarios, a field study was installed as a 3 × 3 factorial in a random complete block design with three replications at each of three locations, one each in Ohio, West Virginia, and Virginia. The treatments included three forest types (white pine, hybrid poplar, mixed hardwood) and three silvicultural regimes (competition control, competition control plus tillage, competition control plus tillage plus fertilization). Each individual treatment plot is 0.5 acres. Each block of nine plots is 4.5 acres, and the complete installation at each site is 13.5 acres. During the reporting period we determined that by grinding the soil samples to a finer particle size of less than 250 μm (sieve No. 60), the effect of mine soil coal particle size on the extent to which these particles will be oxidized during the thermal treatment of the carbon partitioning procedure will be eliminated, thus making the procedure more accurate and precise. In the second phase of the carbon sequestration project, we focused our attention on determining the sample size required for carbon accounting on grassland mined fields in order to achieve a desired accuracy and precision of the final soil organic carbon (SOC) estimate. A mine land site quality classification scheme was developed and some field-testing of the methods of implementation was completed. The classification model has been validated for softwoods (white pine) on several reclaimed mine sites in the southern Appalachian coal region. The classification model is a viable method for classifying post-SMCRA abandoned mined lands into productivity classes for white pine. A thinning study was established as a random complete block design to evaluate the response to thinning of a 26-year-old white pine stand growing on a reclaimed surface mine in southwest Virginia. Stand parameters were projected to age 30 using a stand table projection. Site index of the stand was found to be 32.3 m at base age 50 years. Thinning rapidly increased the diameter growth of the residual trees to 0.84 cm yr⁻¹ compared to 0.58 cm yr⁻¹ for the unthinned treatment; however, at age 26, there was no difference in volume or value per hectare. At age 30, the unthinned treatment had a volume of 457.1 m³ ha⁻¹ but was only worth $8807 ha⁻¹, while the thinned treatment was projected to have 465.8 m³ ha⁻¹, which was worth $11265 ha⁻¹ due to a larger percentage of the volume being in sawtimber size classes.

  16. 7 CFR 51.1406 - Sample for grade or size determination.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ..., AND STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Sample for Grade Or Size Determination § 51.1406 Sample for grade or size determination. Each sample shall consist of 100 pecans. The...

  17. Bayesian probabilistic population projections for all countries

    PubMed Central

    Raftery, Adrian E.; Li, Nan; Ševčíková, Hana; Gerland, Patrick; Heilig, Gerhard K.

    2012-01-01

    Projections of countries’ future populations, broken down by age and sex, are widely used for planning and research. They are mostly done deterministically, but there is a widespread need for probabilistic projections. We propose a Bayesian method for probabilistic population projections for all countries. The total fertility rate and female and male life expectancies at birth are projected probabilistically using Bayesian hierarchical models estimated via Markov chain Monte Carlo using United Nations population data for all countries. These are then converted to age-specific rates and combined with a cohort component projection model. This yields probabilistic projections of any population quantity of interest. The method is illustrated for five countries of different demographic stages, continents and sizes. The method is validated by an out of sample experiment in which data from 1950–1990 are used for estimation, and applied to predict 1990–2010. The method appears reasonably accurate and well calibrated for this period. The results suggest that the current United Nations high and low variants greatly underestimate uncertainty about the number of oldest old from about 2050 and that they underestimate uncertainty for high fertility countries and overstate uncertainty for countries that have completed the demographic transition and whose fertility has started to recover towards replacement level, mostly in Europe. The results also indicate that the potential support ratio (persons aged 20–64 per person aged 65+) will almost certainly decline dramatically in most countries over the coming decades. PMID:22908249
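
    The Bayesian hierarchical models and MCMC estimation are beyond a short example, but the deterministic cohort-component step that each sampled trajectory feeds into can be sketched briefly. The following is a minimal single-sex (female, daughters-only) illustration with made-up numbers, not the authors' implementation:

```python
import numpy as np

def project_cohort_component(pop, fertility, survival, newborn_survival, steps=1):
    """Minimal cohort-component projection with equal-width age groups and one
    projection step per age-group width. pop: population by age group;
    fertility: daughters born per woman per step; survival: proportion surviving
    from each age group into the next (last entry: survival within the open-ended
    oldest group); newborn_survival: proportion of births surviving to group 0."""
    pop = np.asarray(pop, dtype=float)
    for _ in range(steps):
        births = np.sum(fertility * pop)
        new = np.zeros_like(pop)
        new[1:] = pop[:-1] * survival[:-1]     # age each cohort forward
        new[-1] += pop[-1] * survival[-1]      # survivors stay in the oldest group
        new[0] = births * newborn_survival
        pop = new
    return pop

# Illustrative 3-age-group example; all rates are invented.
print(project_cohort_component(pop=[100.0, 80.0, 40.0],
                               fertility=[0.0, 0.9, 0.1],
                               survival=[0.95, 0.9, 0.5],
                               newborn_survival=0.97))
```

    In the probabilistic version, the fertility and survival inputs would be drawn repeatedly from their posterior distributions and the projection repeated per draw, yielding a predictive distribution for any derived quantity such as the potential support ratio.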

  18. Regional haze case studies in the southwestern U.S—I. Aerosol chemical composition

    NASA Astrophysics Data System (ADS)

    Macias, Edward S.; Zwicker, Judith O.; Ouimette, James R.; Hering, Susanne V.; Friedlander, Sheldon K.; Cahill, Thomas A.; Kuhlmey, Gregory A.; Richards, L. Willard

    Aerosol chemical composition as a function of particle size was determined in the southwestern U.S.A. during four weeks of sampling in June, July and December, 1979 as a part of project VISITA. Samples were collected at two ground stations about 80 km apart near Page (AZ) and in two aircraft flying throughout the region. Several different size-separating aerosol samplers and chemical analysis procedures were intercompared and were used in determining the size distribution and elemental composition of the aerosol. Sulfur was shown to be in the form of water-soluble sulfate, highly correlated with ammonium ion, and with an average [NH₄⁺]/[SO₄²⁻] molar ratio of 1.65. During the summer sampling period, three distinct regimes were observed, each with a different aerosol composition. The first, a 24 h sampling ending 30 June, was characterized by a higher than average value of light scattering due to particles (bsp) of 24 × 10⁻⁶ m⁻¹ and a fine particulate mass (Mf) of 8.5 μg m⁻³. The fine particle aerosol was dominated by sulfate and carbon. Aircraft measurements showed the aerosol was homogeneous throughout the region at that time. The second regime, 5 July, had the highest average bsp of 51 × 10⁻⁶ m⁻¹ during the sampling period with Mf of 3.2 μg m⁻³. The fine particle aerosol had nearly equal concentrations of carbon and ammonium sulfate. For all three regimes, enrichment factor analysis indicated fine and coarse particle Cu, Zn, Cl, Br, and Pb and fine particle K were enriched above crustal concentrations relative to Fe, indicating that these elements were present in the aerosol from sources other than wind-blown dust. Particle extinction budgets calculated for the three regimes indicated that fine particles contributed most significantly, with carbon and (NH₄)₂SO₄ making the largest contributions. Fine particle crustal elements including Si did not contribute significantly to the extinction budget during this study. The December sampling was characterized by very light fine particle loading, with two regimes identified. One regime had higher fine mass and sulfate concentrations while the other had low values for all species measured.

  19. The development of miniplex primer sets for the analysis of degraded DNA

    NASA Astrophysics Data System (ADS)

    McCord, Bruce; Opel, Kerry; Chung, Denise; Drabek, Jiri; Tatarek, Nancy; Meadows Jantz, Lee; Butler, John

    2005-05-01

    In this project, a new set of multiplexed PCR reactions has been developed for the analysis of degraded DNA. These DNA markers, known as Miniplexes, utilize primers that produce shorter amplicons for use in short tandem repeat (STR) analysis of degraded DNA. In our work we have defined six of these new STR multiplexes, each of which consists of 3 to 4 reduced-size STR loci, and each labeled with a different fluorescent dye. When compared to commercially available STR systems, reductions in size of up to 300 base pairs are possible. In addition, these newly designed amplicons consist of loci that are fully compatible with the national computer DNA database known as CODIS. To demonstrate compatibility with commercial STR kits, a concordance study of 532 DNA samples of Caucasian, African American, and Hispanic origin was undertaken. There was 99.77% concordance between allele calls with the two methods. Of these 532 samples, only 15 samples showed discrepancies at one of 12 loci. These occurred predominantly at 2 loci, vWA and D13S317. DNA sequencing revealed that these locations had deletions between the two primer binding sites. Uncommon deletions like these can be expected in certain samples and will not affect the utility of the Miniplexes as tools for degraded DNA analysis. The Miniplexes were also applied to enzymatically digested DNA to assess their potential in degraded DNA analysis. The results demonstrated a greatly improved efficiency in the analysis of degraded DNA when compared to commercial STR genotyping kits. A series of human skeletal remains that had been exposed to a variety of environmental conditions were also examined. Sixty-four percent of the samples generated full profiles when amplified with the Miniplexes, while only sixteen percent of the samples tested generated full profiles with a commercial kit. In addition, complete profiles were obtained for eleven of the twelve Miniplex loci which had amplicon size ranges less than 200 base pairs. These data clearly demonstrate that smaller PCR amplicons provide an attractive alternative to mitochondrial DNA for forensic analysis of degraded DNA.

  20. The quality of the reported sample size calculations in randomized controlled trials indexed in PubMed.

    PubMed

    Lee, Paul H; Tse, Andy C Y

    2017-05-01

    There are limited data on the quality of reporting of the information essential for replicating a sample size calculation, as well as on the accuracy of the calculation itself. We examined the current quality of reporting of the sample size calculation in randomized controlled trials (RCTs) published in PubMed and examined the variation in reporting across study design, study characteristics, and journal impact factor. We also reviewed the targeted sample size reported in trial registries. We reviewed and analyzed all RCTs published in December 2014 in journals indexed in PubMed. The 2014 Impact Factors for the journals were used as proxies for their quality. Of the 451 analyzed papers, 58.1% reported an a priori sample size calculation. Nearly all papers provided the level of significance (97.7%) and desired power (96.6%), and most of the papers reported the minimum clinically important effect size (73.3%). The median percentage difference between the reported and the recalculated sample size was 0.0% (inter-quartile range -4.6% to 3.0%). The accuracy of the reported sample size was better for studies published in journals that endorsed the CONSORT statement and journals with an impact factor. A total of 98 papers had provided a targeted sample size in trial registries, and about two-thirds of these papers (n=62) reported a sample size calculation, but only 25 (40.3%) had no discrepancy with the number reported in the trial registries. The reporting of the sample size calculation in RCTs published in PubMed-indexed journals and in trial registries was poor. The CONSORT statement should be more widely endorsed. Copyright © 2016 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.
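
    Reproducing a reported calculation of this kind typically needs only the reported significance level, power, and minimum clinically important difference. As an illustration only (the reviewed trials used many different designs, so this is not the paper's checking procedure), the standard normal-approximation formula for a two-arm comparison of means is:

```python
from math import ceil
from scipy.stats import norm

def n_per_arm(delta, sd, alpha=0.05, power=0.80):
    """Per-arm sample size for a two-sided, two-sample comparison of means:
    n = 2 * (z_{1-alpha/2} + z_{1-beta})^2 * sd^2 / delta^2."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return ceil(2 * (z_alpha + z_beta) ** 2 * sd ** 2 / delta ** 2)

# e.g. detecting a 5-point difference with SD 12 at alpha = 0.05 and 80% power
print(n_per_arm(delta=5, sd=12))   # -> 91 per arm
```

    Comparing such a recomputed figure with the number stated in the paper (and with the target in the trial registry) is the kind of discrepancy check summarized in the abstract.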

  1. McGee Mountain Geoprobe Survey, Humboldt County, Nevada

    DOE Data Explorer

    Richard Zehner

    2010-01-01

    This shapefile contains location and attribute data for a Geoprobe temperature survey conducted by Geothermal Technical Partners, Inc. during 2010. The purpose of direct push technology (“DPT”) probe activity at the McGee Mtn. Project, Nevada was to 1) determine bottom hole temperatures using nominal 1.5 inch probe tooling to place resistance temperature detectors (“RTD”) and 2) take water samples, if possible, to characterize the geothermometry of the system. A total of 23 holes were probed in five days for a cumulative total of 857.5 ft at 21 sites at McGee Mountain. The probed holes ranged in depth from a maximum of 75 ft to a minimum of 10 ft and averaged 37.3 ft. The average temperature of the 23 holes was 18.9°C, with a range of 12.0°C at site MMTG#1b to 42.0°C at site MMTG#19. No water was encountered in any of the probed holes, with the exception of MMTG#10, and no water was collected for sampling. Zip file containing ArcView shapefile in UTM11 NAD83 projection; 5 kb file size.

  2. Assessment of Physical and Mechanical Properties of Cement Panel Influenced by Treated and Untreated Coconut Fiber Addition

    NASA Astrophysics Data System (ADS)

    Abdullah, Alida; Jamaludin, Shamsul Baharin; Anwar, Mohamed Iylia; Noor, Mazlee Mohd; Hussin, Kamarudin

    This project was conducted to produce a cement panel with the addition of treated and untreated coconut fiber. Coconut fiber was added to replace coarse aggregate (sand) in the cement panel. The ratios used to design the mixture were 1:1:0, 1:0.97:0.03, 1:0.94:0.06, and 1:0.91:0.09 (cement:sand:coconut fiber), and the water-cement ratio was held constant at 0.55. The sample sizes tested were 160 mm × 40 mm × 40 mm for the compression test and 100 mm × 100 mm × 40 mm for the density, moisture content, and water absorption tests. After curing the samples for 28 days, it was found that the addition of untreated coconut fiber further increased the compressive strength of the cement panel. The moisture content of the cement panel with treated coconut fiber increased with increasing coconut fiber content, whereas the water absorption of the cement panel with untreated coconut fiber increased with increasing coconut fiber content. The density of the cement panel decreased with the addition of both untreated and treated coconut fiber.

  3. Data for ground-water test hole near Zamora, Central Valley Aquifer Project, California

    USGS Publications Warehouse

    French, J.J.; Page, R.W.; Bertoldi, G.L.

    1982-01-01

    Preliminary data are presented for the first of seven test holes drilled as a part of the Central Valley Aquifer Project which is part of the National Regional Aquifer Systems Analysis Program. The test hole was drilled in the SW 1/4 SE 1/4 sec. 34, T. 12 N. , R. 1 E., Yolo County, California, about 3 miles northeast of the town of Zamora. Drilled to a depth of 2,500 feet below land surface, the hole is cased to a depth of 190 feet and equipped with three piezometer tubes to depths of 947, 1,401, and 2,125 feet. A 5-foot well screen is at the bottom of each piezometer. Eighteen cores and 68 sidewall cores were recovered. Laboratory tests were made for mineralogy, hydraulic conductivity, porosity , consolidation, grain-size distribution, Atterberg limits, X-ray diffraction, diatom identification, thermal conductivity, and chemical analysis of water. Geophysical and thermal gradient logs were made. The hole is sampled periodically for chemical analysis and measured for water level in the three tapped zones. This report presents methods used to obtain field samples, laboratory procedures, and the data obtained. (USGS)

  4. Improved Use of Small Reference Panels for Conditional and Joint Analysis with GWAS Summary Statistics.

    PubMed

    Deng, Yangqing; Pan, Wei

    2018-06-01

    Due to issues of practicality and confidentiality of genomic data sharing on a large scale, typically only meta- or mega-analyzed genome-wide association study (GWAS) summary data, not individual-level data, are publicly available. Reanalyses of such GWAS summary data for a wide range of applications have become increasingly common and useful; such reanalyses often require the use of an external reference panel with individual-level genotypic data to infer linkage disequilibrium (LD) among genetic variants. However, with a sample size of only a few hundred, as for the most popular 1000 Genomes Project European sample, estimation errors for LD are not negligible, often leading to dramatically increased numbers of false positives in subsequent analyses of GWAS summary data. To alleviate the problem in the context of association testing for a group of SNPs, we propose an alternative estimator of the covariance matrix with an idea similar to multiple imputation. We use numerical examples based on both simulated and real data to demonstrate the severe problem with the use of the 1000 Genomes Project reference panels, and the improved performance of our new approach. Copyright © 2018 by the Genetics Society of America.

  5. Evaluation of dredged material proposed for ocean disposal from Shark River Project area

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Antrim, L.D.; Gardiner, W.W.; Barrows, E.S.

    1996-09-01

    The objective of the Shark River Project was to evaluate proposed dredged material to determine its suitability for unconfined ocean disposal at the Mud Dump Site. Tests and analyses were conducted on the Shark River sediments. The evaluation of proposed dredged material consisted of bulk sediment chemical and physical analysis, chemical analyses of dredging site water and elutriate, water-column and benthic acute toxicity tests, and bioaccumulation tests. Individual sediment core samples collected from the Shark River were analyzed for grain size, moisture content, and total organic carbon (TOC). One sediment composite was analyzed for bulk density, specific gravity, metals, chlorinated pesticides, polychlorinated biphenyl (PCB) congeners, polynuclear aromatic hydrocarbons (PAHs), and 1,4-dichlorobenzene. Dredging site water and elutriate, prepared from suspended-particulate phase (SPP) of the Shark River sediment composite, were analyzed for metals, pesticides, and PCBs. Benthic acute toxicity tests and bioaccumulation tests were performed.

  6. OpenMDAO: Framework for Flexible Multidisciplinary Design, Analysis and Optimization Methods

    NASA Technical Reports Server (NTRS)

    Heath, Christopher M.; Gray, Justin S.

    2012-01-01

    The OpenMDAO project is underway at NASA to develop a framework that simplifies the implementation of state-of-the-art tools and methods for multidisciplinary design, analysis and optimization. Foremost, OpenMDAO has been designed to handle variable problem formulations, encourage reconfigurability, and promote model reuse. This work demonstrates the concept of iteration hierarchies in OpenMDAO to achieve a flexible environment for supporting advanced optimization methods which include adaptive sampling and surrogate modeling techniques. In this effort, two efficient global optimization methods were applied to solve constrained single-objective and constrained multiobjective versions of a joint aircraft/engine sizing problem. The aircraft model, NASA's next-generation advanced single-aisle civil transport, is being studied as part of the Subsonic Fixed Wing project to help meet simultaneous program goals for reduced fuel burn, emissions, and noise. This analysis serves as a realistic test problem to demonstrate the flexibility and reconfigurability offered by OpenMDAO.

  7. Microbial utilization of nitrogen in cold core eddies: size does matter

    NASA Astrophysics Data System (ADS)

    McInnes, A.; Messer, L. F.; Laiolo, L.; Laverock, B.; Laczka, O.; Brown, M. V.; Seymour, J.; Doblin, M.

    2016-02-01

    As the base of the marine food web, and the first step in the biological carbon pump, understanding changes in microbial community composition is essential for predicting changes in the marine nitrogen (N) cycle. Climate change projections suggest that oligotrophic waters will become more stratified with a concomitant shift in microbial community composition based on changes in N supply. In regions of strong boundary currents, eddies could reduce this limitation through nutrient uplift and other forms of eddy mixing. Understanding the preference for different forms of N by microbes is essential for understanding and predicting shifts in the microbial community. This study aims to understand the utilization of different N species within different microbial size fractions as well as understand the preferred source of N to these groups across varying mesoscale and sub-mesoscale features in the East Australian Current (EAC). In June 2015 we sampled microbial communities from three depths (surface, chlorophyll-a maximum and below the mixed layer), in three mesoscale and sub-mesoscale eddy features, as well as two end-point water masses (coastal and oligotrophic EAC water). Particulate matter was analysed for stable C and N isotopes, and seawater incubations with trace amounts of 15NO3, 15NH4, 15N2, 15Urea and 13C were undertaken. All samples were size fractionated into 0.3-2.0 µm, 2.0-10 µm, and >10 µm size classes, encompassing the majority of microbes in these waters. Microbial community composition was also assessed (pigments, flow cytometry, DNA), as well as physical and chemical parameters, to better understand the drivers of carbon fixation and nitrogen utilization across a diversity of water masses and microbial size classes. We observed that small, young features have a greater abundance of larger size classes. We therefore predict that these microbes will preferentially draw down the recently pulsed NO3. Ultimately, the size and age of a feature will determine the N compound utilization and microbial community composition and as the feature grows in size and age a community succession will lead to differential more diverse N compound utilization.

  8. The ICF Core Sets for hearing loss--researcher perspective. Part I: Systematic review of outcome measures identified in audiological research.

    PubMed

    Granberg, Sarah; Dahlström, Jennie; Möller, Claes; Kähäri, Kim; Danermark, Berth

    2014-02-01

    To review the literature in order to identify outcome measures used in research on adults with hearing loss (HL) as part of the ICF Core Sets development project, and to describe study and population characteristics of the reviewed studies. A systematic review methodology was applied using multiple databases. A comprehensive search was conducted and two search pools were created, pool I and pool II. The study population included adults (≥ 18 years of age) with HL and oral language as the primary mode of communication. 122 studies were included. Outcome measures were distinguished by 'instrument type', and 10 types were identified. In total, 246 (pool I) and 122 (pool II) different measures were identified, and only approximately 20% were extracted twice or more. Most measures were related to speech recognition. Fifty-one different questionnaires were identified. Many studies used small sample sizes, and the sex of participants was not revealed in several studies. The low prevalence of identified measures reflects a lack of consensus regarding the optimal outcome measures to use in audiology. Reflections and discussions are made in relation to small sample sizes and the lack of sex differentiation/descriptions within the included articles.

  9. Methods for the behavioral, educational, and social sciences: an R package.

    PubMed

    Kelley, Ken

    2007-11-01

    Methods for the Behavioral, Educational, and Social Sciences (MBESS; Kelley, 2007b) is an open source package for R (R Development Core Team, 2007b), an open source statistical programming language and environment. MBESS implements methods that are not widely available elsewhere, yet are especially helpful for the idiosyncratic techniques used within the behavioral, educational, and social sciences. The major categories of functions are those that relate to confidence interval formation for noncentral t, F, and chi2 parameters, confidence intervals for standardized effect sizes (which require noncentral distributions), and sample size planning issues from the power analytic and accuracy in parameter estimation perspectives. In addition, MBESS contains collections of other functions that should be helpful to substantive researchers and methodologists. MBESS is a long-term project that will continue to be updated and expanded so that important methods can continue to be made available to researchers in the behavioral, educational, and social sciences.
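
    MBESS itself is an R package. As a rough Python analogue of one computation it provides (a noncentral-t confidence interval for a two-group standardized mean difference), and with the function below purely illustrative rather than part of MBESS, the interval can be obtained by inverting the observed t statistic with a root search:

```python
from math import sqrt
from scipy.stats import nct
from scipy.optimize import brentq

def ci_smd(smd, n1, n2, conf_level=0.95):
    """Confidence interval for a standardized mean difference: find the
    noncentrality parameters whose tails place the observed t statistic at
    alpha/2 and 1 - alpha/2, then rescale back to the effect-size metric."""
    alpha = 1 - conf_level
    df = n1 + n2 - 2
    scale = sqrt(n1 * n2 / (n1 + n2))
    t_obs = smd * scale
    span = abs(t_obs) + 10                      # bracket for the root search
    nc_low = brentq(lambda nc: nct.cdf(t_obs, df, nc) - (1 - alpha / 2),
                    t_obs - span, t_obs + span)
    nc_high = brentq(lambda nc: nct.cdf(t_obs, df, nc) - alpha / 2,
                     t_obs - span, t_obs + span)
    return nc_low / scale, nc_high / scale

print(ci_smd(smd=0.5, n1=50, n2=50))   # roughly (0.10, 0.90)
```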

  10. A longitudinal examination of factors associated with social support satisfaction among HIV-positive young Black men who have sex with men.

    PubMed

    McCullagh, Charlotte; Quinn, Katherine; Voisin, Dexter R; Schneider, John

    2017-12-01

    This study examined the long-term predictors of social support satisfaction among HIV-positive young Black men who have sex with men (YBMSM). Data were collected across three waves between October 2012 and November 2014 as part of the baseline assessment from Project nGage, a preliminary efficacy randomized control study examining the role of social support in improving HIV care among YBMSM. The sample included 92 YBMSM aged 18-29. Major results controlling for age, education and intervention effects indicated that psychological health, social network size, and education at baseline predicted differences in social support satisfaction at Wave 3, with no significant effects based on length of HIV diagnosis. Therefore, interventions that are intended to promote the quality of life for YBMSM and their engagement and retention in HIV care must focus on their psychological health concerns and network size.

  11. Distribution of the two-sample t-test statistic following blinded sample size re-estimation.

    PubMed

    Lu, Kaifeng

    2016-05-01

    We consider the blinded sample size re-estimation based on the simple one-sample variance estimator at an interim analysis. We characterize the exact distribution of the standard two-sample t-test statistic at the final analysis. We describe a simulation algorithm for the evaluation of the probability of rejecting the null hypothesis at a given treatment effect. We compare the blinded sample size re-estimation method with two unblinded methods with respect to the empirical type I error, the empirical power, and the empirical distribution of the standard deviation estimator and final sample size. We characterize the type I error inflation across the range of standardized non-inferiority margins for non-inferiority trials, and derive the adjusted significance level to ensure type I error control for a given sample size of the internal pilot study. We show that the adjusted significance level increases as the sample size of the internal pilot study increases. Copyright © 2016 John Wiley & Sons, Ltd.
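
    A minimal sketch of the blinded re-estimation idea analyzed in the abstract, simplified to a normal-approximation sample size formula and hypothetical interim data (the paper's exact procedure, simulation algorithm, and adjusted significance levels are not reproduced):

```python
import numpy as np
from math import ceil
from scipy.stats import norm

def blinded_reestimated_n(pooled_interim_data, delta, alpha=0.05, power=0.90):
    """Re-estimate the per-arm sample size from the blinded (one-sample) variance
    of the pooled interim data, for an assumed treatment effect delta."""
    s2 = np.var(pooled_interim_data, ddof=1)    # blinded lumped variance
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    return ceil(2 * (z_a + z_b) ** 2 * s2 / delta ** 2)

rng = np.random.default_rng(1)
interim = rng.normal(loc=10.0, scale=4.0, size=60)   # blinded: both arms mixed
print(blinded_reestimated_n(interim, delta=2.0))
```

    The lumped variance overstates the within-group variance by roughly delta squared over four under equal allocation when the treatment truly works, which is one reason the exact distribution of the final test statistic needs the careful characterization the paper provides.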

  12. Developing the Noncentrality Parameter for Calculating Group Sample Sizes in Heterogeneous Analysis of Variance

    ERIC Educational Resources Information Center

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2011-01-01

    Sample size determination is an important issue in planning research. In the context of one-way fixed-effect analysis of variance, the conventional sample size formula cannot be applied for the heterogeneous variance cases. This study discusses the sample size requirement for the Welch test in the one-way fixed-effect analysis of variance with…

  13. Sample Size Determination for Regression Models Using Monte Carlo Methods in R

    ERIC Educational Resources Information Center

    Beaujean, A. Alexander

    2014-01-01

    A common question asked by researchers using regression models is, What sample size is needed for my study? While there are formulae to estimate sample sizes, their assumptions are often not met in the collected data. A more realistic approach to sample size determination requires more information such as the model of interest, strength of the…
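
    The article works in R; the same Monte Carlo idea can be sketched in Python, with the model values (slope, error SD, grid of candidate sample sizes) chosen arbitrarily for illustration:

```python
import numpy as np
from scipy import stats

def empirical_power(n, beta1, sigma, n_sims=2000, alpha=0.05, seed=0):
    """Monte Carlo power for testing the slope in y = beta0 + beta1 * x + e."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sims):
        x = rng.normal(size=n)
        y = 1.0 + beta1 * x + rng.normal(scale=sigma, size=n)
        slope, intercept, r, p_value, se = stats.linregress(x, y)
        rejections += p_value < alpha
    return rejections / n_sims

# Smallest n on a coarse grid reaching 80% power for beta1 = 0.3, sigma = 1
for n in range(20, 201, 10):
    if empirical_power(n, beta1=0.3, sigma=1.0) >= 0.80:
        print("approximate required n:", n)
        break
```

    Because the data-generating model can include whatever departures from textbook assumptions are expected (skewed errors, unequal variances, and so on), the simulated power reflects the analysis that will actually be run, which is the advantage the abstract points to.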

  14. Nomogram for sample size calculation on a straightforward basis for the kappa statistic.

    PubMed

    Hong, Hyunsook; Choi, Yunhee; Hahn, Seokyung; Park, Sue Kyung; Park, Byung-Joo

    2014-09-01

    Kappa is a widely used measure of agreement. However, it may not be straightforward in some situations, such as sample size calculation, due to the kappa paradox: high agreement but low kappa. Hence, for sample size calculation it seems reasonable to consider the level of agreement under a certain marginal prevalence in terms of a simple proportion of agreement rather than a kappa value. Therefore, sample size formulae and nomograms using a simple proportion of agreement rather than a kappa under certain marginal prevalences are proposed. A sample size formula was derived using the kappa statistic under the common correlation model and a goodness-of-fit statistic. The nomogram for the sample size formula was developed using SAS 9.3. Sample size formulae using a simple proportion of agreement instead of a kappa statistic, and nomograms that eliminate the inconvenience of using a mathematical formula, were produced. A nomogram for sample size calculation with a simple proportion of agreement should be useful in the planning stages when the focus of interest is on testing the hypothesis of interobserver agreement involving two raters and nominal outcome measures. Copyright © 2014 Elsevier Inc. All rights reserved.

  15. Sample size calculation in cost-effectiveness cluster randomized trials: optimal and maximin approaches.

    PubMed

    Manju, Md Abu; Candel, Math J J M; Berger, Martijn P F

    2014-07-10

    In this paper, the optimal sample sizes at the cluster and person levels for each of two treatment arms are obtained for cluster randomized trials where the cost-effectiveness of treatments on a continuous scale is studied. The optimal sample sizes maximize the efficiency or power for a given budget or minimize the budget for a given efficiency or power. Optimal sample sizes require information on the intra-cluster correlations (ICCs) for effects and costs, the correlations between costs and effects at individual and cluster levels, the ratio of the variance of effects translated into costs to the variance of the costs (the variance ratio), sampling and measuring costs, and the budget. When planning a study, information on the model parameters is usually not available. To overcome this local optimality problem, the current paper also presents maximin sample sizes. The maximin sample sizes turn out to be rather robust against misspecifying the correlation between costs and effects at the cluster and individual levels but may lose much efficiency when misspecifying the variance ratio. The robustness of the maximin sample sizes against misspecifying the ICCs depends on the variance ratio. The maximin sample sizes are robust under misspecification of the ICC for costs for realistic values of the variance ratio greater than one but not robust under misspecification of the ICC for effects. Finally, we show how to calculate optimal or maximin sample sizes that yield sufficient power for a test on the cost-effectiveness of an intervention.

  16. Surface-water-quality assessment of the lower Kansas River basin, Kansas and Nebraska; project data November 1986 through April 1990

    USGS Publications Warehouse

    Fallon, J.D.; McChesney, J.A.

    1993-01-01

    Surface-water-quality data were collected from the lower Kansas River Basin in Kansas and Nebraska. The data are presented in 17 tables consisting of physical properties, concentrations of dissolved solids and major ions, dissolved and total nutrients, dissolved and total major metals and trace elements, radioactivity, organic carbon, pesticides and other synthetic-organic compounds, bacteria and chlorophyll-a, in water; particle-size distributions and concentrations of major metals and trace elements in suspended and streambed sediment; and concentrations of synthetic-organic compounds in streambed sediment. The data are grouped within each table by sampling sites, arranged in downstream order. Ninety-one sites were sampled in the study area. These sampling sites are classified in three, non-exclusive categories (fixed, synoptic, and miscellaneous sites) on the basis of sampling frequency and location. Sampling sites are presented on a plate and in 3 tables, cross-referenced by downstream order, alphabetical order, U.S. Geological Survey identification number, sampling-site classification category, and types of analyses performed at each site. The methods used to collect, analyze, and verify the accuracy of the data also are presented. (USGS)

  17. An Overview of the Thermal Challenges of Designing Microgravity Furnaces

    NASA Technical Reports Server (NTRS)

    Westra, Douglas G.

    2001-01-01

    Marshall Space Flight Center is involved in a wide variety of microgravity projects that require furnaces, with hot zone temperatures ranging from 300 C to 2300 C, requirements for gradient processing and rapid quench, and both semi-conductor and metal materials. On these types of projects, the thermal engineer is a key player in the design process. Microgravity furnaces present unique challenges to the thermal designer. One challenge is designing a sample containment assembly that achieves dual containment, yet allows a high radial heat flux. Another challenge is providing a high axial gradient but a very low radial gradient. These furnaces also present unique challenges to the thermal analyst. First, there are several orders of magnitude difference in the size of the thermal 'conductors' between various parts of the model. A second challenge is providing high fidelity in the sample model, and connecting the sample with the rest of the furnace model, yet maintaining some sanity in the number of total nodes in the model. The purpose of this paper is to present an overview of the challenges involved in designing and analyzing microgravity furnaces and how some of these challenges have been overcome. The thermal analysis tools presently used to analyze microgravity furnaces will be listed. Challenges for the future and a description of future analysis tools will be given.

  18. Sample size determination in group-sequential clinical trials with two co-primary endpoints

    PubMed Central

    Asakura, Koko; Hamasaki, Toshimitsu; Sugimoto, Tomoyuki; Hayashi, Kenichi; Evans, Scott R; Sozu, Takashi

    2014-01-01

    We discuss sample size determination in group-sequential designs with two endpoints as co-primary. We derive the power and sample size within two decision-making frameworks. One is to claim the test intervention’s benefit relative to control when superiority is achieved for the two endpoints at the same interim timepoint of the trial. The other is when the superiority is achieved for the two endpoints at any interim timepoint, not necessarily simultaneously. We evaluate the behaviors of sample size and power with varying design elements and provide a real example to illustrate the proposed sample size methods. In addition, we discuss sample size recalculation based on observed data and evaluate the impact on the power and Type I error rate. PMID:24676799

  19. Parameterization of volcanic ash remobilization by wind-tunnel erosion experiments.

    NASA Astrophysics Data System (ADS)

    Del Bello, Elisabetta; Taddeucci, Jacopo; Merrison, Jonathan; Alois, Stefano; Iversen, Jens Jacob; Scarlato, Piergiorgio

    2017-04-01

    The remobilization of volcanic ash from the ground is one of the many problems posing a threat to life and infrastructure during and after the course of an explosive volcanic eruption. Proper management of the risks connected to this problem requires a thorough understanding of the factors that influence and promote the dispersal of particles over large distances. Towards this target, we conducted a series of experiments aimed at defining the first-order processes controlling the remobilization threshold of ash particles by wind erosion. In the framework of the EU-funded Europlanet project, we jointly used the environmental wind tunnel facility at Aarhus University (DK) and the state-of-the-art high-speed imaging equipment of the INGV experimental lab (Italy) to capture at unparalleled temporal and spatial resolution the removal dynamics of ash-sized (half-millimetre to micron-sized) particles. A homogeneous layer of particles was set on a plate placed downwind of a boundary layer setup. Resuspension processes were filmed at 2000 fps and 50 micron pixel resolution, and the plate was weighed pre- and post-experiment. Explored variables include: 1) wind speed (from ca. 1 to 7 m/s) and boundary layer structure; 2) particle grain size (from 32-63 to 90-125 micron) and sample sorting; 3) chemical and textural features, using basalt and trachyte samples from the Campi Flegrei (Pomici Principali, 10 ka) and Eyjafjallajökull (May 2010) eruptions; and 4) temperature and humidity, by conducting experiments either at ambient conditions or with a heated sample. We found that the grain size distribution exerts a strong control on the fundamental dynamics of gas-particle coupling. Particles > 90 micron detach from the particle layer individually and also enter the gas flow individually. Conversely, removal of < 63 micron particles occurs in clumps of aggregates. These clumps, once taken up by the gas flow, are frequently disaggregated and dispersed rapidly (on the order of a few milliseconds). Our preliminary results show that, for a given size distribution, the boundary between the two dynamics may shift greatly as a function of ambient humidity.

  20. Approximate sample size formulas for the two-sample trimmed mean test with unequal variances.

    PubMed

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2007-05-01

    Yuen's two-sample trimmed mean test statistic is one of the most robust methods to apply when variances are heterogeneous. The present study develops formulas for the sample size required for the test. The formulas are applicable for the cases of unequal variances, non-normality and unequal sample sizes. Given the specified alpha and the power (1-beta), the minimum sample size needed by the proposed formulas under various conditions is less than is given by the conventional formulas. Moreover, given a specified size of sample calculated by the proposed formulas, simulation results show that Yuen's test can achieve statistical power which is generally superior to that of the approximate t test. A numerical example is provided.
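
    For readers unfamiliar with the test these formulas target, a minimal implementation of Yuen's two-sample statistic on 20%-trimmed means with Winsorized variances is sketched below (the sample size formulas themselves are given in the paper and are not reproduced here):

```python
import numpy as np
from scipy import stats

def yuen_test(x, y, trim=0.2):
    """Yuen's two-sample test on trimmed means with Winsorized variances."""
    def trimmed_parts(a):
        a = np.sort(np.asarray(a, dtype=float))
        n = a.size
        g = int(np.floor(trim * n))
        h = n - 2 * g                               # effective (trimmed) sample size
        t_mean = a[g:n - g].mean()
        w = a.copy()
        w[:g], w[n - g:] = a[g], a[n - g - 1]       # Winsorize the tails
        ssd_w = np.sum((w - w.mean()) ** 2)         # Winsorized sum of squares
        return t_mean, ssd_w / (h * (h - 1)), h
    m1, d1, h1 = trimmed_parts(x)
    m2, d2, h2 = trimmed_parts(y)
    t_stat = (m1 - m2) / np.sqrt(d1 + d2)
    c = d1 / (d1 + d2)
    df = 1.0 / (c ** 2 / (h1 - 1) + (1 - c) ** 2 / (h2 - 1))
    p_value = 2 * stats.t.sf(abs(t_stat), df)
    return t_stat, df, p_value

rng = np.random.default_rng(2)
print(yuen_test(rng.normal(0, 1, 30), rng.normal(0.8, 3, 40)))
```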

  1. Determinants of the cost of capital for privately financed hospital projects in the UK.

    PubMed

    Colla, Paolo; Hellowell, Mark; Vecchi, Veronica; Gatti, Stefano

    2015-11-01

    Many governments make use of private finance contracts to deliver healthcare infrastructure. Previous work has shown that the rate of return to investors in these markets often exceeds the efficient level. Our focus is on the factors that influence that return. We examine the effect of macroeconomic, project- and firm-level variables using a detailed sample of 84 UK private finance initiative (PFI) contracts signed between 1997 and 2010. Of the above variables, macroeconomic conditions and lead sponsor size are related to the investor return. However, our results show a remarkable degree of stability in the return to investors over the 14-year period. We find evidence of a 'prevailing norm' that is robust to project- and firm-level variation. The sustainability of excess returns over a long period is indicative of a concentrated market structure. We argue that policymakers should consider new mechanisms for increasing competition in the equity market, while ensuring that authorities have the specialist resources required to negotiate efficient contract prices. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  2. An epidemiological approach to welfare research in zoos: the Elephant Welfare Project.

    PubMed

    Carlstead, Kathy; Mench, Joy A; Meehan, Cheryl; Brown, Janine L

    2013-01-01

    Multi-institutional studies of welfare have proven to be valuable in zoos but are hampered by limited sample sizes and difficulty in evaluating more than just a few welfare indicators. To more clearly understand how interactions of husbandry factors influence the interrelationships among welfare outcomes, epidemiological approaches are needed as well as multifactorial assessments of welfare. Many questions have been raised about the housing and care of elephants in zoos and whether their environmental and social needs are being met in a manner that promotes good welfare. This article describes the background and rationale for a large-scale study of elephant welfare in North American zoos funded by the (U.S.) Institute of Museum and Library Services. The goals of this project are to document the prevalence of positive and negative welfare states in 291 elephants exhibited in 72 Association of Zoos and Aquariums zoos and then determine the environmental, management, and husbandry factors that impact elephant welfare. This research is the largest scale nonhuman animal welfare project ever undertaken by the zoo community, and the scope of environmental variables and welfare outcomes measured is unprecedented.

  3. High-throughput dual-colour precision imaging for brain-wide connectome with cytoarchitectonic landmarks at the cellular level

    PubMed Central

    Gong, Hui; Xu, Dongli; Yuan, Jing; Li, Xiangning; Guo, Congdi; Peng, Jie; Li, Yuxin; Schwarz, Lindsay A.; Li, Anan; Hu, Bihe; Xiong, Benyi; Sun, Qingtao; Zhang, Yalun; Liu, Jiepeng; Zhong, Qiuyuan; Xu, Tonghui; Zeng, Shaoqun; Luo, Qingming

    2016-01-01

    The precise annotation and accurate identification of neural structures are prerequisites for studying mammalian brain function. The orientation of neurons and neural circuits is usually determined by mapping brain images to coarse axial-sampling planar reference atlases. However, individual differences at the cellular level likely lead to position errors and an inability to orient neural projections at single-cell resolution. Here, we present a high-throughput precision imaging method that can acquire a co-localized brain-wide data set of both fluorescent-labelled neurons and counterstained cell bodies at a voxel size of 0.32 × 0.32 × 2.0 μm in 3 days for a single mouse brain. We acquire mouse whole-brain imaging data sets of multiple types of neurons and projections with anatomical annotation at single-neuron resolution. The results show that the simultaneous acquisition of labelled neural structures and cytoarchitecture reference in the same brain greatly facilitates precise tracing of long-range projections and accurate locating of nuclei. PMID:27374071

  4. IMPROVING TACONITE PROCESSING PLANT EFFICIENCY BY COMPUTER SIMULATION, Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    William M. Bond; Salih Ersayin

    2007-03-30

    This project involved industrial scale testing of a mineral processing simulator to improve the efficiency of a taconite processing plant, namely the Minorca mine. The Concentrator Modeling Center at the Coleraine Minerals Research Laboratory, University of Minnesota Duluth, enhanced the capabilities of available software, Usim Pac, by developing mathematical models needed for accurate simulation of taconite plants. This project provided funding for this technology to prove itself in the industrial environment. As the first step, data representing existing plant conditions were collected by sampling and sample analysis. Data were then balanced and provided a basis for assessing the efficiency of individual devices and the plant, and also for performing simulations aimed at improving plant efficiency. Performance evaluation served as a guide in developing alternative process strategies for more efficient production. A large number of computer simulations were then performed to quantify the benefits and effects of implementing these alternative schemes. Modification of makeup ball size was selected as the most feasible option for the target performance improvement. This was combined with replacement of existing hydrocyclones with more efficient ones. After plant implementation of these modifications, plant sampling surveys were carried out to validate findings of the simulation-based study. Plant data showed very good agreement with the simulated data, confirming results of simulation. After the implementation of modifications in the plant, several upstream bottlenecks became visible. Despite these bottlenecks limiting full capacity, concentrator energy improvement of 7% was obtained. Further improvements in energy efficiency are expected in the near future. The success of this project demonstrated the feasibility of a simulation-based approach. Currently, the Center provides simulation-based service to all the iron ore mining companies operating in northern Minnesota, and future proposals are pending with non-taconite mineral processing applications.

  5. Simulated 'On-Line' Wear Metal Analysis of Lubricating Oils by X-Ray Fluorescence Spectroscopy

    NASA Technical Reports Server (NTRS)

    Kelliher, Warren C.; Partos, Richard D.; Nelson, Irina

    1996-01-01

    The objective of this project was to assess the sensitivity of X-ray Fluorescence Spectroscopy (XFS) for quantitative evaluation of metal particle content in engine oil suspensions and the feasibility of real-time, dynamic wear metal analysis. The study was focused on iron as the majority wear metal component. Variable parameters were: particle size, particle concentration and oil velocity. A commercial XFS spectrometer equipped with interchangeable static/dynamic (flow cell) sample chambers was used. XFS spectra were recorded for solutions of Fe-organometallic standard and for a series of DTE oil suspensions of high purity spherical iron particles of 2 μm, 4 μm, and 8 μm diameter, at concentrations from 5 ppm to 5,000 ppm. Real contaminated oil samples from Langley Air Force Base aircraft engines and NASA Langley Research Center wind tunnels were also analyzed. The experimental data confirm the reliability of XFS as the analytical method of choice for this project. Intrinsic inadequacies of the instrument for precise analytic work at low metal concentrations were identified as being related to the particular x-ray beam definition, system geometry, and flow-cell materials selection. This work supports a proposal for the design, construction and testing of a conceptually new, miniature XFS spectrometer with superior performance, dedicated to on-line, real-time monitoring of lubricating oils in operating engines. Innovative design solutions include focalization of the incident x-ray beam, non-metal sample chamber, and miniaturization of the overall assembly. The instrument would contribute to prevention of catastrophic engine failures. A proposal for two-year funding has been presented to NASA Langley Research Center Internal Operation Group (IOG) Management, to continue the effort begun by this summer's project.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jomekian, A.; Faculty of Chemical Engineering, Iran University of Science and Technology; Behbahani, R.M., E-mail: behbahani@put.ac.ir

    Ultra-porous ZIF-8 particles were synthesized using PEO/PA6-based poly(ether-block-amide) (Pebax 1657) as a structure-directing agent. Structural properties of ZIF-8 samples prepared under different synthesis parameters were investigated by laser particle size analysis, XRD, N₂ adsorption analysis, and BJH and BET tests. The overall results showed that: (1) the mean pore size of all ZIF-8 samples increased remarkably (from 0.34 nm to 1.1–2.5 nm) compared to conventionally synthesized ZIF-8 samples; (2) an exceptional BET surface area of 1869 m²/g was obtained for a ZIF-8 sample with a mean pore size of 2.5 nm; (3) applying high concentrations of Pebax 1657 to the synthesis solution led to higher surface area, larger pore size and smaller particle size for ZIF-8 samples; (4) both an increase in temperature and a decrease in the molar ratio of MeIM/Zn²⁺ increased the ZIF-8 particle size, pore size, pore volume, crystallinity and BET surface area of all investigated samples. Highlights: • The pore size of ZIF-8 samples synthesized with Pebax 1657 increased remarkably. • A BET surface area of 1869 m²/g was obtained for a ZIF-8 sample synthesized with Pebax. • An increase in temperature enhanced the textural properties of the ZIF-8 samples. • A decrease in MeIM/Zn²⁺ enhanced the textural properties of the ZIF-8 samples.

  7. What Have Researchers Learned from Project STAR?

    ERIC Educational Resources Information Center

    Schanzenbach, Diane Whitmore

    2007-01-01

    Project STAR (Student/Teacher Achievement Ratio) was a large-scale randomized trial of reduced class sizes in kindergarten through the third grade. Because of the scope of the experiment, it has been used in many policy discussions. For example, the California statewide class-size-reduction policy was justified, in part, by the successes of…

  8. 7 CFR 1822.271 - Processing applications.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... specific provisions of State law under which the applicant is organized; a copy of the applicant's articles... project. (i) Location and size of tract or tracts to be bought and/or developed. (ii) Number and size of... contribution to the project. (8) A map showing the location of and other supporting information on neighborhood...

  9. 7 CFR 1822.271 - Processing applications.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... specific provisions of State law under which the applicant is organized; a copy of the applicant's articles... project. (i) Location and size of tract or tracts to be bought and/or developed. (ii) Number and size of... contribution to the project. (8) A map showing the location of and other supporting information on neighborhood...

  10. 7 CFR 1822.271 - Processing applications.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... specific provisions of State law under which the applicant is organized; a copy of the applicant's articles... project. (i) Location and size of tract or tracts to be bought and/or developed. (ii) Number and size of... contribution to the project. (8) A map showing the location of and other supporting information on neighborhood...

  11. 7 CFR 1822.271 - Processing applications.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... specific provisions of State law under which the applicant is organized; a copy of the applicant's articles... project. (i) Location and size of tract or tracts to be bought and/or developed. (ii) Number and size of... contribution to the project. (8) A map showing the location of and other supporting information on neighborhood...

  12. Seeing the Light: A Classroom-Sized Pinhole Camera Demonstration for Teaching Vision

    ERIC Educational Resources Information Center

    Prull, Matthew W.; Banks, William P.

    2005-01-01

    We describe a classroom-sized pinhole camera demonstration (camera obscura) designed to enhance students' learning of the visual system. The demonstration consists of a suspended rear-projection screen onto which the outside environment projects images through a small hole in a classroom window. Students can observe these images in a darkened…

  13. Legacy sample disposition project. Volume 2: Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gurley, R.N.; Shifty, K.L.

    1998-02-01

    This report describes the legacy sample disposition project at the Idaho Engineering and Environmental Laboratory (INEEL), which assessed Site-wide facilities/areas to locate legacy samples and owner organizations and then characterized and dispositioned these samples. This project resulted from an Idaho Department of Environmental Quality inspection of selected areas of the INEEL in January 1996, which identified some samples at the Test Reactor Area and Idaho Chemical Processing Plant that had not been characterized and dispositioned according to Resource Conservation and Recovery Act (RCRA) requirements. The objective of the project was to manage legacy samples in accordance with all applicable environmental and safety requirements. A systems engineering approach was used throughout the project, which included collecting the legacy sample information and developing a system for amending and retrieving the information. All legacy samples were dispositioned by the end of 1997. Closure of the legacy sample issue was achieved through these actions.

  14. Effects of Calibration Sample Size and Item Bank Size on Ability Estimation in Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Sahin, Alper; Weiss, David J.

    2015-01-01

    This study aimed to investigate the effects of calibration sample size and item bank size on examinee ability estimation in computerized adaptive testing (CAT). For this purpose, a 500-item bank pre-calibrated using the three-parameter logistic model with 10,000 examinees was simulated. Calibration samples of varying sizes (150, 250, 350, 500,…

  15. Sample size calculations for case-control studies

    Cancer.gov

    This R package can be used to calculate the required sample size for unconditional multivariate analyses of unmatched case-control studies. The sample sizes are for a scalar exposure effect, such as binary, ordinal or continuous exposures. The sample sizes can also be computed for scalar interaction effects. The analyses account for the effects of potential confounder variables that are also included in the multivariate logistic model.
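
    As a rough, language-agnostic illustration of this kind of calculation, the sketch below uses the classical normal-approximation formula for detecting an odds ratio with a single binary exposure and an arbitrary control-to-case ratio. It is a simplified univariate stand-in, not the package described above (which handles multivariate logistic models with confounders); the default α, power and the example inputs are assumptions.

```python
import math
from statistics import NormalDist

def n_cases(p0, odds_ratio, controls_per_case=1.0, alpha=0.05, power=0.80):
    """Approximate number of cases needed to detect `odds_ratio` for a binary
    exposure with prevalence p0 among controls (two-proportion normal
    approximation with r controls per case); a univariate simplification."""
    r = controls_per_case
    p1 = odds_ratio * p0 / (1 + p0 * (odds_ratio - 1))   # implied exposure prevalence in cases
    pbar = (p1 + r * p0) / (1 + r)
    za = NormalDist().inv_cdf(1 - alpha / 2)
    zb = NormalDist().inv_cdf(power)
    num = (za * math.sqrt((1 + 1 / r) * pbar * (1 - pbar))
           + zb * math.sqrt(p1 * (1 - p1) + p0 * (1 - p0) / r)) ** 2
    return math.ceil(num / (p1 - p0) ** 2)

print(n_cases(p0=0.30, odds_ratio=2.0, controls_per_case=2))  # cases needed; controls = 2x that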

  16. Large-Scale Spacecraft Fire Safety Tests

    NASA Technical Reports Server (NTRS)

    Urban, David; Ruff, Gary A.; Ferkul, Paul V.; Olson, Sandra; Fernandez-Pello, A. Carlos; T'ien, James S.; Torero, Jose L.; Cowlard, Adam J.; Rouvreau, Sebastien; Minster, Olivier; hide

    2014-01-01

    An international collaborative program is underway to address open issues in spacecraft fire safety. Because of limited access to long-term low-gravity conditions and the small volume generally allotted for these experiments, there have been relatively few experiments that directly study spacecraft fire safety under low-gravity conditions. Furthermore, none of these experiments have studied sample sizes and environment conditions typical of those expected in a spacecraft fire. The major constraint has been the size of the sample, with prior experiments limited to samples of the order of 10 cm in length and width or smaller. This lack of experimental data forces spacecraft designers to base their designs and safety precautions on 1-g understanding of flame spread, fire detection, and suppression. However, low-gravity combustion research has demonstrated substantial differences in flame behavior in low-gravity. This, combined with the differences caused by the confined spacecraft environment, necessitates practical scale spacecraft fire safety research to mitigate risks for future space missions. To address this issue, a large-scale spacecraft fire experiment is under development by NASA and an international team of investigators. This poster presents the objectives, status, and concept of this collaborative international project (Saffire). The project plan is to conduct fire safety experiments on three sequential flights of an unmanned ISS re-supply spacecraft (the Orbital Cygnus vehicle) after they have completed their delivery of cargo to the ISS and have begun their return journeys to Earth. On two flights (Saffire-1 and Saffire-3), the experiment will consist of a flame spread test involving a meter-scale sample ignited in the pressurized volume of the spacecraft and allowed to burn to completion while measurements are made. On one of the flights (Saffire-2), 9 smaller (5 x 30 cm) samples will be tested to evaluate NASA's material flammability screening tests. The first flight (Saffire-1) is scheduled for July 2015 with the other two following at six-month intervals. A computer modeling effort will complement the experimental effort. Although the experiment will need to meet rigorous safety requirements to ensure the carrier vehicle does not sustain damage, the absence of a crew removes the need for strict containment of combustion products. This will facilitate the first examination of fire behavior on a scale that is relevant to spacecraft fire safety and will provide unique data for fire model validation.

  17. Elucidating the ensemble of functionally-relevant transitions in protein systems with a robotics-inspired method

    PubMed Central

    2013-01-01

    Background Many proteins tune their biological function by transitioning between different functional states, effectively acting as dynamic molecular machines. Detailed structural characterization of transition trajectories is central to understanding the relationship between protein dynamics and function. Computational approaches that build on the Molecular Dynamics framework are in principle able to model transition trajectories at great detail but also at considerable computational cost. Methods that delay consideration of dynamics and focus instead on elucidating energetically-credible conformational paths connecting two functionally-relevant structures provide a complementary approach. Effective sampling-based path planning methods originating in robotics have been recently proposed to produce conformational paths. These methods largely model short peptides or address large proteins by simplifying conformational space. Methods We propose a robotics-inspired method that connects two given structures of a protein by sampling conformational paths. The method focuses on small- to medium-size proteins, efficiently modeling structural deformations through the use of the molecular fragment replacement technique. In particular, the method grows a tree in conformational space rooted at the start structure, steering the tree to a goal region defined around the goal structure. We investigate various bias schemes over a progress coordinate for balance between coverage of conformational space and progress towards the goal. A geometric projection layer promotes path diversity. A reactive temperature scheme allows sampling of rare paths that cross energy barriers. Results and conclusions Experiments are conducted on small- to medium-size proteins of length up to 214 amino acids and with multiple known functionally-relevant states, some of which are more than 13 Å apart from each other. Analysis reveals that the method effectively obtains conformational paths connecting structural states that are significantly different. A detailed analysis on the depth and breadth of the tree suggests that a soft global bias over the progress coordinate enhances sampling and results in higher path diversity. The explicit geometric projection layer that biases the exploration away from over-sampled regions further increases coverage, often improving proximity to the goal by forcing the exploration to find new paths. The reactive temperature scheme is shown effective in increasing path diversity, particularly in difficult structural transitions with known high-energy barriers. PMID:24565158
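
    The tree-growing strategy described here (grow a tree rooted at the start structure, bias expansion toward a goal region, recover a path once the goal is reached) can be illustrated with a deliberately simple 2-D analogue. The sketch below is a toy RRT-style planner on the unit square, not the molecular fragment replacement method of the paper; the step size, goal bias and tolerance are illustrative assumptions.

```python
import math
import random

def grow_tree(start, goal, step=0.05, goal_bias=0.2, tol=0.05, max_iters=5000):
    """Toy 2-D analogue of tree-based path sampling: grow a tree rooted at
    `start`, biasing expansion toward `goal`, and return a path when reached."""
    nodes = [start]
    parent = {0: None}
    for _ in range(max_iters):
        # With some probability sample the goal region, otherwise explore uniformly.
        target = goal if random.random() < goal_bias else (random.random(), random.random())
        # Extend the node nearest to the sampled target by one small step.
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], target))
        x, y = nodes[i]
        d = math.dist((x, y), target)
        if d == 0:
            continue
        new = (x + step * (target[0] - x) / d, y + step * (target[1] - y) / d)
        parent[len(nodes)] = i
        nodes.append(new)
        if math.dist(new, goal) < tol:
            # Walk back up the tree to recover the path (analogue of a conformational path).
            path, k = [], len(nodes) - 1
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return list(reversed(path))
    return None

path = grow_tree((0.1, 0.1), (0.9, 0.9))
print(f"path with {len(path)} states" if path else "goal not reached")
```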

  18. The MEarth project: an all-sky survey for transiting Earth-like exoplanets orbiting nearby M-dwarfs

    NASA Astrophysics Data System (ADS)

    Irwin, Jonathan; Berta-Thompson, Zachory K.; Charbonneau, David; Dittmann, Jason; Newton, Elisabeth R.

    2015-01-01

    The MEarth project is an operational all-sky survey searching for transiting Earth-like exoplanets around 3,000 of the closest mid-to-late M-dwarfs. These will be among the best planets in their size class for atmospheric characterization using present day and near-future instruments such as HST, JWST and ground-based Extremely Large Telescopes (ELTs), by virtue of the large observational signal sizes afforded by their small and bright host stars. We present an update on the status and recent scientific results of the survey from our two observing stations: MEarth-North at Fred Lawrence Whipple Observatory, Mount Hopkins, Arizona, and MEarth-South at Cerro Tololo Inter-American Observatory, Chile. MEarth-North discovered the transiting mini-Neptune exoplanet GJ 1214b, which currently has the best-studied atmosphere of any exoplanet in its size class. In addition to searching for planets, we actively pursue stellar astrophysics topics and characterization of the target star sample using MEarth data and supplementary spectroscopic follow-up. This has included measuring astrometric parallaxes for more than 1500 nearby stars, the discovery of 6 new low-mass eclipsing binaries amenable to direct measurement of the masses and radii of their components, and rotation periods, spectral classifications, metallicities and activity indices for hundreds of stars. The MEarth light curves themselves also provide a detailed record of the photometric behavior of the target stars, which include the most favorable and interesting targets to search for small and potentially habitable planets. This will be a valuable resource for all future surveys searching for planets around these stars. All light curves gathered during the survey are made publicly available after one year. The MEarth project gratefully acknowledges funding from the David and Lucile Packard Fellowship for Science and Engineering, the National Science Foundation under grants AST-0807690, AST-1109468, and AST-1004488, and the John Templeton Foundation.

  19. Feasibility of Assessing Public Health Impacts of Air Pollution Reduction Programs on a Local Scale: New Haven Case Study

    PubMed Central

    Lobdell, Danelle T.; Isakov, Vlad; Baxter, Lisa; Touma, Jawad S.; Smuts, Mary Beth; Özkaynak, Halûk

    2011-01-01

    Background New approaches to link health surveillance data with environmental and population exposure information are needed to examine the health benefits of risk management decisions. Objective We examined the feasibility of conducting a local assessment of the public health impacts of cumulative air pollution reduction activities from federal, state, local, and voluntary actions in the City of New Haven, Connecticut (USA). Methods Using a hybrid modeling approach that combines regional and local-scale air quality data, we estimated ambient concentrations for multiple air pollutants [e.g., PM2.5 (particulate matter ≤ 2.5 μm in aerodynamic diameter), NOx (nitrogen oxides)] for baseline year 2001 and projected emissions for 2010, 2020, and 2030. We assessed the feasibility of detecting health improvements in relation to reductions in air pollution for 26 different pollutant–health outcome linkages using both sample size and exploratory epidemiological simulations to further inform decision-making needs. Results Model projections suggested decreases (~ 10–60%) in pollutant concentrations, mainly attributable to decreases in pollutants from local sources between 2001 and 2010. Models indicated considerable spatial variability in the concentrations of most pollutants. Sample size analyses supported the feasibility of identifying linkages between reductions in NOx and improvements in all-cause mortality, prevalence of asthma in children and adults, and cardiovascular and respiratory hospitalizations. Conclusion Substantial reductions in air pollution (e.g., ~ 60% for NOx) are needed to detect health impacts of environmental actions using traditional epidemiological study designs in small communities like New Haven. In contrast, exploratory epidemiological simulations suggest that it may be possible to demonstrate the health impacts of PM reductions by predicting intraurban pollution gradients within New Haven using coupled models. PMID:21335318

  20. Percolation galaxy groups and clusters in the sdss redshift survey: identification, catalogs, and the multiplicity function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berlind, Andreas A.; Frieman, Joshua A.; Weinberg, David H.

    2006-01-01

    We identify galaxy groups and clusters in volume-limited samples of the SDSS redshift survey, using a redshift-space friends-of-friends algorithm. We optimize the friends-of-friends linking lengths to recover galaxy systems that occupy the same dark matter halos, using a set of mock catalogs created by populating halos of N-body simulations with galaxies. Extensive tests with these mock catalogs show that no combination of perpendicular and line-of-sight linking lengths is able to yield groups and clusters that simultaneously recover the true halo multiplicity function, projected size distribution, and velocity dispersion. We adopt a linking length combination that yields, for galaxy groups with ten or more members: a group multiplicity function that is unbiased with respect to the true halo multiplicity function; an unbiased median relation between the multiplicities of groups and their associated halos; a spurious group fraction of less than ~1%; a halo completeness of more than ~97%; the correct projected size distribution as a function of multiplicity; and a velocity dispersion distribution that is ~20% too low at all multiplicities. These results hold over a range of mock catalogs that use different input recipes of populating halos with galaxies. We apply our group-finding algorithm to the SDSS data and obtain three group and cluster catalogs for three volume-limited samples that cover 3495.1 square degrees on the sky. We correct for incompleteness caused by fiber collisions and survey edges, and obtain measurements of the group multiplicity function, with errors calculated from realistic mock catalogs. These multiplicity function measurements provide a key constraint on the relation between galaxy populations and dark matter halos.

  1. SDSS-IV MaNGA: Galaxy Pair Fraction and Correlated Active Galactic Nuclei

    NASA Astrophysics Data System (ADS)

    Fu, Hai; Steffen, Joshua L.; Gross, Arran C.; Dai, Y. Sophia; Isbell, Jacob W.; Lin, Lihwai; Wake, David; Xue, Rui; Bizyaev, Dmitry; Pan, Kaike

    2018-04-01

    We have identified 105 galaxy pairs at z ∼ 0.04 with the MaNGA integral-field spectroscopic data. The pairs have projected separations between 1 and 30 kpc, and are selected to have radial velocity offsets less than 600 km s‑1 and stellar mass ratio between 0.1 and 1. The pair fraction increases with both the physical size of the integral-field unit and the stellar mass, consistent with theoretical expectations. We provide the best-fit analytical function of the pair fraction and find that ∼3% of M* galaxies are in close pairs. For both isolated galaxies and paired galaxies, active galactic nuclei (AGNs) are selected using emission-line ratios and Hα equivalent widths measured inside apertures at a fixed physical size. We find AGNs in ∼24% of the paired galaxies and binary AGNs in ∼13% of the pairs. To account for the selection biases in both the pair sample and the MaNGA sample, we compare the AGN comoving volume densities with those expected from the mass- and redshift-dependent AGN fractions. We find a strong (∼5×) excess of binary AGNs over random pairing and a mild (∼20%) deficit of single AGNs. The binary AGN excess increases from ∼2× to ∼6× as the projected separation decreases from 10–30 to 1–10 kpc. Our results indicate that the pairing of galaxies preserves the AGN duty cycle in individual galaxies but increases the population of binary AGNs through correlated activities. We suggest tidally induced galactic-scale shocks and AGN cross-ionization as two plausible channels to produce low-luminosity narrow-line-selected binary AGNs.

  2. Sequential sampling: a novel method in farm animal welfare assessment.

    PubMed

    Heath, C A E; Main, D C J; Mullan, S; Haskell, M J; Browne, W J

    2016-02-01

    Lameness in dairy cows is an important welfare issue. As part of a welfare assessment, herd level lameness prevalence can be estimated from scoring a sample of animals, where higher levels of accuracy are associated with larger sample sizes. As the financial cost is related to the number of cows sampled, smaller samples are preferred. Sequential sampling schemes have been used for informing decision making in clinical trials. Sequential sampling involves taking samples in stages, where sampling can stop early depending on the estimated lameness prevalence. When welfare assessment is used for a pass/fail decision, a similar approach could be applied to reduce the overall sample size. The sampling schemes proposed here apply the principles of sequential sampling within a diagnostic testing framework. This study develops three sequential sampling schemes of increasing complexity to classify 80 fully assessed UK dairy farms, each with known lameness prevalence. Using the Welfare Quality herd-size-based sampling scheme, the first 'basic' scheme involves two sampling events. At the first sampling event half the Welfare Quality sample size is drawn, and then depending on the outcome, sampling either stops or is continued and the same number of animals is sampled again. In the second 'cautious' scheme, an adaptation is made to ensure that correctly classifying a farm as 'bad' is done with greater certainty. The third scheme is the only scheme to go beyond lameness as a binary measure and investigates the potential for increasing accuracy by incorporating the number of severely lame cows into the decision. The three schemes are evaluated with respect to accuracy and average sample size by running 100 000 simulations for each scheme, and a comparison is made with the fixed size Welfare Quality herd-size-based sampling scheme. All three schemes performed almost as well as the fixed size scheme but with much smaller average sample sizes. For the third scheme, an overall association between lameness prevalence and the proportion of lame cows that were severely lame on a farm was found. However, as this association was found to not be consistent across all farms, the sampling scheme did not prove to be as useful as expected. The preferred scheme was therefore the 'cautious' scheme for which a sampling protocol has also been developed.
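
    As a concrete illustration of the "basic" scheme described above (assess half the reference sample, stop early when the estimate is clearly on one side of the pass/fail threshold, otherwise assess the rest), here is a small simulation sketch. The threshold, margin, herd size and sample sizes are illustrative assumptions, not the Welfare Quality values.

```python
import random

def two_stage_assessment(herd, full_n, threshold, margin=0.05):
    """Illustrative two-stage ('basic') scheme: assess half the reference sample
    first; stop if the lameness estimate is clearly above or below the pass/fail
    threshold, otherwise assess the second half and use the pooled estimate.
    `herd` is a list of 0/1 lameness indicators; threshold and margin are assumed."""
    order = random.sample(range(len(herd)), full_n)    # cows drawn without replacement
    half = full_n // 2
    p1 = sum(herd[i] for i in order[:half]) / half
    if abs(p1 - threshold) > margin:                   # clear-cut: stop after stage 1
        return ("fail" if p1 > threshold else "pass"), half
    p = sum(herd[i] for i in order[:full_n]) / full_n  # otherwise pool both stages
    return ("fail" if p > threshold else "pass"), full_n

# Quick check of the average sample size over simulated 200-cow herds (25% lame).
random.seed(1)
results = [two_stage_assessment([1 if random.random() < 0.25 else 0 for _ in range(200)],
                                full_n=60, threshold=0.20) for _ in range(1000)]
print(sum(n for _, n in results) / len(results), "cows assessed on average")
```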

  3. Effects of tree-to-tree variations on sap flux-based transpiration estimates in a forested watershed

    NASA Astrophysics Data System (ADS)

    Kume, Tomonori; Tsuruta, Kenji; Komatsu, Hikaru; Kumagai, Tomo'omi; Higashi, Naoko; Shinohara, Yoshinori; Otsuki, Kyoichi

    2010-05-01

    To estimate forest stand-scale water use, we assessed how sample sizes affect confidence of stand-scale transpiration (E) estimates calculated from sap flux (Fd) and sapwood area (AS_tree) measurements of individual trees. In a Japanese cypress plantation, we measured Fd and AS_tree in all trees (n = 58) within a 20 × 20 m study plot, which was divided into four 10 × 10 subplots. We calculated E from stand AS_tree (AS_stand) and mean stand Fd (JS) values. Using Monte Carlo analyses, we examined potential errors associated with sample sizes in E, AS_stand, and JS by using the original AS_tree and Fd data sets. Consequently, we defined optimal sample sizes of 10 and 15 for AS_stand and JS estimates, respectively, in the 20 × 20 m plot. Sample sizes greater than the optimal sample sizes did not decrease potential errors. The optimal sample sizes for JS changed according to plot size (e.g., 10 × 10 m and 10 × 20 m), while the optimal sample sizes for AS_stand did not. As well, the optimal sample sizes for JS did not change in different vapor pressure deficit conditions. In terms of E estimates, these results suggest that the tree-to-tree variations in Fd vary among different plots, and that plot size to capture tree-to-tree variations in Fd is an important factor. This study also discusses planning balanced sampling designs to extrapolate stand-scale estimates to catchment-scale estimates.
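
    The Monte Carlo idea used here, repeatedly drawing subsamples of n trees from the measured set and asking how far the subsample mean strays from the all-tree mean, can be sketched as follows. The per-tree values below are synthetic stand-ins for the 58 measured trees, and the error metric (mean absolute relative error) is one reasonable choice among several.

```python
import random
import statistics

def subsample_error(values, n, n_draws=2000):
    """Monte Carlo estimate of the relative error in the plot mean when only
    n of the measured trees are sampled (values = per-tree sap flux or sapwood area)."""
    true_mean = statistics.mean(values)
    errs = []
    for _ in range(n_draws):
        draw = random.sample(values, n)
        errs.append(abs(statistics.mean(draw) - true_mean) / true_mean)
    return statistics.mean(errs)

# Synthetic per-tree sap flux densities standing in for the 58 measured trees.
random.seed(0)
fd = [random.lognormvariate(0, 0.4) for _ in range(58)]
for n in (5, 10, 15, 20, 30):
    print(n, round(subsample_error(fd, n), 3))   # error shrinks, then levels off
```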

  4. Mindfulness-based intervention for teenagers with cancer: study protocol for a randomized controlled trial.

    PubMed

    Malboeuf-Hurtubise, Catherine; Achille, Marie; Sultan, Serge; Vadnais, Majorie

    2013-05-10

    Individuals living with cancer must learn to face not only the physical symptoms of their condition, but also the anxiety and uncertainty related to the progression of the disease, the anticipation of physical and emotional pain related to illness and treatment, the significant changes implied in living with cancer, as well as the fear of recurrence after remission. Mindfulness-based meditation constitutes a promising option to alleviate these manifestations. This article presents the rationale and protocol development for a research project aimed at evaluating the effects of a mindfulness-based meditation intervention on quality of life, sleep, and mood in adolescents with cancer compared to a control group. A prospective, longitudinal, experimental design involving three time points (baseline, post-intervention, and follow-up) and two groups (experimental and control) was developed for this project. Participants will be assigned randomly to either group. Eligible participants are adolescents aged 11 to 18 years with a diagnosis of cancer, with no specific selection/exclusion based on type, stage, or trajectory of cancer. A final sample size of 28 participants is targeted. Adolescents in the experimental group will complete the mindfulness meditation intervention, taught by two trained therapists. The intervention will comprise eight weekly sessions, lasting 90 min each. Once the follow-up assessment is completed by the experimental group, wait-list controls will be offered to complete the mindfulness-based program. Intra-group analyses will serve to evaluate the impact of the mindfulness-based meditation intervention on quality of life, sleep, and mood pre-post intervention, as well as follow-up. Analyses will also be used to carry out inter-group comparisons between the experimental group and the wait-list controls. Voluntary participation, risk of attrition, and the small sample size are potential limitations of this project. In spite of possible limitations, this project will be one among very few aimed at improving quality of life, sleep, and mood in adolescents living with cancer, will evaluate the potential benefits of such a practice on both psychological and physical health of youth with cancer, and help in creating mindfulness-based intervention programs, in order to provide the necessary psychological help to adolescents living with cancer. NCT01783418.

  5. Mindfulness-based intervention for teenagers with cancer: study protocol for a randomized controlled trial

    PubMed Central

    2013-01-01

    Background Individuals living with cancer must learn to face not only the physical symptoms of their condition, but also the anxiety and uncertainty related to the progression of the disease, the anticipation of physical and emotional pain related to illness and treatment, the significant changes implied in living with cancer, as well as the fear of recurrence after remission. Mindfulness-based meditation constitutes a promising option to alleviate these manifestations. Methods/Design This article presents the rationale and protocol development for a research project aimed at evaluating the effects of a mindfulness-based meditation intervention on quality of life, sleep, and mood in adolescents with cancer compared to a control group. A prospective, longitudinal, experimental design involving three time points (baseline, post-intervention, and follow-up) and two groups (experimental and control) was developed for this project. Participants will be assigned randomly to either group. Eligible participants are adolescents aged 11 to 18 years with a diagnosis of cancer, with no specific selection/exclusion based on type, stage, or trajectory of cancer. A final sample size of 28 participants is targeted. Adolescents in the experimental group will complete the mindfulness meditation intervention, taught by two trained therapists. The intervention will comprise eight weekly sessions, lasting 90 min each. Once the follow-up assessment is completed by the experimental group, wait-list controls will be offered to complete the mindfulness-based program. Intra-group analyses will serve to evaluate the impact of the mindfulness-based meditation intervention on quality of life, sleep, and mood pre-post intervention, as well as follow-up. Analyses will also be used to carry out inter-group comparisons between the experimental group and the wait-list controls. Voluntary participation, risk of attrition, and the small sample size are potential limitations of this project. In spite of possible limitations, this project will be one among very few aimed at improving quality of life, sleep, and mood in adolescents living with cancer, will evaluate the potential benefits of such a practice on both psychological and physical health of youth with cancer, and help in creating mindfulness-based intervention programs, in order to provide the necessary psychological help to adolescents living with cancer. Trial registration: NCT01783418. PMID:23663534

  6. Improving the Representation of Soluble Iron in Climate Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perez Garcia-Pando, Carlos

    2016-03-13

    Mineral dust produced in the arid and semi-arid regions of the world is the dominant source of iron (Fe) in atmospheric aerosol inputs to the open ocean. The bioavailable Fe fraction of atmospheric dust is thought to regulate and occasionally limit the primary productivity in large oceanic regions, which influences the CO2 uptake from the atmosphere affecting the Earth’s climate. Because Fe bioavailability cannot be directly measured, it is assumed that the dissolved Fe or highly reactive Fe in the dust is bioavailable. The fraction of soluble Fe in dust is mainly controlled by: (1) the mineral composition of the soils and the emitted dust from the source areas; (2) the atmospheric processing that converts the Fe in Fe-bearing minerals into highly soluble forms of Fe. The project has mainly focused on constraining the mineral composition of dust aerosols (1), a previously neglected, yet key issue for constraining the deposition of soluble iron. Deriving aerosol mineral composition requires global knowledge of the soil mineral content, which is available from poorly constrained global atlases. In addition, the mineral content of the emitted aerosol differs from that of the parent soil. Measurements of soil mineral fractions are based upon wet sedimentation (or ’wet sieving’) techniques that disturb the soil sample, breaking aggregates that are found in the original, undispersed soil that is subject to wind erosion. Wet sieving alters the soil size distribution, replacing aggregates that are potentially mobilized as aerosols with a collection of smaller particles. A major challenge is to derive the size-distributed mineral fractions of the emitted dust based upon their fractions measured from wet-sieved soils. Finally, representations of dust mineral composition need to account for mixtures of minerals. Examination of individual particles shows that iron, an element that is central to many climate processes, is often found as trace impurities of iron oxide attached to aggregates of other minerals. This is another challenge that has been tackled by the project. The project has produced a major step forward in our understanding of the key processes needed to predict the mineral composition of dust aerosols by connecting theory, modeling and observations. The project has produced novel semi-empirical and theoretical methods to estimate the emitted size distribution and mineral composition of dust aerosols. These methods account for soil aggregates that are potentially emitted from the original undisturbed soil but are destroyed during wet sieving. The methods construct the emitted size distribution of individual minerals building upon brittle fragmentation theory, reconstructions of wet-sieved soil mineral size distributions, and/or characteristic mineral size distributions estimated from observations at times of high concentration. Based on an unprecedented evaluation with a new global compilation of observations produced with the project support, we showed that the new methods remedy some key deficiencies compared to the previous state-of-the-art. This includes the correct representation of Fe-bearing phyllosilicates at silt sizes, where they are abundant according to observations. In addition, the quartz fraction of silt particles is in better agreement with measured values. In addition, we represent an additional class of iron oxide aerosol that is a small impurity embedded within other minerals, allowing it to travel farther than in its pure crystalline state.
We assume that these impurities are least frequent in soils rich in iron oxides (as a result of the assumed effect of weathering that creates pure iron oxide crystals). The mineral composition of dust is also important to other interactions with climate - through shortwave absorption and radiative forcing, nucleation of cloud droplets and ice crystals, and the heterogeneous formation of sulfates and nitrates - and to its impacts upon human health. Despite the importance of mineral composition, models have typically assumed that soil dust aerosols have globally uniform composition. The results of this project will allow an improved estimation of the dust effects upon climate and health.

  7. Sample size and power calculations for detecting changes in malaria transmission using antibody seroconversion rate.

    PubMed

    Sepúlveda, Nuno; Paulino, Carlos Daniel; Drakeley, Chris

    2015-12-30

    Several studies have highlighted the use of serological data in detecting a reduction in malaria transmission intensity. These studies have typically used serology as an adjunct measure and no formal examination of sample size calculations for this approach has been conducted. A sample size calculator is proposed for cross-sectional surveys using data simulation from a reverse catalytic model assuming a reduction in seroconversion rate (SCR) at a given change point before sampling. This calculator is based on logistic approximations for the underlying power curves to detect a reduction in SCR in relation to the hypothesis of a stable SCR for the same data. Sample sizes are illustrated for a hypothetical cross-sectional survey from an African population assuming a known or unknown change point. Overall, data simulation demonstrates that power is strongly affected by assuming a known or unknown change point. Small sample sizes are sufficient to detect strong reductions in SCR, but invariably lead to poor precision of estimates for the current SCR. In this situation, sample size is better determined by controlling the precision of SCR estimates. Conversely, larger sample sizes are required for detecting more subtle reductions in malaria transmission, but those invariably increase precision whilst reducing putative estimation bias. The proposed sample size calculator, although based on data simulation, shows promise of being easily applicable to a range of populations and survey types. Since the change point is a major source of uncertainty, obtaining or assuming prior information about this parameter might reduce both the sample size and the chance of generating biased SCR estimates.
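
    A simulation-based calculator of this kind repeatedly generates serological survey data under an assumed reduction in SCR and asks how often the reduction is detected. The sketch below shows only the data-generating core: seroprevalence by age under a reverse catalytic model in which the seroconversion rate drops from lam1 to lam2 at a change point tau years before the survey, with seroreversion rate rho. All parameter values are illustrative assumptions, and the model-fitting/likelihood-ratio step needed to obtain power is omitted.

```python
import math
import random

def seroprev(age, lam1, lam2, rho, tau):
    """Seroprevalence at `age` under a reverse catalytic model whose
    seroconversion rate dropped from lam1 to lam2 `tau` years before sampling."""
    def step(p0, lam, t):
        # Solution of dP/dt = lam*(1 - P) - rho*P over duration t, starting at P = p0.
        eq = lam / (lam + rho)
        return eq + (p0 - eq) * math.exp(-(lam + rho) * t)
    years_old_rate = max(age - tau, 0.0)   # years lived under the historical, higher SCR
    years_new_rate = min(age, tau)         # years lived under the reduced SCR
    return step(step(0.0, lam1, years_old_rate), lam2, years_new_rate)

# One simulated cross-sectional survey (ages and parameters are illustrative).
random.seed(2)
ages = [random.uniform(1, 60) for _ in range(400)]
status = [random.random() < seroprev(a, lam1=0.05, lam2=0.02, rho=0.005, tau=5) for a in ages]
print(f"simulated seroprevalence: {sum(status) / len(status):.2f}")
```

    Power then follows by fitting both a stable-SCR model and the change-point model to many such simulated surveys and counting how often the reduction is detected.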

  8. Small sample sizes in the study of ontogenetic allometry; implications for palaeobiology

    PubMed Central

    Vavrek, Matthew J.

    2015-01-01

    Quantitative morphometric analyses, particularly ontogenetic allometry, are common methods used in quantifying shape, and changes therein, in both extinct and extant organisms. Due to incompleteness and the potential for restricted sample sizes in the fossil record, palaeobiological analyses of allometry may encounter higher rates of error. Differences in sample size between fossil and extant studies and any resulting effects on allometric analyses have not been thoroughly investigated, and a logical lower threshold to sample size is not clear. Here we show that studies based on fossil datasets have smaller sample sizes than those based on extant taxa. A similar pattern between vertebrates and invertebrates indicates this is not a problem unique to either group, but common to both. We investigate the relationship between sample size, ontogenetic allometric relationship and statistical power using an empirical dataset of skull measurements of modern Alligator mississippiensis. Across a variety of subsampling techniques, used to simulate different taphonomic and/or sampling effects, smaller sample sizes gave less reliable and more variable results, often with the result that allometric relationships will go undetected due to Type II error (failure to reject the null hypothesis). This may result in a false impression of fewer instances of positive/negative allometric growth in fossils compared to living organisms. These limitations are not restricted to fossil data and are equally applicable to allometric analyses of rare extant taxa. No mathematically derived minimum sample size for ontogenetic allometric studies is found; rather results of isometry (but not necessarily allometry) should not be viewed with confidence at small sample sizes. PMID:25780770
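
    The subsampling experiment described above can be mimicked with a short script: generate log-log data with a known, mildly allometric slope, repeatedly subsample at several sample sizes, and record how often a simple regression still detects a slope different from 1. The slope, noise level and the large-sample cutoff of 2 for the t-statistic are illustrative assumptions.

```python
import math
import random

def detects_allometry(x, y):
    """OLS slope on log-log data with a rough large-sample test of slope != 1."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    resid = [yi - (my + b * (xi - mx)) for xi, yi in zip(x, y)]
    se = math.sqrt(sum(r * r for r in resid) / (n - 2) / sxx)
    return abs(b - 1.0) / se > 2.0   # ~5% two-sided criterion, normal approximation

random.seed(3)
true_slope = 1.15                                   # mildly positive allometry (assumed)
logx = [random.uniform(0, 2) for _ in range(300)]
logy = [true_slope * x + random.gauss(0, 0.1) for x in logx]
for n in (8, 15, 30, 60):
    hits = sum(detects_allometry(*zip(*random.sample(list(zip(logx, logy)), n)))
               for _ in range(500))
    print(n, hits / 500)   # detection rate; 1 - rate is the Type II error rate
```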

  9. The distribution of galaxies within the 'Great Wall'

    NASA Technical Reports Server (NTRS)

    Ramella, Massimo; Geller, Margaret J.; Huchra, John P.

    1992-01-01

    The galaxy distribution within the 'Great Wall', the most striking feature in the first three 'slices' of the CfA redshift survey extension, is examined. The Great Wall is extracted from the sample and is analyzed by counting galaxies in cells. The 'local' two-point correlation function within the Great Wall is computed and the local correlation length is estimated to be about 15/h Mpc, roughly 3 times larger than the correlation length for the entire sample. The redshift distribution of galaxies in the pencil-beam survey by Broadhurst et al. (1990) shows peaks separated by large 'voids', at least to a redshift of about 0.3. The peaks might represent the intersections of their roughly 5/h Mpc pencil beams with structures similar to the Great Wall. Under this hypothesis, sampling of the Great Wall shows that l ≈ 12/h Mpc is the minimum projected beam size required to detect all the 'walls' at redshifts between the peak of the selection function and the effective depth of the survey.

  10. SAMPL4 & DOCK3.7: lessons for automated docking procedures

    NASA Astrophysics Data System (ADS)

    Coleman, Ryan G.; Sterling, Teague; Weiss, Dahlia R.

    2014-03-01

    The SAMPL4 challenges were used to test current automated methods for solvation energy, virtual screening, pose and affinity prediction of the molecular docking pipeline DOCK 3.7. Additionally, first-order models of binding affinity were proposed as milestones for any method predicting binding affinity. Several important discoveries about the molecular docking software were made during the challenge: (1) solvation energies of ligands were five-fold worse than those from any other method used in SAMPL4, including methods that were similarly fast; (2) HIV Integrase is a challenging target, but automated docking on the correct allosteric site performed well in terms of virtual screening and pose prediction (compared to other methods), while affinity prediction, as expected, was very poor; (3) molecular docking grid sizes can be very important; serious errors were discovered with default settings, which have been adjusted for all future work. Overall, lessons from SAMPL4 suggest many changes to molecular docking tools, not just DOCK 3.7, that could improve the state of the art. Future difficulties and projects will be discussed.

  11. Three dimensional reconstruction of InGaN nanodisks in GaN nanowires: Improvement of the nanowire sample preparation to avoid missing wedge effects

    NASA Astrophysics Data System (ADS)

    Gries, Katharina Ines; Schlechtweg, Julian; Hille, Pascal; Schörmann, Jörg; Eickhoff, Martin; Volz, Kerstin

    2017-10-01

    Scanning transmission electron microscopy is an extremely useful method to image small features with a size in the range of a few nanometers and below. However, it must be taken into account that such images are projections of the sample and do not necessarily represent the real three dimensional structure of the specimen. By applying electron tomography this problem can be overcome. In our work, GaN nanowires including InGaN nanodisks were investigated. To reduce the effect of the missing wedge, a single nanowire was removed from the underlying silicon substrate using a manipulator needle and attached to a tomography holder. Since this sample exhibits the same thickness of a few tens of nanometers in all directions normal to the tilt axis, this procedure allows a sample tilt of ±90°. Reconstruction of the acquired data reveals a split of the InGaN nanodisks into a horizontal continuation of the (0 0 0 −1) central facet and an inclined {1 0 −1 l} facet (with l = −2 or −3).

  12. Qualification of heavy water based irradiation device in the JSI TRIGA reactor for irradiations of FT-TIMS samples for nuclear safeguards

    NASA Astrophysics Data System (ADS)

    Radulović, Vladimir; Kolšek, Aljaž; Fauré, Anne-Laure; Pottin, Anne-Claire; Pointurier, Fabien; Snoj, Luka

    2018-03-01

    The Fission Track Thermal Ionization Mass Spectrometry (FT-TIMS) method is considered as the reference method for particle analysis in the field of nuclear Safeguards for measurements of isotopic compositions (fissile material enrichment levels) in micrometer-sized uranium particles collected in nuclear facilities. An integral phase in the method is the irradiation of samples in a very well thermalized neutron spectrum. A bilateral collaboration project was carried out between the Jožef Stefan Institute (JSI, Slovenia) and the Commissariat à l'Énergie Atomique et aux Énergies Alternatives (CEA, France) to determine whether the JSI TRIGA reactor could be used for irradiations of samples for the FT-TIMS method. This paper describes Monte Carlo simulations, experimental activation measurements and test irradiations performed in the JSI TRIGA reactor, firstly to determine the feasibility, and secondly to design and qualify a purpose-built heavy water based irradiation device for FT-TIMS samples. The final device design has been shown experimentally to meet all the required performance specifications.

  13. Improving the accuracy of livestock distribution estimates through spatial interpolation.

    PubMed

    Bryssinckx, Ward; Ducheyne, Els; Muhwezi, Bernard; Godfrey, Sunday; Mintiens, Koen; Leirs, Herwig; Hendrickx, Guy

    2012-11-01

    Animal distribution maps serve many purposes such as estimating transmission risk of zoonotic pathogens to both animals and humans. The reliability and usability of such maps is highly dependent on the quality of the input data. However, decisions on how to perform livestock surveys are often based on previous work without considering possible consequences. A better understanding of the impact of using different sample designs and processing steps on the accuracy of livestock distribution estimates was acquired through iterative experiments using detailed survey data. The importance of sample size, sample design and aggregation is demonstrated and spatial interpolation is presented as a potential way to improve cattle number estimates. As expected, results show that an increasing sample size increased the precision of cattle number estimates but these improvements were mainly seen when the initial sample size was relatively low (e.g. a median relative error decrease of 0.04% per sampled parish for sample sizes below 500 parishes). For higher sample sizes, the added value of further increasing the number of samples declined rapidly (e.g. a median relative error decrease of 0.01% per sampled parish for sample sizes above 500 parishes). When a two-stage stratified sample design was applied to yield more evenly distributed samples, accuracy levels were higher for low sample densities and stabilised at lower sample sizes compared to one-stage stratified sampling. Aggregating the resulting cattle number estimates yielded significantly more accurate results because of averaging under- and over-estimates (e.g. when aggregating cattle number estimates from subcounty to district level, P <0.009 based on a sample of 2,077 parishes using one-stage stratified samples). During aggregation, area-weighted mean values were assigned to higher administrative unit levels. However, when this step is preceded by a spatial interpolation to fill in missing values in non-sampled areas, accuracy is improved remarkably. This holds especially for low sample sizes and spatially evenly distributed samples (e.g. P <0.001 for a sample of 170 parishes using one-stage stratified sampling and aggregation on district level). Whether the same observations apply on a lower spatial scale should be further investigated.

  14. Biostatistics Series Module 5: Determining Sample Size

    PubMed Central

    Hazra, Avijit; Gogtay, Nithya

    2016-01-01

    Determining the appropriate sample size for a study, whatever be its type, is a fundamental aspect of biomedical research. An adequate sample ensures that the study will yield reliable information, regardless of whether the data ultimately suggests a clinically important difference between the interventions or elements being studied. The probability of Type 1 and Type 2 errors, the expected variance in the sample and the effect size are the essential determinants of sample size in interventional studies. Any method for deriving a conclusion from experimental data carries with it some risk of drawing a false conclusion. Two types of false conclusion may occur, called Type 1 and Type 2 errors, whose probabilities are denoted by the symbols α and β. A Type 1 error occurs when one concludes that a difference exists between the groups being compared when, in reality, it does not. This is akin to a false positive result. A Type 2 error occurs when one concludes that a difference does not exist when, in reality, a difference does exist, and it is equal to or larger than the effect size defined by the alternative to the null hypothesis. This may be viewed as a false negative result. When considering the risk of Type 2 error, it is more intuitive to think in terms of power of the study or (1 − β). Power denotes the probability of detecting a difference when a difference does exist between the groups being compared. Smaller α or larger power will increase sample size. Conventional acceptable values for power and α are 80% or above and 5% or below, respectively, when calculating sample size. Increasing variance in the sample tends to increase the sample size required to achieve a given power level. The effect size is the smallest clinically important difference that is sought to be detected and, rather than statistical convention, is a matter of past experience and clinical judgment. Larger samples are required if smaller differences are to be detected. Although the principles are long known, historically, sample size determination has been difficult, because of relatively complex mathematical considerations and numerous different formulas. However, of late, there has been remarkable improvement in the availability, capability, and user-friendliness of power and sample size determination software. Many can execute routines for determination of sample size and power for a wide variety of research designs and statistical tests. With the drudgery of mathematical calculation gone, researchers must now concentrate on determining appropriate sample size and achieving these targets, so that study conclusions can be accepted as meaningful. PMID:27688437
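
    For the common case this module describes, comparing two group means with conventional α and power, the per-group sample size follows the familiar normal-approximation formula n ≈ 2(z_{1−α/2} + z_{1−β})²σ²/δ². The sketch below is a minimal implementation of that approximation (the example numbers are assumptions); exact calculations based on the noncentral t distribution give slightly larger values.

```python
import math
from statistics import NormalDist

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """Approximate sample size per group for detecting a difference `delta`
    between two means with common standard deviation `sd`."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * (sd / delta) ** 2)

# e.g. detecting a 5 mmHg difference with sd = 10 mmHg at 80% power, alpha = 0.05
print(n_per_group(delta=5, sd=10))   # about 63 per group with this approximation
```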

  15. Sample size and power for cost-effectiveness analysis (part 1).

    PubMed

    Glick, Henry A

    2011-03-01

    Basic sample size and power formulae for cost-effectiveness analysis have been established in the literature. These formulae are reviewed and the similarities and differences between sample size and power for cost-effectiveness analysis and for the analysis of other continuous variables such as changes in blood pressure or weight are described. The types of sample size and power tables that are commonly calculated for cost-effectiveness analysis are also described and the impact of varying the assumed parameter values on the resulting sample size and power estimates is discussed. Finally, the way in which the data for these calculations may be derived is discussed.
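
    One standard formulation in this literature works on the net monetary benefit scale: with willingness-to-pay λ, the per-patient incremental NMB is λ·ΔE − ΔC, and the per-group sample size has the same structure as for any continuous outcome. The sketch below implements that formulation; the variance expression (including the cost-effect correlation term) and all example inputs are assumptions stated for illustration, not values taken from the article.

```python
import math
from statistics import NormalDist

def n_per_arm_nmb(wtp, d_effect, d_cost, sd_effect, sd_cost, corr=0.0,
                  alpha=0.05, power=0.80):
    """Per-arm sample size so the incremental net monetary benefit
    (wtp * d_effect - d_cost) is detected with the requested power."""
    var_nmb = (wtp ** 2 * sd_effect ** 2 + sd_cost ** 2
               - 2 * wtp * corr * sd_effect * sd_cost)      # per-patient NMB variance
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    return math.ceil(2 * z ** 2 * var_nmb / (wtp * d_effect - d_cost) ** 2)

print(n_per_arm_nmb(wtp=50000, d_effect=0.05, d_cost=1000,
                    sd_effect=0.2, sd_cost=3000, corr=0.1))
```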

  16. Estimation of sample size and testing power (Part 4).

    PubMed

    Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo

    2012-01-01

    Sample size estimation is necessary for any experimental or survey research, and an appropriate estimate based on known information and statistical knowledge is of great significance. This article introduces methods of sample size estimation for difference tests under a one-factor, two-level design, covering both quantitative and qualitative data; it presents the estimation formulas and shows how to apply them directly and through the POWER procedure of SAS software. In addition, the article presents worked examples, which should help researchers implement the repetition principle during the research design phase.
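
    For the qualitative-data case mentioned above (one factor with two levels, i.e. comparing two proportions), the normal-approximation counterpart of such formulas is short enough to sketch directly. It is a plain illustration rather than a reproduction of the article's formulas or of SAS PROC POWER, and the example proportions are assumptions.

```python
import math
from statistics import NormalDist

def n_per_group_props(p1, p2, alpha=0.05, power=0.80):
    """Approximate per-group sample size for a two-sided comparison of two proportions."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    return math.ceil(z ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2)

print(n_per_group_props(0.60, 0.40))   # about 95 per group under these assumptions
```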

  17. A sequential bioequivalence design with a potential ethical advantage.

    PubMed

    Fuglsang, Anders

    2014-07-01

    This paper introduces a two-stage approach for evaluation of bioequivalence, where, in contrast to the designs of Diane Potvin and co-workers, two stages are mandatory regardless of the data obtained at stage 1. The approach is derived from Potvin's method C. It is shown that under circumstances with relatively high variability and relatively low initial sample size, this method has an advantage over Potvin's approaches in terms of sample sizes while controlling type I error rates at or below 5% with a minute occasional trade-off in power. Ethically and economically, the method may thus be an attractive alternative to the Potvin designs. It is also shown that when using the method introduced here, average total sample sizes are rather independent of initial sample size. Finally, it is shown that when a futility rule in terms of sample size for stage 2 is incorporated into this method, i.e., when a second stage can be abolished due to sample size considerations, there is often an advantage in terms of power or sample size as compared to the previously published methods.

  18. Sample Size Determination for One- and Two-Sample Trimmed Mean Tests

    ERIC Educational Resources Information Center

    Luh, Wei-Ming; Olejnik, Stephen; Guo, Jiin-Huarng

    2008-01-01

    Formulas to determine the necessary sample sizes for parametric tests of group comparisons are available from several sources and appropriate when population distributions are normal. However, in the context of nonnormal population distributions, researchers recommend Yuen's trimmed mean test, but formulas to determine sample sizes have not been…

  19. The Assessment of Distortion in Neurosurgical Image Overlay Projection.

    PubMed

    Vakharia, Nilesh N; Paraskevopoulos, Dimitris; Lang, Jozsef; Vakharia, Vejay N

    2016-02-01

    Numerous studies have demonstrated the superiority of neuronavigation during neurosurgical procedures compared to non-neuronavigation-based procedures. Limitations to neuronavigation systems include the need for the surgeons to avert their gaze from the surgical field and the cost of the systems, especially for hospitals in developing countries. Overlay projection of imaging directly onto the patient allows localization of intracranial structures. A previous study using overlay projection demonstrated the accuracy of image coregistration for a lesion in the temporal region but did not assess image distortion when projecting onto other anatomical locations. Our aim is to quantify this distortion and establish which regions of the skull would be most suitable for overlay projection. Using the difference in size of a square grid when projected onto an anatomically accurate model skull and a flat surface, from the same distance, we were able to calculate the degree of image distortion when projecting onto the skull from the anterior, posterior, superior, and lateral aspects. Measuring the size of a square when projected onto a flat surface from different distances allowed us to model change in lesion size when projecting a deep structure onto the skull surface. Using 2 mm as the upper limit for distortion, our results show that images can be accurately projected onto the majority (81.4%) of the surface of the skull. Our results support the use of image overlay projection in regions with ≤2 mm distortion to assist with localization of intracranial lesions at a fraction of the cost of existing methods. © The Author(s) 2015.
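
    The distortion the authors quantify grows with the obliquity of the surface patch and with its distance mismatch relative to the flat calibration plane. The sketch below is a much-simplified first-order model of that geometry (beam divergence times 1/cos of the incidence angle); all the numbers are illustrative assumptions, not the study's measurements.

```python
import math

def projected_square_size(size_flat_mm, d_flat_mm, d_surface_mm, incidence_deg):
    """First-order estimate of how a square of side size_flat_mm (as measured on a
    flat calibration plane at d_flat_mm) appears on a surface patch at distance
    d_surface_mm whose normal is tilted incidence_deg from the projector ray."""
    scale = d_surface_mm / d_flat_mm                        # divergence of the projector beam
    stretch = 1.0 / math.cos(math.radians(incidence_deg))   # obliquity of the patch
    return size_flat_mm * scale * stretch

# Illustrative numbers only: a 10 mm grid square projected from 1 m onto patches
# at increasing obliquity; distortion = projected size - flat size.
for angle in (0, 15, 30, 45, 60):
    s = projected_square_size(10.0, 1000.0, 1020.0, angle)
    print(angle, round(s - 10.0, 2), "mm distortion")
```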

  20. Comprehensive Truck Size and Weight (TS&W) Study. Phase 1-synthesis, working paper 12 : energy conservation and truck size and weight regulations

    DOT National Transportation Integrated Search

    2013-01-01

    This project consisted of the development of a revision of the SAE J2735 Dedicated Short Range Communications (DSRC) Message Set Dictionary, published 2009-11-19. This revision will be submitted, at the end of this project to the Society of Automotiv...

  1. Geology of Potential Landing Sites for Martian Sample Returns

    NASA Technical Reports Server (NTRS)

    Greeley, Ronald

    2003-01-01

    This project involved the analysis of potential landing sites on Mars. As originally proposed, the project focused on landing sites from which samples might be returned to Earth. However, as the project proceeded, the emphasis shifted to missions that would not include sample return, because the Mars Exploration Program had deferred sample returns to the next decade. Subsequently, this project focused on the study of potential landing sites for the Mars Exploration Rovers.

  2. Hanford analytical sample projections FY 1998--FY 2002

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joyce, S.M.

    1998-02-12

    Analytical Services projections are compiled for the Hanford site based on inputs from the major programs for the years 1998 through 2002. Projections are categorized by radiation level, protocol, sample matrix and program. Analyses requirements are also presented. This document summarizes the Hanford sample projections for fiscal years 1998 to 2002. Sample projections are based on inputs submitted to Analytical Services covering Environmental Restoration, Tank Waste Remediation Systems (TWRS), Solid Waste, Liquid Effluents, Spent Nuclear Fuels, Transition Projects, Site Monitoring, Industrial Hygiene, Analytical Services and miscellaneous Hanford support activities. In addition, details on laboratory scale technology (development) work, Sample Management, and Data Management activities are included. This information will be used by Hanford Analytical Services (HAS) and the Sample Management Working Group (SMWG) to assure that laboratories and resources are available and effectively utilized to meet these documented needs.

  3. Effective Discharge and Annual Sediment Yield on Brazos River

    NASA Astrophysics Data System (ADS)

    Rouhnia, M.; Salehi, M.; Keyvani, A.; Ma, F.; Strom, K. B.; Raphelt, N.

    2012-12-01

    The geometry of an alluvial river changes dynamically over time due to sediment mobilization on the banks and bottom of the channel at various flow rates. Many researchers have tried to define a single representative discharge for these morphological processes, such as "bank-full discharge", "effective discharge" and "channel forming discharge". Effective discharge is the flow rate that carries the most sediment load over a long-term period. This project aimed to develop effective discharge estimates for six gaging stations along the Brazos River from Waco, TX to Rosharon, TX, and was performed in cooperation with the In-stream Flow Team of the Texas Water Development Board (TWDB). Project objectives were: 1) developing "Flow Duration Curves" for the six stations based on mean-daily discharge, using additional data downloaded from the U.S. Geological Survey website; 2) developing "Rating Curves" for the six gaging stations after sampling and field measurements in three different flow conditions; 3) developing a smooth "Sediment Yield Histogram" with a well-distinguished peak identifying the effective discharge. The effective discharge was calculated using two methods of bin selection, manual and automatic; the automatic method is based on kernel density approximation. Cross-sectional geometry measurements, particle size distributions and field water samples were processed in the laboratory to obtain the suspended sediment concentration associated with each flow rate. The rating curves showed acceptable trends: the greater the flow rate, the more sediment was carried by the water.
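
    The bin-based effective-discharge calculation described above (a flow-frequency histogram multiplied by a sediment rating curve, with the peak of the resulting sediment-yield histogram taken as the effective discharge) can be sketched in a few lines. The rating-curve coefficients and bin count below are illustrative placeholders, not values from the Brazos River study.

        # Minimal sketch of a manual-bin effective-discharge calculation.
        # Rating-curve coefficients a, b and the bin count are illustrative.
        import numpy as np

        def effective_discharge(daily_q, a=0.05, b=1.8, n_bins=25):
            counts, edges = np.histogram(daily_q, bins=n_bins)   # flow-frequency histogram
            mids = 0.5 * (edges[:-1] + edges[1:])                # representative discharge per bin
            transport = a * mids ** b                            # power-law sediment rating curve
            yield_per_bin = counts * transport                   # sediment-yield histogram
            return mids[np.argmax(yield_per_bin)], yield_per_bin

        # Example with synthetic, roughly log-normal mean-daily discharges
        rng = np.random.default_rng(0)
        q = rng.lognormal(mean=5.0, sigma=0.8, size=3650)
        q_eff, _ = effective_discharge(q)
        print(f"effective discharge ~ {q_eff:.0f} (same units as the input series)")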

  4. The cost of large numbers of hypothesis tests on power, effect size and sample size.

    PubMed

    Lazzeroni, L C; Ray, A

    2012-01-01

    Advances in high-throughput biology and computer science are driving an exponential increase in the number of hypothesis tests in genomics and other scientific disciplines. Studies using current genotyping platforms frequently include a million or more tests. In addition to the monetary cost, this increase imposes a statistical cost owing to the multiple testing corrections needed to avoid large numbers of false-positive results. To safeguard against the resulting loss of power, some have suggested sample sizes on the order of tens of thousands that can be impractical for many diseases or may lower the quality of phenotypic measurements. This study examines the relationship between the number of tests on the one hand and power, detectable effect size or required sample size on the other. We show that once the number of tests is large, power can be maintained at a constant level, with comparatively small increases in the effect size or sample size. For example at the 0.05 significance level, a 13% increase in sample size is needed to maintain 80% power for ten million tests compared with one million tests, whereas a 70% increase in sample size is needed for 10 tests compared with a single test. Relative costs are less when measured by increases in the detectable effect size. We provide an interactive Excel calculator to compute power, effect size or sample size when comparing study designs or genome platforms involving different numbers of hypothesis tests. The results are reassuring in an era of extreme multiple testing.
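
    The scaling described above can be reproduced with a Bonferroni-adjusted two-sided z approximation for a standardized effect. This is a sketch of the comparison made in the abstract, not the authors' Excel calculator; the effect size of 0.2 is an arbitrary placeholder (the ratios do not depend on it).

        # Required sample size vs. number of hypothesis tests, using a
        # Bonferroni-adjusted two-sided z approximation for a standardized effect.
        from scipy.stats import norm

        def n_required(n_tests, effect=0.2, alpha=0.05, power=0.80):
            z_a = norm.ppf(1.0 - alpha / (2.0 * n_tests))   # adjusted critical value
            z_b = norm.ppf(power)
            return (z_a + z_b) ** 2 / effect ** 2

        base = n_required(1)
        print(f"10 tests vs 1 test:      {n_required(10) / base - 1.0:+.0%}")              # ~ +70%
        print(f"10M tests vs 1M tests:   {n_required(1e7) / n_required(1e6) - 1.0:+.0%}")  # ~ +13%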

  5. Assessment of microphysical and chemical factors of aerosols over seas of the Russian Artic Eastern Section

    NASA Astrophysics Data System (ADS)

    Golobokova, Liudmila; Polkin, Victor

    2014-05-01

    The recent start of development along the Northern Sea Route has drawn serious attention to the state of the Arctic environment. Ongoing climatic and environmental changes are felt particularly strongly in polar areas. Monitoring of the air environment allows prognostic assessments required for planning actions to prevent hazardous environmental impacts. In August - September 2013, an expeditionary voyage of the RV «Professor Khlustin» took place along the Northern Sea Route. Aerosol sampling was carried out en route over the Bering, Chukchi and East Siberian seas (as far as the town of Pevek). The purpose of the sampling was to assess the spatio-temporal variability of the optical, microphysical and chemical characteristics of aerosol particles in the surface layer within different areas adjacent to the Northern Sea Route. The aerosol measurements used an automated mobile unit consisting of a photoelectric particle counter AZ-10, an aethalometer MDA-02, an aspirator on an NBM-1.2 pump chassis, and an impactor. This set of equipment allows measurements of number concentration, the dispersed composition of aerosols in the size range d = 0.3-10 µm, the mass concentration of submicron aerosol, and aerosol sampling onto filters. Filter sampling followed the method accepted by the EMEP and EANET monitoring networks. The impactor channel was upgraded to separate particles larger than 1 µm in size, with the fine fraction collected downstream. Backward 5-day and 10-day trajectories of air mass transfer at heights of 10, 1500 and 3500 m were analyzed; the heights were selected on the basis that 3000 m characterizes air mass motion in the lower troposphere, 1500 m is the upper boundary of the atmospheric boundary layer, and sampling was done in the surface layer at less than 10 m. Minimum values of the measured microphysical characteristics are typical of higher latitudes, where there are no anthropogenic aerosol sources and natural sources are weaker due to the low temperatures characteristic of the Arctic Ocean. To assess the chemical composition of the air masses, the water-soluble fraction of the atmospheric aerosol was analyzed chemically. The sum of the main ions in the aerosol varied from 0.23 to 16.2 µg/m3. Minimum ion concentrations were found in aerosol sampled over the Chukchi Sea in calm conditions. The chemical composition of the Bering and Chukchi sea aerosol was dominated by impurities of marine origin arriving from the ocean with the air masses. Increased ion sums were observed in the Pevek area (East Siberian Sea), where the aerosol chemical composition was influenced by air masses arriving from the shore. Maximum concentrations of the measured components were seen in aerosol sampled during stormy weather, when stronger winds raised particles of marine origin into the air; ingestion of spray onto the filter was prevented by covering the sample collector with a protective hood. The survey indicates a favourable state of the atmosphere over the seas of the Russian Arctic Eastern Section during the summer-autumn season of 2013. The work was done with financial support of Project 23 of the Programs of Fundamental Research of the RAS Presidium and Partnership Integration Project 25 of SB RAS.

  6. The Statistics and Mathematics of High Dimension Low Sample Size Asymptotics.

    PubMed

    Shen, Dan; Shen, Haipeng; Zhu, Hongtu; Marron, J S

    2016-10-01

    The aim of this paper is to establish several deep theoretical properties of principal component analysis for multiple-component spike covariance models. Our new results reveal an asymptotic conical structure in critical sample eigendirections under the spike models with distinguishable (or indistinguishable) eigenvalues, when the sample size and/or the number of variables (or dimension) tend to infinity. The consistency of the sample eigenvectors relative to their population counterparts is determined by the ratio between the dimension and the product of the sample size with the spike size. When this ratio converges to a nonzero constant, the sample eigenvector converges to a cone, with a certain angle to its corresponding population eigenvector. In the High Dimension, Low Sample Size case, the angle between the sample eigenvector and its population counterpart converges to a limiting distribution. Several generalizations of the multi-spike covariance models are also explored, and additional theoretical results are presented.

  7. Estimation and control of droplet size and frequency in projected spray mode of a gas metal arc welding (GMAW) process.

    PubMed

    Anzehaee, Mohammad Mousavi; Haeri, Mohammad

    2011-07-01

    New estimators are designed based on the modified force balance model to estimate the detaching droplet size, detached droplet size, and mean value of droplet detachment frequency in a gas metal arc welding process. The proper droplet size for the process to be in the projected spray transfer mode is determined based on the modified force balance model and the designed estimators. Finally, the droplet size and the melting rate are controlled using two proportional-integral (PI) controllers to achieve high weld quality by retaining the transfer mode and generating appropriate signals as inputs of the weld geometry control loop. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.

  8. Influence of item distribution pattern and abundance on efficiency of benthic core sampling

    USGS Publications Warehouse

    Behney, Adam C.; O'Shaughnessy, Ryan; Eichholz, Michael W.; Stafford, Joshua D.

    2014-01-01

    Core sampling is a commonly used method to estimate benthic item density, but little information exists about factors influencing the accuracy and time-efficiency of this method. We simulated core sampling in a Geographic Information System framework by generating points (benthic items) and polygons (core samplers) to assess how sample size (number of core samples), core sampler size (cm2), distribution of benthic items, and item density affected the bias and precision of estimates of density, the detection probability of items, and the time-costs. When items were distributed randomly versus clumped, bias decreased and precision increased with increasing sample size and increased slightly with increasing core sampler size. Bias and precision were only affected by benthic item density at very low values (500–1,000 items/m2). Detection probability (the probability of capturing ≥ 1 item in a core sample if it is available for sampling) was substantially greater when items were distributed randomly as opposed to clumped. Taking more small diameter core samples was always more time-efficient than taking fewer large diameter samples. We are unable to present a single, optimal sample size, but provide information for researchers and managers to derive optimal sample sizes dependent on their research goals and environmental conditions.
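
    A minimal Monte Carlo version of the simulation idea described above (random versus clumped item distributions, detection probability of a single core) is sketched below. It is not the authors' GIS workflow, and the density, core area, and clustering parameters are illustrative.

        # Monte Carlo sketch: detection probability of one circular core sample
        # when benthic items are scattered at random vs. clumped around parents.
        import numpy as np

        rng = np.random.default_rng(1)

        def detection_probability(density, core_cm2=50.0, clumped=False, reps=2000):
            r = np.sqrt(core_cm2 * 1e-4 / np.pi)        # core radius in metres
            hits = 0
            for _ in range(reps):
                n = rng.poisson(density)                # items in a 1 m x 1 m plot
                if clumped:
                    parents = rng.uniform(0, 1, size=(max(n // 50, 1), 2))
                    idx = rng.integers(0, len(parents), size=n)
                    pts = parents[idx] + rng.normal(0, 0.03, size=(n, 2))  # tight clusters
                else:
                    pts = rng.uniform(0, 1, size=(n, 2))
                centre = rng.uniform(0, 1, size=2)
                if n and (np.hypot(*(pts - centre).T) < r).any():
                    hits += 1
            return hits / reps

        for pattern in (False, True):
            label = "clumped" if pattern else "random"
            print(label, detection_probability(1000, clumped=pattern))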

  9. Evaluation of OMNIgene®•SPUTUM-stabilised sputum for long-term transport and Xpert® MTB/RIF testing in Nepal.

    PubMed

    Maharjan, B; Kelly-Cirino, C D; Weirich, A; Curry, P S; Hoffman, H; Avsar, K; Shrestha, B

    2016-12-01

    German Nepal TB Project, National Tuberculosis Reference Laboratory, Kathmandu, Nepal. To evaluate whether transporting samples in OMNIgene®•SPUTUM (OM-S) reagent from a peripheral collection site to a central laboratory in Nepal can improve tuberculosis (TB) detection and increase the sensitivity of Xpert® MTB/RIF testing. One hundred sputum samples were split manually. Each portion was assigned to the OM-S group (OM-S added at collection, airline-couriered without cold chain, no other processing required) or the standard-of-care (SOC) group (samples airline-couriered on ice, sodium hydroxide + N-acetyl-L-cysteine processing required at the laboratory). Smear microscopy and Xpert testing were performed. Transport time was 2-13 days. Overall smear results were comparable (respectively 58% and 56% smear-negative results in the OM-S and SOC groups). The rate of smear-positive, Mycobacterium tuberculosis-positive (MTB+) sample detection was identical for both treatment groups, at 95%. More smear-negative MTB+ samples were detected in the OM-S group (17% vs. 13%, P = 0.0655). Sputum samples treated with OM-S can undergo multiday ambient-temperature transport and yield comparable smear and Xpert results to those of SOC samples. Further investigation with larger sample sizes is required to assess whether treating sputum samples with OM-S could increase the sensitivity of Xpert testing in smear-negative samples.

  10. Effects of Sample Selection Bias on the Accuracy of Population Structure and Ancestry Inference

    PubMed Central

    Shringarpure, Suyash; Xing, Eric P.

    2014-01-01

    Population stratification is an important task in genetic analyses. It provides information about the ancestry of individuals and can be an important confounder in genome-wide association studies. Public genotyping projects have made a large number of datasets available for study. However, practical constraints dictate that of a geographical/ethnic population, only a small number of individuals are genotyped. The resulting data are a sample from the entire population. If the distribution of sample sizes is not representative of the populations being sampled, the accuracy of population stratification analyses of the data could be affected. We attempt to understand the effect of biased sampling on the accuracy of population structure analysis and individual ancestry recovery. We examined two commonly used methods for analyses of such datasets, ADMIXTURE and EIGENSOFT, and found that the accuracy of recovery of population structure is affected to a large extent by the sample used for analysis and how representative it is of the underlying populations. Using simulated data and real genotype data from cattle, we show that sample selection bias can affect the results of population structure analyses. We develop a mathematical framework for sample selection bias in models for population structure and also propose a correction for sample selection bias using auxiliary information about the sample. We demonstrate that such a correction is effective in practice using simulated and real data. PMID:24637351

  11. Moment and maximum likelihood estimators for Weibull distributions under length- and area-biased sampling

    Treesearch

    Jeffrey H. Gove

    2003-01-01

    Many of the most popular sampling schemes used in forestry are probability proportional to size methods. These methods are also referred to as size biased because sampling is actually from a weighted form of the underlying population distribution. Length- and area-biased sampling are special cases of size-biased sampling where the probability weighting comes from a...

  12. Development of optics with micro-LED arrays for improved opto-electronic neural stimulation

    NASA Astrophysics Data System (ADS)

    Chaudet, Lionel; Neil, Mark; Degenaar, Patrick; Mehran, Kamyar; Berlinguer-Palmini, Rolando; Corbet, Brian; Maaskant, Pleun; Rogerson, David; Lanigan, Peter; Bamberg, Ernst; Roska, Botond

    2013-03-01

    The breakthrough discovery of a nanoscale optically gated ion channel protein, Channelrhodopsin 2 (ChR2), and its combination with a genetically expressed ion pump, Halorhodopsin, allowed the direct stimulation and inhibition of individual action potentials with light alone. This work reports developments of ultra-bright electronically controlled optical array sources with enhanced light gated ion channels and pumps for use in systems to further our understanding of both brain and visual function. This work is undertaken as part of the European project, OptoNeuro. Micro-LED arrays permit spatio-temporal control of neuron stimulation on sub-millisecond timescales. However, they are disadvantaged by their broad spatial light emission distribution and low fill factor. We present the design and implementation of a projection and micro-optics system for use with a micro-LED array consisting of a 16x16 matrix of 25 μm diameter micro-LEDs with 150 μm centre-to-centre spacing and an emission spectrum centred at 470 nm overlapping the peak sensitivity of ChR2, and its testing on biological samples. The projection system images the micro-LED array onto micro-optics to improve the fill-factor from ~2% to more than 78% by capturing a larger fraction of the LED emission and directing it correctly to the sample plane. This approach allows low fill factor arrays to be used effectively, which in turn has benefits in terms of thermal management and electrical drive from CMOS backplane electronics. The entire projection system is integrated into a microscope prototype to provide stimulation spots of about the same size as the neuron cell body (≈10 μm).
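
    The quoted native fill factor can be checked with a one-line geometric calculation, assuming each 25 μm emitter sits in its own 150 μm x 150 μm cell of the 16x16 array (an assumption, since the exact pixel geometry is not stated above).

        # Quick check of the ~2% native fill factor quoted above.
        import math

        emitter_area = math.pi * (25e-6 / 2.0) ** 2    # one 25 um diameter micro-LED
        cell_area = (150e-6) ** 2                      # one 150 um x 150 um array cell
        print(f"native fill factor ~ {emitter_area / cell_area:.1%}")   # ~2.2%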

  13. Covariance Matrix Estimation for the Cryo-EM Heterogeneity Problem*

    PubMed Central

    Katsevich, E.; Katsevich, A.; Singer, A.

    2015-01-01

    In cryo-electron microscopy (cryo-EM), a microscope generates a top view of a sample of randomly oriented copies of a molecule. The problem of single particle reconstruction (SPR) from cryo-EM is to use the resulting set of noisy two-dimensional projection images taken at unknown directions to reconstruct the three-dimensional (3D) structure of the molecule. In some situations, the molecule under examination exhibits structural variability, which poses a fundamental challenge in SPR. The heterogeneity problem is the task of mapping the space of conformational states of a molecule. It has been previously suggested that the leading eigenvectors of the covariance matrix of the 3D molecules can be used to solve the heterogeneity problem. Estimating the covariance matrix is challenging, since only projections of the molecules are observed, but not the molecules themselves. In this paper, we formulate a general problem of covariance estimation from noisy projections of samples. This problem has intimate connections with matrix completion problems and high-dimensional principal component analysis. We propose an estimator and prove its consistency. When there are finitely many heterogeneity classes, the spectrum of the estimated covariance matrix reveals the number of classes. The estimator can be found as the solution to a certain linear system. In the cryo-EM case, the linear operator to be inverted, which we term the projection covariance transform, is an important object in covariance estimation for tomographic problems involving structural variation. Inverting it involves applying a filter akin to the ramp filter in tomography. We design a basis in which this linear operator is sparse and thus can be tractably inverted despite its large size. We demonstrate via numerical experiments on synthetic datasets the robustness of our algorithm to high levels of noise. PMID:25699132

  14. The NASA Bed Rest Project

    NASA Technical Reports Server (NTRS)

    Rhodes, Bradley; Meck, Janice

    2005-01-01

    NASA's National Vision for Space Exploration includes human travel beyond low earth orbit and the ultimate safe return of the crews. Crucial to fulfilling the vision is the successful and timely development of countermeasures for the adverse physiological effects on human systems caused by long term exposure to the microgravity environment. Limited access to in-flight resources for the foreseeable future increases NASA's reliance on ground-based analogs to simulate these effects of microgravity. The primary analog for human based research will be head-down bed rest. By this approach NASA will be able to evaluate countermeasures in large sample sizes, perform preliminary evaluations of proposed in-flight protocols and assess the utility of individual or combined strategies before flight resources are requested. In response to this critical need, NASA has created the Bed Rest Project at the Johnson Space Center. The Project establishes the infrastructure and processes to provide a long term capability for standardized domestic bed rest studies and countermeasure development. The Bed Rest Project design takes a comprehensive, interdisciplinary, integrated approach that reduces the resource overhead of one investigator for one campaign. In addition to integrating studies operationally relevant for exploration, the Project addresses other new Vision objectives, namely: 1) interagency cooperation with the NIH allows for Clinical Research Center (CRC) facility sharing to the benefit of both agencies, 2) collaboration with our International Partners expands countermeasure development opportunities for foreign and domestic investigators as well as promotes consistency in approach and results, 3) to the greatest degree possible, the Project also advances research by clinicians and academia alike to encourage return to earth benefits. This paper will describe the Project's top level goals, organization and relationship to other Exploration Vision Projects, implementation strategy, address Project deliverables, schedules and provide a status of bed rest campaigns presently underway.

  15. Optimal spatial sampling techniques for ground truth data in microwave remote sensing of soil moisture

    NASA Technical Reports Server (NTRS)

    Rao, R. G. S.; Ulaby, F. T.

    1977-01-01

    The paper examines optimal sampling techniques for obtaining accurate spatial averages of soil moisture, at various depths and for cell sizes in the range 2.5-40 acres, with a minimum number of samples. Both simple random sampling and stratified sampling procedures are used to reach a set of recommended sample sizes for each depth and for each cell size. Major conclusions from statistical sampling test results are that (1) the number of samples required decreases with increasing depth; (2) when the total number of samples cannot be prespecified or the moisture in only one single layer is of interest, then a simple random sample procedure should be used which is based on the observed mean and SD for data from a single field; (3) when the total number of samples can be prespecified and the objective is to measure the soil moisture profile with depth, then stratified random sampling based on optimal allocation should be used; and (4) decreasing the sensor resolution cell size leads to fairly large decreases in sample sizes with stratified sampling procedures, whereas only a moderate decrease is obtained in simple random sampling procedures.
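
    A hedged sketch of the two designs compared above is given below: a simple-random-sample size computed from an observed field mean and SD, and Neyman (optimal) allocation of a fixed total across depth strata. The soil-moisture SDs, margin of error, and stratum weights are illustrative, not values from the study.

        # Simple random sample size vs. Neyman (optimal) allocation across strata.
        import math
        from scipy.stats import norm

        def srs_sample_size(sd, margin, conf=0.95):
            """n so the mean is within +/- margin at the given confidence level."""
            z = norm.ppf(0.5 + conf / 2.0)
            return math.ceil((z * sd / margin) ** 2)

        def neyman_allocation(total_n, stratum_sds, stratum_weights):
            """Split total_n across strata in proportion to W_h * S_h."""
            products = [w * s for w, s in zip(stratum_weights, stratum_sds)]
            scale = total_n / sum(products)
            return [round(p * scale) for p in products]

        print(srs_sample_size(sd=4.0, margin=1.0))                      # single-layer design
        print(neyman_allocation(30, stratum_sds=[5.0, 3.0, 1.5],        # surface layer most variable
                                stratum_weights=[1 / 3, 1 / 3, 1 / 3]))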

  16. Influence of fuel injection timing and pressure on in-flame soot particles in an automotive-size diesel engine.

    PubMed

    Zhang, Renlin; Kook, Sanghoon

    2014-07-15

    The current understanding of soot particle morphology in diesel engines and its dependency on the fuel injection timing and pressure is limited to particles sampled from the exhaust. In this study, thermophoretic sampling and subsequent transmission electron microscope imaging were applied to the in-flame soot particles inside the cylinder of a working diesel engine for various fuel injection timings and pressures. The results show that the number count of soot particles per image decreases by more than 80% when the injection timing is retarded from -12 to -2 crank angle degrees after top dead center. The late injection also results in over 90% reduction of the projection area of soot particles on the TEM image, and the size of soot aggregates also becomes smaller. The primary particle size, however, is found to be insensitive to variations in fuel injection timing. For injection pressure variations, both the size of primary particles and that of soot aggregates are found to decrease with increasing injection pressure, demonstrating the benefits of high injection velocity and momentum. Detailed analysis shows that the number count of soot particles per image increases with increasing injection pressure up to 130 MPa, primarily due to the increased number of small particle aggregates that are less than 40 nm in radius of gyration. The fractal dimension shows an overall decrease with increasing injection pressure. However, in one case the fractal dimension shows an unexpected increase between 100 and 130 MPa injection pressure, because the small aggregates with more compact and agglomerated structures outnumber the large aggregates with more stretched, chain-like structures.

  17. High-resolution Imaging of PHIBSS z ~ 2 Main-sequence Galaxies in CO J = 1 → 0

    NASA Astrophysics Data System (ADS)

    Bolatto, A. D.; Warren, S. R.; Leroy, A. K.; Tacconi, L. J.; Bouché, N.; Förster Schreiber, N. M.; Genzel, R.; Cooper, M. C.; Fisher, D. B.; Combes, F.; García-Burillo, S.; Burkert, A.; Bournaud, F.; Weiss, A.; Saintonge, A.; Wuyts, S.; Sternberg, A.

    2015-08-01

    We present Karl Jansky Very Large Array observations of the CO J = 1-0 transition in a sample of four z ~ 2 main-sequence galaxies. These galaxies are in the blue sequence of star-forming galaxies at their redshift, and are part of the IRAM Plateau de Bure HIgh-z Blue Sequence Survey which imaged them in CO J = 3-2. Two galaxies are imaged here at high signal-to-noise, allowing determinations of their disk sizes, line profiles, molecular surface densities, and excitation. Using these and published measurements, we show that the CO and optical disks have similar sizes in main-sequence galaxies, and in the galaxy where we can compare CO J = 1-0 and J = 3-2 sizes we find these are also very similar. Assuming a Galactic CO-to-H2 conversion, we measure surface densities of Σ_mol ~ 1200 M⊙ pc^-2 in projection and estimate Σ_mol ~ 500-900 M⊙ pc^-2 deprojected. Finally, our data yield velocity-integrated Rayleigh-Jeans brightness temperature line ratios r31 that are approximately at unity. In addition to the similar disk sizes, the very similar line profiles in J = 1-0 and J = 3-2 indicate that both transitions sample the same kinematics, implying that their emission is coextensive. We conclude that in these two main-sequence galaxies there is no evidence for significant excitation gradients or a large molecular reservoir that is diffuse or cold and not involved in active star formation. We suggest that r31 in very actively star-forming galaxies is likely an indicator of how well-mixed the star formation activity and the molecular reservoir are.

  18. SDSS-IV MaNGA: global stellar population and gradients for about 2000 early-type and spiral galaxies on the mass-size plane

    NASA Astrophysics Data System (ADS)

    Li, Hongyu; Mao, Shude; Cappellari, Michele; Ge, Junqiang; Long, R. J.; Li, Ran; Mo, H. J.; Li, Cheng; Zheng, Zheng; Bundy, Kevin; Thomas, Daniel; Brownstein, Joel R.; Roman Lopes, Alexandre; Law, David R.; Drory, Niv

    2018-05-01

    We perform full spectrum fitting stellar population analysis and Jeans Anisotropic modelling of the stellar kinematics for about 2000 early-type galaxies (ETGs) and spiral galaxies from the MaNGA DR14 sample. Galaxies with different morphologies are found to be located on a remarkably tight mass plane which is close to the prediction of the virial theorem, extending previous results for ETGs. By examining an inclined projection (the 'mass-size' plane), we find that spiral and early-type galaxies occupy different regions on the plane, and their stellar population properties (i.e. age, metallicity, and stellar mass-to-light ratio) vary systematically along roughly the direction of velocity dispersion, which is a proxy for the bulge fraction. Galaxies with higher velocity dispersions have typically older ages, larger stellar mass-to-light ratios and are more metal rich, which indicates that galaxies increase their bulge fractions as their stellar populations age and become enriched chemically. The age and stellar mass-to-light ratio gradients for low-mass galaxies in our sample tend to be positive (centre < outer), while the gradients for most massive galaxies are negative. The metallicity gradients show a clear peak around velocity dispersion log10 σe ≈ 2.0, which corresponds to the critical mass ~3 × 10^10 M⊙ of the break in the mass-size relation. Spiral galaxies with large mass and size have the steepest gradients, while the most massive ETGs, especially above the critical mass Mcrit ≳ 2 × 10^11 M⊙, where slow rotator ETGs start dominating, have much flatter gradients. This may be due to differences in their evolution histories, e.g. mergers.

  19. Visual Sample Plan Version 7.0 User's Guide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matzke, Brett D.; Newburn, Lisa LN; Hathaway, John E.

    2014-03-01

    User's guide for VSP 7.0. This user's guide describes Visual Sample Plan (VSP) Version 7.0 and provides instructions for using the software. VSP selects the appropriate number and location of environmental samples to ensure that the results of statistical tests performed to provide input to risk decisions have the required confidence and performance. VSP Version 7.0 provides sample-size equations or algorithms needed by specific statistical tests appropriate for specific environmental sampling objectives. It also provides data quality assessment and statistical analysis functions to support evaluation of the data and determine whether the data support decisions regarding sites suspected of contamination. The easy-to-use program is highly visual and graphic. VSP runs on personal computers with Microsoft Windows operating systems (XP, Vista, Windows 7, and Windows 8). Designed primarily for project managers and users without expertise in statistics, VSP is applicable to two- and three-dimensional populations to be sampled (e.g., rooms and buildings, surface soil, a defined layer of subsurface soil, water bodies, and other similar applications) for studies of environmental quality. VSP is also applicable for designing sampling plans for assessing chem/rad/bio threat and hazard identification within rooms and buildings, and for designing geophysical surveys for unexploded ordnance (UXO) identification.

  20. Effects of sample size on estimates of population growth rates calculated with matrix models.

    PubMed

    Fiske, Ian J; Bruna, Emilio M; Bolker, Benjamin M

    2008-08-28

    Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda: Jensen's Inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated if sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more-realistic inverse J-shaped population structure exacerbated this bias. However, our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities.
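
    The sampling-variance effect described above can be illustrated with a small simulation in the same spirit, though not with the authors' data or matrix: estimate survival from binomial samples of increasing size, rebuild a three-stage matrix, and compare the mean estimated lambda with the true value. The matrix entries below are illustrative placeholders.

        # Illustrative simulation of sampling-variance effects on lambda estimates
        # from a small stage-structured matrix; entries are NOT from the cited study.
        import numpy as np

        rng = np.random.default_rng(2)
        true_surv = np.array([0.5, 0.5, 0.5])      # low survival, where bias was largest
        fecundity = np.array([0.0, 1.2, 2.0])

        def build_matrix(surv):
            a = np.zeros((3, 3))
            a[0, :] = fecundity                    # reproduction into stage 1
            a[1, 0], a[2, 1] = surv[0], surv[1]    # growth transitions
            a[2, 2] = surv[2]                      # stasis in the final stage
            return a

        def lam(a):
            return np.max(np.real(np.linalg.eigvals(a)))   # dominant eigenvalue

        true_lambda = lam(build_matrix(true_surv))
        for n in (10, 50, 500):                    # individuals sampled per stage
            est = [lam(build_matrix(rng.binomial(n, true_surv) / n)) for _ in range(2000)]
            print(f"n={n:4d}  mean lambda {np.mean(est):.3f}  vs true {true_lambda:.3f}")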

  1. Particle mobility size spectrometers: harmonization of technical standards and data structure to facilitate high quality long-term observations of atmospheric particle number size distributions

    NASA Astrophysics Data System (ADS)

    Wiedensohler, A.; Birmili, W.; Nowak, A.; Sonntag, A.; Weinhold, K.; Merkel, M.; Wehner, B.; Tuch, T.; Pfeifer, S.; Fiebig, M.; Fjäraa, A. M.; Asmi, E.; Sellegri, K.; Depuy, R.; Venzac, H.; Villani, P.; Laj, P.; Aalto, P.; Ogren, J. A.; Swietlicki, E.; Roldin, P.; Williams, P.; Quincey, P.; Hüglin, C.; Fierz-Schmidhauser, R.; Gysel, M.; Weingartner, E.; Riccobono, F.; Santos, S.; Grüning, C.; Faloon, K.; Beddows, D.; Harrison, R. M.; Monahan, C.; Jennings, S. G.; O'Dowd, C. D.; Marinoni, A.; Horn, H.-G.; Keck, L.; Jiang, J.; Scheckman, J.; McMurry, P. H.; Deng, Z.; Zhao, C. S.; Moerman, M.; Henzing, B.; de Leeuw, G.

    2010-12-01

    Particle mobility size spectrometers, often referred to as DMPS (Differential Mobility Particle Sizers) or SMPS (Scanning Mobility Particle Sizers), have found a wide application in atmospheric aerosol research. However, comparability of measurements conducted world-wide is hampered by a lack of generally accepted technical standards with respect to the instrumental set-up, measurement mode, data evaluation and quality control. This article results from several instrument intercomparison workshops conducted within the European infrastructure project EUSAAR (European Supersites for Atmospheric Aerosol Research). Under controlled laboratory conditions, the number size distributions from 20 to 200 nm determined by mobility size spectrometers of different design are within an uncertainty range of ±10% after correcting internal particle losses, while below and above this size range the discrepancies increased. Instruments with identical design agreed within ±3% in the peak number concentration when all settings were done carefully. Technical standards were developed for a minimum requirement of mobility size spectrometry for atmospheric aerosol measurements. Technical recommendations are given for atmospheric measurements, including continuous monitoring of flow rates, temperature, pressure, and relative humidity for the sheath and sample air in the differential mobility analyser. In cooperation with EMEP (European Monitoring and Evaluation Program), a new uniform data structure was introduced for saving and disseminating the data within EMEP. This structure contains three levels: raw data, processed data, and final particle size distributions. Importantly, we recommend reporting raw measurements including all relevant instrument parameters as well as a complete documentation of all data transformation and correction steps. These technical and data structure standards aim to enhance the quality of long-term size distribution measurements, their comparability between different networks and sites, and their transparency and traceability back to raw data.

  2. The use of reconsent in a national evaluation of adolescent reproductive health programs.

    PubMed

    Palen, Lori-Ann; Ashley, Olivia Silber; Jones, Sarah B; Lyons, Jeffrey D; Derecho, Azucena A; Kan, Marni L; Richmond Scott, Alicia

    2012-08-01

    Reconsent involves asking research participants to reaffirm their consent for study participation when there have been significant changes in the study's procedures, risks, or benefits. We described the reconsent process, identified the reconsent rate, and examined the comparability of youths enrolled via consent and reconsent in a national evaluation of adolescent reproductive health programs. Evaluation participants from five abstinence education projects (N = 2,176) and nine projects serving pregnant or parenting adolescents (N = 878) provided either parent or youth consent or reconsent to participate in the national evaluation. Participants completed surveys that included demographic characteristics; sexual intentions, norms and behaviors; and pregnancy history. Multivariate logistic regression was used to examine associations between consent status, demographic characteristics, and risk indicators. The reconsent rates in the abstinence education and pregnant or parenting samples were 45% and 58%, respectively. Participant's age was positively associated with reconsent. Hispanic adolescents (and, for abstinence education, other racial/ethnic minorities) were underrepresented among youth with reconsent. Among abstinence education study participants, risk indicators were not associated with consent status. Among pregnant or parenting teens, those who had experienced repeat pregnancy were less likely than those who had experienced only one pregnancy to have been enrolled via reconsent. Reconsent can bolster sample size but may introduce bias by missing some racial/ethnic and age-groups. Among high-risk adolescents, reconsent may also yield a sample that differs from consented samples on risk characteristics, necessitating statistical adjustments when analyzing data. Copyright © 2012 Society for Adolescent Health and Medicine. All rights reserved.

  3. The AMIGA sample of isolated galaxies. IV. A catalogue of neighbours around isolated galaxies

    NASA Astrophysics Data System (ADS)

    Verley, S.; Odewahn, S. C.; Verdes-Montenegro, L.; Leon, S.; Combes, F.; Sulentic, J.; Bergond, G.; Espada, D.; García, E.; Lisenfeld, U.; Sabater, J.

    2007-08-01

    Context: Studies of the effects of environment on galaxy properties and evolution require well defined control samples. Such isolated galaxy samples have up to now been small or poorly defined. The AMIGA project (Analysis of the interstellar Medium of Isolated GAlaxies) represents an attempt to define a statistically useful sample of the most isolated galaxies in the local (z ≤ 0.05) Universe. Aims: A suitable large sample for the AMIGA project already exists, the Catalogue of Isolated Galaxies (CIG, Karachentseva, 1973, Astrofizicheskie Issledovaniia Izvestiya Spetsial'noj Astrofizicheskoj Observatorii, 8, 3; 1050 galaxies), and we use this sample as a starting point to refine and perform a better quantification of its isolation properties. Methods: Digitised POSS-I E images were analysed out to a minimum projected radius R ≥ 0.5 Mpc around 950 CIG galaxies (those within Vr = 1500 km s-1 were excluded). We identified all galaxy candidates in each field brighter than B = 17.5 with a high degree of confidence using the LMORPHO software. We generated a catalogue of approximately 54 000 potential neighbours (redshifts exist for ≈30% of this sample). Results: Six hundred sixty-six galaxies pass and two hundred eighty-four fail the original CIG isolation criterion. The available redshift data confirm that our catalogue involves a largely background population rather than physically associated neighbours. We find that the exclusion of neighbours within a factor of four in size around each CIG galaxy, employed in the original isolation criterion, corresponds to Δ Vr ≈ 18 000 km s-1 indicating that it was a conservative limit. Conclusions: Galaxies in the CIG have been found to show different degrees of isolation. We conclude that a quantitative measure of this is mandatory. It will be the subject of future work based on the catalogue of neighbours obtained here. Full Table [see full text] is only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/470/505 and from http://www.iaa.es/AMIGA.html. Figure 4 is only available in electronic form at http://www.aanda.org

  4. An Experimental Study of Upward Burning Over Long Solid Fuels: Facility Development and Comparison

    NASA Technical Reports Server (NTRS)

    Kleinhenz, Julie; Yuan, Zeng-Guang

    2011-01-01

    As NASA's mission evolves, new spacecraft and habitat environments necessitate expanded study of materials flammability. Most of the upward burning tests to date, including those with the NASA standard material screening method NASA-STD-6001, have been conducted in small chambers where the flame often terminates before a steady state flame is established. In real environments, the same limitations may not be present. The use of long fuel samples allows flames to proceed in an unhindered manner. In order to explore sample size and chamber size effects, two large chambers were developed at NASA GRC under the Flame Prevention, Detection and Suppression (FPDS) project. The first was an existing vacuum facility, VF-13, located at NASA John Glenn Research Center. This 6350 liter chamber could accommodate fuel sample lengths up to 2 m. However, operational costs and restricted accessibility limited the test program, so a second laboratory-scale facility was developed in parallel. By stacking two additional chambers on top of an existing combustion chamber, this 81 liter stacked-chamber facility could accommodate a 1.5 m sample length. The larger volume and more ideal environment of VF-13 were used to obtain baseline data for comparison with the stacked-chamber facility. In this way, the stacked-chamber facility was intended for long term testing, with VF-13 as the proving ground. Four different solid fuels (adding machine paper, poster paper, PMMA plates, and Nomex fabric) were tested with fuel sample lengths up to 2 m. For thin samples (papers) with widths up to 5 cm, the flame reached a steady state length, which demonstrates that flame length may be stabilized even when the edge effects are reduced. For the thick PMMA plates, flames reached lengths up to 70 cm but were highly energetic and restricted by oxygen depletion. Tests with the Nomex fabric confirmed that the cyclic flame phenomenon observed in small-facility tests continued over the longer samples. New features were also observed at the higher oxygen/pressure conditions available in the large chamber. Comparison of flame behavior between the two facilities under identical conditions revealed disparities, both qualitative and quantitative. This suggests that, in certain ranges of controlling parameters, chamber size and shape could be among the parameters that affect material flammability. If this proves to be true, it may limit the applicability of existing flammability data.

  5. Effective population size of korean populations.

    PubMed

    Park, Leeyoung

    2014-12-01

    Recently, new methods have been developed for estimating the current and recent changes in effective population sizes. Based on the methods, the effective population sizes of Korean populations were estimated using data from the Korean Association Resource (KARE) project. The overall changes in the population sizes of the total populations were similar to CHB (Han Chinese in Beijing, China) and JPT (Japanese in Tokyo, Japan) of the HapMap project. There were no differences in past changes in population sizes with a comparison between an urban area and a rural area. Age-dependent current and recent effective population sizes represent the modern history of Korean populations, including the effects of World War II, the Korean War, and urbanization. The oldest age group showed that the population growth of Koreans had already been substantial at least since the end of the 19th century.

  6. Chemically intuited, large-scale screening of MOFs by machine learning techniques

    NASA Astrophysics Data System (ADS)

    Borboudakis, Giorgos; Stergiannakos, Taxiarchis; Frysali, Maria; Klontzas, Emmanuel; Tsamardinos, Ioannis; Froudakis, George E.

    2017-10-01

    A novel computational methodology for large-scale screening of MOFs is applied to gas storage with the use of machine learning technologies. This approach is a promising trade-off between the accuracy of ab initio methods and the speed of classical approaches, strategically combined with chemical intuition. The results demonstrate that the chemical properties of MOFs are indeed predictable (stochastically, not deterministically) using machine learning methods and automated analysis protocols, with the accuracy of predictions increasing with sample size. Our initial results indicate that this methodology is promising to apply not only to gas storage in MOFs but in many other material science projects.

  7. Composite structural materials

    NASA Technical Reports Server (NTRS)

    Ansell, G. S.; Loewy, R. G.; Wiberley, S. E.

    1984-01-01

    Progress is reported in studies of constituent materials, composite materials, generic structural elements, processing science and technology, and the maintenance of long-term structural integrity. Topics discussed include: mechanical properties of high performance carbon fibers; fatigue in composite materials; experimental and theoretical studies of moisture and temperature effects on the mechanical properties of graphite-epoxy laminates and neat resins; numerical investigations of the micromechanics of composite fracture; delamination failures of composite laminates; effect of notch size on composite laminates; improved beam theory for anisotropic materials; variation of resin properties through the thickness of cured samples; numerical analysis of composite processing; heat treatment of metal matrix composites; and the RP-1 and RP-2 gliders of the sailplane project.

  8. Music therapy and the effects on laboring women.

    PubMed

    Robinson, Amber

    2002-01-01

    Wiand's (1997) study supported the use of music therapy to decrease pain and anxiety. The results of this study could be used to support a research utilization project to educate nurses on the potential benefits of music therapy among laboring women. Nurses and physicians could collaborate together to educate clients on music therapy to decrease pain and anxiety. Feasibility issues would include education of the nurses to use music therapy and the cost of developing different types of music. Future research could be done to study a larger sample size. Other research is needed to determine what type of music works best with laboring women.

  9. 76 FR 56141 - Notice of Intent To Request New Information Collection

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-12

    ... level surveys of similar scope and size. The sample for each selected community will be strategically... of 2 hours per sample community. Full Study: The maximum sample size for the full study is 2,812... questionnaires. The initial sample size for this phase of the research is 100 respondents (10 respondents per...

  10. A new device to estimate abundance of moist-soil plant seeds

    USGS Publications Warehouse

    Penny, E.J.; Kaminski, R.M.; Reinecke, K.J.

    2006-01-01

    Methods to sample the abundance of moist-soil seeds efficiently and accurately are critical for evaluating management practices and determining food availability. We adapted a portable, gasoline-powered vacuum to estimate abundance of seeds on the surface of a moist-soil wetland in east-central Mississippi and evaluated the sampler by simulating conditions that researchers and managers may experience when sampling moist-soil areas for seeds. We measured the percent recovery of known masses of seeds by the vacuum sampler in relation to 4 experimentally controlled factors (i.e., seed-size class, sample mass, soil moisture class, and vacuum time) with 2-4 levels per factor. We also measured processing time of samples in the laboratory. Across all experimental factors, seed recovery averaged 88.4% and varied little (CV = 0.68%, n = 474). Overall, mean time to process a sample was 30.3 ± 2.5 min (SE, n = 417). Our estimate of seed recovery rate (88%) may be used to adjust estimates for incomplete seed recovery, or project-specific correction factors may be developed by investigators. Our device was effective for estimating surface abundance of moist-soil plant seeds after dehiscence and before habitats were flooded.
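
    A tiny worked example of the correction suggested above: divide a vacuum-sampler estimate by the mean recovery rate of 0.884 to adjust for incomplete recovery. The field value used here is hypothetical.

        # Adjusting a vacuum-sampler seed estimate for incomplete (88.4%) recovery.
        observed_kg_per_ha = 410.0                 # hypothetical field estimate
        adjusted = observed_kg_per_ha / 0.884
        print(f"adjusted seed abundance: {adjusted:.0f} kg/ha")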

  11. After Sample-Delivery Attempt, Sol 62

    NASA Technical Reports Server (NTRS)

    2008-01-01

    NASA's Phoenix Mars Lander collected a soil sample and attempted to deliver some of it to a laboratory oven on the deck during the mission's 62nd Martian day, or sol, (July 28, 2008). The sample came from a hard layer at the bottom of the 'Snow White' trench and might have contained water ice mixed with the soil. This image taken after the attempt to deliver the sample through the open doors to cell number zero on the Thermal and Evolved-Gas Analyzer shows that very little of the soil fell onto the screened opening.

    Not enough material reached the oven, through a funnel under the screen, to proceed with analysis of the sample material.

    Phoenix's Robotic Arm Camera took this image at 7:54 a.m. local solar time on Sol 62. The size of the screened opening is about 10 centimeters (4 inches) long by 4 centimeters (1.5 inches) wide.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  12. Evaluation of dredged material proposed for ocean disposal from Hudson River, New York

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gardiner, W.W.; Barrows, E.S.; Antrim, L.D.

    1996-09-01

    The Hudson River (Federal Project No. 41) was one of seven waterways that the U.S. Army Corps of Engineers-New York District (USACE-NYD) requested the Battelle Marine Sciences Laboratory (MSL) to sample and evaluate for dredging and disposal in March 1994. Sediment samples were collected from the Hudson River. Tests and analyses were conducted on Hudson River sediment core samples. The evaluation of proposed dredged material from the Hudson River included bulk sediment chemical analyses, chemical analyses of site water and elutriate, water-column and benthic acute toxicity tests, and bioaccumulation studies. Individual sediment core samples collected from the Hudson River were analyzed for grain size, moisture content, and total organic carbon (TOC). A composite sediment sample, representing the entire area proposed for dredging, was analyzed for bulk density, specific gravity, metals, chlorinated pesticides, polychlorinated biphenyl (PCB) congeners, polynuclear aromatic hydrocarbons (PAH), and 1,4-dichlorobenzene. Site water and elutriate water, prepared from the suspended-particulate phase (SPP) of Hudson River sediment, were analyzed for metals, pesticides, and PCBs. Water-column or SPP toxicity tests were performed with three species. Benthic acute toxicity tests were performed. Bioaccumulation tests were also conducted.

  13. Determining Sample Size for Accurate Estimation of the Squared Multiple Correlation Coefficient.

    ERIC Educational Resources Information Center

    Algina, James; Olejnik, Stephen

    2000-01-01

    Discusses determining sample size for estimation of the squared multiple correlation coefficient and presents regression equations that permit determination of the sample size for estimating this parameter for up to 20 predictor variables. (SLD)

  14. [Practical aspects regarding sample size in clinical research].

    PubMed

    Vega Ramos, B; Peraza Yanes, O; Herrera Correa, G; Saldívar Toraya, S

    1996-01-01

    Knowledge of the right sample size lets us judge whether the results published in medical papers come from a suitable design and support their conclusions according to the statistical analysis. To estimate the sample size we must consider the type I error, type II error, variance, the size of the effect, and the significance and power of the test. To decide which mathematical formula should be used, we must define what kind of study we have: a prevalence study, a study of mean values, or a comparative study. In this paper we explain some basic topics of statistics and describe four simple examples of sample size estimation.
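
    For the common case of comparing two means, the ingredients listed above combine into the standard per-group formula n = 2 (z_{1-alpha/2} + z_{1-beta})^2 sigma^2 / delta^2; the sketch below applies it with hypothetical clinical numbers.

        # Standard per-group sample size for comparing two means (normal approximation).
        import math
        from scipy.stats import norm

        def n_per_group(sigma, delta, alpha=0.05, power=0.80):
            z_a = norm.ppf(1.0 - alpha / 2.0)
            z_b = norm.ppf(power)
            return math.ceil(2.0 * (z_a + z_b) ** 2 * sigma ** 2 / delta ** 2)

        # Example: detect a 5 mmHg difference with SD 10 mmHg at 80% power, alpha 0.05
        print(n_per_group(sigma=10.0, delta=5.0))   # 63 patients per group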

  15. What is the optimum sample size for the study of peatland testate amoeba assemblages?

    PubMed

    Mazei, Yuri A; Tsyganov, Andrey N; Esaulov, Anton S; Tychkov, Alexander Yu; Payne, Richard J

    2017-10-01

    Testate amoebae are widely used in ecological and palaeoecological studies of peatlands, particularly as indicators of surface wetness. To ensure data are robust and comparable it is important to consider methodological factors which may affect results. One significant question which has not been directly addressed in previous studies is how sample size (expressed here as number of Sphagnum stems) affects data quality. In three contrasting locations in a Russian peatland we extracted samples of differing size, analysed testate amoebae and calculated a number of widely-used indices: species richness, Simpson diversity, compositional dissimilarity from the largest sample and transfer function predictions of water table depth. We found that there was a trend for larger samples to contain more species across the range of commonly-used sample sizes in ecological studies. Smaller samples sometimes failed to produce counts of testate amoebae often considered minimally adequate. It seems likely that analyses based on samples of different sizes may not produce consistent data. Decisions about sample size need to reflect trade-offs between logistics, data quality, spatial resolution and the disturbance involved in sample extraction. For most common ecological applications we suggest that samples of more than eight Sphagnum stems are likely to be desirable. Copyright © 2017 Elsevier GmbH. All rights reserved.

  16. Formation of vacancy clusters and cavities in He-implanted silicon studied by slow-positron annihilation spectroscopy

    NASA Astrophysics Data System (ADS)

    Brusa, Roberto S.; Karwasz, Grzegorz P.; Tiengo, Nadia; Zecca, Antonio; Corni, Federico; Tonini, Rita; Ottaviani, Gianpiero

    2000-04-01

    The depth profile of open volume defects has been measured in Si implanted with He at an energy of 20 keV, by means of a slow-positron beam and the Doppler broadening technique. The evolution of defect distributions has been studied as a function of isochronal annealing in two series of samples implanted at fluences of 5×10^15 and 2×10^16 He cm^-2. A fitting procedure has been applied to the experimental data to extract a positron parameter characterizing each open volume defect. The defects have been identified by comparing this parameter with recent theoretical calculations. In as-implanted samples the major part of the vacancies and divacancies produced by implantation is passivated by the presence of He. The mean depth of defects as seen by the positron annihilation technique is about five times less than the helium projected range. During the subsequent isochronal annealing the number of positron traps decreases, then increases and finally, at the highest annealing temperatures, disappears only in the samples implanted at the lowest fluence. A minimum of open volume defects is reached at the annealing temperature of 250 °C in both series. The increase of open volume defects at temperatures higher than 250 °C is due to the appearance of vacancy clusters of increasing size, with a mean depth distribution that moves towards the He projected range. The appearance of vacancy clusters is strictly related to the out-diffusion of He. In the samples implanted at 5×10^15 cm^-2 the vacancy clusters are mainly four-vacancy agglomerates stabilized by He related defects. They disappear starting from an annealing temperature of 700 °C. In the samples implanted at 2×10^16 cm^-2 and annealed at 850-900 °C the vacancy clusters disappear and only a distribution of cavities centered around the He projected range remains. The role of vacancies in the formation of He clusters, which evolve into bubbles and then into cavities, is discussed.

  17. Sample Size and Allocation of Effort in Point Count Sampling of Birds in Bottomland Hardwood Forests

    Treesearch

    Winston P. Smith; Daniel J. Twedt; Robert J. Cooper; David A. Wiedenfeld; Paul B. Hamel; Robert P. Ford

    1995-01-01

    To examine sample size requirements and optimum allocation of effort in point count sampling of bottomland hardwood forests, we computed minimum sample sizes from variation recorded during 82 point counts (May 7-May 16, 1992) from three localities containing three habitat types across three regions of the Mississippi Alluvial Valley (MAV). Also, we estimated the effect...

  18. Monitoring Species of Concern Using Noninvasive Genetic Sampling and Capture-Recapture Methods

    DTIC Science & Technology

    2016-11-01

    ABBREVIATIONS: AICc, Akaike’s Information Criterion with small sample size correction; AZGFD, Arizona Game and Fish Department; BMGR, Barry M. Goldwater...; MNKA, Minimum Number Known Alive; N, Abundance; Ne, Effective Population Size; NGS, Noninvasive Genetic Sampling; NGS-CR, Noninvasive Genetic... Parameter estimates from capture-recapture models require sufficient sample sizes, capture probabilities and low capture biases. For NGS-CR, sample

  19. On Using a Pilot Sample Variance for Sample Size Determination in the Detection of Differences between Two Means: Power Consideration

    ERIC Educational Resources Information Center

    Shieh, Gwowen

    2013-01-01

    The a priori determination of a proper sample size necessary to achieve some specified power is an important problem encountered frequently in practical studies. To establish the needed sample size for a two-sample "t" test, researchers may conduct the power analysis by specifying scientifically important values as the underlying population means…

  20. Sampling for area estimation: A comparison of full-frame sampling with the sample segment approach

    NASA Technical Reports Server (NTRS)

    Hixson, M.; Bauer, M. E.; Davis, B. J. (Principal Investigator)

    1979-01-01

    The author has identified the following significant results. Full-frame classifications of wheat and non-wheat for eighty counties in Kansas were repetitively sampled to simulate alternative sampling plans. Evaluation of four sampling schemes involving different numbers of samples and different size sampling units showed that the precision of the wheat estimates increased as the segment size decreased and the number of segments was increased. Although the average bias associated with the various sampling schemes was not significantly different, the maximum absolute bias was directly related to sampling unit size.

  1. Managing Risk and Uncertainty in Large-Scale University Research Projects

    ERIC Educational Resources Information Center

    Moore, Sharlissa; Shangraw, R. F., Jr.

    2011-01-01

    Both publicly and privately funded research projects managed by universities are growing in size and scope. Complex, large-scale projects (over $50 million) pose new management challenges and risks for universities. This paper explores the relationship between project success and a variety of factors in large-scale university projects. First, we…

  2. Electrical and magnetic properties of nano-sized magnesium ferrite

    NASA Astrophysics Data System (ADS)

    T, Smitha; X, Sheena; J, Binu P.; Mohammed, E. M.

    2015-02-01

    Nano-sized magnesium ferrite was synthesized using sol-gel techniques. Structural characterization was done using an X-ray diffractometer and a Fourier Transform Infrared Spectrometer. A Vibrating Sample Magnetometer was used to record the magnetic measurements. XRD analysis reveals that the prepared sample is single-phase without any impurity. Particle size calculation shows that the average crystallite size of the sample is 19 nm. FTIR analysis confirmed the spinel structure of the prepared samples. Magnetic measurements show that the sample is ferromagnetic with a high degree of isotropy. Hysteresis loops were traced at temperatures of 100 K and 300 K. DC electrical resistivity measurements show the semiconducting nature of the sample.

  3. Comparison of Sample Size by Bootstrap and by Formulas Based on Normal Distribution Assumption.

    PubMed

    Wang, Zuozhen

    2018-01-01

    The bootstrapping technique is distribution-independent, which provides an indirect way to estimate the sample size for a clinical trial based on a relatively small sample. In this paper, sample size estimates for comparing two parallel-design arms with continuous data by a bootstrap procedure are presented for various test types (inequality, non-inferiority, superiority, and equivalence). Meanwhile, sample size calculations by mathematical formulas (normal distribution assumption) for the identical data are also carried out. The power difference between the two calculation methods is acceptably small for all the test types, showing that the bootstrap procedure is a credible technique for sample size estimation. After that, we compared the powers determined using the two methods based on data that violate the normal distribution assumption. To accommodate the features of the data, the nonparametric Wilcoxon test was applied to compare the two groups during the bootstrap power estimation. As a result, the power estimated by the normal distribution-based formula is far larger than that estimated by bootstrap for each specific sample size per group. Hence, for this type of data, it is preferable that the bootstrap method be applied for sample size calculation from the beginning, and that the same statistical method as used in the subsequent statistical analysis be employed for each bootstrap sample during bootstrap sample size estimation, provided historical data are available that are representative of the population to which the proposed trial intends to extrapolate.
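
    To illustrate the kind of procedure described above, the sketch below estimates power by resampling a pilot sample and applying the Wilcoxon rank-sum test to each bootstrap replicate, as suggested for non-normal data. It is a minimal sketch, not the paper's code: the pilot data, per-group sample sizes, replicate count and lognormal example are all illustrative assumptions.

    ```python
    # Hypothetical sketch of bootstrap power estimation from a pilot sample.
    # All inputs (pilot data, n_per_group values, n_boot) are illustrative.
    import numpy as np
    from scipy.stats import mannwhitneyu

    rng = np.random.default_rng(42)

    def bootstrap_power(pilot_a, pilot_b, n_per_group, n_boot=2000, alpha=0.05):
        """Estimate power for a given per-group sample size by resampling a pilot sample."""
        rejections = 0
        for _ in range(n_boot):
            a = rng.choice(pilot_a, size=n_per_group, replace=True)
            b = rng.choice(pilot_b, size=n_per_group, replace=True)
            # Wilcoxon rank-sum (Mann-Whitney) test, as suggested for non-normal data
            _, p = mannwhitneyu(a, b, alternative="two-sided")
            rejections += p < alpha
        return rejections / n_boot

    # Skewed (lognormal) pilot data, purely for illustration
    pilot_a = rng.lognormal(mean=0.0, sigma=1.0, size=40)
    pilot_b = rng.lognormal(mean=0.5, sigma=1.0, size=40)
    for n in (20, 40, 80, 160):
        print(n, round(bootstrap_power(pilot_a, pilot_b, n), 3))
    ```

    The smallest n whose estimated power reaches the target (e.g. 0.80) would then be taken as the bootstrap sample size estimate.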

  4. Sample Size in Qualitative Interview Studies: Guided by Information Power.

    PubMed

    Malterud, Kirsti; Siersma, Volkert Dirk; Guassora, Ann Dorrit

    2015-11-27

    Sample sizes must be ascertained in qualitative studies, as in quantitative studies, but not by the same means. The prevailing concept for sample size in qualitative studies is "saturation." Saturation is closely tied to a specific methodology, and the term is inconsistently applied. We propose the concept "information power" to guide adequate sample size for qualitative studies. Information power indicates that the more information the sample holds, relevant for the actual study, the fewer participants are needed. We suggest that the size of a sample with sufficient information power depends on (a) the aim of the study, (b) sample specificity, (c) use of established theory, (d) quality of dialogue, and (e) analysis strategy. We present a model where these elements of information and their relevant dimensions are related to information power. Application of this model in the planning and during data collection of a qualitative study is discussed. © The Author(s) 2015.

  5. Frictional behaviour of sandstone: A sample-size dependent triaxial investigation

    NASA Astrophysics Data System (ADS)

    Roshan, Hamid; Masoumi, Hossein; Regenauer-Lieb, Klaus

    2017-01-01

    Frictional behaviour of rocks from the initial stage of loading to final shear displacement along the formed shear plane has been widely investigated in the past. However, the effect of sample size on such frictional behaviour has not attracted much attention. This is mainly related to the limitations in rock testing facilities as well as the complex mechanisms involved in sample-size dependent frictional behaviour of rocks. In this study, a suite of advanced triaxial experiments was performed on Gosford sandstone samples at different sizes and confining pressures. The post-peak response of the rock along the formed shear plane has been captured for the analysis with particular interest in sample-size dependency. Several important phenomena have been observed from the results of this study: a) the rate of transition from brittleness to ductility in rock is sample-size dependent, with relatively smaller samples showing a faster transition toward ductility at any confining pressure; b) the sample size influences the angle of the formed shear band; and c) the friction coefficient of the formed shear plane is sample-size dependent, with relatively smaller samples exhibiting a lower friction coefficient than larger samples. We interpret our results in terms of a thermodynamics approach in which the frictional properties for finite deformation are viewed as encompassing a multitude of ephemeral slipping surfaces prior to the formation of the through-going fracture. The final fracture itself is seen as a result of the self-organisation of a sufficiently large ensemble of micro-slip surfaces and is therefore consistent with the theory of thermodynamics. This assumption vindicates the use of classical rock mechanics experiments to constrain failure of pressure-sensitive rocks, and the future imaging of these micro-slips opens an exciting path for research in rock failure mechanisms.

  6. A Sample Handling System for Mars Sample Return - Design and Status

    NASA Astrophysics Data System (ADS)

    Allouis, E.; Renouf, I.; Deridder, M.; Vrancken, D.; Gelmi, R.; Re, E.

    2009-04-01

    A mission to return atmosphere and soil samples from Mars is highly desired by planetary scientists from around the world, and space agencies are starting preparations for the launch of a sample return mission in the 2020 timeframe. Such a mission would return approximately 500 grams of atmosphere, rock and soil samples to Earth by 2025. Development of a wide range of new technology will be critical to the successful implementation of such a challenging mission. Technical developments required to realise the mission include guided atmospheric entry, soft landing, sample handling robotics, biological sealing, Mars atmospheric ascent, sample rendezvous & capture and Earth return. The European Space Agency has been performing system definition studies along with numerous technology development studies under the framework of the Aurora programme. Within the scope of these activities Astrium has been responsible for defining an overall sample handling architecture in collaboration with European partners (sample acquisition and sample capture, Galileo Avionica; sample containment and automated bio-sealing, Verhaert). Our work has focused on the definition and development of the robotic systems required to move the sample through the transfer chain. This paper presents the Astrium team's high-level design for the surface transfer system and the orbiter transfer system. The surface transfer system is envisaged to use two robotic arms of different sizes to allow flexible operations and to enable sample transfer over relatively large distances (~2 to 3 metres): the first to deploy/retract the Drill Assembly used for sample collection, the second for the transfer of the Sample Container (the vessel containing all the collected samples) from the Drill Assembly to the Mars Ascent Vehicle (MAV). The sample transfer actuator also features a complex end-effector for handling the Sample Container. The orbiter transfer system will transfer the Sample Container from the capture mechanism through a bio-sealing system to the Earth Return Capsule (ERC) and has distinctly different requirements from the surface transfer system. The operations required to transfer the samples to the ERC are clearly defined and make use of mechanisms specifically designed for the job rather than robotic arms. Though it is mechanical rather than robotic, the design of the orbiter transfer system is very complex in comparison to most previous missions in order to fulfil all the scientific and technological requirements. Further mechanisms will be required to lock the samples into the ERC and to close the door at the rear of the ERC through which the samples have been inserted. Having performed this overall definition study, Astrium is now leading the next step of the development of the MSR sample handling: the Mars Surface Sample Transfer and Manipulation project (MSSTM). Organised in two phases, the project will re-evaluate in phase 1 the output of the previous study in the light of new inputs (e.g. addition of a rover) and investigate further the architectures and systems involved in the sample transfer chain while identifying the critical technologies. The second phase of the project will concentrate on the prototyping of a number of these key technologies with the goal of providing an end-to-end validation of the surface sample transfer concept.

  7. A Note on Sample Size and Solution Propriety for Confirmatory Factor Analytic Models

    ERIC Educational Resources Information Center

    Jackson, Dennis L.; Voth, Jennifer; Frey, Marc P.

    2013-01-01

    Determining an appropriate sample size for use in latent variable modeling techniques has presented ongoing challenges to researchers. In particular, small sample sizes are known to present concerns over sampling error for the variances and covariances on which model estimation is based, as well as for fit indexes and convergence failures. The…

  8. A computer program for sample size computations for banding studies

    USGS Publications Warehouse

    Wilson, K.R.; Nichols, J.D.; Hines, J.E.

    1989-01-01

    Sample sizes necessary for estimating survival rates of banded birds, adults and young, are derived based on specified levels of precision. The banding study can be new or ongoing. The desired coefficient of variation (CV) for annual survival estimates, the CV for mean annual survival estimates, and the length of the study must be specified to compute sample sizes. A computer program is available for computation of the sample sizes, and a description of the input and output is provided.

  9. Probability of coincidental similarity among the orbits of small bodies - I. Pairing

    NASA Astrophysics Data System (ADS)

    Jopek, Tadeusz Jan; Bronikowska, Małgorzata

    2017-09-01

    The probability of coincidental clustering among orbits of comets, asteroids and meteoroids depends on many factors, such as the size of the orbital sample searched for clusters and the size of the identified group; it is different for groups of 2, 3, 4, … members. The probability of coincidental clustering is assessed by numerical simulation; therefore, it also depends on the method used to generate the synthetic orbits. We have tested the impact of some of these factors. For a given size of the orbital sample we have assessed the probability of random pairing among several orbital populations of different sizes. We have found how these probabilities vary with the size of the orbital samples. Finally, keeping the size of the orbital sample fixed, we have shown that the probability of random pairing can be significantly different for orbital samples obtained by different observation techniques. Also, for the user's convenience, we have obtained several formulae which, for a given size of the orbital sample, can be used to calculate the similarity threshold corresponding to a small value of the probability of coincidental similarity among two orbits.
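
    The quantity at issue can be approximated with a simple Monte Carlo experiment. The sketch below is a heavily simplified stand-in, assuming uniform synthetic "orbital element" vectors and a plain Euclidean distance rather than a proper orbital similarity function (e.g. a D-criterion) and a realistic population model; it only illustrates how the chance of a coincidental pair grows with sample size.

    ```python
    # Hedged sketch: Monte Carlo estimate of the probability that a random sample
    # of synthetic "orbits" contains at least one coincidentally similar pair.
    # The uniform synthetic orbits and Euclidean distance are illustrative
    # simplifications, not the authors' method.
    import numpy as np

    rng = np.random.default_rng(1)

    def pairing_probability(sample_size, threshold, n_trials=2000, n_elements=5):
        """Fraction of synthetic samples containing at least one pair closer than threshold."""
        hits = 0
        for _ in range(n_trials):
            orbits = rng.random((sample_size, n_elements))   # synthetic orbit sample
            diff = orbits[:, None, :] - orbits[None, :, :]
            dist = np.sqrt((diff ** 2).sum(axis=2))
            iu = np.triu_indices(sample_size, k=1)           # each pair counted once
            hits += bool((dist[iu] < threshold).any())
        return hits / n_trials

    for n in (50, 100, 200, 400):
        print(n, pairing_probability(n, threshold=0.05))
    ```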

  10. Designing a two-rank acceptance sampling plan for quality inspection of geospatial data products

    NASA Astrophysics Data System (ADS)

    Tong, Xiaohua; Wang, Zhenhua; Xie, Huan; Liang, Dan; Jiang, Zuoqin; Li, Jinchao; Li, Jun

    2011-10-01

    To address the disadvantages of classical sampling plans designed for traditional industrial products, we propose a two-rank acceptance sampling plan (TRASP) for the inspection of geospatial data outputs based on the acceptance quality level (AQL). The first-rank sampling plan inspects the lot consisting of map sheets, and the second inspects the lot consisting of features in an individual map sheet. The TRASP design is formulated as an optimization problem with respect to sample size and acceptance number, which covers two lot size cases. The first case is for a small lot size with nonconformities being modeled by a hypergeometric distribution function, and the second is for a larger lot size with nonconformities being modeled by a Poisson distribution function. The proposed TRASP is illustrated through two empirical case studies. Our analysis demonstrates that: (1) the proposed TRASP provides a general approach for quality inspection of geospatial data outputs consisting of non-uniform items and (2) the proposed acceptance sampling plan based on TRASP performs better than other classical sampling plans. It overcomes the drawbacks of percent sampling, i.e., "strictness for large lot size, toleration for small lot size," and those of a national standard used specifically for industrial outputs, i.e., "lots with different sizes corresponding to the same sampling plan."
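
    For a given plan (sample size n, acceptance number c), the acceptance probability under the two lot-size models named above can be computed directly. The sketch below is illustrative only: the lot size, defect rate and plan are assumptions, and it is not the authors' TRASP optimization.

    ```python
    # Illustrative acceptance probabilities for a single sampling plan (n, c)
    # under the two models mentioned in the abstract.  Inputs are assumptions.
    from scipy.stats import hypergeom, poisson

    def accept_prob_small_lot(lot_size, defectives, n, c):
        """P(accept) when nonconformities in a small lot follow a hypergeometric model."""
        return hypergeom(M=lot_size, n=defectives, N=n).cdf(c)

    def accept_prob_large_lot(defect_rate, n, c):
        """P(accept) when nonconformities in a large lot follow a Poisson model."""
        return poisson(mu=n * defect_rate).cdf(c)

    # Example: lot of 500 map sheets, 2% nonconforming, inspect 50, accept if <= 2 found
    print(accept_prob_small_lot(lot_size=500, defectives=10, n=50, c=2))
    print(accept_prob_large_lot(defect_rate=0.02, n=50, c=2))
    ```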

  11. Sample size considerations using mathematical models: an example with Chlamydia trachomatis infection and its sequelae pelvic inflammatory disease.

    PubMed

    Herzog, Sereina A; Low, Nicola; Berghold, Andrea

    2015-06-19

    The success of an intervention to prevent the complications of an infection is influenced by the natural history of the infection. Assumptions about the temporal relationship between infection and the development of sequelae can affect the predicted effect size of an intervention and the sample size calculation. This study investigates how a mathematical model can be used to inform sample size calculations for a randomised controlled trial (RCT) using the example of Chlamydia trachomatis infection and pelvic inflammatory disease (PID). We used a compartmental model to imitate the structure of a published RCT. We considered three different processes for the timing of PID development, in relation to the initial C. trachomatis infection: immediate, constant throughout, or at the end of the infectious period. For each process we assumed that, of all women infected, the same fraction would develop PID in the absence of an intervention. We examined two sets of assumptions used to calculate the sample size in a published RCT that investigated the effect of chlamydia screening on PID incidence. We also investigated the influence of the natural history parameters of chlamydia on the required sample size. The assumed event rates and effect sizes used for the sample size calculation implicitly determined the temporal relationship between chlamydia infection and PID in the model. Even small changes in the assumed PID incidence and relative risk (RR) led to considerable differences in the hypothesised mechanism of PID development. The RR and the sample size needed per group also depend on the natural history parameters of chlamydia. Mathematical modelling helps to understand the temporal relationship between an infection and its sequelae and can show how uncertainties about natural history parameters affect sample size calculations when planning a RCT.
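
    The assumed event rate and relative risk feed into a conventional two-proportion sample size formula, which is what the compartmental model's outputs ultimately modify. The sketch below shows that baseline calculation only; the PID incidence, relative risk, alpha and power are illustrative assumptions, not the trial's actual inputs.

    ```python
    # Minimal sketch: per-group sample size for comparing two proportions
    # (e.g. PID incidence in screened vs unscreened arms).  Inputs are illustrative.
    from math import ceil
    from scipy.stats import norm

    def n_per_group(p_control, relative_risk, alpha=0.05, power=0.80):
        p_interv = p_control * relative_risk
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        variance = p_control * (1 - p_control) + p_interv * (1 - p_interv)
        return ceil(z ** 2 * variance / (p_control - p_interv) ** 2)

    print(n_per_group(p_control=0.03, relative_risk=0.5))   # about 1530 per arm with these inputs
    ```

    Even small shifts in the assumed incidence or relative risk move this number substantially, which is exactly the sensitivity described above.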

  12. Unequal cluster sizes in stepped-wedge cluster randomised trials: a systematic review

    PubMed Central

    Morris, Tom; Gray, Laura

    2017-01-01

    Objectives To investigate the extent to which cluster sizes vary in stepped-wedge cluster randomised trials (SW-CRT) and whether any variability is accounted for during the sample size calculation and analysis of these trials. Setting Any, not limited to healthcare settings. Participants Any taking part in an SW-CRT published up to March 2016. Primary and secondary outcome measures The primary outcome is the variability in cluster sizes, measured by the coefficient of variation (CV) in cluster size. Secondary outcomes include the difference between the cluster sizes assumed during the sample size calculation and those observed during the trial, any reported variability in cluster sizes and whether the methods of sample size calculation and methods of analysis accounted for any variability in cluster sizes. Results Of the 101 included SW-CRTs, 48% mentioned that the included clusters were known to vary in size, yet only 13% of these accounted for this during the calculation of the sample size. However, 69% of the trials did use a method of analysis appropriate for when clusters vary in size. Full trial reports were available for 53 trials. The CV was calculated for 23 of these: the median CV was 0.41 (IQR: 0.22–0.52). Actual cluster sizes could be compared with those assumed during the sample size calculation for 14 (26%) of the trial reports; the cluster sizes were between 29% and 480% of that which had been assumed. Conclusions Cluster sizes often vary in SW-CRTs. Reporting of SW-CRTs also remains suboptimal. The effect of unequal cluster sizes on the statistical power of SW-CRTs needs further exploration and methods appropriate to studies with unequal cluster sizes need to be employed. PMID:29146637
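
    The imbalance measure used in the review is the coefficient of variation of cluster size; a minimal sketch of its calculation (with made-up cluster sizes) follows.

    ```python
    # Minimal sketch: coefficient of variation (CV) of cluster sizes.
    # The cluster sizes below are made up for illustration.
    import numpy as np

    cluster_sizes = np.array([12, 30, 45, 60, 18, 25, 90, 40])
    cv = cluster_sizes.std(ddof=1) / cluster_sizes.mean()
    print(round(cv, 2))   # about 0.64 for these made-up sizes
    ```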

  13. Drying step optimization to obtain large-size transparent magnesium-aluminate spinel samples

    NASA Astrophysics Data System (ADS)

    Petit, Johan; Lallemant, Lucile

    2017-05-01

    In transparent ceramics processing, the green body elaboration step is probably the most critical one. Among the known techniques, wet shaping processes are particularly interesting because they enable the particles to find an optimum position on their own. Nevertheless, the presence of water molecules leads to drying issues. During water removal, its concentration gradient induces cracks that limit the sample size: laboratory samples are generally less damaged because of their small size, but upscaling the samples for industrial applications leads to an increasing cracking probability. Thanks to optimization of the drying step, large-size spinel samples were obtained.

  14. Difference between healthy children and ADHD based on wavelet spectral analysis of nuclear magnetic resonance images

    NASA Astrophysics Data System (ADS)

    González Gómez, Dulce I.; Moreno Barbosa, E.; Martínez Hernández, Mario Iván; Ramos Méndez, José; Hidalgo Tobón, Silvia; Dies Suarez, Pilar; Barragán Pérez, Eduardo; De Celis Alonso, Benito

    2014-11-01

    The main goal of this project was to create a computer algorithm based on wavelet analysis of region-of-homogeneity images obtained during resting-state studies. Ideally it would automatically diagnose ADHD. Because the cerebellum is an area known to be affected by ADHD, this study specifically analysed this region. Male right-handed volunteers (children aged between 7 and 11 years) were studied and compared with age-matched controls. Differences between the groups in the values of the absolute integrated wavelet spectrum were found to be statistically significant (p<0.0015). This difference might help in the future to distinguish healthy from ADHD patients and therefore diagnose ADHD. Even though the results were statistically significant, the small size of the sample limits the applicability of this method as presented here, and further work with larger samples and freely available datasets must be done.

  15. Working at the social-clinical-community-criminology interface: The GMU Inmate Study.

    PubMed

    Tangney, June Price; Mashek, Debra; Stuewig, Jeffrey

    2007-01-01

    This paper describes our attempt to import social-personality theory and research on moral emotions and moral cognitions to applied problems of crime, substance abuse, and HIV risk behavior. Thus far, in an inmate sample, we have evidence that criminogenic beliefs and proneness to guilt are each predictive of re-offense after release from jail. In addition, we have evidence that jail programs and services may reduce criminogenic beliefs and enhance adaptive feelings of guilt. As our sample size increases, our next step is to test the full mediational model, examining the degree to which programs and services impact post-release desistance via their effect on moral emotions and cognitions. In addition to highlighting some of the key findings from our longitudinal study of jail inmates over the period of incarceration and post-release, we describe the origins and development of this interdisciplinary project, highlighting the challenges and rewards of such endeavors.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    González Gómez, Dulce I.; Moreno Barbosa, E.; Martínez Hernández, Mario Iván

    The main goal of this project was to create a computer algorithm based on wavelet analysis of region-of-homogeneity images obtained during resting-state studies. Ideally it would automatically diagnose ADHD. Because the cerebellum is an area known to be affected by ADHD, this study specifically analysed this region. Male right-handed volunteers (children aged between 7 and 11 years) were studied and compared with age-matched controls. Differences between the groups in the values of the absolute integrated wavelet spectrum were found to be statistically significant (p<0.0015). This difference might help in the future to distinguish healthy from ADHD patients and therefore diagnose ADHD. Even though the results were statistically significant, the small size of the sample limits the applicability of this method as presented here, and further work with larger samples and freely available datasets must be done.

  17. Working at the social-clinical-community-criminology interface: The GMU Inmate Study

    PubMed Central

    Tangney, June Price; Mashek, Debra; Stuewig, Jeffrey

    2011-01-01

    This paper describes our attempt to import social-personality theory and research on moral emotions and moral cognitions to applied problems of crime, substance abuse, and HIV risk behavior. Thus far, in an inmate sample, we have evidence that criminogenic beliefs and proneness to guilt are each predictive of re-offense after release from jail. In addition, we have evidence that jail programs and services may reduce criminogenic beliefs and enhance adaptive feelings of guilt. As our sample size increases, our next step is to test the full mediational model, examining the degree to which programs and services impact post-release desistance via their effect on moral emotions and cognitions. In addition to highlighting some of the key findings from our longitudinal study of jail inmates over the period of incarceration and post-release, we describe the origins and development of this interdisciplinary project, highlighting the challenges and rewards of such endeavors. PMID:21572973

  18. The relationship between national-level carbon dioxide emissions and population size: an assessment of regional and temporal variation, 1960-2005.

    PubMed

    Jorgenson, Andrew K; Clark, Brett

    2013-01-01

    This study examines the regional and temporal differences in the statistical relationship between national-level carbon dioxide emissions and national-level population size. The authors analyze panel data from 1960 to 2005 for a diverse sample of nations, and employ descriptive statistics and rigorous panel regression modeling techniques. Initial descriptive analyses indicate that all regions experienced overall increases in carbon emissions and population size during the 45-year period of investigation, but with notable differences. For carbon emissions, the sample of countries in Asia experienced the largest percent increase, followed by countries in Latin America, Africa, and lastly the sample of relatively affluent countries in Europe, North America, and Oceania combined. For population size, the sample of countries in Africa experienced the largest percent increase, followed by countries in Latin America, Asia, and the combined sample of countries in Europe, North America, and Oceania. Findings for two-way fixed effects panel regression elasticity models of national-level carbon emissions indicate that the estimated elasticity coefficient for population size is much smaller for nations in Africa than for nations in other regions of the world. Regarding potential temporal changes, from 1960 to 2005 the estimated elasticity coefficient for population size decreased by 25% for the sample of African countries, 14% for the sample of Asian countries, and 6.5% for the sample of Latin American countries, but remained the same in size for the sample of countries in Europe, North America, and Oceania. Overall, while population size continues to be the primary driver of total national-level anthropogenic carbon dioxide emissions, the findings of this study highlight the need for future research and policies to recognize that the actual impacts of population size on national-level carbon emissions differ across both time and region.

  19. Sample size calculation for a proof of concept study.

    PubMed

    Yin, Yin

    2002-05-01

    Sample size calculation is vital for a confirmatory clinical trial since the regulatory agencies require the probability of making a Type I error to be small, usually less than 0.05 or 0.025. However, the importance of the sample size calculation for studies conducted by a pharmaceutical company for internal decision making, e.g., a proof of concept (PoC) study, has not received enough attention. This article introduces a Bayesian method that identifies the information required for planning a PoC study and the process of sample size calculation. The results will be presented in terms of the relationships between the regulatory requirements, the probability of reaching the regulatory requirements, the goalpost for PoC, and the sample size used for PoC.

  20. Microphysical and Optical Properties of Saharan Dust Measured during the ICE-D Aircraft Campaign

    NASA Astrophysics Data System (ADS)

    Ryder, Claire; Marenco, Franco; Brooke, Jennifer; Cotton, Richard; Taylor, Jonathan

    2017-04-01

    During August 2015, the UK FAAM BAe146 research aircraft was stationed in Cape Verde off the coast of West Africa. Measurements of Saharan dust, and ice and liquid water clouds, were taken for the ICE-D (Ice in Clouds Experiment - Dust) project - a multidisciplinary project aimed at further understanding aerosol-cloud interactions. Six flights formed part of a sub-project, AER-D, solely focussing on measurements of Saharan dust within the African dust plume. Dust loadings observed during these flights varied (aerosol optical depths of 0.2 to 1.3), as did the vertical structure of the dust, the size distributions and the optical properties. The BAe146 was fully equipped to measure size distributions covering aerosol accumulation, coarse and giant modes. Initial results of size distribution and optical properties of dust from the AER-D flights will be presented, showing that a substantial coarse mode was present, in agreement with previous airborne measurements. Optical properties of dust relating to the measured size distributions will also be presented.

  1. Sensitivity and specificity of normality tests and consequences on reference interval accuracy at small sample size: a computer-simulation study.

    PubMed

    Le Boedec, Kevin

    2016-12-01

    According to international guidelines, parametric methods must be chosen for RI construction when the sample size is small and the distribution is Gaussian. However, normality tests may not be accurate at small sample size. The purpose of the study was to evaluate normality test performance to properly identify samples extracted from a Gaussian population at small sample sizes, and assess the consequences on RI accuracy of applying parametric methods to samples that falsely identified the parent population as Gaussian. Samples of n = 60 and n = 30 values were randomly selected 100 times from simulated Gaussian, lognormal, and asymmetric populations of 10,000 values. The sensitivity and specificity of 4 normality tests were compared. Reference intervals were calculated using 6 different statistical methods from samples that falsely identified the parent population as Gaussian, and their accuracy was compared. Shapiro-Wilk and D'Agostino-Pearson tests were the best-performing normality tests. However, their specificity was poor at sample size n = 30 (specificity for P < .05: .51 and .50, respectively). The best significance levels identified when n = 30 were 0.19 for the Shapiro-Wilk test and 0.18 for the D'Agostino-Pearson test. Using parametric methods on samples extracted from a lognormal population but falsely identified as Gaussian led to clinically relevant inaccuracies. At small sample size, normality tests may lead to erroneous use of parametric methods to build RI. Using nonparametric methods (or alternatively Box-Cox transformation) on all samples regardless of their distribution, or adjusting the significance level of normality tests depending on sample size, would limit the risk of constructing inaccurate RI. © 2016 American Society for Veterinary Clinical Pathology.
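
    A simulation in the spirit of the one described (not the authors' code) can be run in a few lines; the lognormal population, replicate count and sample sizes below are illustrative assumptions.

    ```python
    # Hedged sketch: how often do Shapiro-Wilk and D'Agostino-Pearson tests flag
    # samples drawn from a non-Gaussian (lognormal) population at small n?
    # Population parameters and replicate counts are illustrative assumptions.
    import numpy as np
    from scipy.stats import shapiro, normaltest

    rng = np.random.default_rng(7)

    def rejection_rate(sampler, n, alpha=0.05, reps=1000):
        """Proportion of samples for which each test rejects normality at level alpha."""
        sw = dp = 0
        for _ in range(reps):
            x = sampler(n)
            sw += shapiro(x).pvalue < alpha          # Shapiro-Wilk
            dp += normaltest(x).pvalue < alpha       # D'Agostino-Pearson
        return sw / reps, dp / reps

    lognormal = lambda n: rng.lognormal(0.0, 1.0, n)
    print("n = 30:", rejection_rate(lognormal, 30))
    print("n = 60:", rejection_rate(lognormal, 60))
    ```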

  2. PROCEDURE FOR DETERMINATION OF SEDIMENT PARTICLE SIZE (GRAIN SIZE)

    EPA Science Inventory

    Sediment quality and sediment remediation projects have become a high priority for USEPA. Sediment particle size determinations are used in environmental assessments for habitat characterization, chemical normalization, and partitioning potential of chemicals. The accepted met...

  3. Variation in aluminum, iron, and particle concentrations in oxic groundwater samples collected by use of tangential-flow ultrafiltration with low-flow sampling

    NASA Astrophysics Data System (ADS)

    Szabo, Zoltan; Oden, Jeannette H.; Gibs, Jacob; Rice, Donald E.; Ding, Yuan

    2002-02-01

    Particulates that move with ground water and those that are artificially mobilized during well purging could be incorporated into water samples during collection and could cause trace-element concentrations to vary in unfiltered samples, and possibly in filtered samples (typically 0.45-um (micron) pore size) as well, depending on the particle-size fractions present. Therefore, measured concentrations may not be representative of those in the aquifer. Ground water may contain particles of various sizes and shapes that are broadly classified as colloids, which do not settle from water, and particulates, which do. In order to investigate variations in trace-element concentrations in ground-water samples as a function of particle concentrations and particle-size fractions, the U.S. Geological Survey, in cooperation with the U.S. Air Force, collected samples from five wells completed in the unconfined, oxic Kirkwood-Cohansey aquifer system of the New Jersey Coastal Plain. Samples were collected by purging with a portable pump at low flow (0.2-0.5 liters per minute and minimal drawdown, ideally less than 0.5 foot). Unfiltered samples were collected in the following sequence: (1) within the first few minutes of pumping, (2) after initial turbidity declined and about one to two casing volumes of water had been purged, and (3) after turbidity values had stabilized at less than 1 to 5 Nephelometric Turbidity Units. Filtered samples were split concurrently through (1) a 0.45-um pore size capsule filter, (2) a 0.45-um pore size capsule filter and a 0.0029-um pore size tangential-flow filter in sequence, and (3), in selected cases, a 0.45-um and a 0.05-um pore size capsule filter in sequence. Filtered samples were collected concurrently with the unfiltered sample that was collected when turbidity values stabilized. Quality-assurance samples consisted of sequential duplicates (about 25 percent) and equipment blanks. Concentrations of particles were determined by light scattering.

  4. The impact of sample size on the reproducibility of voxel-based lesion-deficit mappings.

    PubMed

    Lorca-Puls, Diego L; Gajardo-Vidal, Andrea; White, Jitrachote; Seghier, Mohamed L; Leff, Alexander P; Green, David W; Crinion, Jenny T; Ludersdorfer, Philipp; Hope, Thomas M H; Bowman, Howard; Price, Cathy J

    2018-07-01

    This study investigated how sample size affects the reproducibility of findings from univariate voxel-based lesion-deficit analyses (e.g., voxel-based lesion-symptom mapping and voxel-based morphometry). Our effect of interest was the strength of the mapping between brain damage and speech articulation difficulties, as measured in terms of the proportion of variance explained. First, we identified a region of interest by searching on a voxel-by-voxel basis for brain areas where greater lesion load was associated with poorer speech articulation using a large sample of 360 right-handed English-speaking stroke survivors. We then randomly drew thousands of bootstrap samples from this data set that included either 30, 60, 90, 120, 180, or 360 patients. For each resample, we recorded effect size estimates and p values after conducting exactly the same lesion-deficit analysis within the previously identified region of interest and holding all procedures constant. The results show (1) how often small effect sizes in a heterogeneous population fail to be detected; (2) how effect size and its statistical significance varies with sample size; (3) how low-powered studies (due to small sample sizes) can greatly over-estimate as well as under-estimate effect sizes; and (4) how large sample sizes (N ≥ 90) can yield highly significant p values even when effect sizes are so small that they become trivial in practical terms. The implications of these findings for interpreting the results from univariate voxel-based lesion-deficit analyses are discussed. Copyright © 2018 The Author(s). Published by Elsevier Ltd.. All rights reserved.
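
    The resampling logic is straightforward to reproduce on simulated data. The sketch below is not the study's pipeline: the simulated "lesion load"/"deficit" relation, the effect strength and the use of a simple Pearson correlation are all assumptions, chosen only to show how effect-size estimates and significance fluctuate with sample size.

    ```python
    # Illustrative sketch: bootstrap resamples of a large simulated dataset at
    # several sample sizes, recording variance explained (r^2) and p values.
    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(0)
    N = 360
    lesion_load = rng.random(N)
    deficit = 0.3 * lesion_load + rng.normal(0.0, 1.0, N)    # weak true effect

    for n in (30, 60, 90, 180, 360):
        r2, sig = [], 0
        for _ in range(1000):
            idx = rng.choice(N, size=n, replace=True)        # bootstrap resample
            r, p = pearsonr(lesion_load[idx], deficit[idx])
            r2.append(r ** 2)
            sig += p < 0.05
        print(n, round(float(np.mean(r2)), 3),
              round(float(np.percentile(r2, 97.5)), 3), sig / 1000)
    ```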

  5. 40 CFR 35.1650-6 - Reports.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Reports. (a) States with Phase 1 projects shall submit semi-annual progress reports (original and one copy... in the next six months. (b) Phase 2. States with Phase 2 projects shall submit progress reports... Phase 2 project progress reports shall be determined by the size and complexity of the project, and...

  6. 40 CFR 35.1650-6 - Reports.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Reports. (a) States with Phase 1 projects shall submit semi-annual progress reports (original and one copy... in the next six months. (b) Phase 2. States with Phase 2 projects shall submit progress reports... Phase 2 project progress reports shall be determined by the size and complexity of the project, and...

  7. Year 2000 Computerized Farm Project. Final Report.

    ERIC Educational Resources Information Center

    McGrann, James M.; Lippke, Lawrence A.

    An ongoing project was funded to develop and demonstrate a computerized approach to operation and management of a commercial-sized farm. Other project objectives were to facilitate the demonstration of the computerized farm to the public and to develop individual software packages and make them available to the public. Project accomplishments…

  8. Sample size determination for equivalence assessment with multiple endpoints.

    PubMed

    Sun, Anna; Dong, Xiaoyu; Tsong, Yi

    2014-01-01

    Equivalence assessment between a reference and test treatment is often conducted by two one-sided tests (TOST). The corresponding power function and sample size determination can be derived from a joint distribution of the sample mean and sample variance. When an equivalence trial is designed with multiple endpoints, it often involves several sets of two one-sided tests. A naive approach for sample size determination in this case would select the largest of the sample sizes required for the individual endpoints. However, such a method ignores the correlation among endpoints. With the objective of rejecting all endpoints, and when the endpoints are uncorrelated, the power function is the product of all power functions for individual endpoints. With correlated endpoints, the sample size and power should be adjusted for such a correlation. In this article, we propose the exact power function for the equivalence test with multiple endpoints adjusted for correlation under both crossover and parallel designs. We further discuss the differences in sample size between the naive method without correlation adjustment and the correlation-adjusted method, and illustrate with an in vivo bioequivalence crossover study with area under the curve (AUC) and maximum concentration (Cmax) as the two endpoints.
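
    As a concrete, hedged illustration of the naive uncorrelated case described above, the sketch below uses the usual normal approximation to TOST power for a parallel design and multiplies the per-endpoint powers; the margins, standard deviations, true differences and sample size are assumptions, and this is not the article's exact correlation-adjusted method.

    ```python
    # Hedged sketch: approximate TOST (equivalence) power for one endpoint, and
    # the product rule for independent endpoints.  All inputs are illustrative.
    from math import sqrt
    from scipy.stats import norm

    def tost_power(n_per_group, true_diff, sd, margin, alpha=0.05):
        """Normal-approximation power of two one-sided tests (parallel design)."""
        se = sd * sqrt(2.0 / n_per_group)
        z = norm.ppf(1 - alpha)
        power = (norm.cdf((margin - true_diff) / se - z)
                 + norm.cdf((margin + true_diff) / se - z) - 1.0)
        return max(0.0, power)

    # Two endpoints treated as independent (e.g. log-scale AUC and Cmax)
    p_auc = tost_power(n_per_group=24, true_diff=0.02, sd=0.25, margin=0.223)   # 0.223 = ln(1.25)
    p_cmax = tost_power(n_per_group=24, true_diff=0.05, sd=0.30, margin=0.223)
    print(p_auc, p_cmax, p_auc * p_cmax)   # joint power if correlation is ignored
    ```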

  9. Sample size calculations for cluster randomised crossover trials in Australian and New Zealand intensive care research.

    PubMed

    Arnup, Sarah J; McKenzie, Joanne E; Pilcher, David; Bellomo, Rinaldo; Forbes, Andrew B

    2018-06-01

    The cluster randomised crossover (CRXO) design provides an opportunity to conduct randomised controlled trials to evaluate low risk interventions in the intensive care setting. Our aim is to provide a tutorial on how to perform a sample size calculation for a CRXO trial, focusing on the meaning of the elements required for the calculations, with application to intensive care trials. We use all-cause in-hospital mortality from the Australian and New Zealand Intensive Care Society Adult Patient Database clinical registry to illustrate the sample size calculations. We show sample size calculations for a two-intervention, two 12-month period, cross-sectional CRXO trial. We provide the formulae, and examples of their use, to determine the number of intensive care units required to detect a risk ratio (RR) with a designated level of power between two interventions for trials in which the elements required for sample size calculations remain constant across all ICUs (unstratified design); and in which there are distinct groups (strata) of ICUs that differ importantly in the elements required for sample size calculations (stratified design). The CRXO design markedly reduces the sample size requirement compared with the parallel-group, cluster randomised design for the example cases. The stratified design further reduces the sample size requirement compared with the unstratified design. The CRXO design enables the evaluation of routinely used interventions that can bring about small, but important, improvements in patient care in the intensive care setting.

  10. Causality in Statistical Power: Isomorphic Properties of Measurement, Research Design, Effect Size, and Sample Size.

    PubMed

    Heidel, R Eric

    2016-01-01

    Statistical power is the ability to detect a significant effect, given that the effect actually exists in a population. Like most statistical concepts, statistical power tends to induce cognitive dissonance in hepatology researchers. However, planning for statistical power by an a priori sample size calculation is of paramount importance when designing a research study. There are five specific empirical components that make up an a priori sample size calculation: the scale of measurement of the outcome, the research design, the magnitude of the effect size, the variance of the effect size, and the sample size. A framework grounded in the phenomenon of isomorphism, or interdependencies amongst different constructs with similar forms, will be presented to understand the isomorphic effects of decisions made on each of the five aforementioned components of statistical power.
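
    A minimal worked example of how those five components enter a calculation, assuming the simplest case of a two-sided comparison of two independent means with a standardized effect size (the inputs and the normal approximation are illustrative, not a prescription from the article):

    ```python
    # Minimal sketch: a priori sample size for comparing two independent means.
    # effect_size is Cohen's d (magnitude of the effect relative to its variance);
    # alpha and power encode the design and desired error rates.  Illustrative only.
    from math import ceil
    from scipy.stats import norm

    def n_per_group(effect_size, alpha=0.05, power=0.80):
        z_alpha = norm.ppf(1 - alpha / 2)
        z_beta = norm.ppf(power)
        return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

    print(n_per_group(0.5))   # about 63 per group (the exact t-based value is slightly larger)
    ```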

  11. Parallelism in integrated fluidic circuits

    NASA Astrophysics Data System (ADS)

    Bousse, Luc J.; Kopf-Sill, Anne R.; Parce, J. W.

    1998-04-01

    Many research groups around the world are working on integrated microfluidics. The goal of these projects is to automate and integrate the handling of liquid samples and reagents for measurement and assay procedures in chemistry and biology. Ultimately, it is hoped that this will lead to a revolution in chemical and biological procedures similar to that caused in electronics by the invention of the integrated circuit. The optimal size scale of channels for liquid flow is determined by basic constraints to be somewhere between 10 and 100 micrometers. In larger channels, mixing by diffusion takes too long; in smaller channels, the number of molecules present is so low it makes detection difficult. At Caliper, we are making fluidic systems in glass chips with channels in this size range, based on electroosmotic flow and fluorescence detection. One application of this technology is rapid assays for drug screening, such as enzyme assays and binding assays. A further challenge in this area is to perform multiple functions on a chip in parallel, without a large increase in the number of inputs and outputs. A first step in this direction is a fluidic serial-to-parallel converter. Fluidic circuits will be shown with the ability to distribute an incoming serial sample stream to multiple parallel channels.

  12. Microliter-sized ionization device and method

    NASA Technical Reports Server (NTRS)

    Simac, Robert M. (Inventor); Wernlund, Roger F. (Inventor); Cohen, Martin J. (Inventor)

    1999-01-01

    A microliter-sized metastable ionization device with a cavity, a sample gas inlet, a corona gas inlet and a gas outlet. A first electrode has a hollow end disposed in the cavity and is in fluid communication with the sample gas inlet. A second electrode is in fluid communication with the corona gas inlet and is disposed around the first electrode adjacent the hollow end thereof. A gap forming means forms a corona gap between the first and second electrodes. A first power supply is connected to the first electrode and a second power supply is connected to the second electrode for generating a corona discharge across the corona gap. A collector has a hollow end portion disposed in the cavity which is in fluid communication with the gas outlet for the outgassing and detection of ionized gases. The first electrode can be a tubular member aligned concentrically with a cylindrical second electrode. The gap forming means can be an annular disc projecting radially inward from the cylindrical second electrode. The collector can have a tubular opening aligned coaxially with the first electrode and has an end face spaced a short distance from an end face of the first electrode, forming a small active volume therebetween for the generation and detection of small quantities of trace analytes.

  13. INTERIM REPORT--INDEPENDENT VERIFICATION SURVEY OF SECTION 3, SURVEY UNITS 1, 4 AND 5 EXCAVATED SURFACES, WHITTAKER CORPORATION, REYNOLDS INDUSTRIAL PARK, TRANSFER, PENNSYLVANIA DCN: 5002-SR-04-0"

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    ADAMS, WADE C

    At the Pennsylvania Department of Environmental Protection's request, ORAU's IEAV program conducted verification surveys on the excavated surfaces of Section 3, SUs 1, 4, and 5 at the Whittaker site on March 13 and 14, 2013. The survey activities included visual inspections, gamma radiation surface scans, gamma activity measurements, and soil sampling activities. Verification activities also included the review and assessment of the licensee's project documentation and methodologies. Surface scans identified four areas of elevated direct gamma radiation distinguishable from background; one area within SUs 1 and 4 and two areas within SU5. One area within SU5 was remediated by removing a golf-ball-sized piece of slag while ORAU staff was onsite. With the exception of the golf-ball-sized piece of slag within SU5, a review of the ESL Section 3 EXS data packages for SUs 1, 4, and 5 indicated that these locations of elevated gamma radiation were also identified by the ESL gamma scans and that ESL personnel performed additional investigations and soil sampling within these areas. The investigative results indicated that the areas met the release criteria.

  14. Sample size adjustments for varying cluster sizes in cluster randomized trials with binary outcomes analyzed with second-order PQL mixed logistic regression.

    PubMed

    Candel, Math J J M; Van Breukelen, Gerard J P

    2010-06-30

    Adjustments of sample size formulas are given for varying cluster sizes in cluster randomized trials with a binary outcome when testing the treatment effect with mixed effects logistic regression using second-order penalized quasi-likelihood estimation (PQL). Starting from first-order marginal quasi-likelihood (MQL) estimation of the treatment effect, the asymptotic relative efficiency of unequal versus equal cluster sizes is derived. A Monte Carlo simulation study shows this asymptotic relative efficiency to be rather accurate for realistic sample sizes, when employing second-order PQL. An approximate, simpler formula is presented to estimate the efficiency loss due to varying cluster sizes when planning a trial. In many cases sampling 14 per cent more clusters is sufficient to repair the efficiency loss due to varying cluster sizes. Since current closed-form formulas for sample size calculation are based on first-order MQL, planning a trial also requires a conversion factor to obtain the variance of the second-order PQL estimator. In a second Monte Carlo study, this conversion factor turned out to be 1.25 at most. (c) 2010 John Wiley & Sons, Ltd.

  15. 40 CFR 80.127 - Sample size guidelines.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Sample size guidelines. 80.127 Section 80.127 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Attest Engagements § 80.127 Sample size guidelines. In performing the...

  16. Methods for sample size determination in cluster randomized trials

    PubMed Central

    Rutterford, Clare; Copas, Andrew; Eldridge, Sandra

    2015-01-01

    Background: The use of cluster randomized trials (CRTs) is increasing, along with the variety in their design and analysis. The simplest approach for their sample size calculation is to calculate the sample size assuming individual randomization and inflate this by a design effect to account for randomization by cluster. The assumptions of a simple design effect may not always be met; alternative or more complicated approaches are required. Methods: We summarise a wide range of sample size methods available for cluster randomized trials. For those familiar with sample size calculations for individually randomized trials but with less experience in the clustered case, this manuscript provides formulae for a wide range of scenarios with associated explanation and recommendations. For those with more experience, comprehensive summaries are provided that allow quick identification of methods for a given design, outcome and analysis method. Results: We present first those methods applicable to the simplest two-arm, parallel group, completely randomized design followed by methods that incorporate deviations from this design such as: variability in cluster sizes; attrition; non-compliance; or the inclusion of baseline covariates or repeated measures. The paper concludes with methods for alternative designs. Conclusions: There is a large amount of methodology available for sample size calculations in CRTs. This paper gives the most comprehensive description of published methodology for sample size calculation and provides an important resource for those designing these trials. PMID:26174515
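
    The "simplest approach" mentioned above can be written down in a few lines; the sketch below assumes the standard design effect 1 + (m - 1) × ICC with illustrative inputs, and is not a substitute for the more refined methods the review summarizes.

    ```python
    # Minimal sketch: inflate an individually randomized sample size by the
    # standard design effect for clustering.  All inputs are illustrative.
    from math import ceil

    def crt_sample_size(n_individual, mean_cluster_size, icc):
        """Total participants and clusters needed after inflating for clustering."""
        design_effect = 1 + (mean_cluster_size - 1) * icc
        n_total = ceil(n_individual * design_effect)
        n_clusters = ceil(n_total / mean_cluster_size)
        return n_total, n_clusters

    # e.g. 128 participants needed under individual randomization, clusters of ~20, ICC 0.05
    print(crt_sample_size(n_individual=128, mean_cluster_size=20, icc=0.05))   # (250, 13)
    ```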

  17. An imbalance in cluster sizes does not lead to notable loss of power in cross-sectional, stepped-wedge cluster randomised trials with a continuous outcome.

    PubMed

    Kristunas, Caroline A; Smith, Karen L; Gray, Laura J

    2017-03-07

    The current methodology for sample size calculations for stepped-wedge cluster randomised trials (SW-CRTs) is based on the assumption of equal cluster sizes. However, as is often the case in cluster randomised trials (CRTs), the clusters in SW-CRTs are likely to vary in size, which in other designs of CRT leads to a reduction in power. The effect of an imbalance in cluster size on the power of SW-CRTs has not previously been reported, nor what an appropriate adjustment to the sample size calculation should be to allow for any imbalance. We aimed to assess the impact of an imbalance in cluster size on the power of a cross-sectional SW-CRT and recommend a method for calculating the sample size of a SW-CRT when there is an imbalance in cluster size. The effect of varying degrees of imbalance in cluster size on the power of SW-CRTs was investigated using simulations. The sample size was calculated using both the standard method and two proposed adjusted design effects (DEs), based on those suggested for CRTs with unequal cluster sizes. The data were analysed using generalised estimating equations with an exchangeable correlation matrix and robust standard errors. An imbalance in cluster size was not found to have a notable effect on the power of SW-CRTs. The two proposed adjusted DEs resulted in trials that were generally considerably over-powered. We recommend that the standard method of sample size calculation for SW-CRTs be used, provided that the assumptions of the method hold. However, it would be beneficial to investigate, through simulation, what effect the maximum likely amount of inequality in cluster sizes would be on the power of the trial and whether any inflation of the sample size would be required.

  18. Sampling for area estimation: A comparison of full-frame sampling with the sample segment approach. [Kansas

    NASA Technical Reports Server (NTRS)

    Hixson, M. M.; Bauer, M. E.; Davis, B. J.

    1979-01-01

    The effect of sampling on the accuracy (precision and bias) of crop area estimates made from classifications of LANDSAT MSS data was investigated. Full-frame classifications of wheat and non-wheat for eighty counties in Kansas were repetitively sampled to simulate alternative sampling plans. Four sampling schemes involving different numbers of samples and different size sampling units were evaluated. The precision of the wheat area estimates increased as the segment size decreased and the number of segments was increased. Although the average bias associated with the various sampling schemes was not significantly different, the maximum absolute bias was directly related to sampling unit size.

  19. The UK Biobank sample handling and storage protocol for the collection, processing and archiving of human blood and urine.

    PubMed

    Elliott, Paul; Peakman, Tim C

    2008-04-01

    UK Biobank is a large prospective study in the UK to investigate the role of genetic factors, environmental exposures and lifestyle in the causes of major diseases of late and middle age. Extensive data and biological samples are being collected from 500,000 participants aged between 40 and 69 years. The biological samples that are collected and how they are processed and stored will have a major impact on the future scientific usefulness of the UK Biobank resource. The aim of the UK Biobank sample handling and storage protocol is to specify methods for the collection and storage of participant samples that give maximum scientific return within the available budget. Processing or storage methods that, as far as can be predicted, will preclude current or future assays have been avoided. The protocol was developed through a review of the literature on sample handling and processing, wide consultation within the academic community and peer review. Protocol development addressed which samples should be collected, how and when they should be processed and how the processed samples should be stored to ensure their long-term integrity. The recommended protocol was extensively tested in a series of validation studies. UK Biobank collects about 45 ml blood and 9 ml of urine with minimal local processing from each participant using the vacutainer system. A variety of preservatives, anti-coagulants and clot accelerators is used appropriate to the expected end use of the samples. Collection of other material (hair, nails, saliva and faeces) was also considered but rejected for the full cohort. Blood and urine samples from participants are transported overnight by commercial courier to a central laboratory where they are processed and aliquots of urine, plasma, serum, white cells and red cells stored in ultra-low temperature archives. Aliquots of whole blood are also stored for potential future production of immortalized cell lines. A standard panel of haematology assays is completed on whole blood from all participants, since such assays need to be conducted on fresh samples (whereas other assays can be done on stored samples). By the end of the recruitment phase, 15 million sample aliquots will be stored in two geographically separate archives: 9.5 million in a -80 degrees C automated archive and 5.5 million in a manual liquid nitrogen archive at -180 degrees C. Because of the size of the study and the numbers of samples obtained from participants, the protocol stipulates a highly automated approach for the processing and storage of samples. Implementation of the processes, technology, systems and facilities has followed best practices used in manufacturing industry to reduce project risk and to build in quality and robustness. The data produced from sample collection, processing and storage are highly complex and are managed by a commercially available LIMS system fully integrated with the entire process. The sample handling and storage protocol adopted by UK Biobank provides quality assured and validated methods that are feasible within the available funding and reflect the size and aims of the project. Experience from recruiting and processing the first 40,000 participants to the study demonstrates that the adopted methods and technologies are fit-for-purpose and robust.

  20. 7 CFR 4279.261 - Application for loan guarantee content.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... project; (2) The viability of such technology for the particular project application; (3) The development... location, size, etc.). Economic feasibility determination. Market feasibility determination. Technical.... Recommendations for implementation. (B) Economic Feasibility: Information regarding project site; Availability of...
