Sample records for sample sizes lack

  1. Effects of Group Size and Lack of Sphericity on the Recovery of Clusters in K-means Cluster Analysis.

    PubMed

    Craen, Saskia de; Commandeur, Jacques J F; Frank, Laurence E; Heiser, Willem J

    2006-06-01

    K-means cluster analysis is known for its tendency to produce spherical and equally sized clusters. To assess the magnitude of these effects, a simulation study was conducted in which populations were created with varying departures from sphericity and group sizes. An analysis of the recovery of clusters in samples taken from these populations showed a significant effect of lack of sphericity and group size. The effect was, however, not as large as expected: the recovery index remained above 0.5 even in the "worst case scenario." An interaction effect between the two data aspects was also found: the decreasing trend in cluster recovery with increasing departures from sphericity differs between equal and unequal group sizes.

  2. Reporting of sample size calculations in analgesic clinical trials: ACTTION systematic review.

    PubMed

    McKeown, Andrew; Gewandter, Jennifer S; McDermott, Michael P; Pawlowski, Joseph R; Poli, Joseph J; Rothstein, Daniel; Farrar, John T; Gilron, Ian; Katz, Nathaniel P; Lin, Allison H; Rappaport, Bob A; Rowbotham, Michael C; Turk, Dennis C; Dworkin, Robert H; Smith, Shannon M

    2015-03-01

    Sample size calculations determine the number of participants required to have sufficiently high power to detect a given treatment effect. In this review, we examined the reporting quality of sample size calculations in 172 publications of double-blind randomized controlled trials of noninvasive pharmacologic or interventional (i.e., invasive) pain treatments published in European Journal of Pain, Journal of Pain, and Pain from January 2006 through June 2013. Sixty-five percent of publications reported a sample size calculation, but only 38% provided all elements required to replicate the calculated sample size. In publications reporting at least 1 element, 54% provided a justification for the treatment effect used to calculate sample size, and 24% of studies with continuous outcome variables justified the variability estimate. Publications of clinical pain condition trials reported a sample size calculation more frequently than experimental pain model trials (77% vs 33%, P < .001) but did not differ in the frequency of reporting all required elements. No significant differences in reporting of any or all elements were detected between publications of trials with industry and nonindustry sponsorship. Twenty-eight percent included a discrepancy between the reported number of planned and randomized participants. This study suggests that sample size calculation reporting in analgesic trial publications is usually incomplete. Investigators should provide detailed accounts of sample size calculations in publications of clinical trials of pain treatments, as these are necessary for reporting transparency and communication of pre-trial design decisions. In this systematic review of analgesic clinical trials, sample size calculations and their required elements (e.g., treatment effect to be detected; power level) were incompletely reported. A lack of transparency regarding sample size calculations may raise questions about the appropriateness of the calculated sample size.
Copyright © 2015 American Pain Society. All rights reserved.
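    The under-reported elements this review lists (treatment effect to be detected, variability, significance level, power) are exactly the inputs of a standard two-sample calculation. As a hedged illustration of how those elements combine (a textbook normal-approximation sketch, not taken from the review itself):

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(delta, sd, alpha=0.05, power=0.80):
    """Approximate per-arm sample size for a two-sample comparison of means.

    delta : treatment effect to be detected (difference in means)
    sd    : common standard deviation of the outcome
    alpha : two-sided significance level; power : desired power
    These four inputs are the elements the review found under-reported.
    """
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)   # critical value for a two-sided test
    z_beta = z(power)            # quantile corresponding to the target power
    return ceil(2 * (z_alpha + z_beta) ** 2 * (sd / delta) ** 2)

# Hypothetical example: detect a 1-point difference on a 0-10 pain scale, SD = 2.5
print(n_per_arm(delta=1.0, sd=2.5))  # 99 per arm
```

    Replicating a published calculation is only possible when all four inputs are stated, which is precisely the transparency gap the review documents.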

  3. Effects of Group Size and Lack of Sphericity on the Recovery of Clusters in K-Means Cluster Analysis

    ERIC Educational Resources Information Center

    de Craen, Saskia; Commandeur, Jacques J. F.; Frank, Laurence E.; Heiser, Willem J.

    2006-01-01

    K-means cluster analysis is known for its tendency to produce spherical and equally sized clusters. To assess the magnitude of these effects, a simulation study was conducted, in which populations were created with varying departures from sphericity and group sizes. An analysis of the recovery of clusters in the samples taken from these…

  4. Comparison of day snorkeling, night snorkeling, and electrofishing to estimate bull trout abundance and size structure in a second-order Idaho stream

    Treesearch

    Russell F. Thurow; Daniel J. Schill

    1996-01-01

    Biologists lack sufficient information to develop protocols for sampling the abundance and size structure of bull trout Salvelinus confluentus. We compared summer estimates of the abundance and size structure of bull trout in a second-order central Idaho stream, derived by day snorkeling, night snorkeling, and electrofishing. We also examined the influence of water...

  5. The Effects of Maternal Social Phobia on Mother-Infant Interactions and Infant Social Responsiveness

    ERIC Educational Resources Information Center

    Murray, Lynne; Cooper, Peter; Creswell, Cathy; Schofield, Elizabeth; Sack, Caroline

    2007-01-01

    Background: Social phobia aggregates in families. The genetic contribution to intergenerational transmission is modest, and parenting is considered important. Research on the effects of social phobia on parenting has been subject to problems of small sample size, heterogeneity of samples and lack of specificity of observational frameworks. We…

  6. Sample Size Calculations for Population Size Estimation Studies Using Multiplier Methods With Respondent-Driven Sampling Surveys.

    PubMed

    Fearon, Elizabeth; Chabata, Sungai T; Thompson, Jennifer A; Cowan, Frances M; Hargreaves, James R

    2017-09-14

    While guidance exists for obtaining population size estimates using multiplier methods with respondent-driven sampling surveys, specific guidance for making sample size decisions is lacking. Here we aim to guide the design of multiplier-method population size estimation studies that use respondent-driven sampling surveys, so as to reduce the random error around the estimate obtained. The population size estimate is obtained by dividing the number of individuals receiving a service or the number of unique objects distributed (M) by the proportion of individuals in a representative survey who report receipt of the service or object (P). We have developed an approach to sample size calculation that interprets methods for estimating the variance around estimates obtained using multiplier methods in conjunction with research into design effects in respondent-driven sampling. We describe an application to estimate the number of female sex workers in Harare, Zimbabwe. There is high variance in the estimates. Random error around the size estimate reflects uncertainty from both M and P, particularly when the estimate of P in the respondent-driven sampling survey is low. As expected, sample size requirements are higher when the design effect of the survey is assumed to be greater. We suggest a method for investigating the effects of sample size on the precision of a population size estimate obtained using multiplier methods and respondent-driven sampling. Uncertainty in the size estimate is high, particularly when P is small; balancing this against other potential sources of bias, we advise researchers to consider longer service attendance reference periods and to distribute more unique objects, which is likely to result in a higher estimate of P in the respondent-driven sampling survey. ©Elizabeth Fearon, Sungai T Chabata, Jennifer A Thompson, Frances M Cowan, James R Hargreaves. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 14.09.2017.
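    The estimator described above is simply N = M / P. A minimal sketch of the point estimate with a delta-method confidence interval follows; the design-effect value and the example numbers are illustrative assumptions, not the Harare data:

```python
from math import sqrt
from statistics import NormalDist

def multiplier_estimate(M, p_hat, n_survey, deff=2.0, conf=0.95):
    """Multiplier-method population size estimate with a delta-method CI.

    M        : count of unique objects distributed (or service users)
    p_hat    : proportion in the RDS survey reporting receipt
    n_survey : RDS survey sample size
    deff     : assumed design effect of the RDS survey (illustrative value)
    """
    N_hat = M / p_hat
    var_p = deff * p_hat * (1 - p_hat) / n_survey   # design-effect-inflated binomial variance
    se_N = (M / p_hat ** 2) * sqrt(var_p)           # delta method: |dN/dp| * se(p)
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    return N_hat, (N_hat - z * se_N, N_hat + z * se_N)

# Hypothetical example: 1200 objects distributed, 30% of 400 respondents report receipt
est, (lo, hi) = multiplier_estimate(M=1200, p_hat=0.30, n_survey=400, deff=2.0)
print(round(est), round(lo), round(hi))
```

    Rerunning this with a smaller p_hat widens the interval sharply, which is the abstract's rationale for designs that push P upward.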

  7. Sample size calculation in economic evaluations.

    PubMed

    Al, M J; van Hout, B A; Michel, B C; Rutten, F F

    1998-06-01

    A simulation method is presented for sample size calculation in economic evaluations. As input the method requires: the expected difference and variance of costs and effects, their correlation, the significance level (alpha) and the power of the testing method, and the maximum acceptable ratio of incremental effectiveness to incremental costs. The method is illustrated with data from two trials: the first compares primary coronary angioplasty with streptokinase in the treatment of acute myocardial infarction; in the second, lansoprazole is compared with omeprazole in the treatment of reflux oesophagitis. These case studies show how the various parameters influence the sample size. Given the large number of parameters that have to be specified in advance, the lack of knowledge about costs and their standard deviation, and the difficulty of specifying the maximum acceptable ratio of incremental effectiveness to incremental costs, the study concludes that, from a technical point of view, it is possible to perform a sample size calculation for an economic evaluation, but one should question how useful it is.

  8. Flow field-flow fractionation for the analysis of nanoparticles used in drug delivery.

    PubMed

    Zattoni, Andrea; Roda, Barbara; Borghi, Francesco; Marassi, Valentina; Reschiglian, Pierluigi

    2014-01-01

    Structured nanoparticles (NPs) with controlled size distribution and novel physicochemical features present fundamental advantages over bulk drugs as drug delivery systems. NPs can transport and release drugs to target sites with high efficiency and limited side effects. Regulatory institutions such as the US Food and Drug Administration (FDA) and the European Commission have pointed out that major limitations to the real application of current nanotechnology lie in the lack of homogeneous, pure, and well-characterized NPs, in part because well-assessed, robust routine methods for their quality control and characterization are also lacking. Many properties of NPs are size-dependent, so the particle size distribution (PSD) plays a fundamental role in determining NP properties. At present, scanning and transmission electron microscopy (SEM, TEM) are among the most widely used techniques for characterizing NP size. Size-exclusion chromatography (SEC) is also applied to the size separation of complex NP samples. SEC selectivity is, however, quite limited for very large molar mass analytes such as NPs, and interactions with the stationary phase can alter NP morphology. Flow field-flow fractionation (F4) is increasingly used as a mature separation method to size-sort and characterize NPs under native conditions. Moreover, hyphenation with light scattering (LS) methods can enhance the accuracy of size analysis of complex samples. In this paper, we review applications of F4-LS to the size analysis of NPs used as drug delivery systems and to studies of their stability and drug release behavior. Copyright © 2013 Elsevier B.V. All rights reserved.

  9. A roughness-corrected index of relative bed stability for regional stream surveys

    EPA Science Inventory

    Quantitative regional assessments of streambed sedimentation and its likely causes are hampered because field investigations typically lack the requisite sample size, measurements, or precision for sound geomorphic and statistical interpretation. We adapted an index of relative b...

  10. Estimating the quadratic mean diameters of fine woody debris in forests of the United States

    Treesearch

    Christopher W. Woodall; Vicente J. Monleon

    2010-01-01

    Most fine woody debris (FWD) line-intersect sampling protocols and associated estimators require an approximation of the quadratic mean diameter (QMD) of each individual FWD size class. There is a lack of empirically derived QMDs by FWD size class and species/forest type across the U.S. The objective of this study is to evaluate a technique known as the graphical...

  11. Design of Phase II Non-inferiority Trials.

    PubMed

    Jung, Sin-Ho

    2017-09-01

    With the development of inexpensive treatment regimens and less invasive surgical procedures, we are increasingly confronted with non-inferiority study objectives. A non-inferiority phase III trial requires roughly four times the sample size of a comparable standard superiority trial. Because of the large required sample size, we often face feasibility issues in opening a non-inferiority trial. Furthermore, due to the lack of phase II non-inferiority trial design methods, we have no opportunity to investigate the efficacy of the experimental therapy through a phase II trial. As a result, we often fail to open a non-inferiority phase III trial, and a large number of non-inferiority clinical questions remain unanswered. In this paper, we develop designs for non-inferiority randomized phase II trials with feasible sample sizes. First, we review a design method for non-inferiority phase III trials. We then propose three different designs for non-inferiority phase II trials that can be used under different settings. Each method is demonstrated with examples. Each of the proposed design methods is shown to require a reasonable sample size for non-inferiority phase II trials. The three designs are intended for different settings but require similar sample sizes, typical of phase II trials.
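    The roughly four-fold sample size penalty follows from the usual normal-approximation formula: required n scales with 1/margin², and a non-inferiority margin is typically set near half of a superiority effect. A hedged sketch of that arithmetic (a generic formula, not the paper's proposed phase II designs):

```python
from math import ceil
from statistics import NormalDist

def n_per_arm_one_sided(margin, sd, alpha=0.025, power=0.80):
    """Per-arm sample size for comparing two means, one-sided test.

    For a superiority trial, `margin` is the effect to detect; for a
    non-inferiority trial it is the non-inferiority margin.
    """
    z = NormalDist().inv_cdf
    return ceil(2 * (z(1 - alpha) + z(power)) ** 2 * (sd / margin) ** 2)

# With the same one-sided alpha in both designs, halving the margin
# (a common choice for non-inferiority) quadruples the required n:
n_sup = n_per_arm_one_sided(margin=1.0, sd=2.0)  # superiority effect of 1.0
n_ni = n_per_arm_one_sided(margin=0.5, sd=2.0)   # non-inferiority margin of 0.5
print(n_sup, n_ni)  # prints 63 252
```

    The 1/margin² scaling is what makes phase III non-inferiority trials hard to open, which motivates the phase II designs the paper proposes.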

  12. Lack of size selectivity for paddlefish captured in hobbled gillnets

    USGS Publications Warehouse

    Scholten, G.D.; Bettoli, P.W.

    2007-01-01

    A commercial fishery for paddlefish Polyodon spathula caviar exists in Kentucky Lake, a reservoir on the lower Tennessee River. A 152-mm (bar-measure) minimum mesh size restriction on entanglement gear was enacted in 2002 and the minimum size limit was increased to 864 mm eye-fork length to reduce the possibility of recruitment overfishing. Paddlefish were sampled in 2003-2004 using experimental monofilament gillnets with panels of 89, 102, 127, 152, 178, and 203-mm meshes and the efficacy of the mesh size restriction was evaluated. Following the standards of commercial gear used in that fishery, nets were "hobbled" (i.e., 128 m × 3.6 m nets were tied down to 2.4 m; 91 m × 9.1 m nets were tied down to 7.6 m). The mean lengths of paddlefish (N_total = 576 fish) captured in each mesh were similar among most meshes and bycatch rates of sublegal fish did not vary with mesh size. Selectivity curves could not be modeled because the mean and modal lengths of fish captured in each mesh did not increase with mesh size. Ratios of fish girth to mesh perimeter (G:P) for individual fish were often less than 1.0 as a result of the largest meshes capturing small paddlefish. It is unclear whether lack of size selectivity for paddlefish was because the gillnets were hobbled, the unique morphology of paddlefish, or the fact that they swim with their mouths agape when filter feeding. The lack of size selectivity by hobbled gillnets fished in Kentucky Lake means that managers cannot influence the size of paddlefish captured by commercial gillnet gear by changing minimum mesh size regulations. © 2006 Elsevier B.V. All rights reserved.

  13. Impacts of Industrial Wind Turbine Noise on Sleep Quality: Results From a Field Study of Rural Residents in Ontario, Canada.

    PubMed

    Lane, James D; Bigelow, Philip L; Majowicz, Shannon E; McColl, R Stephen

    2016-07-01

    The objectives of this study were to determine whether grid-connected industrial wind turbines (IWTs) are a risk factor for poor sleep quality, and if IWT noise is associated with sleep parameters in rural Ontarians. A daily sleep diary and actigraphy-derived measures of sleep were obtained from 12 participants from an IWT community and 10 participants from a comparison community with no wind power installations. The equivalent and maximum sound pressure levels within the bedroom were also assessed. No statistically significant differences were observed between IWT residents and non-IWT residents for any of the parameters measured in this study. Actigraphy and sleep diaries are feasible tools to understand the impact of IWTs on the quality of sleep for nearby residents. Further studies with larger sample sizes should be conducted to determine whether the lack of statistical significance observed here is a result of sample size, or reflects a true lack of association.

  14. Sample Size in Clinical Cardioprotection Trials Using Myocardial Salvage Index, Infarct Size, or Biochemical Markers as Endpoint.

    PubMed

    Engblom, Henrik; Heiberg, Einar; Erlinge, David; Jensen, Svend Eggert; Nordrehaug, Jan Erik; Dubois-Randé, Jean-Luc; Halvorsen, Sigrun; Hoffmann, Pavel; Koul, Sasha; Carlsson, Marcus; Atar, Dan; Arheden, Håkan

    2016-03-09

    Cardiac magnetic resonance (CMR) can quantify myocardial infarct (MI) size and myocardium at risk (MaR), enabling assessment of the myocardial salvage index (MSI). We assessed how using MSI, rather than MI size alone or levels of biochemical markers, affects the number of patients needed to reach statistical power in clinical cardioprotection trials, and how scan day affects sample size. Controls (n=90) from the recent CHILL-MI and MITOCARE trials were included. MI size, MaR, and MSI were assessed from CMR. High-sensitivity troponin T (hsTnT) and creatine kinase isoenzyme MB (CKMB) levels were assessed in CHILL-MI patients (n=50). Utilizing the distributions of these variables, 100 000 clinical trials were simulated to calculate the sample size required for sufficient power. For a treatment effect of 25% decrease in outcome variables, 50 patients were required in each arm using MSI, compared to 93, 98, 120, 141, and 143 for MI size alone, hsTnT (area under the curve [AUC] and peak), and CKMB (AUC and peak), in order to reach a power of 90%. If the average CMR scan day differs by 1 day between treatment and control arms, the sample size needs to be increased by 54% (77 vs 50) to avoid scan-day bias masking a treatment effect of 25%. Sample size in cardioprotection trials can thus be reduced by 46% to 65% without compromising statistical power when MSI by CMR is used as the outcome variable instead of MI size alone or biochemical markers. It is essential to avoid bias in scan day between treatment and control arms so as not to compromise statistical power. © 2016 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.
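    The trial-simulation idea can be sketched generically: repeatedly draw control and treated outcomes, test for a difference, and report the fraction of significant results as the power. The outcome distribution below is an illustrative placeholder, not the CHILL-MI/MITOCARE estimates:

```python
import random
from statistics import NormalDist, mean, stdev

def simulated_power(n_per_arm, ctrl_mean, ctrl_sd, effect=0.25,
                    alpha=0.05, n_trials=2000, seed=1):
    """Monte Carlo power for a two-arm trial (two-sided z-test on means).

    ctrl_mean / ctrl_sd describe the control-arm outcome (e.g. MSI);
    treatment improves the outcome by a relative `effect`.
    Values used in the example are illustrative assumptions.
    """
    rng = random.Random(seed)
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    hits = 0
    for _ in range(n_trials):
        ctrl = [rng.gauss(ctrl_mean, ctrl_sd) for _ in range(n_per_arm)]
        trt = [rng.gauss(ctrl_mean * (1 + effect), ctrl_sd) for _ in range(n_per_arm)]
        # standard error of the difference in means, then a z-test
        se = (stdev(ctrl) ** 2 / n_per_arm + stdev(trt) ** 2 / n_per_arm) ** 0.5
        if abs(mean(trt) - mean(ctrl)) / se > z_crit:
            hits += 1
    return hits / n_trials

print(simulated_power(50, ctrl_mean=50, ctrl_sd=20))
```

    Repeating this over a grid of n values and taking the smallest n that reaches the target power reproduces the sample size search described in the abstract; outcomes with less spread relative to the treatment effect (like MSI) need fewer patients.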

  15. Cancer-Related Fatigue and Its Associations with Depression and Anxiety: A Systematic Review

    PubMed Central

    Brown, Linda F.; Kroenke, Kurt

    2010-01-01

    Background: Fatigue is an important symptom in cancer and has been shown to be associated with psychological distress. Objectives: This review assesses evidence regarding associations of cancer-related fatigue (CRF) with depression and anxiety. Methods: Database searches yielded 59 studies reporting correlation coefficients or odds ratios. Results: The combined sample size was 12,103. The average correlation of fatigue with depression, weighted by sample size, was 0.56; with anxiety, 0.46. Thirty-one different instruments were used to assess fatigue, suggesting a lack of consensus on measurement. Conclusion: This review confirms the association of fatigue with depression and anxiety. Directionality needs to be better delineated in longitudinal studies. PMID:19855028
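    A sample-size-weighted average correlation of the kind reported here can be computed as follows; this is a simple fixed-weight pooling with hypothetical study values, not the review's full meta-analytic method:

```python
def weighted_mean_correlation(correlations, sample_sizes):
    """Sample-size-weighted average of study-level correlations.

    Each study's r is weighted by its n, so large studies dominate the pooled value.
    """
    total_n = sum(sample_sizes)
    return sum(r * n for r, n in zip(correlations, sample_sizes)) / total_n

# Hypothetical studies correlating fatigue with depression:
print(round(weighted_mean_correlation([0.60, 0.50, 0.55], [100, 300, 200]), 3))  # 0.533
```

    A fuller analysis would transform each r to Fisher's z before pooling; the simple weighted mean shown here is enough to see how sample size drives the reported averages.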

  16. Behavioral Phenotype in Adults with Prader-Willi Syndrome

    ERIC Educational Resources Information Center

    Sinnema, Margje; Einfeld, Stewart L.; Schrander-Stumpel, Constance T. R. M.; Maaskant, Marian A.; Boer, Harm; Curfs, Leopold M. G.

    2011-01-01

    Prader-Willi syndrome (PWS) is characterized by temper tantrums, impulsivity, mood fluctuations, difficulty with change in routine, skinpicking, stubbornness and aggression. Many studies on behavior in PWS are limited by sample size, age range, a lack of genetically confirmed diagnosis of PWS and inconsistent assessment of behavior. The aim of…

  17. Fossil shrews from Honduras and their significance for late glacial evolution in body size (Mammalia: Soricidae: Cryptotis)

    USGS Publications Warehouse

    Woodman, N.; Croft, D.A.

    2005-01-01

    Our study of mammalian remains excavated in the 1940s from McGrew Cave, north of Copan, Honduras, yielded an assemblage of 29 taxa that probably accumulated predominantly as the result of predation by owls. Among the taxa present are three species of small-eared shrews, genus Cryptotis. One species, Cryptotis merriami, is relatively rare among the fossil remains. The other two shrews, Cryptotis goodwini and Cryptotis orophila, are abundant and exhibit morphometrical variation distinguishing them from modern populations. Fossils of C. goodwini are distinctly and consistently smaller than modern members of the species. To quantify the size differences, we derived common measures of body size for fossil C. goodwini using regression models based on modern samples of shrews in the Cryptotis mexicana-group. Estimated mean length of head and body for the fossil sample is 72-79 mm, and estimated mean mass is 7.6-9.6 g. These numbers indicate that the fossil sample averaged 6-14% smaller in head and body length and 39-52% less in mass than the modern sample and that increases of 6-17% in head and body length and 65-108% in mass occurred to achieve the mean body size of the modern sample. Conservative estimates of fresh (wet) food intake based on mass indicate that such a size increase would require a 37-58% increase in daily food consumption. In contrast to C. goodwini, fossil C. orophila from the cave is not different in mean body size from modern samples. The fossil sample does, however, show slightly greater variation in size than is currently present throughout the modern geographical distribution of the taxon. Moreover, variation in some other dental and mandibular characters is more constrained, exhibiting a more direct relationship to overall size. Our study of these species indicates that North American shrews have not all been static in size through time, as suggested by some previous work with fossil soricids.
    Lack of stratigraphic control within the site and our failure to obtain reliable radiometric dates on remains restrict our opportunities to place the site in a firm temporal context. However, the morphometrical differences we document for fossil C. orophila and C. goodwini show them to be distinct from modern populations of these shrews. Some other species of fossil mammals from McGrew Cave exhibit distinct size changes of the magnitudes experienced by many northern North American and some Mexican mammals during the transition from late glacial to Holocene environmental conditions, and it is likely that at least some of the remains from the cave are late Pleistocene in age. One curious factor is that, whereas most mainland mammals that exhibit large-scale size shifts during the late glacial/postglacial transition experienced dwarfing, C. goodwini increased in size. The lack of clinal variation in modern C. goodwini supports the hypothesis that size evolution can result from local selection rather than from cline translocation. Models of size change in mammals indicate that increased size, such as that observed for C. goodwini, is a likely consequence of increased availability of resources and, thereby, a relaxation of selection during critical times of the year.

  18. Vitamin D receptor gene and osteoporosis - author's response

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Looney, J.E.; Yoon, Hyun Koo; Fischer, M.

    1996-04-01

    We appreciate the comments of Dr. Nguyen et al. about our recent study, but we disagree with their suggestion that the lack of an association between low bone density and the BB VDR genotype, which we reported, is an artifact generated by the small sample size. Furthermore, our results are consistent with similar conclusions reached by a number of other investigators, as recently reported by Peacock. Peacock states "Taken as a whole, the results of studies outlined ... indicate that VDR alleles cannot account for the major part of the heritable component of bone density as indicated by Morrison et al." The majority of the 17 studies cited in this editorial could not confirm an association between the VDR genotype and the bone phenotype. Surely one cannot criticize this combined work as representing an artifact because of a too small sample size. We do not dispute the suggestion by Nguyen et al. that large sample sizes are required to analyze small biological effects. This is evident in both Peacock's summary and in their own bone density studies. We did not design our study with a larger sample size because, based on the work of Morrison et al., we had hypothesized a large biological effect; large sample sizes are only needed for small biological effects.

  19. An analysis of Apollo lunar soil samples 12070,889, 12030,187, and 12070,891: Basaltic diversity at the Apollo 12 landing site and implications for classification of small-sized lunar samples

    NASA Astrophysics Data System (ADS)

    Alexander, Louise; Snape, Joshua F.; Joy, Katherine H.; Downes, Hilary; Crawford, Ian A.

    2016-09-01

    Lunar mare basalts provide insights into the compositional diversity of the Moon's interior. Basalt fragments from the lunar regolith can potentially sample lava flows from regions of the Moon not previously visited, thus increasing our understanding of lunar geological evolution. As part of a study of basaltic diversity at the Apollo 12 landing site, detailed petrological and geochemical data are provided here for 13 basaltic chips. In addition to bulk chemistry, we have analyzed the major, minor, and trace element chemistry of mineral phases which highlight differences between basalt groups. Where samples contain olivine, the equilibrium parent melt magnesium number (Mg#; atomic Mg/[Mg + Fe]) can be calculated to estimate parent melt composition. Ilmenite and plagioclase chemistry can also determine differences between basalt groups. We conclude that samples of approximately 1-2 mm in size can be categorized provided that appropriate mineral phases (olivine, plagioclase, and ilmenite) are present. Where samples are fine-grained (grain size <0.3 mm), a "paired samples t-test" can provide a statistical comparison between a particular sample and known lunar basalts. Of the fragments analyzed here, three are found to belong to each of the previously identified olivine and ilmenite basalt suites, four to the pigeonite basalt suite, one is an olivine cumulate, and two could not be categorized because of their coarse grain sizes and lack of appropriate mineral phases. Our approach introduces methods that can be used to investigate small sample sizes (i.e., fines) from future sample return missions to investigate lava flow diversity and petrological significance.

  20. Lack of association between ectoparasite intensities and rabies virus neutralizing antibody seroprevalence in wild big brown bats (Eptesicus fuscus), Fort Collins, Colorado

    USGS Publications Warehouse

    Pearce, R.D.; O'Shea, T.J.; Shankar, V.; Rupprecht, C.E.

    2007-01-01

    Recently, bat ectoparasites have been demonstrated to harbor pathogens of potential importance to humans. We evaluated antirabies antibody seroprevalence and the presence of ectoparasites in big brown bats (Eptesicus fuscus) sampled in 2002 and 2003 in Colorado to investigate whether an association existed between ectoparasite intensity and exposure to rabies virus (RV). We used logistic regression and Akaike's Information Criterion adjusted for sample size (AICc) in a post-hoc analysis to investigate the relative importance of three ectoparasite species, as well as bat colony size, year sampled, age class, and a colony size × year interaction, on the presence of rabies virus neutralizing antibodies (VNA) in serum of wild E. fuscus. We obtained serum samples and ectoparasite counts from big brown bats simultaneously in 2002 and 2003. Although the presence of two ectoparasites (Steatonyssus occidentalis and Spinturnix bakeri) was important in elucidating VNA seroprevalence, their intensities were higher in seronegative bats than in seropositive bats, and the presence of a third ectoparasite (Cimex pilosellus) was inconsequential. Colony size and year sampled were the most important variables in these AICc models. These findings suggest that these ectoparasites do not enhance exposure of big brown bats to RV. © 2007 Mary Ann Liebert, Inc.
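    For reference, the small-sample correction in AICc adds a penalty term 2k(k+1)/(n-k-1) to the ordinary AIC; a minimal sketch with illustrative numbers (not the study's actual models):

```python
def aicc(log_likelihood, k, n):
    """Akaike's Information Criterion corrected for small sample size.

    k : number of estimated parameters; n : sample size.
    The correction term vanishes as n grows large relative to k.
    """
    aic = -2 * log_likelihood + 2 * k
    return aic + (2 * k * (k + 1)) / (n - k - 1)

# Two hypothetical candidate logistic models fit to the same n = 60 bats:
print(round(aicc(-25.0, k=3, n=60), 2))  # 56.43 (simpler model)
print(round(aicc(-23.5, k=6, n=60), 2))  # 60.58 (richer model, bigger penalty)
```

    With modest n, the correction penalizes parameter-heavy models more than plain AIC does, which is why AICc is preferred in wildlife studies with limited samples.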

  1. How to Measure the Onset of Babbling Reliably?

    ERIC Educational Resources Information Center

    Molemans, Inge; van den Berg, Renate; van Severen, Lieve; Gillis, Steven

    2012-01-01

    Various measures for identifying the onset of babbling have been proposed in the literature, but a formal definition of the exact procedure and a thorough validation of the sample size required for reliably establishing babbling onset is lacking. In this paper the reliability of five commonly used measures is assessed using a large longitudinal…

  2. Temporal dynamics of linkage disequilibrium in two populations of bighorn sheep

    PubMed Central

    Miller, Joshua M; Poissant, Jocelyn; Malenfant, René M; Hogg, John T; Coltman, David W

    2015-01-01

    Linkage disequilibrium (LD) is the nonrandom association of alleles at two markers. Patterns of LD have biological implications as well as practical ones when designing association studies or conservation programs aimed at identifying the genetic basis of fitness differences within and among populations. However, the temporal dynamics of LD in wild populations have received little empirical attention. In this study, we examined the overall extent of LD, the effect of sample size on the accuracy and precision of LD estimates, and the temporal dynamics of LD in two populations of bighorn sheep (Ovis canadensis) with different demographic histories. Using over 200 microsatellite loci, we assessed two metrics of multi-allelic LD, D′ and χ′2. We found that both populations exhibited high levels of LD, although the extent was much shorter in a native population than in one that was founded via translocation, experienced a prolonged bottleneck after founding, and then underwent recent admixture. In addition, we observed significant variation in LD in relation to the sample size used, with small sample sizes leading to depressed estimates of the extent of LD but inflated estimates of background levels of LD. In contrast, there was not much variation in LD among yearly cross-sections within either population once sample size was accounted for. The lack of pronounced interannual variability suggests that researchers may not have to worry about interannual variation when estimating LD in a population and can instead focus on obtaining the largest sample size possible. PMID:26380673

  3. Potential Reporting Bias in Neuroimaging Studies of Sex Differences.

    PubMed

    David, Sean P; Naudet, Florian; Laude, Jennifer; Radua, Joaquim; Fusar-Poli, Paolo; Chu, Isabella; Stefanick, Marcia L; Ioannidis, John P A

    2018-04-17

    Numerous functional magnetic resonance imaging (fMRI) studies have reported sex differences. To empirically evaluate this literature for evidence of excess significance bias, we searched Medline and Scopus over 10 years for published fMRI studies of the human brain that evaluated sex differences, regardless of the topic investigated. We analyzed the prevalence of conclusions in favor of sex differences and the correlation between study sample sizes and the number of significant foci identified. In the absence of bias, larger (better powered) studies should identify a larger number of significant foci. Across 179 papers, the median sample size was n = 32 (interquartile range 23-47.5). A median of 5 foci related to sex differences were reported (interquartile range, 2-9.5). Few articles had titles focused on no differences (n = 2) or on similarities (n = 3) between sexes. Overall, 158 papers (88%) reached "positive" conclusions in their abstract and presented some foci related to sex differences. There was no statistically significant relationship between sample size and the number of foci (-0.048% increase for every 10 participants, p = 0.63). The extremely high prevalence of "positive" results and the lack of the expected relationship between sample size and the number of discovered foci reflect probable reporting bias and excess significance bias in this literature.

  4. An empirical Bayes approach to analyzing recurring animal surveys

    USGS Publications Warehouse

    Johnson, D.H.

    1989-01-01

    Recurring estimates of the size of animal populations are often required by biologists or wildlife managers. Because of cost or other constraints, estimates frequently lack the accuracy desired but cannot readily be improved by additional sampling. This report proposes a statistical method employing empirical Bayes (EB) estimators as alternatives to those customarily used to estimate population size, and evaluates them by a subsampling experiment on waterfowl surveys. EB estimates, especially a simple limited-translation version, were more accurate and provided shorter confidence intervals with greater coverage probabilities than customary estimates.
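A limited-translation empirical Bayes rule of the kind described can be sketched as follows. The shrinkage weight, the cap of one sampling standard deviation, and the example counts are all illustrative assumptions, not details from the report:

```python
import statistics

def limited_translation_eb(raw, sigma, max_shift=1.0):
    """Shrink each raw survey estimate toward the overall mean, but move no
    estimate more than max_shift * sigma away from its raw value.
    A simple limited-translation sketch, not Johnson's exact estimator.

    raw   : list of raw population-size estimates, one per survey unit
    sigma : assumed sampling standard deviation of each raw estimate
    """
    grand = statistics.mean(raw)
    # Method-of-moments estimate of between-unit variance (floored at 0).
    var_between = max(statistics.pvariance(raw) - sigma ** 2, 0.0)
    b = var_between / (var_between + sigma ** 2)  # weight kept on the raw value
    out = []
    for x in raw:
        shrunk = grand + b * (x - grand)
        lo, hi = x - max_shift * sigma, x + max_shift * sigma
        out.append(min(max(shrunk, lo), hi))      # limited translation: clamp the move
    return out

# Hypothetical waterfowl counts with assumed sampling sd of 30:
print(limited_translation_eb([120, 80, 100, 140, 60], sigma=30))
```

The clamp is the "limited translation" part: it bounds how far shrinkage can pull any single estimate, protecting genuinely unusual survey units from over-correction.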

  5. Economic Analysis of a Multi-Site Prevention Program: Assessment of Program Costs and Characterizing Site-level Variability

    PubMed Central

    Corso, Phaedra S.; Ingels, Justin B.; Kogan, Steven M.; Foster, E. Michael; Chen, Yi-Fu; Brody, Gene H.

    2013-01-01

    Programmatic cost analyses of preventive interventions commonly have a number of methodological difficulties. To determine the mean total costs and properly characterize variability, one often has to deal with small sample sizes, skewed distributions, and especially missing data. Standard approaches for dealing with missing data such as multiple imputation may suffer from a small sample size, a lack of appropriate covariates, or too few details around the method used to handle the missing data. In this study, we estimate total programmatic costs for a prevention trial evaluating the Strong African American Families-Teen program. This intervention focuses on the prevention of substance abuse and risky sexual behavior. To account for missing data in the assessment of programmatic costs we compare multiple imputation to probabilistic sensitivity analysis. The latter approach uses collected cost data to create a distribution around each input parameter. We found that with the multiple imputation approach, the mean (95% confidence interval) incremental difference was $2149 ($397, $3901). With the probabilistic sensitivity analysis approach, the incremental difference was $2583 ($778, $4346). Although the true cost of the program is unknown, probabilistic sensitivity analysis may be a more viable alternative for capturing variability in estimates of programmatic costs when dealing with missing data, particularly with small sample sizes and the lack of strong predictor variables. Further, the larger standard errors produced by the probabilistic sensitivity analysis method may signal its ability to capture more of the variability in the data, thus better informing policymakers on the potentially true cost of the intervention. PMID:23299559

  6. Economic analysis of a multi-site prevention program: assessment of program costs and characterizing site-level variability.

    PubMed

    Corso, Phaedra S; Ingels, Justin B; Kogan, Steven M; Foster, E Michael; Chen, Yi-Fu; Brody, Gene H

    2013-10-01

    Programmatic cost analyses of preventive interventions commonly have a number of methodological difficulties. To determine the mean total costs and properly characterize variability, one often has to deal with small sample sizes, skewed distributions, and especially missing data. Standard approaches for dealing with missing data such as multiple imputation may suffer from a small sample size, a lack of appropriate covariates, or too few details around the method used to handle the missing data. In this study, we estimate total programmatic costs for a prevention trial evaluating the Strong African American Families-Teen program. This intervention focuses on the prevention of substance abuse and risky sexual behavior. To account for missing data in the assessment of programmatic costs we compare multiple imputation to probabilistic sensitivity analysis. The latter approach uses collected cost data to create a distribution around each input parameter. We found that with the multiple imputation approach, the mean (95 % confidence interval) incremental difference was $2,149 ($397, $3,901). With the probabilistic sensitivity analysis approach, the incremental difference was $2,583 ($778, $4,346). Although the true cost of the program is unknown, probabilistic sensitivity analysis may be a more viable alternative for capturing variability in estimates of programmatic costs when dealing with missing data, particularly with small sample sizes and the lack of strong predictor variables. Further, the larger standard errors produced by the probabilistic sensitivity analysis method may signal its ability to capture more of the variability in the data, thus better informing policymakers on the potentially true cost of the intervention.
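The probabilistic-sensitivity-analysis idea described in both records (building a distribution around each cost input and propagating random draws) can be sketched as below. The site-level cost numbers, the normal distributional choice, and the function name are hypothetical, not the study's actual inputs:

```python
import random
import statistics

def psa_incremental_cost(site_costs_a, site_costs_b, n_draws=10_000, seed=7):
    """Toy probabilistic sensitivity analysis: draw each arm's mean cost from
    a normal distribution fitted to observed site-level costs, then summarize
    the incremental difference. Illustrative sketch only."""
    rng = random.Random(seed)

    def draws(costs):
        m = statistics.mean(costs)
        se = statistics.stdev(costs) / len(costs) ** 0.5  # standard error of the mean
        return [rng.gauss(m, se) for _ in range(n_draws)]

    diffs = sorted(b - a for a, b in zip(draws(site_costs_a), draws(site_costs_b)))
    lo = diffs[int(0.025 * n_draws)]   # 2.5th percentile
    hi = diffs[int(0.975 * n_draws)]   # 97.5th percentile
    return statistics.mean(diffs), (lo, hi)

# Hypothetical per-site program costs for a control arm and an intervention arm:
mean_diff, ci = psa_incremental_cost([1000, 1200, 900, 1100], [3200, 3600, 3000, 3400])
```

Because every input is drawn rather than imputed once, the resulting interval reflects parameter uncertainty directly, which is the property the abstract credits for the method's wider standard errors.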

  7. Evaluation of blast furnace slag as basal media for eelgrass bed.

    PubMed

    Hizon-Fradejas, Amelia B; Nakano, Yoichi; Nakai, Satoshi; Nishijima, Wataru; Okada, Mitsumasa

    2009-07-30

    Two types of blast furnace slag (BFS), granulated slag (GS) and air-cooled slag (ACS), were evaluated as basal media for an eelgrass bed. Evaluation was done by comparing the BFS samples with natural eelgrass sediment (NES) in terms of selected physico-chemical characteristics and then investigating the growth of eelgrass in both BFS and NES. In terms of particle size, both BFS samples were within the range acceptable for growing eelgrass. However, compared with NES, ACS had low silt-clay content and both BFS samples lacked organic matter. The growth experiment showed that eelgrass can grow in both types of BFS, although growth rates in the BFS samples, as indicated by leaf elongation, were slower than in NES. The likely reasons for the stunted growth in BFS were assumed to be the lack of organic matter and the release of possible toxins from BFS. Reducing the sulfide content of the BFS samples did not result in enhanced growth; although sulfide release was eliminated, Zn release was greater than before treatment and Zn concentrations reached alarming levels.

  8. The Mars Orbital Catalog of Hydrated Alteration Signatures (MOCHAS) - Initial release

    NASA Astrophysics Data System (ADS)

    Carter, John; OMEGA and CRISM Teams

    2016-10-01

    Aqueous minerals have been identified from orbit at a number of localities on Mars, and their analysis has helped refine the water story of early Mars. They are also a main science driver when selecting current and upcoming landing sites for roving missions. Available catalogs of mineral detections exhibit a number of drawbacks, such as limited sample size (a thousand sites at most), inhomogeneous sampling of the surface and of the investigation methods, and a lack of contextual information (e.g. spatial extent, morphological context). The MOCHAS project strives to address these limitations by providing a global, detailed survey of aqueous minerals on Mars based on 10 years of data from the OMEGA and CRISM imaging spectrometers. Contextual data are provided, including deposit sizes, morphology, and detailed composition when available. Sampling biases are also addressed. The catalog will be openly distributed in a GIS-ready format and will be participatory: for example, researchers will be able to submit requests for specific mapping of regions of interest, or add and refine mineral detections. An initial release is scheduled for Fall 2016 and will feature a two-orders-of-magnitude increase in sample size compared to previous studies.

  9. Body size and extinction risk in terrestrial mammals above the species level.

    PubMed

    Tomiya, Susumu

    2013-12-01

    Mammalian body mass strongly correlates with life history and population properties at scales from mouse to elephant. Large body size is thus often associated with elevated extinction risk. I examined the North American fossil record (28-1 million years ago) of 276 terrestrial genera to uncover the relationship between body size and extinction probability above the species level. Phylogenetic comparative analysis revealed no correlation between sampling-adjusted durations and body masses ranging over 7 orders of magnitude, an observation that was corroborated by survival analysis. Most of the ecological and temporal groups within the data set showed the same lack of relationship. Size-biased generic extinctions do not constitute a general feature of the Holarctic mammalian faunas in the Neogene. Rather, accelerated loss of large mammals occurred during intervals that experienced combinations of regional aridification and increased biomic heterogeneity within continents. The latter phenomenon is consistent with the macroecological prediction that large geographic ranges are critical to the survival of large mammals in evolutionary time. The frequent lack of size selectivity in generic extinctions can be reconciled with size-biased species loss if extinctions of large and small mammals at the species level are often driven by ecological perturbations of different spatial and temporal scales, while those at the genus level are more synchronized in time as a result of fundamental, multiscale environmental shifts.

  10. Assessment of Cognitive Function in Breast Cancer and Lymphoma Patients Receiving Chemotherapy | Division of Cancer Prevention

    Cancer.gov

    Cognitive impairments in cancer patients represent an important clinical problem. Studies to date estimating prevalence of difficulties in memory, executive function, and attention deficits have been limited by small sample sizes and many have lacked healthy control groups. More information is needed on promising biomarkers and allelic variants that may help to determine the

  11. Is Some Data Better than No Data at All? Evaluating the Utility of Secondary Needs Assessment Data

    ERIC Educational Resources Information Center

    Shamblen, Stephen R.; Dwivedi, Pramod

    2010-01-01

    Needs assessments in substance abuse prevention often rely on secondary data measures of consumption and consequences to determine what population subgroup and geographic areas should receive a portion of limited resources. Although these secondary data measures have some benefits (e.g. large sample sizes, lack of survey response biases and cost),…

  12. The ICF Core Sets for hearing loss--researcher perspective. Part I: Systematic review of outcome measures identified in audiological research.

    PubMed

    Granberg, Sarah; Dahlström, Jennie; Möller, Claes; Kähäri, Kim; Danermark, Berth

    2014-02-01

    To review the literature in order to identify outcome measures used in research on adults with hearing loss (HL) as part of the ICF Core Sets development project, and to describe study and population characteristics of the reviewed studies. A systematic review methodology was applied using multiple databases. A comprehensive search was conducted and two search pools were created, pool I and pool II. The study population included adults (≥ 18 years of age) with HL and oral language as the primary mode of communication. 122 studies were included. Outcome measures were distinguished by 'instrument type', and 10 types were identified. In total, 246 (pool I) and 122 (pool II) different measures were identified, and only approximately 20% were extracted twice or more. Most measures were related to speech recognition. Fifty-one different questionnaires were identified. Many studies used small sample sizes, and the sex of participants was not revealed in several studies. The low prevalence of identified measures reflects a lack of consensus regarding the optimal outcome measures to use in audiology. Reflections and discussions are made in relation to small sample sizes and the lack of sex differentiation/descriptions within the included articles.

  13. How much is '5-a-day'? A qualitative investigation into consumer understanding of fruit and vegetable intake guidelines.

    PubMed

    Rooney, C; McKinley, M C; Appleton, K M; Young, I S; McGrath, A J; Draffin, C R; Hamill, L L; Woodside, J V

    2017-02-01

    Despite the known health benefits of fruit and vegetables (FV), population intakes remain low. One potential contributing factor may be a lack of understanding surrounding recommended intakes. The present study aimed to explore the understanding of FV intake guidelines among a sample of low FV consumers. Six semi-structured focus groups were held with low FV consumers (n = 28, age range 19-55 years). Focus groups were recorded digitally, transcribed verbatim and analysed thematically using NVivo (QSR International, Melbourne, Australia) to manage the coded data. Participants also completed a short questionnaire assessing knowledge on FV intake guidelines. Descriptive statistics were used to analyse responses. The discussions highlighted that, although participants were aware of FV intake guidelines, they lacked clarity with regard to the meaning of the '5-a-day' message, including what foods are included in the guideline, as well as what constitutes a portion of FV. There was also a sense of confusion surrounding the concept of achieving variety with regard to FV intake. The sample highlighted a lack of previous education on FV portion sizes and put forward suggestions for improving knowledge, including increased information on food packaging and through health campaigns. Questionnaire findings were generally congruent with the qualitative findings, showing high awareness of the '5-a-day' message but a lack of knowledge surrounding FV portion sizes. Future public health campaigns should consider how best to address the gaps in knowledge identified in the present study, and incorporate evaluations that will allow the impact of future initiatives on knowledge, and ultimately behaviour, to be investigated. © 2016 The British Dietetic Association Ltd.

  14. Size-segregated urban aerosol characterization by electron microscopy and dynamic light scattering and influence of sample preparation

    NASA Astrophysics Data System (ADS)

    Marvanová, Soňa; Kulich, Pavel; Skoupý, Radim; Hubatka, František; Ciganek, Miroslav; Bendl, Jan; Hovorka, Jan; Machala, Miroslav

    2018-04-01

    Size-segregated particulate matter (PM) is frequently used in chemical and toxicological studies. Nevertheless, toxicological in vitro studies working with whole particles often lack a proper evaluation of the real PM size distribution and a characterization of agglomeration under the experimental conditions. In this study, changes in particle size distributions during PM sample manipulation, as well as the semiquantitative elemental composition of single particles, were evaluated. Coarse (1-10 μm), upper accumulation (0.5-1 μm), lower accumulation (0.17-0.5 μm), and ultrafine (<0.17 μm) PM fractions were collected by a high-volume cascade impactor in the Prague city center. Particles were examined using electron microscopy, and their elemental composition was determined by energy-dispersive X-ray spectroscopy. Larger or smaller particles, not corresponding to the impaction cut points, were found in all fractions, as they occur in agglomerates and are impacted according to their aerodynamic diameter. The elemental composition of particles in the size-segregated fractions varied significantly. Ns-soot occurred in all size fractions. Metallic nanospheres were found in the accumulation fractions but not in the ultrafine fraction, where ns-soot, carbonaceous particles, and inorganic salts were identified. Dynamic light scattering was used to measure particle size distribution in water and in cell culture media. The PM suspension of the lower accumulation fraction in water agglomerated after freezing/thawing the sample, and the agglomerates were disrupted by subsequent sonication. The ultrafine fraction did not agglomerate after freezing/thawing. Both the lower accumulation and ultrafine fractions were stable in cell culture media with fetal bovine serum, while strong agglomeration occurred in media without fetal bovine serum, as measured over 24 h.

  15. Random Distribution Pattern and Non-adaptivity of Genome Size in a Highly Variable Population of Festuca pallens

    PubMed Central

    Šmarda, Petr; Bureš, Petr; Horová, Lucie

    2007-01-01

    Background and Aims The spatial and statistical distribution of genome sizes, and the adaptivity of genome size to certain types of habitat, vegetation or microclimatic conditions, were investigated in a tetraploid population of Festuca pallens. The population was previously documented to vary highly in genome size and serves as a model for studying the initial stages of genome size differentiation. Methods Using DAPI flow cytometry, samples were measured repeatedly with diploid Festuca pallens as the internal standard. Altogether, 172 plants from 57 plots (2.25 m²), distributed in contrasting habitats over the whole locality in South Moravia, Czech Republic, were sampled. The differences in DNA content were confirmed by the double peaks of simultaneously measured samples. Key Results At maximum, a 1.115-fold difference in genome size was observed. The statistical distribution of genome sizes was found to be continuous and best fits the extreme-value (Gumbel) distribution, with rare occurrences of extremely large genomes (positively skewed), similar to the log-normal distribution observed across angiosperms as a whole. Even plants from the same plot frequently varied considerably in genome size, and the spatial distribution of genome sizes was generally random and spatially unautocorrelated (P > 0.05). The observed spatial pattern and the overall lack of correlations of genome size with recognized vegetation types or microclimatic conditions indicate the absence of ecological adaptivity of genome size in the studied population. Conclusions These experimental data on intraspecific genome size variability in Festuca pallens argue for the absence of natural selection and the selective non-significance of genome size in the initial stages of genome size differentiation, and corroborate the current hypothetical model of genome size evolution in angiosperms (Bennetzen et al., 2005, Annals of Botany 95: 127–132). PMID:17565968
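One simple way to fit the Gumbel (extreme-value) distribution mentioned in the Key Results is the method of moments. This sketch is an assumption for illustration, since the abstract does not state the fitting procedure used:

```python
import math
import random
import statistics

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def fit_gumbel(values):
    """Method-of-moments fit of a Gumbel distribution: the scale beta comes
    from the sample standard deviation, the location mu from the sample mean.
    Illustrative only; not necessarily the paper's fitting procedure."""
    beta = statistics.stdev(values) * math.sqrt(6) / math.pi
    mu = statistics.mean(values) - EULER_GAMMA * beta
    return mu, beta

# Sanity check on synthetic data drawn from Gumbel(mu=10, beta=2)
# via the inverse CDF: x = mu - beta * ln(-ln(U)).
rng = random.Random(42)
sample = [10 - 2 * math.log(-math.log(rng.random())) for _ in range(5000)]
mu_hat, beta_hat = fit_gumbel(sample)
```

The positive skew of the Gumbel distribution matches the abstract's observation of rare, extremely large genomes in an otherwise continuous distribution.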

  16. The choice of catecholamines in septic shock: more and more good arguments to strengthen the known position, but don't lose the faith!

    PubMed

    Meier-Hellmann, Andreas

    2006-01-01

    The choice of catecholamines for hemodynamic stabilisation in septic shock patients has been an ongoing debate for several years. Several studies have investigated the regional effects of catecholamines in septic patients. However, because of often very small sample sizes, inconsistent results, and methodological problems in the monitoring techniques used in these studies, it is not possible to provide clear recommendations concerning the use of catecholamines in sepsis. Prospective and adequately sized studies are necessary because outcome data are completely lacking.

  17. Statistical differences between relative quantitative molecular fingerprints from microbial communities.

    PubMed

    Portillo, M C; Gonzalez, J M

    2008-08-01

    Molecular fingerprints of microbial communities are a common method for the analysis and comparison of environmental samples. The significance of differences between microbial community fingerprints was analyzed considering both the presence of different phylotypes and their relative abundance. A method is proposed that simulates coverage of the analyzed communities as a function of sample size, applying a Cramér-von Mises statistic; comparisons were performed by a Monte Carlo testing procedure. As an example, this procedure was used to compare several sediment samples from freshwater ponds profiled by a relative quantitative PCR-DGGE technique. The method was able to discriminate among different samples based on their molecular fingerprints, and confirmed the lack of differences between aliquots from a single sample.
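A bare-bones version of the proposed comparison, a Cramér-von Mises-type statistic on cumulative relative abundances combined with a Monte Carlo permutation test, might look like the following. The pooling scheme and the example counts are illustrative assumptions, not the authors' exact procedure:

```python
import random

def cvm_statistic(p, q):
    """Cramér-von Mises-type distance between two relative-abundance
    profiles defined over the same ordered set of phylotypes."""
    cp = cq = stat = 0.0
    for a, b in zip(p, q):
        cp += a
        cq += b
        stat += (cp - cq) ** 2  # squared gap between cumulative profiles
    return stat

def monte_carlo_test(counts_a, counts_b, n_sim=2000, seed=1):
    """Permutation p-value: pool the band/read counts, redraw two communities
    of the original sizes, and compare simulated statistics with the observed
    one. counts_a and counts_b index the same phylotypes."""
    rng = random.Random(seed)
    n_a, n_b = sum(counts_a), sum(counts_b)
    k = len(counts_a)
    pool = [i for i, c in enumerate(counts_a) for _ in range(c)] + \
           [i for i, c in enumerate(counts_b) for _ in range(c)]
    obs = cvm_statistic([c / n_a for c in counts_a], [c / n_b for c in counts_b])
    hits = 0
    for _ in range(n_sim):
        rng.shuffle(pool)
        sim_a, sim_b = [0] * k, [0] * k
        for i in pool[:n_a]:
            sim_a[i] += 1
        for i in pool[n_a:]:
            sim_b[i] += 1
        if cvm_statistic([c / n_a for c in sim_a], [c / n_b for c in sim_b]) >= obs:
            hits += 1
    return (hits + 1) / (n_sim + 1)
```

With hypothetical counts, two clearly different profiles such as `[50, 30, 20]` and `[5, 15, 80]` yield a small p-value, while near-identical aliquots do not, mirroring the discrimination behavior the abstract reports.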

  18. Ensemble representations: effects of set size and item heterogeneity on average size perception.

    PubMed

    Marchant, Alexander P; Simons, Daniel J; de Fockert, Jan W

    2013-02-01

    Observers can accurately perceive and evaluate the statistical properties of a set of objects, forming what is now known as an ensemble representation. The accuracy and speed with which people can judge the mean size of a set of objects have led to the proposal that ensemble representations of average size can be computed in parallel when attention is distributed across the display. Consistent with this idea, judgments of mean size show little or no decrement in accuracy when the number of objects in the set increases. However, the lack of a set size effect might result from the regularity of the item sizes used in previous studies. Here, we replicate these previous findings, but show that judgments of mean set size become less accurate when set size increases and the heterogeneity of the item sizes increases. This pattern can be explained by assuming that average size judgments are computed using a limited capacity sampling strategy, and it does not necessitate an ensemble representation computed in parallel across all items in a display. Copyright © 2012 Elsevier B.V. All rights reserved.
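The limited-capacity sampling account can be illustrated by simulation: an observer who averages only a handful of attended items makes larger errors as item heterogeneity grows, even with set size held constant. All numbers below are hypothetical:

```python
import random
import statistics

def subsample_mean_error(sizes, k=4, n_trials=5000, seed=0):
    """Mean absolute error of a limited-capacity observer who averages only
    k randomly attended items instead of the whole display.
    A sketch of the sampling account, not the authors' model."""
    rng = random.Random(seed)
    true_mean = statistics.mean(sizes)
    total_err = 0.0
    for _ in range(n_trials):
        est = statistics.mean(rng.sample(sizes, k))  # attend to k of the items
        total_err += abs(est - true_mean)
    return total_err / n_trials

# Two hypothetical 8-item displays with the same mean size (10)
# but very different heterogeneity:
homog = [9, 10, 10, 11, 10, 10, 9, 11]
heterog = [2, 18, 5, 15, 8, 12, 4, 16]
```

Under this model the homogeneous display yields accurate mean judgments from a small sample, while the heterogeneous display does not, which is the pattern the study reports without requiring a parallel ensemble computation.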

  19. What about N? A methodological study of sample-size reporting in focus group studies.

    PubMed

    Carlsen, Benedicte; Glenton, Claire

    2011-03-11

    Focus group studies are increasingly published in health-related journals, but we know little about how researchers use this method, particularly how they determine the number of focus groups to conduct. The methodological literature commonly advises researchers to follow principles of data saturation, although practical advice on how to do this is lacking. Our objectives were, first, to describe the current status of sample-size reporting in focus group studies in health journals and, second, to assess whether and how researchers explain the number of focus groups they carried out. We searched PubMed for studies that had used focus groups and had been published in open-access journals during 2008, and extracted data on the number of focus groups and on any explanation the authors gave for this number. We also qualitatively assessed how the number of groups was explained and discussed in each paper. We identified 220 papers published in 117 journals. Insufficient reporting of sample sizes was common in these papers. The number of focus groups conducted varied greatly (mean 8.4, median 5, range 1 to 96). Thirty-seven (17%) studies attempted to explain the number of groups. Six studies referred to rules of thumb in the literature, three stated that they were unable to organize more groups for practical reasons, and 28 stated that they had reached a point of saturation. Among those claiming saturation, several appeared not to have followed the principles of grounded theory, in which data collection and analysis form an iterative process that continues until saturation is reached. Studies with high numbers of focus groups did not offer explanations for the number of groups. None of the reviewed papers discussed having too much data as a study weakness. Based on these findings, we suggest that journals adopt more stringent requirements for reporting of focus group methods.
The often poor and inconsistent reporting seen in these studies may also reflect the lack of clear, evidence-based guidance about deciding on sample size. More empirical research is needed to develop focus group methodology.

  20. About Cats and Dogs...Reconsidering the Relationship between Pet Ownership and Health Related Outcomes in Community-Dwelling Elderly

    ERIC Educational Resources Information Center

    Rijken, Mieke; van Beek, Sandra

    2011-01-01

    Having a pet has been claimed to have beneficial health effects, but methodologically sound empirical studies are scarce. Small sample sizes and a lack of information about the specific type of pets involved make it difficult to draw unambiguous conclusions. We aimed to shed light on the relationship between pet ownership and several health…

  1. Reduced amygdalar and hippocampal size in adults with generalized social phobia.

    PubMed

    Irle, Eva; Ruhleder, Mirjana; Lange, Claudia; Seidler-Brandler, Ulrich; Salzer, Simone; Dechent, Peter; Weniger, Godehard; Leibing, Eric; Leichsenring, Falk

    2010-03-01

    Structural and functional brain imaging studies suggest abnormalities of the amygdala and hippocampus in posttraumatic stress disorder and major depressive disorder. However, structural brain imaging studies in social phobia are lacking. In total, 24 patients with generalized social phobia (GSP) and 24 healthy controls underwent 3-dimensional structural magnetic resonance imaging of the amygdala and hippocampus and a clinical investigation. Compared with controls, GSP patients had significantly reduced amygdalar (13%) and hippocampal (8%) size. The reduction in the size of the amygdala was statistically significant for men but not women. Smaller right-sided hippocampal volumes of GSP patients were significantly related to stronger disorder severity. Our sample included only patients with the generalized subtype of social phobia. Because we excluded patients with comorbid depression, our sample may not be representative. We report for the first time volumetric results in patients with GSP. Future assessment of these patients will clarify whether these changes are reversed after successful treatment and whether they predict treatment response.

  2. Optimization of scat detection methods for a social ungulate, the wild pig, and experimental evaluation of factors affecting detection of scat

    USGS Publications Warehouse

    Keiter, David A.; Cunningham, Fred L.; Rhodes, Olin E.; Irwin, Brian J.; Beasley, James

    2016-01-01

    Collection of scat samples is common in wildlife research, particularly for genetic capture-mark-recapture applications. Due to high degradation rates of genetic material in scat, large numbers of samples must be collected to generate robust estimates. Optimization of sampling approaches to account for taxa-specific patterns of scat deposition is, therefore, necessary to ensure sufficient sample collection. While scat collection methods have been widely studied in carnivores, research to maximize scat collection and noninvasive sampling efficiency for social ungulates is lacking. Further, environmental factors or scat morphology may influence detection of scat by observers. We contrasted performance of novel radial search protocols with existing adaptive cluster sampling protocols to quantify differences in observed amounts of wild pig (Sus scrofa) scat. We also evaluated the effects of environmental (percentage of vegetative ground cover and occurrence of rain immediately prior to sampling) and scat characteristics (fecal pellet size and number) on the detectability of scat by observers. We found that 15- and 20-m radial search protocols resulted in greater numbers of scats encountered than the previously used adaptive cluster sampling approach across habitat types, and that fecal pellet size, number of fecal pellets, percent vegetative ground cover, and recent rain events were significant predictors of scat detection. Our results suggest that use of a fixed-width radial search protocol may increase the number of scats detected for wild pigs, or other social ungulates, allowing more robust estimation of population metrics using noninvasive genetic sampling methods. Further, as fecal pellet size affected scat detection, juvenile or smaller-sized animals may be less detectable than adult or large animals, which could introduce bias into abundance estimates. 
Knowledge of relationships between environmental variables and scat detection may allow researchers to optimize sampling protocols to maximize utility of noninvasive sampling for wild pigs and other social ungulates.

  3. Optimization of scat detection methods for a social ungulate, the wild pig, and experimental evaluation of factors affecting detection of scat

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keiter, David A.; Cunningham, Fred L.; Rhodes, Jr., Olin E.

    Collection of scat samples is common in wildlife research, particularly for genetic capture-mark-recapture applications. Due to high degradation rates of genetic material in scat, large numbers of samples must be collected to generate robust estimates. Optimization of sampling approaches to account for taxa-specific patterns of scat deposition is, therefore, necessary to ensure sufficient sample collection. While scat collection methods have been widely studied in carnivores, research to maximize scat collection and noninvasive sampling efficiency for social ungulates is lacking. Further, environmental factors or scat morphology may influence detection of scat by observers. We contrasted performance of novel radial search protocols with existing adaptive cluster sampling protocols to quantify differences in observed amounts of wild pig (Sus scrofa) scat. We also evaluated the effects of environmental (percentage of vegetative ground cover and occurrence of rain immediately prior to sampling) and scat characteristics (fecal pellet size and number) on the detectability of scat by observers. We found that 15- and 20-m radial search protocols resulted in greater numbers of scats encountered than the previously used adaptive cluster sampling approach across habitat types, and that fecal pellet size, number of fecal pellets, percent vegetative ground cover, and recent rain events were significant predictors of scat detection. Our results suggest that use of a fixed-width radial search protocol may increase the number of scats detected for wild pigs, or other social ungulates, allowing more robust estimation of population metrics using noninvasive genetic sampling methods. Further, as fecal pellet size affected scat detection, juvenile or smaller-sized animals may be less detectable than adult or large animals, which could introduce bias into abundance estimates. 
In conclusion, knowledge of relationships between environmental variables and scat detection may allow researchers to optimize sampling protocols to maximize utility of noninvasive sampling for wild pigs and other social ungulates.

  4. Optimization of Scat Detection Methods for a Social Ungulate, the Wild Pig, and Experimental Evaluation of Factors Affecting Detection of Scat.

    PubMed

    Keiter, David A; Cunningham, Fred L; Rhodes, Olin E; Irwin, Brian J; Beasley, James C

    2016-01-01

    Collection of scat samples is common in wildlife research, particularly for genetic capture-mark-recapture applications. Due to high degradation rates of genetic material in scat, large numbers of samples must be collected to generate robust estimates. Optimization of sampling approaches to account for taxa-specific patterns of scat deposition is, therefore, necessary to ensure sufficient sample collection. While scat collection methods have been widely studied in carnivores, research to maximize scat collection and noninvasive sampling efficiency for social ungulates is lacking. Further, environmental factors or scat morphology may influence detection of scat by observers. We contrasted performance of novel radial search protocols with existing adaptive cluster sampling protocols to quantify differences in observed amounts of wild pig (Sus scrofa) scat. We also evaluated the effects of environmental (percentage of vegetative ground cover and occurrence of rain immediately prior to sampling) and scat characteristics (fecal pellet size and number) on the detectability of scat by observers. We found that 15- and 20-m radial search protocols resulted in greater numbers of scats encountered than the previously used adaptive cluster sampling approach across habitat types, and that fecal pellet size, number of fecal pellets, percent vegetative ground cover, and recent rain events were significant predictors of scat detection. Our results suggest that use of a fixed-width radial search protocol may increase the number of scats detected for wild pigs, or other social ungulates, allowing more robust estimation of population metrics using noninvasive genetic sampling methods. Further, as fecal pellet size affected scat detection, juvenile or smaller-sized animals may be less detectable than adult or large animals, which could introduce bias into abundance estimates. Knowledge of relationships between environmental variables and scat detection may allow researchers to optimize sampling protocols to maximize utility of noninvasive sampling for wild pigs and other social ungulates.

  5. Optimization of scat detection methods for a social ungulate, the wild pig, and experimental evaluation of factors affecting detection of scat

    DOE PAGES

    Keiter, David A.; Cunningham, Fred L.; Rhodes, Jr., Olin E.; ...

    2016-05-25

    Collection of scat samples is common in wildlife research, particularly for genetic capture-mark-recapture applications. Due to high degradation rates of genetic material in scat, large numbers of samples must be collected to generate robust estimates. Optimization of sampling approaches to account for taxa-specific patterns of scat deposition is, therefore, necessary to ensure sufficient sample collection. While scat collection methods have been widely studied in carnivores, research to maximize scat collection and noninvasive sampling efficiency for social ungulates is lacking. Further, environmental factors or scat morphology may influence detection of scat by observers. We contrasted performance of novel radial search protocols with existing adaptive cluster sampling protocols to quantify differences in observed amounts of wild pig (Sus scrofa) scat. We also evaluated the effects of environmental (percentage of vegetative ground cover and occurrence of rain immediately prior to sampling) and scat characteristics (fecal pellet size and number) on the detectability of scat by observers. We found that 15- and 20-m radial search protocols resulted in greater numbers of scats encountered than the previously used adaptive cluster sampling approach across habitat types, and that fecal pellet size, number of fecal pellets, percent vegetative ground cover, and recent rain events were significant predictors of scat detection. Our results suggest that use of a fixed-width radial search protocol may increase the number of scats detected for wild pigs, or other social ungulates, allowing more robust estimation of population metrics using noninvasive genetic sampling methods. Further, as fecal pellet size affected scat detection, juvenile or smaller-sized animals may be less detectable than adult or large animals, which could introduce bias into abundance estimates. In conclusion, knowledge of relationships between environmental variables and scat detection may allow researchers to optimize sampling protocols to maximize utility of noninvasive sampling for wild pigs and other social ungulates.
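The detection predictors reported above (pellet size, pellet count, ground cover, recent rain) can be illustrated with a toy logistic detection model. All coefficient values below are invented for illustration; only their signs follow the study's findings.

```python
import math

def detection_prob(pellet_size_mm, n_pellets, cover_pct, recent_rain):
    """Toy logistic model of scat detection. Coefficients are invented;
    only the signs reflect the reported findings (larger/more pellets are
    easier to detect; dense ground cover and recent rain hinder detection)."""
    eta = (-1.5
           + 0.10 * pellet_size_mm   # larger pellets are easier to spot
           + 0.05 * n_pellets        # more pellets per scat, easier
           - 0.03 * cover_pct        # vegetative cover hides scat
           - 0.80 * recent_rain)     # rain degrades/obscures scat
    return 1.0 / (1.0 + math.exp(-eta))

# A juvenile's small scat in dense cover after rain vs. an adult's
# large scat on open ground:
p_juvenile = detection_prob(10, 5, 80, 1)
p_adult = detection_prob(30, 20, 10, 0)
```

Under any coefficients with these signs, juvenile scat is detected at a lower rate than adult scat, which is exactly the abundance-estimation bias the abstract warns about.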

  6. Autism spectrum disorder and pet therapy.

    PubMed

    Siewertsen, Caitlin M; French, Emma D; Teramoto, Masaru

    2015-01-01

    Autism Spectrum Disorder (ASD) encompasses a wide range of social and mental afflictions that are difficult to treat. Due to a lack of established treatments for ASD, alternative therapies have been a primary form of intervention. One of these alternatives is pet therapy, a field that has experienced growing interest and has recently accumulated studies investigating its efficacy. This article reviews and summarizes the evidence on its effectiveness, along with the findings and limitations of studies of pet therapy for ASD. The majority of research on ASD and pet therapy has examined children and has primarily used dogs and horses for therapy. Studies have shown positive effects of the therapy, including high satisfaction rates among the participants' families. Major limitations of studies in the current literature include the lack of control groups and small sample sizes. Future research should incorporate better study designs and larger samples to validate pet therapy as an appropriate treatment for ASD.

  7. A COMPARISON OF GALAXY COUNTING TECHNIQUES IN SPECTROSCOPICALLY UNDERSAMPLED REGIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Specian, Mike A.; Szalay, Alex S., E-mail: mspecia1@jhu.edu, E-mail: szalay@jhu.edu

    2016-11-01

    Accurate measures of galactic overdensities are invaluable for precision cosmology. Obtaining these measurements is complicated when members of one’s galaxy sample lack radial depths, most commonly derived via spectroscopic redshifts. In this paper, we utilize the Sloan Digital Sky Survey’s Main Galaxy Sample to compare seven methods of counting galaxies in cells when many of those galaxies lack redshifts. These methods fall into three categories: assigning galaxies discrete redshifts, scaling the numbers counted using regions’ spectroscopic completeness properties, and employing probabilistic techniques. We split spectroscopically undersampled regions into three types: those inside the spectroscopic footprint, those outside but adjacent to it, and those distant from it. Through Monte Carlo simulations, we demonstrate that the preferred counting techniques are a function of region type, cell size, and redshift. We conclude by reporting optimal counting strategies under a variety of conditions.

  8. Life History and Production of the Western Gray Whale's Prey, Ampelisca eschrichtii Krøyer, 1842 (Amphipoda, Ampeliscidae).

    PubMed

    Demchenko, Natalia L; Chapman, John W; Durkina, Valentina B; Fadeev, Valeriy I

    2016-01-01

    Ampelisca eschrichtii are among the most important prey of the Western North Pacific gray whales, Eschrichtius robustus. The largest and densest known populations of this amphipod occur in the gray whale's Offshore feeding area on the Northeastern Sakhalin Island Shelf. The remote location, ice cover, and stormy weather at the Offshore area have prevented winter sampling. The incomplete annual sampling has confounded efforts to resolve the life history and production of A. eschrichtii. Expanded comparisons of population size structure and individual reproductive development between late spring and early fall over six sampling years between 2002 and 2013, however, reveal that A. eschrichtii are gonochoristic, iteroparous, mature at body lengths greater than 15 mm, and have a two-year life span. The low frequencies of brooding females, the lack of early stage juveniles, and the lack of individual or population growth or biomass increases over late spring and summer all indicate that growth and reproduction occur primarily in winter, when sampling does not occur. Distinct juvenile and adult size cohorts additionally indicate that growth and juvenile production occur in winter through spring under ice cover. Winter growth thus requires that winter detritus or primary production be critical food sources for these ampeliscid populations, and yet the Offshore area and Eastern Sakhalin Shelf ampeliscid communities may be among the most abundant and productive amphipod populations in the world. These A. eschrichtii populations are unlikely to be limited by western gray whale predation. Whether benthic community structure can limit access and foraging success of western gray whales is unclear.

  9. A Dictionary Learning Approach for Signal Sampling in Task-Based fMRI for Reduction of Big Data

    PubMed Central

    Ge, Bao; Li, Xiang; Jiang, Xi; Sun, Yifei; Liu, Tianming

    2018-01-01

    The exponential growth of fMRI big data offers researchers an unprecedented opportunity to explore functional brain networks. However, this opportunity has not been fully explored yet due to the lack of effective and efficient tools for handling such fMRI big data. One major challenge is that computing capabilities still lag behind the growth of large-scale fMRI databases, e.g., it takes many days to perform dictionary learning and sparse coding of whole-brain fMRI data for an fMRI database of average size. Therefore, how to reduce the data size without losing important information has become an increasingly pressing issue. To address this problem, we propose a signal sampling approach for significant fMRI data reduction before performing structurally-guided dictionary learning and sparse coding of whole-brain fMRI data. We compared the proposed structurally guided sampling method with no sampling, random sampling and uniform sampling schemes, and experiments on the Human Connectome Project (HCP) task fMRI data demonstrated that the proposed method can achieve more than 15 times speed-up without sacrificing the accuracy in identifying task-evoked functional brain networks. PMID:29706880

  10. A Dictionary Learning Approach for Signal Sampling in Task-Based fMRI for Reduction of Big Data.

    PubMed

    Ge, Bao; Li, Xiang; Jiang, Xi; Sun, Yifei; Liu, Tianming

    2018-01-01

    The exponential growth of fMRI big data offers researchers an unprecedented opportunity to explore functional brain networks. However, this opportunity has not been fully explored yet due to the lack of effective and efficient tools for handling such fMRI big data. One major challenge is that computing capabilities still lag behind the growth of large-scale fMRI databases, e.g., it takes many days to perform dictionary learning and sparse coding of whole-brain fMRI data for an fMRI database of average size. Therefore, how to reduce the data size without losing important information has become an increasingly pressing issue. To address this problem, we propose a signal sampling approach for significant fMRI data reduction before performing structurally-guided dictionary learning and sparse coding of whole-brain fMRI data. We compared the proposed structurally guided sampling method with no sampling, random sampling and uniform sampling schemes, and experiments on the Human Connectome Project (HCP) task fMRI data demonstrated that the proposed method can achieve more than 15 times speed-up without sacrificing the accuracy in identifying task-evoked functional brain networks.
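The three sampling schemes the authors compare against (random, uniform, and their structurally guided variant) amount to row selection on a signal matrix before dictionary learning. The sketch below illustrates that idea only; the structural mask and its weighting are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_timepoints = 20000, 284   # stand-in dimensions, not HCP's
X = rng.standard_normal((n_voxels, n_timepoints))  # stand-in whole-brain signals
k = n_voxels // 16                    # keep ~1/16 of the signals

# random sampling: uniform over all voxels
idx_random = rng.choice(n_voxels, size=k, replace=False)

# uniform (strided) sampling: every 16th voxel
idx_uniform = np.arange(0, n_voxels, 16)[:k]

# "structurally guided" sampling (sketch): oversample voxels inside an
# assumed anatomical mask, e.g. grey matter, before dictionary learning
mask = rng.random(n_voxels) < 0.4     # hypothetical structural mask
weights = np.where(mask, 3.0, 1.0)
weights /= weights.sum()
idx_guided = rng.choice(n_voxels, size=k, replace=False, p=weights)

X_reduced = X[idx_guided]             # the matrix sparse coding now factorizes
```

Because dictionary learning cost scales with the number of signals, factorizing `X_reduced` instead of `X` is where the reported speed-up comes from.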

  11. Bone Marrow Stem Cells and Ear Framework Reconstruction.

    PubMed

    Karimi, Hamid; Emami, Seyed-Abolhassan; Olad-Gubad, Mohammad-Kazem

    2016-11-01

    Repair of total human ear loss or congenital absence of the ears is one of the challenging issues in plastic and reconstructive surgery. The aim of the present study was 3D reconstruction of the human ear with cadaveric ear cartilages seeded with human mesenchymal stem cells. We used cadaveric ear cartilages with preserved perichondrium. The samples were divided into 2 groups: group A (cartilage alone) and group B (cartilage seeded with a mixture of fibrin powder and mesenchymal stem cells [1,000,000 cells/cm]), implanted in the backs of 10 athymic rats. After 12 weeks, the cartilages were removed and shape, size, weight, flexibility, and chondrocyte viability were evaluated. A P value <0.05 was considered significant. In group A, the size and weight of the cartilages were clearly reduced (P < 0.05), and shape and flexibility (torsion of the cartilages in clockwise and counterclockwise directions) were also found to be significantly reduced (P < 0.05). After staining with hematoxylin and eosin and performing microscopic examination, very few live chondrocytes were found in group A. In group B, the size and weight of the samples were unchanged, the shape and flexibility of the samples were well maintained (P < 0.05), and on microscopic examination of the cartilage samples, many live chondrocytes were found in the cartilage (15-20 chondrocytes in each microscopic field). In the samples with human stem cells, all variables (size, shape, weight, and flexibility) were significantly maintained and abundant live chondrocytes were found on microscopic examination. This method may be used for reconstruction of full defects of the auricle in humans.

  12. Gaps in Survey Data on Cancer in American Indian and Alaska Native Populations: Examination of US Population Surveys, 1960–2010

    PubMed Central

    Duran, Tinka; Stimpson, Jim P.; Smith, Corey

    2013-01-01

    Introduction: Population-based data are essential for quantifying the problems and measuring the progress made by comprehensive cancer control programs. However, cancer information specific to the American Indian/Alaska Native (AI/AN) population is not readily available. We identified major population-based surveys conducted in the United States that contain questions related to cancer, documented the AI/AN sample size in these surveys, and identified gaps in the types of cancer-related information these surveys collect. Methods: We conducted an Internet query of US Department of Health and Human Services agency websites and a Medline search to identify population-based surveys conducted in the United States from 1960 through 2010 that contained information about cancer. We used a data extraction form to collect information about the purpose, sample size, data collection methods, and type of information covered in the surveys. Results: Seventeen survey sources met the inclusion criteria. Information on access to and use of cancer treatment, follow-up care, and barriers to receiving timely and quality care was not consistently collected. Estimates specific to the AI/AN population were often lacking because of inadequate AI/AN sample size. For example, 9 national surveys reviewed reported an AI/AN sample size smaller than 500, and 10 had an AI/AN sample percentage less than 1.5%. Conclusion: Continued efforts are needed to increase the overall number of AI/AN participants in these surveys, improve the quality of information on racial/ethnic background, and collect more information on treatment and survivorship. PMID:23517582

  13. A Civilian/Military Trauma Institute: National Trauma Coordinating Center

    DTIC Science & Technology

    2015-12-01

    …zip codes was used in “proximity to violence” analysis. Data were analyzed using SPSS (version 20.0, SPSS Inc., Chicago, IL). Multivariable linear… number of adverse events and serious events was not statistically higher in one group, the incidence of deep venous thrombosis (DVT) was statistically… subjects the lack of statistical difference on multivariate analysis may be related to an underpowered sample size. It was recommended that the…

  14. Improving size estimates of open animal populations by incorporating information on age

    USGS Publications Warehouse

    Manly, Bryan F.J.; McDonald, Trent L.; Amstrup, Steven C.; Regehr, Eric V.

    2003-01-01

    Around the world, a great deal of effort is expended each year to estimate the sizes of wild animal populations. Unfortunately, population size has proven to be one of the most intractable parameters to estimate. The capture-recapture estimation models most commonly used (of the Jolly-Seber type) are complicated and require numerous, sometimes questionable, assumptions. The derived estimates usually have large variances and lack consistency over time. In capture-recapture studies of long-lived animals, the ages of captured animals can often be determined with great accuracy and relative ease. We show how to incorporate age information into size estimates for open populations, where the size changes through births, deaths, immigration, and emigration. The proposed method allows more precise estimates of population size than the usual models, and it can provide these estimates from two sample occasions rather than the three usually required. Moreover, this method does not require specialized programs for capture-recapture data; researchers can derive their estimates using the logistic regression module in any standard statistical package.
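The spirit of the approach, fitting an age-dependent capture probability with ordinary logistic regression and then expanding the catch into an abundance estimate, can be sketched in a two-occasion simulation. This is a generic Horvitz-Thompson-style illustration, not the authors' exact estimator; the age effect and all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(42)
N_true = 1000
ages = rng.integers(1, 20, size=N_true).astype(float)
# assumed relationship: capture probability declines with age
p_capture = 1.0 / (1.0 + np.exp(-(0.5 - 0.08 * ages)))
occ1 = rng.random(N_true) < p_capture   # first sampling occasion
occ2 = rng.random(N_true) < p_capture   # second sampling occasion

def fit_logistic(x, y, n_iter=25):
    """Plain Newton-Raphson logistic regression (intercept + one slope),
    the kind of fit any standard statistical package provides."""
    X = np.column_stack([np.ones_like(x), x])
    beta = np.zeros(2)
    for _ in range(n_iter):
        mu = 1.0 / (1.0 + np.exp(-(X @ beta)))
        w = mu * (1.0 - mu)
        beta += np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (y - mu))
    return beta

# Animals caught on occasion 1 have known ages; recapture on occasion 2
# is a Bernoulli outcome with the same age-dependent probability.
a1, recap = ages[occ1], occ2[occ1].astype(float)
b0, b1 = fit_logistic(a1, recap)
p_hat = 1.0 / (1.0 + np.exp(-(b0 + b1 * a1)))

# Horvitz-Thompson expansion: each captured animal represents 1/p_hat animals.
N_hat = float((1.0 / p_hat).sum())
```

Only two occasions are needed here because age, known exactly, does the work that a third occasion usually does in identifying the capture probabilities.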

  15. Size-Resolved Composition of Organic Aerosol on the California Central Coast

    NASA Astrophysics Data System (ADS)

    Babila, J. E.; Depew, C. J.; Heinrich, S. E.; Zoerb, M.

    2016-12-01

    Organic compounds represent a significant mass fraction of submicrometer aerosol and can influence properties such as radiative forcing and cloud formation. Despite their broad importance, a complete description of particle sources and composition is lacking. Here we present measurements of solvent-extracted organic compounds in ambient aerosol in San Luis Obispo, CA. Ambient particles were sampled and size segregated with a micro-orifice uniform deposit impactor (MOUDI). Water- and methanol-soluble organic carbon was characterized with electrospray ionization mass spectrometry (ESI-MS) and UV/Vis spectroscopy. Particle composition and influences from local and regional sources on the organic fraction will be discussed.

  16. Bacterial Presence in Layered Rock Varnish-Possible Mars Analog?

    NASA Astrophysics Data System (ADS)

    Krinsley, D.; Rusk, B. G.

    2000-08-01

    Rock varnish samples from locations in Death Valley, California; Peru; Antarctica; and Hawaii reveal nanometer-scale layering (less than 1 nm to about 75 nm) when studied with transmission electron microscopy (TEM). Parallel layers of clay minerals containing evidence of presumed bacteria were present in all samples. Samples range in age from a few thousand years to perhaps a million years. Diagenesis is relatively limited, as chemical composition is variable, both from top to bottom and along layers in these varnish samples. Also, occasional exotic minerals occur randomly in most varnish sections and vary in size and hardness, again suggesting a relative lack of diagenetic alteration. Additional information can be found in the original extended abstract.

  17. Program factors related to women's substance abuse treatment retention and other outcomes: a review and critique.

    PubMed

    Sun, An-Pyng

    2006-01-01

    This study examined program factors related to women's substance abuse treatment outcomes. Although substance abuse research is traditionally focused on men, some more recent studies target women. A systematic review of 35 empirical studies that included solely women subjects or that analyzed female subjects separately from male subjects revealed five elements related to women's substance abuse treatment effectiveness; these are (1) single- versus mixed-sex programs, (2) treatment intensity, (3) provision for child care, (4) case management and the "one-stop shopping" model, and (5) supportive staff plus the offering of individual counseling. Although all 35 studies contribute to the knowledge base, critiques of six areas of design weakness in the studies were included to provide directions for future studies; these are (1) lack of a randomized controlled design, (2) nondisentanglement of multiple conditions, (3) lack of a consistent definition for treatment factors and outcomes, (4) small sample size, (5) lack of thorough program description, and (6) lack of thorough statistical analyses.

  18. Dry particle generation with a 3-D printed fluidized bed generator

    DOE PAGES

    Roesch, Michael; Roesch, Carolin; Cziczo, Daniel J.

    2017-06-02

    We describe the design and testing of PRIZE (PRinted fluidIZed bed gEnerator), a compact fluidized bed aerosol generator manufactured using stereolithography (SLA) printing. Dispersing small quantities of powdered materials – due to either rarity or expense – is challenging due to a lack of small, low-cost dry aerosol generators. With this as motivation, we designed and built a generator that uses a mineral dust or other dry powder sample mixed with bronze beads that sit atop a porous screen. A particle-free airflow is introduced, dispersing the sample as airborne particles. The total particle number concentrations and size distributions were measured during different stages of the assembly process to show that the SLA 3-D printed generator did not generate particles until the mineral dust sample was introduced. Furthermore, time-series measurements with Arizona Test Dust (ATD) showed stable total particle number concentrations of 10–150 cm⁻³, depending on the sample mass, from the sub- to super-micrometer size range. Additional tests with collected soil dust samples are also presented. PRIZE is simple to assemble, easy to clean, inexpensive, and deployable for laboratory and field studies that require dry particle generation.

  19. Dry particle generation with a 3-D printed fluidized bed generator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roesch, Michael; Roesch, Carolin; Cziczo, Daniel J.

    We describe the design and testing of PRIZE (PRinted fluidIZed bed gEnerator), a compact fluidized bed aerosol generator manufactured using stereolithography (SLA) printing. Dispersing small quantities of powdered materials – due to either rarity or expense – is challenging due to a lack of small, low-cost dry aerosol generators. With this as motivation, we designed and built a generator that uses a mineral dust or other dry powder sample mixed with bronze beads that sit atop a porous screen. A particle-free airflow is introduced, dispersing the sample as airborne particles. The total particle number concentrations and size distributions were measured during different stages of the assembly process to show that the SLA 3-D printed generator did not generate particles until the mineral dust sample was introduced. Furthermore, time-series measurements with Arizona Test Dust (ATD) showed stable total particle number concentrations of 10–150 cm⁻³, depending on the sample mass, from the sub- to super-micrometer size range. Additional tests with collected soil dust samples are also presented. PRIZE is simple to assemble, easy to clean, inexpensive, and deployable for laboratory and field studies that require dry particle generation.

  20. Context Matters: Volunteer Bias, Small Sample Size, and the Value of Comparison Groups in the Assessment of Research-Based Undergraduate Introductory Biology Lab Courses

    PubMed Central

    Brownell, Sara E.; Kloser, Matthew J.; Fukami, Tadashi; Shavelson, Richard J.

    2013-01-01

    The shift from cookbook to authentic research-based lab courses in undergraduate biology necessitates evaluation and assessment of these novel courses. Although the biology education community has made progress in this area, it is important that we interpret the effectiveness of these courses with caution and remain mindful of inherent limitations to our study designs that may impact internal and external validity. The specific context of a research study can have a dramatic impact on the conclusions. We present a case study of our own three-year investigation of the impact of a research-based introductory lab course, highlighting how volunteer students, the lack of a comparison group, and small sample sizes can be limitations of a study design that affect the interpretation of the effectiveness of a course. PMID:24358380

  1. Context matters: volunteer bias, small sample size, and the value of comparison groups in the assessment of research-based undergraduate introductory biology lab courses.

    PubMed

    Brownell, Sara E; Kloser, Matthew J; Fukami, Tadashi; Shavelson, Richard J

    2013-01-01

    The shift from cookbook to authentic research-based lab courses in undergraduate biology necessitates evaluation and assessment of these novel courses. Although the biology education community has made progress in this area, it is important that we interpret the effectiveness of these courses with caution and remain mindful of inherent limitations to our study designs that may impact internal and external validity. The specific context of a research study can have a dramatic impact on the conclusions. We present a case study of our own three-year investigation of the impact of a research-based introductory lab course, highlighting how volunteer students, the lack of a comparison group, and small sample sizes can be limitations of a study design that affect the interpretation of the effectiveness of a course.

  2. Development of composite calibration standard for quantitative NDE by ultrasound and thermography

    NASA Astrophysics Data System (ADS)

    Dayal, Vinay; Benedict, Zach G.; Bhatnagar, Nishtha; Harper, Adam G.

    2018-04-01

    Inspection of aircraft components for damage utilizing ultrasonic Non-Destructive Evaluation (NDE) is a time-intensive endeavor. Additional time spent during aircraft inspections translates to added cost for the company performing them, and as such, reducing this expenditure is of great importance. There is also great variance in calibration samples from one entity to another due to the lack of a common calibration set. By characterizing damage types, we can condense the required calibration sets and reduce the time required to perform calibration, while also providing procedures for the fabrication of these standard sets. We present here our effort to fabricate composite samples with known defects and to quantify the size and location of defects such as delaminations and impact damage. Ultrasonic and thermographic images are digitally enhanced to accurately measure the damage size. Ultrasonic NDE is compared with thermography.

  3. A framework for inference about carnivore density from unstructured spatial sampling of scat using detector dogs

    USGS Publications Warehouse

    Thompson, Craig M.; Royle, J. Andrew; Garner, James D.

    2012-01-01

    Wildlife management often hinges upon an accurate assessment of population density. Although undeniably useful, many of the traditional approaches to density estimation such as visual counts, live-trapping, or mark-recapture suffer from a suite of methodological and analytical weaknesses. Rare, secretive, or highly mobile species exacerbate these problems through the reality of small sample sizes and movement on and off study sites. In response to these difficulties, there is growing interest in the use of non-invasive survey techniques, which provide the opportunity to collect larger samples with minimal increases in effort, as well as the application of analytical frameworks that are not reliant on large-sample-size arguments. One promising survey technique, the use of scat-detecting dogs, offers a greatly enhanced probability of detection while at the same time generating new difficulties with respect to non-standard survey routes, variable search intensity, and the lack of a fixed survey point for characterizing non-detection. In order to account for these issues, we modified an existing spatially explicit capture-recapture model for camera trap data to account for variable search intensity and the lack of fixed, georeferenced trap locations. We applied this modified model to a fisher (Martes pennanti) dataset from the Sierra National Forest, California, and compared the results (12.3 fishers/100 km2) to more traditional density estimates. We then evaluated model performance using simulations at 3 levels of population density. Simulation results indicated that estimates based on the posterior mode were relatively unbiased. We believe that this approach provides a flexible analytical framework for reconciling the inconsistencies between detector dog survey data and density estimation procedures.

  4. THE CASE FOR A TYPHOID VACCINE PROBE STUDY AND OVERVIEW OF DESIGN ELEMENTS

    PubMed Central

    Halloran, M. Elizabeth; Khan, Imran

    2015-01-01

    Recent advances in typhoid vaccines, and consideration of support from Gavi, the Vaccine Alliance, raise the possibility that some endemic countries will introduce typhoid vaccine into public immunization programs. This decision, however, is limited by a lack of definitive information on disease burden. We propose use of a vaccine probe study approach. This approach would more clearly assess the total burden of typhoid across different syndromic groups and account for lack of access to care, poor diagnostics, incomplete laboratory testing, lack of mortality and intestinal perforation surveillance, and increasing antibiotic resistance. We propose a cluster randomized trial design using a mass immunization campaign among all age groups, with monitoring of a variety of outcomes over a 4-year period. The primary outcome would be the vaccine-preventable disease incidence of prolonged fever hospitalization. Sample size calculations suggest that such a study would be feasible over a reasonable set of assumptions. PMID:25912286
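A textbook version of the sample-size arithmetic behind such a feasibility claim, a two-proportion comparison inflated by the cluster design effect, can be sketched as follows. The incidence figures, cluster size, and ICC below are placeholders, not values from the paper.

```python
import math
from statistics import NormalDist

def per_arm_n(p0, p1, alpha=0.05, power=0.80, cluster_size=1, icc=0.0):
    """Per-arm sample size to detect a drop in incidence from p0 to p1,
    inflated by the design effect DEFF = 1 + (m - 1) * ICC for a cluster
    randomized design. Standard textbook formula, not the authors' exact
    calculation."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided alpha
    z_b = NormalDist().inv_cdf(power)
    p_bar = (p0 + p1) / 2
    n_srs = (z_a + z_b) ** 2 * 2 * p_bar * (1 - p_bar) / (p0 - p1) ** 2
    deff = 1 + (cluster_size - 1) * icc
    return math.ceil(n_srs * deff)

# e.g. detecting a halving of a hypothetical 1% annual incidence:
n_individual = per_arm_n(0.010, 0.005)                           # no clustering
n_clustered = per_arm_n(0.010, 0.005, cluster_size=500, icc=0.01)
```

The design effect is why cluster randomized probe studies need far more participants than an individually randomized trial powered for the same effect.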

  5. Porosity characterization for heterogeneous shales using integrated multiscale microscopy

    NASA Astrophysics Data System (ADS)

    Rassouli, F.; Andrew, M.; Zoback, M. D.

    2016-12-01

    Pore size distribution analysis plays a critical role in characterizing the gas storage capacity and fluid transport of shales. Study of the diverse distribution of pore size and structure in such low-permeability rocks is hindered by the lack of tools to visualize the microstructural properties of shale rocks. In this paper we use multiple techniques to investigate the full pore size range at different sample scales. Modern imaging techniques are combined with routine analytical investigations (x-ray diffraction, thin section analysis, and mercury porosimetry) to describe the pore size distribution of shale samples from the Haynesville Formation in East Texas and to generate a more holistic understanding of the porosity structure in shales, ranging from standard core-plug down to nm scales. Standard 1" diameter core plug samples were first imaged using a Versa 3D x-ray microscope at lower resolutions. We then pick several regions of interest (ROIs) with various micro-features (such as micro-cracks and high organic matter content) in the rock samples and run higher-resolution CT scans using non-destructive interior tomography. After this step, we cut the samples and drill 5 mm diameter cores out of the selected ROIs, then rescan the samples to measure the porosity distribution of the 5 mm cores. We repeat this step for samples with a diameter of 1 mm cut out of the 5 mm cores using a laser cutting machine. After comparing the pore structure and distribution of the samples measured from micro-CT analysis, we move to nano-scale imaging to capture the ultra-fine pores within the shale samples. At this stage, the diameter of the 1 mm samples is milled down to 70 microns using the laser beam. We scan these samples in an Ultra nano-CT x-ray microscope and calculate the porosity of the samples by image segmentation methods. Finally, we use images collected from focused ion beam scanning electron microscopy (FIB-SEM) to compare the results of the porosity measurements from all of the different imaging techniques. These multi-scale characterization techniques are then compared with traditional analytical techniques such as mercury porosimetry.

  6. Superficial and deep changes of histology, texture and particle size distribution in broiler wooden breast muscle during refrigerated storage.

    PubMed

    Soglia, Francesca; Gao, Jingxian; Mazzoni, Maurizio; Puolanne, Eero; Cavani, Claudio; Petracci, Massimiliano; Ertbjerg, Per

    2017-09-01

    Recently the poultry industry has faced an emerging muscle abnormality termed wooden breast (WB), the prevalence of which has dramatically increased in the past few years. Considering the incomplete knowledge concerning this condition and the lack of information on possible variations due to the intra-fillet sampling location (superficial vs. deep position) and aging of the samples, this study aimed at investigating the effect of 7-d storage of broiler breast muscles on histology, texture, and particle size distribution, evaluating whether the sampling position exerts a relevant role in determining the main features of WB. With regard to the histological observations, severe myodegeneration accompanied by accumulation of connective tissue was observed within the WB cases, irrespective of the intra-fillet sampling position. No changes in the histological traits took place during aging in either the normal or the WB samples. As for textural traits, although a progressive tenderization process took place during storage (P ≤ 0.001), differences among the groups were mainly detected when raw rather than cooked meat was analyzed, with the WB samples exhibiting the highest (P ≤ 0.001) 80% compression values. In spite of the increased amount of connective tissue components in the WB cases, their thermally labile cross-links may account for compression and shear-force values similar to those of normal breast cases when measured on cooked samples. Similarly, the enlargement of the extracellular matrix and fibrosis might help explain the different fragmentation patterns observed between the superficial and the deep layer in the WB samples, with the superficial part exhibiting a higher amount of larger particles and an increase in particles of larger size during storage, compared to normal breasts. © 2017 Poultry Science Association Inc.

  7. Confidence intervals for population allele frequencies: the general case of sampling from a finite diploid population of any size.

    PubMed

    Fung, Tak; Keenan, Kevin

    2014-01-01

The estimation of population allele frequencies using sample data forms a central component of studies in population genetics. These estimates can be used to test hypotheses on the evolutionary processes governing changes in genetic variation among populations. However, existing studies frequently do not account for sampling uncertainty in these estimates, thus compromising their utility. Incorporation of this uncertainty has been hindered by the lack of a method for constructing confidence intervals containing the population allele frequencies, for the general case of sampling from a finite diploid population of any size. In this study, we address this important knowledge gap by presenting a rigorous mathematical method to construct such confidence intervals. For a range of scenarios, the method is used to demonstrate that for a particular allele, in order to obtain accurate estimates within 0.05 of the population allele frequency with high probability (≥95%), a sample size of >30 is often required. This analysis is augmented by an application of the method to empirical sample allele frequency data for two populations of the checkerspot butterfly (Melitaea cinxia L.), occupying meadows in Finland. For each population, the method is used to derive ≥98.3% confidence intervals for the population frequencies of three alleles. These intervals are then used to construct two joint ≥95% confidence regions, one for the set of three frequencies for each population. These regions are then used to derive a ≥95% confidence interval for Jost's D, a measure of genetic differentiation between the two populations. Overall, the results demonstrate the practical utility of the method with respect to informing sampling design and accounting for sampling uncertainty in studies of population genetics, important for scientific hypothesis-testing and also for risk-based natural resource management.
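The sample-size message above can be illustrated with a much simpler, large-sample approximation. The sketch below uses the Wilson score interval for a single proportion, not the paper's finite-population method; the function and the example counts are illustrative assumptions only.

```python
import math

def wilson_interval(k, n, z=1.96):
    """Wilson score interval for a proportion (approx. 95% when z = 1.96).

    A large-sample approximation only; the paper's method for finite
    diploid populations is more involved and is not reproduced here.
    """
    p_hat = k / n
    denom = 1 + z ** 2 / n
    centre = (p_hat + z ** 2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z ** 2 / (4 * n ** 2))
    return centre - half, centre + half

# Interval width shrinks as the sample grows, echoing the paper's point
# that small samples give imprecise allele-frequency estimates.
lo60, hi60 = wilson_interval(24, 60)   # 60 allele copies (30 diploid individuals)
lo10, hi10 = wilson_interval(4, 10)    # 10 allele copies (5 diploid individuals)
```

The interval for the larger sample is markedly narrower at the same observed frequency, which is the practical reason the paper recommends sample sizes above 30 individuals.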

  8. A comparison of microscopic and spectroscopic identification methods for analysis of microplastics in environmental samples.

    PubMed

    Song, Young Kyoung; Hong, Sang Hee; Jang, Mi; Han, Gi Myung; Rani, Manviri; Lee, Jongmyoung; Shim, Won Joon

    2015-04-15

The analysis of microplastics in various environmental samples requires distinguishing microplastics from natural materials, yet the identification technique lacks a standardized protocol. Here, stereomicroscope and Fourier transform infrared spectroscope (FT-IR) identification methods for microplastics (<1 mm) were compared using the same samples from the sea surface microlayer (SML) and beach sand. With the stereomicroscope, fragmented microplastics were significantly (p<0.05) underestimated and fibers significantly overestimated in both the SML and beach samples. The total abundance by FT-IR was higher than by microscope in both the SML and beach samples, but the difference was not significant (p>0.05). The appropriate identification method should be chosen according to the number of samples and the microplastic size range of interest; selecting a suitable identification method is crucial for evaluating microplastic pollution. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Estimating the Effective Sample Size of Tree Topologies from Bayesian Phylogenetic Analyses

    PubMed Central

    Lanfear, Robert; Hua, Xia; Warren, Dan L.

    2016-01-01

    Bayesian phylogenetic analyses estimate posterior distributions of phylogenetic tree topologies and other parameters using Markov chain Monte Carlo (MCMC) methods. Before making inferences from these distributions, it is important to assess their adequacy. To this end, the effective sample size (ESS) estimates how many truly independent samples of a given parameter the output of the MCMC represents. The ESS of a parameter is frequently much lower than the number of samples taken from the MCMC because sequential samples from the chain can be non-independent due to autocorrelation. Typically, phylogeneticists use a rule of thumb that the ESS of all parameters should be greater than 200. However, we have no method to calculate an ESS of tree topology samples, despite the fact that the tree topology is often the parameter of primary interest and is almost always central to the estimation of other parameters. That is, we lack a method to determine whether we have adequately sampled one of the most important parameters in our analyses. In this study, we address this problem by developing methods to estimate the ESS for tree topologies. We combine these methods with two new diagnostic plots for assessing posterior samples of tree topologies, and compare their performance on simulated and empirical data sets. Combined, the methods we present provide new ways to assess the mixing and convergence of phylogenetic tree topologies in Bayesian MCMC analyses. PMID:27435794
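For a scalar parameter, the autocorrelation-based ESS described above can be sketched in a few lines. This is a generic textbook-style estimator (with an assumed 0.05 truncation rule), not the tree-topology ESS the paper develops.

```python
import random

def ess(chain):
    """Effective sample size of a scalar MCMC trace via summed
    autocorrelations: ESS = n / (1 + 2 * sum of positive lag correlations).
    Truncation at rho < 0.05 is an assumed convention for this sketch."""
    n = len(chain)
    mean = sum(chain) / n
    var = sum((x - mean) ** 2 for x in chain) / n
    if var == 0:
        return float(n)
    rho_sum = 0.0
    for lag in range(1, n // 2):
        cov = sum((chain[i] - mean) * (chain[i + lag] - mean)
                  for i in range(n - lag)) / n
        rho = cov / var
        if rho < 0.05:   # stop once autocorrelation has died out
            break
        rho_sum += rho
    return n / (1 + 2 * rho_sum)

random.seed(1)
# Independent draws: ESS should be close to the chain length.
iid = [random.gauss(0, 1) for _ in range(2000)]
# AR(1) with phi = 0.9 mimics a poorly mixing chain: far fewer
# effectively independent samples than raw samples.
ar = [0.0]
for _ in range(1999):
    ar.append(0.9 * ar[-1] + random.gauss(0, 1))
```

Running `ess` on the two traces shows why the raw number of MCMC samples overstates the information in an autocorrelated chain.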

  10. Nationwide Databases in Orthopaedic Surgery Research.

    PubMed

    Bohl, Daniel D; Singh, Kern; Grauer, Jonathan N

    2016-10-01

    The use of nationwide databases to conduct orthopaedic research has expanded markedly in recent years. Nationwide databases offer large sample sizes, sampling of patients who are representative of the country as a whole, and data that enable investigation of trends over time. The most common use of nationwide databases is to study the occurrence of postoperative adverse events. Other uses include the analysis of costs and the investigation of critical hospital metrics, such as length of stay and readmission rates. Although nationwide databases are powerful research tools, readers should be aware of the differences between them and their limitations. These include variations and potential inaccuracies in data collection, imperfections in patient sampling, insufficient postoperative follow-up, and lack of orthopaedic-specific outcomes.

  11. Critical appraisal of arguments for the delayed-start design proposed as alternative to the parallel-group randomized clinical trial design in the field of rare disease.

    PubMed

    Spineli, Loukia M; Jenz, Eva; Großhennig, Anika; Koch, Armin

    2017-08-17

A number of papers have proposed or evaluated the delayed-start design as an alternative to the standard two-arm parallel-group randomized clinical trial (RCT) design in the field of rare disease. However, the discussion has given insufficient consideration to the true virtues of the delayed-start design and its implications for required sample size, overall information, and interpretation of the estimate in the context of small populations. We evaluated whether the delayed-start design offers real advantages, particularly in terms of overall efficacy and sample size requirements, as a proposed alternative to the standard parallel-group RCT in the field of rare disease. We used a real-life example to compare the delayed-start design with the standard RCT in terms of sample size requirements. Then, based on three scenarios for how the treatment effect develops over time, we discuss the advantages, limitations and potential costs of the delayed-start design. We clarify that the delayed-start design is not suitable for drugs with an immediate treatment effect, but rather for drugs whose effects develop over time. In addition, the sample size will always increase, because a reduced time on placebo results in a decreased observable treatment effect. A number of papers have repeated well-known arguments to justify the delayed-start design as an appropriate alternative to the standard parallel-group RCT in the field of rare disease, without discussing the specific methodological needs of this field. The main point is that a limited time on placebo results in an underestimated treatment effect and, in consequence, in larger sample size requirements than expected under a standard parallel-group design. This also affects benefit-risk assessment.

  12. A comparison of defect size and film quality obtained from Film digitized image and digital image radiographs

    NASA Astrophysics Data System (ADS)

    Kamlangkeng, Poramate; Asa, Prateepasen; Mai, Noipitak

    2014-06-01

Digital radiographic testing is a nondestructive examination technique that is gaining acceptance, but its performance and limitations relative to the older film technique are still not widely understood. This paper compares the accuracy of defect size measurement and the image quality obtained from film and digital radiographic techniques, using test specimens and a sample defect of known size. One specimen was built with three types of internal defect: longitudinal cracking, lack of fusion, and porosity. The known-size sample defect was machined to various geometrical sizes so that measured defect sizes could be compared with the real sizes in both film and digital images. Image quality was compared by examining the smallest detectable wire and the three defect images, using a wire-type Image Quality Indicator (IQI), 10/16 FE, per BS EN 462-1:1994. The radiographic films were produced by X-ray and gamma ray using Kodak AA400 film (3.5 x 8 inches), while the digital images were produced with a Fuji type ST-VI image plate at 100 micrometer resolution. A GE model MF3 generator was used during the tests, with applied energy varied from 120 to 220 kV and current from 1.2 to 3.0 mA; the Iridium-192 gamma-ray source activity was in the range of 24-25 Curie. Under these conditions, the results showed that the deviation of measured defect size from real size was lower for the digital image radiographs than for the digitized film, whereas the image quality of the digitized film radiographs was higher.

  13. Strategic assessment of the availability of pediatric trauma care equipment, technology and supplies in Ghana.

    PubMed

    Ankomah, James; Stewart, Barclay T; Oppong-Nketia, Victor; Koranteng, Adofo; Gyedu, Adam; Quansah, Robert; Donkor, Peter; Abantanga, Francis; Mock, Charles

    2015-11-01

    This study aimed to assess the availability of pediatric trauma care items (i.e. equipment, supplies, technology) and factors contributing to deficiencies in Ghana. Ten universal and 9 pediatric-sized items were selected from the World Health Organization's Guidelines for Essential Trauma Care. Direct inspection and structured interviews with administrative, clinical and biomedical engineering staff were used to assess item availability at 40 purposively sampled district, regional and tertiary hospitals in Ghana. Hospital assessments demonstrated marked deficiencies for a number of essential items (e.g. basic airway supplies, chest tubes, blood pressure cuffs, electrolyte determination, portable X-ray). Lack of pediatric-sized items resulting from equipment absence, lack of training, frequent stock-outs and technology breakage were common. Pediatric items were consistently less available than adult-sized items at each hospital level. This study identified several successes and problems with pediatric trauma care item availability in Ghana. Item availability could be improved, both affordably and reliably, by better organization and planning (e.g. regular assessment of demand and inventory, reliable financing for essential trauma care items). In addition, technology items were often broken. Developing local service and biomedical engineering capability was highlighted as a priority to avoid long periods of equipment breakage. Copyright © 2015 Elsevier Inc. All rights reserved.

  14. Strategic assessment of the availability of pediatric trauma care equipment, technology and supplies in Ghana

    PubMed Central

    Ankomah, James; Stewart, Barclay T; Oppong-Nketia, Victor; Koranteng, Adofo; Gyedu, Adam; Quansah, Robert; Donkor, Peter; Abantanga, Francis; Mock, Charles

    2015-01-01

Background This study aimed to assess the availability of pediatric trauma care items (i.e. equipment, supplies, technology) and factors contributing to deficiencies in Ghana. Methods Ten universal and 9 pediatric-sized items were selected from the World Health Organization’s Guidelines for Essential Trauma Care. Direct inspection and structured interviews with administrative, clinical and biomedical engineering staff were used to assess item availability at 40 purposively sampled district, regional and tertiary hospitals in Ghana. Results Hospital assessments demonstrated marked deficiencies for a number of essential items (e.g. basic airway supplies, chest tubes, blood pressure cuffs, electrolyte determination, portable X-ray). Lack of pediatric-sized items resulting from equipment absence, lack of training, frequent stock-outs and technology breakage was common. Pediatric items were consistently less available than adult-sized items at each hospital level. Conclusion This study identified several successes and problems with pediatric trauma care item availability in Ghana. Item availability could be improved, both affordably and reliably, by better organization and planning (e.g. regular assessment of demand and inventory, reliable financing for essential trauma care items). In addition, technology items were often broken. Developing local service and biomedical engineering capability was highlighted as a priority to avoid long periods of equipment breakage. PMID:25841284

  15. Practical sampling plans for Varroa destructor (Acari: Varroidae) in Apis mellifera (Hymenoptera: Apidae) colonies and apiaries.

    PubMed

    Lee, K V; Moon, R D; Burkness, E C; Hutchison, W D; Spivak, M

    2010-08-01

The parasitic mite Varroa destructor Anderson & Trueman (Acari: Varroidae) is arguably the most detrimental pest of the European-derived honey bee, Apis mellifera L. Unfortunately, beekeepers lack a standardized sampling plan to make informed treatment decisions. Based on data from 31 commercial apiaries, we developed sampling plans for use by beekeepers and researchers to estimate the density of mites in individual colonies or whole apiaries. Beekeepers can estimate a colony's mite density with a chosen level of precision by dislodging mites from approximately 300 adult bees taken from one brood box frame in the colony, and they can extrapolate to mite density on a colony's adults and pupae combined by doubling the number of mites on adults. For sampling whole apiaries, beekeepers can repeat the process in each of n = 8 colonies, regardless of apiary size. Researchers desiring greater precision can estimate mite density in an individual colony by examining three 300-bee sample units. Extrapolation to density on adults and pupae may require independent estimates of numbers of adults, of pupae, and of their respective mite densities. Researchers can estimate apiary-level mite density by taking one 300-bee sample unit per colony, but should do so from a variable number of colonies, depending on apiary size. These practical sampling plans will allow beekeepers and researchers to quantify mite infestation levels and enhance understanding and management of V. destructor.
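The colony-level arithmetic in the abstract (one roughly 300-bee sample, doubling the adult count to include pupae) can be sketched as below. The per-100-bees scaling is an added convention of this sketch, not something stated by the paper.

```python
def colony_mite_estimate(mites_in_sample, bees_in_sample=300):
    """Sketch of the abstract's rule of thumb (one reading, not the
    paper's exact plan): count mites dislodged from one ~300-adult-bee
    sample, express the result per 100 bees, and double the count on
    adults to approximate mites on adults and pupae combined."""
    mites_per_100_bees = 100.0 * mites_in_sample / bees_in_sample
    adults_plus_pupae = 2 * mites_in_sample
    return mites_per_100_bees, adults_plus_pupae

per_100, combined = colony_mite_estimate(9)  # 9 mites found on 300 bees
```

For apiary-level estimates the paper repeats this process in n = 8 colonies; thresholds for treatment decisions would come from the full sampling plan, not from this arithmetic alone.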

  16. Challenges and solutions for the analysis of in situ , in crystallo micro-spectrophotometric data

    DOE PAGES

    Dworkowski, Florian S. N.; Hough, Michael A.; Pompidor, Guillaume; ...

    2015-01-01

Combining macromolecular crystallography with in crystallo micro-spectrophotometry yields valuable complementary information on the sample, including the redox states of metal cofactors, the identification of bound ligands and the onset and strength of undesired photochemistry, also known as radiation damage. However, the analysis and processing of the resulting data differ significantly from the approaches used for solution spectrophotometric data. The varying size and shape of the sample, together with the suboptimal sample environment, the lack of proper reference signals and the general influence of the X-ray beam on the sample, have to be considered and carefully corrected for. In the present article, we discuss how to characterize and treat these sample-dependent artefacts in a reproducible manner and we demonstrate the SLS-APE in situ, in crystallo optical spectroscopy data-analysis toolbox.

  17. Multi-passes warm rolling of AZ31 magnesium alloy, effect on evaluation of texture, microstructure, grain size and hardness

    NASA Astrophysics Data System (ADS)

    Kamran, J.; Hasan, B. A.; Tariq, N. H.; Izhar, S.; Sarwar, M.

    2014-06-01

In this study, the effect of multi-pass warm rolling of AZ31 magnesium alloy on the texture, microstructure, grain size and hardness of an as-cast sample (A) and two rolled samples (B and C), taken from different locations of the as-cast ingot, was investigated. The purpose was to enhance the formability of AZ31 alloy in order to aid manufacturability. Multi-pass warm rolling (250°C to 350°C) of samples B and C, with initial thicknesses of 7.76 mm and 7.73 mm, was successfully achieved up to 85% reduction without any edge or surface cracks in ten steps comprising a total of 26 passes. Steps 1 to 4 consisted of 5, 2, 11 and 3 passes, respectively; the remaining steps 5 to 10 were single-pass rolls. In each discrete step a fixed roll gap was used, such that the true strain increased very slowly from 0.0067 at the first pass to 0.7118 at the 26th pass. Both samples B and C showed very similar behavior up to the 26th pass and were successfully rolled to 85% thickness reduction. However, during the 10th step (27th pass), at a true strain of 0.772, sample B developed severe surface and edge cracks; sample C was therefore not rolled in the 10th step and was retained after 26 passes. Both samples were studied in terms of their basal texture, microstructure, grain size and hardness. Sample C showed an equiaxed grain structure after 85% total reduction, which may be due to dynamic recrystallization (DRX) forming grains with relatively low misorientations with respect to the parent as-cast grains. Sample B, on the other hand, showed a microstructure in which all the grains were elongated along the rolling direction (RD) after 90% total reduction; here DRX could not play its role effectively because of the heavy strain and the lack of active plastic deformation systems.
The microstructure of the as-cast sample showed a near-random texture (mrd 4.3), with an average grain size of 44 μm and a micro-hardness of 52 Hv. The grain sizes of samples B and C were 14 μm and 27 μm, respectively, and the mrd intensities of the basal texture were 5.34 and 5.46, respectively. The hardness of samples B and C was 91 and 66 Hv, respectively; the increase with decreasing grain size followed the well-known Hall-Petch relationship.
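As a rough consistency check, the Hall-Petch relationship mentioned at the end, H = H0 + k·d^(-1/2), can be fitted by least squares to the three grain-size/hardness pairs reported in this abstract. The fitted constants below are illustrative, not values from the paper.

```python
# Grain size (um) and hardness (Hv) pairs from the abstract:
# as-cast (44, 52), sample C (27, 66), sample B (14, 91).
d_um = [44.0, 27.0, 14.0]
hv = [52.0, 66.0, 91.0]

# Hall-Petch: H = H0 + k * d**-0.5 is linear in x = d**-0.5,
# so an ordinary least-squares line fit recovers k and H0.
x = [di ** -0.5 for di in d_um]
mx = sum(x) / len(x)
my = sum(hv) / len(hv)
k = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, hv))
     / sum((xi - mx) ** 2 for xi in x))
h0 = my - k * mx

def predict(d):
    """Predicted hardness (Hv) for grain size d (um) from the fit."""
    return h0 + k * d ** -0.5
```

The three reported points turn out to lie almost exactly on one Hall-Petch line, supporting the abstract's closing claim.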

  18. Assessment of eHealth capabilities and utilization in residential care settings.

    PubMed

    Towne, Samuel D; Lee, Shinduk; Li, Yajuan; Smith, Matthew Lee

    2016-12-01

The US National Survey of Residential Care Facilities was used to conduct cross-sectional analyses of residential care facilities (n = 2302). Most residential care facilities lacked one or more computerized capabilities in 2010. Lacking computerized systems supporting electronic health information exchange with pharmacies was associated with non-chain affiliation (p < .05). Lacking electronic health information exchange with physicians was associated with being a small-sized facility (vs large) (p < .05). Lacking computerized capabilities for discharge/transfer summaries was associated with for-profit status (p < .05) and small-sized facilities (p < .05). Lacking computerized capabilities for medical provider information was associated with non-chain affiliation (p < .05), small- or medium-sized facilities (p < .05), and for-profit status (p < .05). Lack of an electronic health record was associated with non-chain affiliation (p < .05), small- or medium-sized facilities (p < .05), for-profit status (p < .05), and location in urban areas (p < .05). eHealth disparities exist across residential care facilities. As the older adult population continues to grow, resources must be in place to provide an integrated system of care across multiple settings. © The Author(s) 2015.

  19. Investigating textural controls on Archie's porosity exponent using process-based, pore-scale modelling

    NASA Astrophysics Data System (ADS)

    Niu, Q.; Zhang, C.

    2017-12-01

Archie's law is an important empirical relationship linking the electrical resistivity of geological materials to their porosity. It has been found experimentally that the porosity exponent m in Archie's law for sedimentary rocks may be related to the degree of cementation, and m is therefore termed the "cementation factor" in much of the literature. Although the relationship has been known for many years, there is a lack of well-accepted physical interpretations of the porosity exponent. Some theoretical and experimental evidence has also shown that m may be controlled by particle and/or pore shape. In this study, we conduct pore-scale modelling of the porosity exponent that incorporates different geological processes. The evolution of m in eight synthetic samples with different particle sizes and shapes is calculated during two geological processes, i.e., compaction and cementation. The numerical results show that in dilute conditions, m is controlled by the particle shape. As the samples deviate from dilute conditions, m increases gradually due to the strong interaction between particles. When the samples are at static equilibrium, m is noticeably larger than its value in dilute conditions. The numerical simulation results also show that both geological compaction and cementation induce a significant increase in m. In addition, the geometric characteristics of these samples (e.g., pore space/throat size and their distributions) during compaction and cementation are also calculated. Preliminary analysis shows a unique correlation between the pore size broadness and the porosity exponent for all eight samples. However, such a correlation is not found between m and other geometric characteristics.
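For a clean, fully saturated rock, and assuming the tortuosity coefficient a = 1, Archie's law lets the porosity exponent be read directly off the formation factor. A minimal sketch (the example resistivities are hypothetical):

```python
import math

def porosity_exponent(rho_rock, rho_water, phi):
    """Archie's law for a clean, fully saturated rock with a = 1:
        rho_rock = rho_water * phi**(-m)
    so m = log(F) / log(1/phi), where F = rho_rock / rho_water
    is the formation factor."""
    F = rho_rock / rho_water
    return math.log(F) / math.log(1.0 / phi)

# Hypothetical example: formation factor 25 at 20% porosity gives m = 2,
# a value typical of consolidated sandstones.
m = porosity_exponent(rho_rock=2.5, rho_water=0.1, phi=0.2)
```

In the study above, m is instead computed numerically from pore-scale resistivity simulations; this closed-form inversion is just the empirical law the simulations are interpreted against.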

  20. Characterization of marine aerosol for assessment of human exposure to brevetoxins.

    PubMed

    Cheng, Yung Sung; Zhou, Yue; Irvin, Clinton M; Pierce, Richard H; Naar, Jerome; Backer, Lorraine C; Fleming, Lora E; Kirkpatrick, Barbara; Baden, Dan G

    2005-05-01

    Red tides in the Gulf of Mexico are commonly formed by the fish-killing dinoflagellate Karenia brevis, which produces nine potent polyether brevetoxins (PbTxs). Brevetoxins can be transferred from water to air in wind-powered white-capped waves. Inhalation exposure to marine aerosol containing brevetoxins causes respiratory symptoms. We describe detailed characterization of aerosols during an epidemiologic study of occupational exposure to Florida red tide aerosol in terms of its concentration, toxin profile, and particle size distribution. This information is essential in understanding its source, assessing exposure to people, and estimating dose of inhaled aerosols. Environmental sampling confirmed the presence of brevetoxins in water and air during a red tide exposure period (September 2001) and lack of significant toxin levels in the water and air during an unexposed period (May 2002). Water samples collected during a red tide bloom in 2001 showed moderate-to-high concentrations of K. brevis cells and PbTxs. The daily mean PbTx concentration in water samples ranged from 8 to 28 microg/L from 7 to 11 September 2001; the daily mean PbTx concentration in air samples ranged from 1.3 to 27 ng/m(3). The daily aerosol concentration on the beach can be related to PbTx concentration in water, wind speed, and wind direction. Personal samples confirmed human exposure to red tide aerosols. The particle size distribution showed a mean aerodynamic diameter in the size range of 6-12 microm, with deposits mainly in the upper airways. The deposition pattern correlated with the observed increase of upper airway symptoms in healthy lifeguards during the exposure periods.

  1. Space-Weathering on Mercury: Inferences Based on Comparison of MESSENGER Spectral Data and Experimental Space Weathering Data

    NASA Astrophysics Data System (ADS)

    Gillis-Davis, J. J.; Blewett, D. T.; Lawrence, D. J.; Izenberg, N. R.; McClintock, W. E.; Holsclaw, G. M.; Domingue, D. L.

    2009-12-01

    Production and accumulation of submicroscopic metallic iron (SMFe) is a principal mechanism by which surfaces of airless silicate bodies in the Solar System, exposed to the space weathering environment, experience spectral modification. Micrometeorite impact vaporization and solar-wind sputtering produce coatings of vapor-deposited SMFe. Both processes can be more intense on Mercury and, as a result, more efficient at creating melt and vapor. In addition, Ostwald ripening may cause SMFe particles to grow larger due to the high surface temperatures on Mercury (as great as 450°C). Spectral effects on the ultraviolet-visible-near-infrared continuum change with the amount and size of SMFe present. Thus, the physical properties and abundance of iron in Mercury’s regolith can be understood by comparing spectral data from controlled space-weathering experiments with spectra from MESSENGER’s Mercury Atmospheric and Surface Composition Spectrometer (MASCS). Knowledge of SMFe size and abundance may provide information on the space weathering conditions under which it was produced or subsequently modified. Reflectance spectra of laboratory-produced samples with varying SMFe grain sizes (average grain sizes of 8, 15, 35, and 40 nm) and iron compositions (from 0.005 to 3.8 wt% Fe as SMFe) are compared with MASCS disk-integrated reflectance from the first flyby of Mercury and will be compared with observations of spectral end members targeted for the third flyby. We compare spectra from 300 nm to 1400 nm wavelength, scaled to 1 at 700 nm, from the laboratory and MASCS. This comparison between laboratory and remote-sensing spectra reveals an excellent match with observations of Mercury for samples with an average iron metal grain size of 8 nm and 1.65 wt% FeO and 15 nm and 0.13 wt% Fe. 
These average grain sizes of the SMFe component are larger than the average grain size determined for lunar soil samples using transmission electron microscopy (3 nm in rims and 10-15 nm in agglutinates) but are smaller than values obtained from lunar spectra with the methods used here (15-25 nm). We can also infer that silicates in Mercury's high reflectance plains are potentially iron poor, precluding a thick vapor-deposit coating: both spectral data sets lack a 1-μm absorption, and the experimental iron particles are suspended in an iron-free silica gel. Thus, our conclusion on the basis of spectral comparison is that SMFe on Mercury is potentially smaller than on the Moon and that Ostwald ripening is not a major influence on the surface of Mercury. The absence of pronounced darkening of the equatorial regions of Mercury in images from Mariner 10 and MESSENGER's Mercury Dual Imaging System also supports an apparent lack of Ostwald ripening.

  2. The Effects of Atmosphere on the Sintering of Ultrafine-Grained Tungsten with Ti

    NASA Astrophysics Data System (ADS)

    Ren, Chai; Koopman, Mark; Fang, Z. Zak; Zhang, Huan

    2016-11-01

    Tungsten (W) is a brittle material at room temperature making it very difficult to fabricate. Although the lack of ductility remains a difficult challenge, nano-sized and ultrafine-grained (UFG) structures offer the potential to overcome tungsten's room-temperature brittleness. One way to manufacture UFG W is to compact and sinter nano-sized W powder. It is challenging, however, to control grain growth during sintering. As one method to inhibit grain growth, the effect of Ti-based additives on the densification and grain growth of nano-W powders was investigated in this study. Addition of 1% Ti into tungsten led to more than a 63% decrease in average grain size of sintered samples at comparable density levels. It was found that sintering in Ar yielded a finer grain size than sintering in H2 at similar densities. The active diffusion mechanisms during sintering were different for W-1% Ti nano powders sintered in Ar and H2.

  3. Estimation of tiger densities in the tropical dry forests of Panna, Central India, using photographic capture-recapture sampling

    USGS Publications Warehouse

    Karanth, K.Ullas; Chundawat, Raghunandan S.; Nichols, James D.; Kumar, N. Samba

    2004-01-01

    Tropical dry-deciduous forests comprise more than 45% of the tiger (Panthera tigris) habitat in India. However, in the absence of rigorously derived estimates of ecological densities of tigers in dry forests, critical baseline data for managing tiger populations are lacking. In this study tiger densities were estimated using photographic capture–recapture sampling in the dry forests of Panna Tiger Reserve in Central India. Over a 45-day survey period, 60 camera trap sites were sampled in a well-protected part of the 542-km2 reserve during 2002. A total sampling effort of 914 camera-trap-days yielded photo-captures of 11 individual tigers over 15 sampling occasions that effectively covered a 418-km2 area. The closed capture–recapture model Mh, which incorporates individual heterogeneity in capture probabilities, fitted these photographic capture history data well. The estimated capture probability/sample, p̂= 0.04, resulted in an estimated tiger population size and standard error (N̂(SÊN̂)) of 29 (9.65), and a density (D̂(SÊD̂)) of 6.94 (3.23) tigers/100 km2. The estimated tiger density matched predictions based on prey abundance. Our results suggest that, if managed appropriately, the available dry forest habitat in India has the potential to support a population size of about 9000 wild tigers.
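The link between the counts, the per-occasion capture probability and the abundance estimate can be illustrated with a deliberately simplified M0-style calculation (constant capture probability, no individual heterogeneity). It lands near, but below, the paper's Mh estimate of 29, since modelling heterogeneity raises the estimate.

```python
def naive_abundance(n_seen, p_per_occasion, n_occasions):
    """Naive closed-population estimate: distinct animals photographed,
    divided by the probability an animal is caught on at least one
    occasion. An M0-style simplification, not the Mh model in the study."""
    p_ever = 1 - (1 - p_per_occasion) ** n_occasions
    return n_seen / p_ever

# Numbers from the abstract: 11 tigers photographed, p-hat = 0.04 per
# occasion, 15 occasions, effective sampled area 418 km^2.
n_hat = naive_abundance(11, 0.04, 15)
density = 100 * n_hat / 418   # tigers per 100 km^2
```

The simplified estimate is roughly 24 tigers (about 5.7/100 km^2) versus the reported Mh values of 29 and 6.94/100 km^2, showing how much the heterogeneity correction matters at such low capture probabilities.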

  4. Mindfulness Meditation for Substance Use Disorders: A Systematic Review

    PubMed Central

    Zgierska, Aleksandra; Rabago, David; Chawla, Neharika; Kushner, Kenneth; Koehler, Robert; Marlatt, Allan

    2009-01-01

    Relapse is common in substance use disorders (SUDs), even among treated individuals. The goal of this article was to systematically review the existing evidence on mindfulness meditation-based interventions (MM) for SUDs. The comprehensive search for and review of literature found over 2,000 abstracts and resulted in 25 eligible manuscripts (22 published, 3 unpublished: 8 RCTs, 7 controlled non-randomized, 6 non-controlled prospective, 2 qualitative studies, 1 case report). When appropriate, methodological quality, absolute risk reduction, number needed to treat, and effect size (ES) were assessed. Overall, although preliminary evidence suggests MM efficacy and safety, conclusive data for MM as a treatment of SUDs are lacking. Significant methodological limitations exist in most studies. Further, it is unclear which persons with SUDs might benefit most from MM. Future trials must be of sufficient sample size to answer a specific clinical question and should target both assessment of effect size and mechanisms of action. PMID:19904664

  5. Bioelectrical impedance analysis: A new tool for assessing fish condition

    USGS Publications Warehouse

    Hartman, Kyle J.; Margraf, F. Joseph; Hafs, Andrew W.; Cox, M. Keith

    2015-01-01

    Bioelectrical impedance analysis (BIA) is commonly used in human health and nutrition fields but has only recently been considered as a potential tool for assessing fish condition. Once BIA is calibrated, it estimates fat/moisture levels and energy content without the need to kill fish. Despite the promise held by BIA, published studies have been divided on whether BIA can provide accurate estimates of body composition in fish. In cases where BIA was not successful, the models lacked the range of fat levels or sample sizes we determined were needed for model success (range of dry fat levels of 29%, n = 60, yielding an R2 of 0.8). Reduced range of fat levels requires an increased sample size to achieve that benchmark; therefore, standardization of methods is needed. Here we discuss standardized methods based on a decade of research, identify sources of error, discuss where BIA is headed, and suggest areas for future research.
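The point about predictor range can be demonstrated with simulated data: with measurement noise held fixed, restricting the range of fat levels lowers the calibration R². Everything below (the linear model, slope, noise level) is hypothetical, chosen only to mirror the n = 60 benchmark in the abstract.

```python
import random

def r_squared(x, y):
    """Coefficient of determination for a simple linear fit of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    syy = sum((yi - my) ** 2 for yi in y)
    return (sxy ** 2) / (sxx * syy)

random.seed(7)

def simulate(fat_range, n=60, noise=2.0):
    """Hypothetical BIA calibration: an 'impedance' reading that is
    linear in dry fat level plus fixed Gaussian measurement noise."""
    fat = [random.uniform(0, fat_range) for _ in range(n)]
    imp = [1.5 * f + random.gauss(0, noise) for f in fat]
    return r_squared(fat, imp)

wide = simulate(29.0)   # wide range of fat levels, as the authors recommend
narrow = simulate(5.0)  # restricted range -> weaker calibration fit
```

With the same noise and sample size, the narrow-range calibration explains much less variance, which is the mechanism behind the authors' benchmark of a 29% fat-level range at n = 60.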

  6. A new tool called DISSECT for analysing large genomic data sets using a Big Data approach

    PubMed Central

    Canela-Xandri, Oriol; Law, Andy; Gray, Alan; Woolliams, John A.; Tenesa, Albert

    2015-01-01

    Large-scale genetic and genomic data are increasingly available and the major bottleneck in their analysis is a lack of sufficiently scalable computational tools. To address this problem in the context of complex traits analysis, we present DISSECT. DISSECT is a new and freely available software tool that exploits the distributed-memory parallel computational architectures of compute clusters to perform a wide range of genomic and epidemiologic analyses, which currently can only be carried out on reduced sample sizes or under restricted conditions. We demonstrate the usefulness of our new tool by addressing the challenge of predicting phenotypes from genotype data in human populations using mixed-linear model analysis. We analyse simulated traits from 470,000 individuals genotyped for 590,004 SNPs in ∼4 h using the combined computational power of 8,400 processor cores. We find that prediction accuracies in excess of 80% of the theoretical maximum could be achieved with large sample sizes. PMID:26657010

  7. Physical characterization of whole and skim dried milk powders.

    PubMed

    Pugliese, Alessandro; Cabassi, Giovanni; Chiavaro, Emma; Paciulli, Maria; Carini, Eleonora; Mucchetti, Germano

    2017-10-01

    The lack of updated knowledge about the physical properties of milk powders prompted us to evaluate selected physical properties (water activity, particle size, density, flowability, solubility and colour) of eleven skim and whole milk powders produced in Europe. These physical properties are crucial both for the management of milk powder during the final steps of the drying process, and for their use as food ingredients. In general, except for the values of water activity, the physical properties of skim and whole milk powders are very different. Particle sizes of the spray-dried skim milk powders, measured as volume and surface mean diameter, were significantly lower than those of the whole milk powders, while the roller-dried sample showed the largest particle size. For all the samples the size distribution was quite narrow, with a span value less than 2. The loose density of skim milk powders was significantly higher than that of whole milk powders (541.36 vs 449.75 kg/m3). Flowability, measured by the Hausner ratio and Carr's index indicators, ranged from passable to poor when evaluated according to pharmaceutical criteria. The insolubility index of the spray-dried skim and whole milk powders, measured as weight of the sediment (from 0.5 to 34.8 mg), allowed a good discrimination of the samples. Colour analysis underlined the relevant contribution of fat content and particle size, resulting in higher lightness (L*) for skim milk powder than whole milk powder, which, on the other hand, showed higher yellowness (b*) and lower greenness (-a*). In conclusion, a detailed knowledge of the functional properties of milk powders may allow the dairy to tailor the products to the user and help the food processor to perform a targeted choice according to the intended use.
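    The two flowability indicators named above are simple functions of loose and tapped bulk density. A minimal sketch; the loose density is close to the skim-milk value reported above, but the tapped density is a hypothetical figure chosen for illustration:

```python
def hausner_ratio(loose, tapped):
    """Hausner ratio: tapped over loose bulk density."""
    return tapped / loose

def carr_index(loose, tapped):
    """Carr's (compressibility) index, in percent."""
    return 100.0 * (tapped - loose) / tapped

loose, tapped = 541.4, 680.0  # kg/m3; tapped value is invented
hr = hausner_ratio(loose, tapped)
ci = carr_index(loose, tapped)
print(f"Hausner ratio = {hr:.2f}, Carr's index = {ci:.1f}%")
```

    On the usual pharmacopoeial scale, a Hausner ratio of 1.26-1.34 (Carr's index 21-25%) is rated "passable" and 1.35-1.45 "poor", consistent with the range reported above.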

  8. Coalescent: an open-science framework for importance sampling in coalescent theory.

    PubMed

    Tewari, Susanta; Spouge, John L

    2015-01-01

    Background. In coalescent theory, computer programs often use importance sampling to calculate likelihoods and other statistical quantities. An importance sampling scheme can exploit human intuition to improve statistical efficiency of computations, but unfortunately, in the absence of general computer frameworks on importance sampling, researchers often struggle to translate new sampling schemes computationally or benchmark against different schemes, in a manner that is reliable and maintainable. Moreover, most studies use computer programs lacking a convenient user interface or the flexibility to meet the current demands of open science. In particular, current computer frameworks can only evaluate the efficiency of a single importance sampling scheme or compare the efficiencies of different schemes in an ad hoc manner. Results. We have designed a general framework (http://coalescent.sourceforge.net; language: Java; License: GPLv3) for importance sampling that computes likelihoods under the standard neutral coalescent model of a single, well-mixed population of constant size over time, following the infinite-sites model of mutation. The framework models the necessary core concepts, comes integrated with several data sets of varying size, implements the standard competing proposals, and integrates tightly with our previous framework for calculating exact probabilities. For a given dataset, it computes the likelihood and provides the maximum likelihood estimate of the mutation parameter. Well-known benchmarks in the coalescent literature validate the accuracy of the framework. The framework provides an intuitive user interface with minimal clutter. For performance, the framework switches automatically to modern multicore hardware, if available. It runs on three major platforms (Windows, Mac and Linux). Extensive tests and coverage make the framework reliable and maintainable. Conclusions. 
In coalescent theory, many studies of computational efficiency consider only effective sample size. Here, we evaluate proposals in the coalescent literature, to discover that the order of efficiency among the three importance sampling schemes changes when one considers running time as well as effective sample size. We also describe a computational technique called "just-in-time delegation" available to improve the trade-off between running time and precision by constructing improved importance sampling schemes from existing ones. Thus, our systems approach is a potential solution to the "2^8 programs problem" highlighted by Felsenstein, because it provides the flexibility to include or exclude various features of similar coalescent models or importance sampling schemes.
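    The effective sample size discussed in the conclusions is a standard diagnostic for importance sampling weights (Kish's formula). A toy sketch with a Gaussian target and proposal, not a coalescent likelihood:

```python
import math
import random

random.seed(0)

def effective_sample_size(weights):
    """Kish effective sample size: (sum w)^2 / sum(w^2)."""
    s = sum(weights)
    return s * s / sum(w * w for w in weights)

def normal_pdf(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# Estimate E[X] under a N(0,1) target using draws from a N(0, 2^2) proposal.
draws = [random.gauss(0.0, 2.0) for _ in range(5000)]
weights = [normal_pdf(x, 0.0, 1.0) / normal_pdf(x, 0.0, 2.0) for x in draws]
estimate = sum(w * x for w, x in zip(weights, draws)) / sum(weights)
ess = effective_sample_size(weights)
print(f"estimate of E[X] = {estimate:.3f}, ESS = {ess:.0f} of {len(draws)} draws")
```

    A mismatched proposal inflates the weight variance and shrinks the ESS below the raw number of draws, which is why ESS alone, without running time, can misrank sampling schemes.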

  9. Effect of maternal body mass index on hormones in breast milk: a systematic review.

    PubMed

    Andreas, Nicholas J; Hyde, Matthew J; Gale, Chris; Parkinson, James R C; Jeffries, Suzan; Holmes, Elaine; Modi, Neena

    2014-01-01

    Maternal Body Mass Index (BMI) is positively associated with infant obesity risk. Breast milk contains a number of hormones that may influence infant metabolism during the neonatal period; these may have additional downstream effects on infant appetite regulatory pathways, thereby influencing propensity towards obesity in later life. To conduct a systematic review of studies examining the association between maternal BMI and the concentration of appetite-regulating hormones in breast milk. Pubmed was searched for studies reporting the association between maternal BMI and leptin, adiponectin, insulin, ghrelin, resistin, obestatin, Peptide YY and Glucagon-Like Peptide 1 in breast milk. Twenty six studies were identified and included in the systematic review. There was a high degree of variability between studies with regard to collection, preparation and analysis of breast milk samples. Eleven of fifteen studies reporting breast milk leptin found a positive association between maternal BMI and milk leptin concentration. Two of nine studies investigating adiponectin found an association between maternal BMI and breast milk adiponectin concentration; however significance was lost in one study following adjustment for time post-partum. No association was seen between maternal BMI and milk adiponectin in the other seven studies identified. Evidence for an association between other appetite regulating hormones and maternal BMI was either inconclusive, or lacking. A positive association between maternal BMI and breast milk leptin concentration is consistently found in most studies, despite variable methodology. Evidence for such an association with breast milk adiponectin concentration, however, is lacking with additional research needed for other hormones including insulin, ghrelin, resistin, obestatin, peptide YY and glucagon-like peptide-1. 
As most current studies have been conducted with small sample sizes, future studies should ensure adequate sample sizes and standardized methodology.

  10. Inadequacy of Conventional Grab Sampling for Remediation Decision-Making for Metal Contamination at Small-Arms Ranges.

    PubMed

    Clausen, J L; Georgian, T; Gardner, K H; Douglas, T A

    2018-01-01

    Research shows grab sampling is inadequate for evaluating military ranges contaminated with energetics because of their highly heterogeneous distribution. Similar studies assessing the heterogeneous distribution of metals at small-arms ranges (SARs) are lacking. To address this, we evaluated whether grab sampling provides appropriate data for performing risk analysis at metal-contaminated SARs characterized with 30-48 grab samples. We evaluated the extractable metal content of Cu, Pb, Sb, and Zn of the field data using a Monte Carlo random resampling with replacement (bootstrapping) simulation approach. Results indicate the 95% confidence interval of the mean for Pb (432 mg/kg) at one site was 200-700 mg/kg with a data range of 5-4500 mg/kg. Considering the U.S. Environmental Protection Agency screening level for lead is 400 mg/kg, the necessity of cleanup at this site is unclear. Resampling based on populations of 7 and 15 samples, sample sizes more realistic for the area, yielded high false-negative rates.
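    The resampling-with-replacement (bootstrap) step described above can be sketched in a few lines. The Pb concentrations below are invented for illustration; only the method mirrors the paper:

```python
import random
import statistics

random.seed(42)

# Hypothetical Pb grab-sample concentrations (mg/kg) spanning a wide range,
# as is typical of heterogeneous small-arms-range soils.
pb = [5, 12, 30, 55, 90, 150, 220, 310, 400, 480,
      560, 700, 850, 1100, 1500, 2100, 2900, 3600, 4200, 4500]

def bootstrap_ci(data, n_boot=10_000, alpha=0.05):
    """Percentile bootstrap confidence interval for the mean."""
    means = sorted(
        statistics.mean(random.choices(data, k=len(data)))
        for _ in range(n_boot)
    )
    return means[int(n_boot * alpha / 2)], means[int(n_boot * (1 - alpha / 2))]

lo, hi = bootstrap_ci(pb)
print(f"sample mean = {statistics.mean(pb):.0f} mg/kg, 95% CI = ({lo:.0f}, {hi:.0f})")
```

    When the interval straddles a screening level (such as 400 mg/kg for Pb), the grab-sample data cannot settle whether cleanup is required.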

  11. Sexual dimorphism in human cranial trait scores: effects of population, age, and body size.

    PubMed

    Garvin, Heather M; Sholts, Sabrina B; Mosca, Laurel A

    2014-06-01

    Sex estimation from the skull is commonly performed by physical and forensic anthropologists using a five-trait scoring system developed by Walker. Despite the popularity of this method, validation studies evaluating its accuracy across a variety of samples are lacking. Furthermore, it remains unclear what other intrinsic or extrinsic variables are related to the expression of these traits. In this study, cranial trait scores and postcranial measurements were collected from four diverse population groups (U.S. Whites, U.S. Blacks, medieval Nubians, and Arikara Native Americans) following Walker's protocols (total n = 499). Univariate and multivariate analyses were utilized to evaluate the accuracy of these traits in sex estimation, and to test for the effects of population, age, and body size on trait expressions. Results revealed significant effects of population on all trait scores. Sample-specific correct sex classification rates ranged from 74% to 94%, with an overall accuracy of 85% for the pooled sample. Classification performance varied among the traits (best for glabella and mastoid scores and worst for nuchal scores). Furthermore, correlations between traits were weak or nonsignificant, suggesting that different factors may influence individual traits. Some traits displayed correlations with age and/or postcranial size that were significant but weak, and within-population analyses did not reveal any consistent relationships between these traits across all groups. These results indicate that neither age nor body size plays a large role in trait expression, and thus they need not be incorporated into sex estimation methods. Copyright © 2014 Wiley Periodicals, Inc.

  12. On the analysis of very small samples of Gaussian repeated measurements: an alternative approach.

    PubMed

    Westgate, Philip M; Burchett, Woodrow W

    2017-03-15

    The analysis of very small samples of Gaussian repeated measurements can be challenging. First, due to a very small number of independent subjects contributing outcomes over time, statistical power can be quite small. Second, nuisance covariance parameters must be appropriately accounted for in the analysis in order to maintain the nominal test size. However, available statistical strategies that ensure valid statistical inference may lack power, whereas more powerful methods may have the potential for inflated test sizes. Therefore, we explore an alternative approach to the analysis of very small samples of Gaussian repeated measurements, with the goal of maintaining valid inference while also improving statistical power relative to other valid methods. This approach uses generalized estimating equations with a bias-corrected empirical covariance matrix that accounts for all small-sample aspects of nuisance correlation parameter estimation in order to maintain valid inference. Furthermore, the approach utilizes correlation selection strategies with the goal of choosing the working structure that will result in the greatest power. In our study, we show that when accurate modeling of the nuisance correlation structure impacts the efficiency of regression parameter estimation, this method can improve power relative to existing methods that yield valid inference. Copyright © 2017 John Wiley & Sons, Ltd.
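    The test-size issue described above can be illustrated in a setting much simpler than GEE: with very few independent subjects, a large-sample (z) critical value inflates the type I error, while the small-sample (t) critical value restores it. This is an analogy chosen for illustration, not the authors' bias-corrected empirical covariance method:

```python
import math
import random
import statistics

random.seed(3)

def rejection_rate(n_subjects, crit, reps=20000):
    """Empirical type I error of a test of zero mean under the null."""
    hits = 0
    for _ in range(reps):
        xs = [random.gauss(0.0, 1.0) for _ in range(n_subjects)]
        se = statistics.stdev(xs) / math.sqrt(n_subjects)
        hits += abs(statistics.mean(xs) / se) > crit
    return hits / reps

# 8 subjects: naive z critical value (1.96) vs t critical value (2.365, 7 df).
rate_z = rejection_rate(8, 1.96)
rate_t = rejection_rate(8, 2.365)
print(f"test size with z critical value: {rate_z:.3f}")
print(f"test size with t critical value: {rate_t:.3f}")
```

    The z-based test rejects a true null well above the nominal 5%, which is the kind of inflation small-sample corrections are designed to remove.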

  13. Food selection and feeding relationships of yellow perch 'Perca flavescens' (mitchell), white bass 'Morone chrysops' (rafinesque), freshwater drum 'Aplodinotus grunniens' (rafinesque), and goldfish 'Carassius auratus' (linneaus) in western Lake Erie. Interim report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kenaga, D.E.; Cole, R.A.

    1975-10-01

    The study was undertaken as part of an investigation of the impact of once-through cooling at a large power plant in western Lake Erie and is an attempt to assess the relationship among fish based on foods consumed. Potential food organisms and stomach contents of yellow perch, white bass, freshwater drum and goldfish were sampled and compared over a two-year period. On the basis of differences in food size alone, young-of-the-year fish did not appear to be in competition, but as they became larger, all but goldfish consumed the same mean size foods. Within a fish species, mean prey size varied little in fish older than age class zero. Goldfish differed markedly by lacking the prey size selectivity demonstrated by the other fish species. Some ramifications of food size and prey selectivity in relation to trophic dynamics, feeding efficiency, composition and distribution of fish species, and the use of cooling water by large power plants and their possible impact upon prey sizes are discussed. (GRA)

  14. Size exclusion chromatography with online ICP-MS enables molecular weight fractionation of dissolved phosphorus species in water samples.

    PubMed

    Venkatesan, Arjun K; Gan, Wenhui; Ashani, Harsh; Herckes, Pierre; Westerhoff, Paul

    2018-04-15

    Phosphorus (P) is an important and often limiting element in terrestrial and aquatic ecosystems. A lack of understanding of its distribution and structures in the environment limits the design of effective P mitigation and recovery approaches. Here we developed a robust method employing size exclusion chromatography (SEC) coupled to an ICP-MS to determine the molecular weight (MW) distribution of P in environmental samples. The most abundant fraction of P varied widely in different environmental samples: (i) orthophosphate was the dominant fraction (93-100%) in one lake, two aerosols and DOC isolate samples, (ii) species of 400-600 Da range were abundant (74-100%) in two surface waters, and (iii) species of 150-350 Da range were abundant in wastewater effluents. SEC-DOC of the aqueous samples using a similar SEC column showed overlapping peaks for the 400-600 Da species in two surface waters, and for >20 kDa species in the effluents, suggesting that these fractions are likely associated with organic matter. The MW resolution and performance of SEC-ICP-MS agreed well with the time-integrated results obtained using a conventional ultrafiltration method. Results show that SEC in combination with ICP-MS and DOC has the potential to be a powerful and easy-to-use method in identifying unknown fractions of P in the environment. Copyright © 2018 Elsevier Ltd. All rights reserved.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    A test program to collect and analyze size-fractionated stack gas particulate samples for selected inorganic hazardous air pollutants (HAPs) was conducted. Specific goals of the program are (1) the collection of one-gram quantities of size-fractionated stack gas particulate matter for bulk (total) and surface chemical characterization, and (2) the determination of the relationship between particle size, bulk and surface (leachable) composition, and unit load. The information obtained from this program identifies the effects of unit load, particle size, and wet FGD system operation on the relative toxicological effects of exposure to particulate emissions. Field testing was conducted in two phases. The Phase I field program was performed over the period of August 24 through September 20, 1992, at the Tennessee Valley Authority Widows Creek Unit 8 Power Station, located near Stevenson (Jackson County), Alabama, on the Tennessee River. Sampling activities for Phase II were conducted from September 11 through October 14, 1993. Widows Creek Unit 8 is a 575-megawatt plant that uses bituminous coal averaging 3.7% sulfur and 13% ash. Downstream of the boiler, a venturi wet scrubbing system is used for control of both sulfur dioxide and particulate emissions. There is no electrostatic precipitator (ESP) in this system. This system is atypical and represents only about 5% of the US utility industry. However, this site was chosen for this study because of the lack of information available for this particulate emission control system.

  16. Improving the accuracy of effect-directed analysis: the role of bioavailability.

    PubMed

    You, Jing; Li, Huizhen

    2017-12-13

    Aquatic ecosystems have been suffering from contamination by multiple stressors. Traditional chemical-based risk assessment usually fails to explain the toxicity contributions from contaminants that are not regularly monitored or that have an unknown identity. Diagnosing the causes of noted adverse outcomes in the environment is of great importance in ecological risk assessment, and in this regard effect-directed analysis (EDA) has been designed to fulfill this purpose. The EDA approach is now increasingly used in aquatic risk assessment owing to its ability to achieve effect-directed nontarget analysis; however, a lack of environmental relevance makes conventional EDA less favorable. In particular, ignoring bioavailability in EDA may cause a biased and even erroneous identification of causative toxicants in a mixture. Taking bioavailability into consideration is therefore of great importance to improve the accuracy of EDA diagnosis. The present article reviews the current status and applications of EDA practices that incorporate bioavailability. The use of biological samples is the most obvious way to include bioavailability in EDA applications, but its development is limited due to the small sample size and lack of evidence for metabolizable compounds. Bioavailability/bioaccessibility-based extraction (bioaccessibility-directed and partitioning-based extraction) and passive-dosing techniques are recommended to integrate bioavailability into EDA diagnosis in abiotic samples. Lastly, the future perspectives of expanding and standardizing the use of biological samples and bioavailability-based techniques in EDA are discussed.

  17. Genome-wide meta-analyses of stratified depression in Generation Scotland and UK Biobank.

    PubMed

    Hall, Lynsey S; Adams, Mark J; Arnau-Soler, Aleix; Clarke, Toni-Kim; Howard, David M; Zeng, Yanni; Davies, Gail; Hagenaars, Saskia P; Maria Fernandez-Pujals, Ana; Gibson, Jude; Wigmore, Eleanor M; Boutin, Thibaud S; Hayward, Caroline; Generation Scotland; Porteous, David J; Deary, Ian J; Thomson, Pippa A; Haley, Chris S; McIntosh, Andrew M

    2018-01-10

    Few replicable genetic associations for Major Depressive Disorder (MDD) have been identified. Recent studies of MDD have identified common risk variants by using a broader phenotype definition in very large samples, or by reducing phenotypic and ancestral heterogeneity. We sought to ascertain whether it is more informative to maximize the sample size using data from all available cases and controls, or to use a sex- or recurrence-stratified subset of affected individuals. To test this, we compared heritability estimates, genetic correlation with other traits, variance explained by MDD polygenic score, and variants identified by genome-wide meta-analysis for broad and narrow MDD classifications in two large British cohorts, Generation Scotland and UK Biobank. Genome-wide meta-analysis of MDD in males yielded one genome-wide significant locus on 3p22.3, with three genes in this region (CRTAP, GLB1, and TMPPE) demonstrating a significant association in gene-based tests. Meta-analyzed MDD, recurrent MDD and female MDD yielded equivalent heritability estimates, showed no detectable difference in association with polygenic scores, and were each genetically correlated with six health-correlated traits (neuroticism, depressive symptoms, subjective well-being, MDD, a cross-disorder phenotype and Bipolar Disorder). Whilst stratified GWAS analysis revealed a genome-wide significant locus for male MDD, the lack of independent replication and the consistent pattern of results in other MDD classifications suggest that phenotypic stratification by recurrence or sex is weakly justified at currently available sample sizes. Based upon existing studies and our findings, the strategy of maximizing sample sizes is likely to provide the greater gain.

  18. Preparation of metagenomic libraries from naturally occurring marine viruses.

    PubMed

    Solonenko, Sergei A; Sullivan, Matthew B

    2013-01-01

    Microbes are now well recognized as major drivers of the biogeochemical cycling that fuels the Earth, and their viruses (phages) are known to be abundant and important in microbial mortality, horizontal gene transfer, and modulating microbial metabolic output. Investigation of environmental phages has been frustrated by an inability to culture the vast majority of naturally occurring diversity coupled with the lack of robust, quantitative, culture-independent methods for studying this uncultured majority. However, for double-stranded DNA phages, a quantitative viral metagenomic sample-to-sequence workflow now exists. Here, we review these advances with special emphasis on the technical details of preparing DNA sequencing libraries for metagenomic sequencing from environmentally relevant low-input DNA samples. Library preparation steps broadly involve manipulating the sample DNA by fragmentation, end repair and adaptor ligation, size fractionation, and amplification. One critical area of future research and development is parallel advances for alternate nucleic acid types such as single-stranded DNA and RNA viruses that are also abundant in nature. Combinations of recent advances in fragmentation (e.g., acoustic shearing and tagmentation), ligation reactions (adaptor-to-template ratio reference table availability), size fractionation (non-gel-sizing), and amplification (linear amplification for deep sequencing and linker amplification protocols) enhance our ability to generate quantitatively representative metagenomic datasets from low-input DNA samples. Such datasets are already providing new insights into the role of viruses in marine systems and will continue to do so as new environments are explored and synergies and paradigms emerge from large-scale comparative analyses. © 2013 Elsevier Inc. All rights reserved.

  19. Disease-Concordant Twins Empower Genetic Association Studies.

    PubMed

    Tan, Qihua; Li, Weilong; Vandin, Fabio

    2017-01-01

    Genome-wide association studies with moderate sample sizes are underpowered, especially when testing SNP alleles with low allele counts, a situation that may lead to a high frequency of false-positive results and lack of replication in independent studies. Related individuals, such as twin pairs concordant for a disease, should confer increased power in genetic association analysis because of their genetic relatedness. We conducted a computer simulation study to explore the power advantage of the disease-concordant twin design, which uses singletons from disease-concordant twin pairs as cases and ordinary healthy samples as controls. We examined the power gain of the twin-based design for various scenarios (i.e., cases from monozygotic and dizygotic twin pairs concordant for a disease) and compared the power with the ordinary case-control design with cases collected from the unrelated patient population. Simulation was done by assigning various allele frequencies and allelic relative risks for different modes of genetic inheritance. In general, for achieving a power estimate of 80%, the sample sizes needed for dizygotic and monozygotic twin cases were one half and one fourth of the sample size of an ordinary case-control design, with variations depending on genetic mode. Importantly, the enriched power for dizygotic twins also applies to disease-concordant sibling pairs, which largely extends the application of the concordant twin design. Overall, our simulation revealed a high value of disease-concordant twins in genetic association studies and encourages the use of genetically related individuals for highly efficiently identifying both common and rare genetic variants underlying human complex diseases without increasing laboratory cost. © 2016 John Wiley & Sons Ltd/University College London.
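    The kind of power simulation described can be sketched with a two-group allelic chi-square test. The allele frequency, relative risk, and sample sizes below are illustrative; halving the number of cases stands in for the roughly twofold enrichment reported for dizygotic-twin cases:

```python
import random

random.seed(1)
CHI2_CRIT = 3.841  # 5% critical value of chi-square with 1 df

def binom(n, p):
    # Binomial draw via Bernoulli summation (stdlib only).
    return sum(random.random() < p for _ in range(n))

def power(n_case, n_ctrl, p_ctrl, rr, reps=1000):
    """Monte Carlo power of a 2x2 allelic chi-square test at alpha = 0.05."""
    # Case risk-allele frequency implied by an allelic relative risk rr.
    p_case = p_ctrl * rr / (p_ctrl * rr + 1 - p_ctrl)
    hits = 0
    for _ in range(reps):
        a = binom(2 * n_case, p_case)   # risk alleles in cases
        b = 2 * n_case - a
        c = binom(2 * n_ctrl, p_ctrl)   # risk alleles in controls
        d = 2 * n_ctrl - c
        n = a + b + c + d
        if min(a + c, b + d) == 0:
            continue  # degenerate table, no test possible
        chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
        hits += chi2 > CHI2_CRIT
    return hits / reps

p_full = power(500, 500, 0.2, 1.4)   # unrelated cases
p_half = power(250, 500, 0.2, 1.4)   # half as many (twin-enriched) cases
print(f"power with 500 cases: {p_full:.2f}")
print(f"power with 250 cases: {p_half:.2f}")
```

    If enriched (twin) cases carry roughly the power of twice their number of unrelated cases, runs like these bracket the trade-off the simulation study quantifies.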

  20. Survey design research: a tool for answering nursing research questions.

    PubMed

    Siedlecki, Sandra L; Butler, Robert S; Burchill, Christian N

    2015-01-01

    The clinical nurse specialist is in a unique position to identify and study clinical problems in need of answers, but lack of time and resources may discourage nurses from conducting research. However, some research methods that are neither time-intensive nor cost-prohibitive are available to the clinical nurse specialist. The purpose of this article is to explain the utility of survey methodology for answering a number of nursing research questions. The article covers survey content, reliability and validity issues, sample size considerations, and methods of survey delivery.
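    One of the sample size considerations the article covers, estimating a proportion to within a chosen margin of error, has a simple closed form. A generic sketch (the numbers are standard textbook values, not taken from the article):

```python
import math

def sample_size_proportion(margin, p=0.5, z=1.96):
    """Required n to estimate a proportion within +/- margin at 95% confidence."""
    return math.ceil(z * z * p * (1 - p) / (margin * margin))

# p = 0.5 is the conservative planning value (it maximizes p * (1 - p)).
print(sample_size_proportion(0.05))  # 5% margin of error
print(sample_size_proportion(0.03))  # 3% margin of error
```

    Tightening the margin from 5% to 3% nearly triples the required responses, which is why the margin of error is usually the first design decision in a survey.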

  1. Internal Control, CPA Recognition and Performance Consequence: Evidence from Chinese Real Estate Enterprises

    NASA Astrophysics Data System (ADS)

    Zhang, Chuan; Zhang, Lili; Geng, Yi

    In recent years, internal control has attracted increasing attention worldwide. However, whether internal control can improve business efficiency still lacks empirical support. Based on a sample of 146 Chinese real estate enterprises, this study analyses the degree to which CPAs recognize firms' internal control implementation, and its performance consequences. The evidence suggests that CPAs are able to evaluate firms' internal control implementation accurately, and that the more fully internal control is implemented, the better the enterprise's performance.

  2. The sonic window: second generation results

    NASA Astrophysics Data System (ADS)

    Walker, William F.; Fuller, Michael I.; Brush, Edward V.; Eames, Matthew D. C.; Owen, Kevin; Ranganathan, Karthik; Blalock, Travis N.; Hossack, John A.

    2006-03-01

    Medical Ultrasound Imaging is widely used clinically because of its relatively low cost, portability, lack of ionizing radiation, and real-time nature. However, even with these advantages ultrasound has failed to permeate the broad array of clinical applications where its use could be of value. A prime example of this untapped potential is the routine use of ultrasound to guide intravenous access. In this particular application existing systems lack the required portability, low cost, and ease-of-use required for widespread acceptance. Our team has been working for a number of years to develop an extremely low-cost, pocket-sized, and intuitive ultrasound imaging system that we refer to as the "Sonic Window." We have previously described the first generation Sonic Window prototype that was a bench-top device using a 1024 element, fully populated array operating at a center frequency of 3.3 MHz. Through a high degree of custom front-end integration combined with multiplexing down to a 2 channel PC based digitizer this system acquired a full set of RF data over a course of 512 transmit events. While initial results were encouraging, this system exhibited limitations resulting from low SNR, relatively coarse array sampling, and relatively slow data acquisition. We have recently begun assembling a second-generation Sonic Window system. This system uses a 3600 element fully sampled array operating at 5.0 MHz with a 300 micron element pitch. This system extends the integration of the first generation system to include front-end protection, pre-amplification, a programmable bandpass filter, four sample and holds, and four A/D converters for all 3600 channels in a set of custom integrated circuits with a combined area smaller than the 1.8 x 1.8 cm footprint of the transducer array. We present initial results from this front-end and present benchmark results from a software beamformer implemented on the Analog Devices BF-561 DSP. 
We discuss our immediate plans for further integration and testing. This second prototype represents a major reduction in size and forms the foundation of a fully functional, fully integrated, pocket sized prototype.
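    Software beamformers like the one benchmarked above are typically delay-and-sum designs. A toy sketch with an idealized one-sample impulse per channel and an invented 8-element sub-aperture (only the 300 micron pitch comes from the text); this is not the Sonic Window implementation:

```python
import math

C = 1540.0        # assumed speed of sound in tissue, m/s
FS = 50e6         # assumed RF sample rate, Hz
PITCH = 300e-6    # element pitch from the text, m
N_ELEM = 8        # small sub-aperture for illustration

elem_x = [(i - (N_ELEM - 1) / 2) * PITCH for i in range(N_ELEM)]
src = (0.0, 10e-3)  # point scatterer 10 mm deep, on axis

# Synthesize per-element RF data: a unit impulse at each round-trip delay
# (transmit from the array centre, receive on each element).
n_samples = 2048
rf = [[0.0] * n_samples for _ in range(N_ELEM)]
for e, ex in enumerate(elem_x):
    d = src[1] + math.hypot(src[0] - ex, src[1])  # centre->point + point->element
    rf[e][round(d / C * FS)] = 1.0

def das(point):
    """Delay-and-sum focus at a candidate (x, z) point."""
    total = 0.0
    for e, ex in enumerate(elem_x):
        d = point[1] + math.hypot(point[0] - ex, point[1])
        idx = round(d / C * FS)
        if 0 <= idx < n_samples:
            total += rf[e][idx]
    return total

print("focus at the scatterer:", das((0.0, 10e-3)))  # delays align coherently
print("focus 1 mm too deep   :", das((0.0, 11e-3)))  # delays misalign
```

    Sweeping the focal point over a grid of (x, z) positions and recording the summed amplitude is, in essence, how such a beamformer builds an image.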

  3. Processing, Microstructure, and Material Property Relationships Following Friction Stir Welding of Oxide Dispersion Strengthened Steels

    DTIC Science & Technology

    2013-09-01

    …(HI=2.75), (b) 400 RPM/100 MMPM (HI=4), (c) 300 RPM/50 MMPM (HI=6), and (d) 500 RPM/25 MMPM (HI=10), showing an increase in grain size as HI is increased… [flattened table fragment; the two leading, unlabeled columns appear to be RPM and MMPM] RPM | MMPM | Heat Index | Weld Quality | Weld Penetration: 200 | 50 | 4 | Lack of Consolidation | Incomplete; 300 | 50 | 6 | Defect-free | Full; 300 | 100 | 3 | Lack of Consolidation | … Specifically, the grain size for HI=6 (300 RPM/50 MMPM) is less than the grain size for HI=4 (400 RPM/100 MMPM); however, grain size did…

  4. OpenMSI Arrayed Analysis Toolkit: Analyzing Spatially Defined Samples Using Mass Spectrometry Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    de Raad, Markus; de Rond, Tristan; Rübel, Oliver

    Mass spectrometry imaging (MSI) has primarily been applied in localizing biomolecules within biological matrices. Although well-suited, the application of MSI for comparing thousands of spatially defined spotted samples has been limited. One reason for this is a lack of suitable and accessible data processing tools for the analysis of large arrayed MSI sample sets. This paper presents the OpenMSI Arrayed Analysis Toolkit (OMAAT), a software package that addresses the challenges of analyzing spatially defined samples in MSI data sets. OMAAT is written in Python and is integrated with OpenMSI (http://openmsi.nersc.gov), a platform for storing, sharing, and analyzing MSI data. By using a web-based Python notebook (Jupyter), OMAAT is accessible to anyone without programming experience yet allows experienced users to leverage all features. OMAAT was evaluated by analyzing an MSI data set of a high-throughput glycoside hydrolase activity screen comprising 384 samples arrayed onto a NIMS surface at a 450 μm spacing, decreasing analysis time >100-fold while maintaining robust spot-finding. The utility of OMAAT was demonstrated for screening metabolic activities of different-sized soil particles, including hydrolysis of sugars, revealing a pattern of size-dependent activities. These results introduce OMAAT as an effective toolkit for analyzing spatially defined samples in MSI. OMAAT runs on all major operating systems, and the source code can be obtained from the following GitHub repository: https://github.com/biorack/omaat.

  5. A survey of FRAXE allele sizes in three populations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhong, N.; Ju, W.; Curley, D.

    1996-08-09

    FRAXE is a fragile site located at Xq27-8, which contains polymorphic triplet GCC repeats associated with a CpG island. Similar to FRAXA, expansion of the GCC repeats results in an abnormal methylation of the CpG island and is associated with a mild mental retardation syndrome (FRAXE-MR). We surveyed the GCC repeat alleles of FRAXE from 3 populations. A total of 665 X chromosomes including 416 from a New York Euro-American sample (259 normal and 157 with FRAXA mutations), 157 from a Chinese sample (144 normal and 13 FRAXA), and 92 from a Finnish sample (56 normal and 36 FRAXA) were analyzed by polymerase chain reaction. Twenty-seven alleles, ranging from 4 to 39 GCC repeats, were observed. The modal repeat number was 16 in the New York and Finnish samples and accounted for 24% of all the chromosomes tested (162/665). The modal repeat number in the Chinese sample was 18. A founder effect for FRAXA was suggested among the Finnish FRAXA samples in that 75% had the FRAXE 16 repeat allele versus only 30% of controls. Sequencing of the FRAXE region showed no imperfections within the GCC repeat region, such as those commonly seen in FRAXA. The smaller size and limited range of repeats and the lack of imperfections suggest the molecular mechanisms underlying FRAXE triplet mutations may be different from those underlying FRAXA. 27 refs., 4 figs., 1 tab.

  6. Drivers and Spatio-Temporal Extent of Hyporheic Patch Variation: Implications for Sampling

    PubMed Central

    Braun, Alexander; Auerswald, Karl; Geist, Juergen

    2012-01-01

    The hyporheic zone in stream ecosystems is a heterogeneous key habitat for species across many taxa. Consequently, it attracts high attention among freshwater scientists, but generally applicable guidelines on sampling strategies are lacking. Thus, the objective of this study was to develop and validate such sampling guidelines. Applying geostatistical analysis, we quantified the spatio-temporal variability of parameters that characterize the physico-chemical substratum conditions in the hyporheic zone. We investigated eight stream reaches in six small streams that are typical of the majority of temperate areas. Data were collected on two occasions in six stream reaches (development data), and once in two additional reaches, after one year (validation data). In this study, the term spatial variability refers to patch contrast (patch-to-patch variance) and patch size (spatial extent of a patch). Patch contrast of hyporheic parameters (specific conductance, pH and dissolved oxygen) increased with macrophyte cover (r2 = 0.95, p<0.001), while patch size of hyporheic parameters decreased from 6 to 2 m with increasing sinuosity of the stream course (r2 = 0.91, p<0.001), irrespective of the time of year. Since the spatial variability of hyporheic parameters varied between stream reaches, our results suggest that sampling design should be adapted to suit specific stream reaches. The distance between sampling sites should be inversely related to the sinuosity, while the number of samples should be related to macrophyte cover. PMID:22860053

  7. Clustering Methods with Qualitative Data: A Mixed Methods Approach for Prevention Research with Small Samples

    PubMed Central

    Henry, David; Dymnicki, Allison B.; Mohatt, Nathaniel; Allen, James; Kelly, James G.

    2016-01-01

    Qualitative methods potentially add depth to prevention research, but can produce large amounts of complex data even with small samples. Studies conducted with culturally distinct samples often produce voluminous qualitative data, but may lack sufficient sample sizes for sophisticated quantitative analysis. Currently lacking in mixed methods research are methods allowing for more fully integrating qualitative and quantitative analysis techniques. Cluster analysis can be applied to coded qualitative data to clarify the findings of prevention studies by aiding efforts to reveal such things as the motives of participants for their actions and the reasons behind counterintuitive findings. By clustering groups of participants with similar profiles of codes in a quantitative analysis, cluster analysis can serve as a key component in mixed methods research. This article reports two studies. In the first study, we conduct simulations to test the accuracy of cluster assignment using three different clustering methods with binary data as produced when coding qualitative interviews. Results indicated that hierarchical clustering, K-Means clustering, and latent class analysis produced similar levels of accuracy with binary data, and that the accuracy of these methods did not decrease with samples as small as 50. Whereas the first study explores the feasibility of using common clustering methods with binary data, the second study provides a “real-world” example using data from a qualitative study of community leadership connected with a drug abuse prevention project. We discuss the implications of this approach for conducting prevention research, especially with small samples and culturally distinct communities. PMID:25946969

  8. Clustering Methods with Qualitative Data: a Mixed-Methods Approach for Prevention Research with Small Samples.

    PubMed

    Henry, David; Dymnicki, Allison B; Mohatt, Nathaniel; Allen, James; Kelly, James G

    2015-10-01

    Qualitative methods potentially add depth to prevention research but can produce large amounts of complex data even with small samples. Studies conducted with culturally distinct samples often produce voluminous qualitative data but may lack sufficient sample sizes for sophisticated quantitative analysis. Currently lacking in mixed-methods research are methods allowing for more fully integrating qualitative and quantitative analysis techniques. Cluster analysis can be applied to coded qualitative data to clarify the findings of prevention studies by aiding efforts to reveal such things as the motives of participants for their actions and the reasons behind counterintuitive findings. By clustering groups of participants with similar profiles of codes in a quantitative analysis, cluster analysis can serve as a key component in mixed-methods research. This article reports two studies. In the first study, we conduct simulations to test the accuracy of cluster assignment using three different clustering methods with binary data as produced when coding qualitative interviews. Results indicated that hierarchical clustering, K-means clustering, and latent class analysis produced similar levels of accuracy with binary data and that the accuracy of these methods did not decrease with samples as small as 50. Whereas the first study explores the feasibility of using common clustering methods with binary data, the second study provides a "real-world" example using data from a qualitative study of community leadership connected with a drug abuse prevention project. We discuss the implications of this approach for conducting prevention research, especially with small samples and culturally distinct communities.
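    The first study's simulation setup, clustering binary code profiles with a sample of 50, can be sketched as follows. This is an illustration under assumed parameters (the group endorsement probabilities, the deterministic initialization), not the authors' simulation code:

```python
import numpy as np

# Illustrative sketch (not the authors' code): K-means on binary "code"
# profiles, as when clustering coded qualitative interviews, n = 50.

rng = np.random.default_rng(0)
n_per, n_codes = 25, 10

# Group A tends to endorse the first five codes, group B the last five.
p_a = np.array([.9] * 5 + [.1] * 5)
p_b = np.array([.1] * 5 + [.9] * 5)
X = np.vstack([rng.random((n_per, n_codes)) < p_a,
               rng.random((n_per, n_codes)) < p_b]).astype(float)
truth = np.repeat([0, 1], n_per)

def kmeans(X, k, centers, n_iter=50):
    """Minimal Lloyd's algorithm: alternate assignment and mean update."""
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels

# Deterministic initialization for the sketch: one seed point per group.
labels = kmeans(X, 2, centers=X[[0, n_per]].copy())

# Cluster numbering is arbitrary, so score the better of the two matchings.
acc = max(np.mean(labels == truth), np.mean(labels != truth))
print(f"accuracy with n = {2 * n_per}: {acc:.2f}")
```

    With clearly separated code profiles like these, recovery stays high even at n = 50, consistent with the article's finding that accuracy does not collapse for small samples.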

  9. On the quantification and efficient propagation of imprecise probabilities resulting from small datasets

    NASA Astrophysics Data System (ADS)

    Zhang, Jiaxin; Shields, Michael D.

    2018-01-01

    This paper addresses the problem of uncertainty quantification and propagation when data for characterizing probability distributions are scarce. We propose a methodology wherein the full uncertainty associated with probability model form and parameter estimation is retained and efficiently propagated. This is achieved by applying the information-theoretic multimodel inference method to identify plausible candidate probability densities, along with the probability that each candidate is the best model in the Kullback-Leibler sense. The joint parameter densities for each plausible model are then estimated using Bayes' rule. We then propagate this full set of probability models by estimating an optimal importance sampling density that is representative of all plausible models, propagating this density, and reweighting the samples according to each of the candidate probability models. This is in contrast with conventional methods that try to identify a single probability model that encapsulates the full uncertainty caused by lack of data and consequently underestimate uncertainty. The result is a complete probabilistic description of both aleatory and epistemic uncertainty achieved with several orders of magnitude reduction in computational cost. It is shown how the model can be updated to adaptively accommodate added data and added candidate probability models. The method is applied to the uncertainty analysis of plate buckling strength, where it is demonstrated how dataset size affects the confidence (or lack thereof) we can place in statistical estimates of response when data are lacking.
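    The multimodel-inference step can be illustrated with a small sketch that uses Akaike weights as the probability that each candidate distribution is best in the Kullback-Leibler sense. The candidate families, the data, and the fixed-location fitting below are assumptions for illustration, not the paper's actual models:

```python
import numpy as np
from scipy import stats

# Sketch of the multimodel idea (assumed details, not the authors' code):
# with scarce data, keep every plausible distribution, weighted by AIC,
# instead of committing to a single "best" model.

rng = np.random.default_rng(1)
data = rng.gamma(shape=3.0, scale=2.0, size=20)   # small dataset

candidates = {"gamma": stats.gamma,
              "lognorm": stats.lognorm,
              "weibull": stats.weibull_min}

aic = {}
for name, dist in candidates.items():
    params = dist.fit(data, floc=0)        # loc fixed: 2 free parameters
    loglik = np.sum(dist.logpdf(data, *params))
    aic[name] = 2 * 2 - 2 * loglik

# Akaike weights: probability that each candidate is the KL-best model.
best = min(aic.values())
weights = {k: np.exp(-(v - best) / 2) for k, v in aic.items()}
total = sum(weights.values())
weights = {k: w / total for k, w in weights.items()}
print(weights)  # no single model receives all of the weight
```

    In the paper's framework these model probabilities, together with the parameter posteriors, define the set of plausible models that the importance-sampling density must cover.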

  10. A practical approach to the treatment of depression in patients with chronic kidney disease and end-stage renal disease.

    PubMed

    Hedayati, S Susan; Yalamanchili, Venkata; Finkelstein, Fredric O

    2012-02-01

    Depression is a common, under-recognized, and under-treated problem that is independently associated with increased morbidity and mortality in CKD patients. However, only a minority of CKD patients with depression are treated with antidepressant medications or nonpharmacologic therapy. Reasons for low treatment rates include a lack of properly controlled trials that support or refute efficacy and safety of various treatment regimens in CKD patients. The aim of this manuscript is to provide a comprehensive review of studies exploring depression treatment options in CKD. Observational studies as well as small trials suggest that certain serotonin-selective reuptake inhibitors may be safe to use in patients with advanced CKD and ESRD. These studies were limited by small sample sizes, lack of placebo control, and lack of formal assessment for depression diagnosis. Nonpharmacologic treatments were explored in selected ESRD samples. The most promising data were reported for frequent hemodialysis and cognitive behavioral therapy. Alternative proposed therapies include exercise training regimens, treatment of anxiety, and music therapy. Given the association of depression with cardiovascular events and mortality, and the excessive rates of cardiovascular death in CKD, it becomes imperative to not only investigate whether treatment of depression is efficacious, but also whether it would result in a reduction in morbidity and mortality in this patient population.

  11. Measurement of Circumstellar Disk Sizes in the Upper Scorpius OB Association with ALMA

    NASA Astrophysics Data System (ADS)

    Barenfeld, Scott A.; Carpenter, John M.; Sargent, Anneila I.; Isella, Andrea; Ricci, Luca

    2017-12-01

    We present detailed modeling of the spatial distributions of gas and dust in 57 circumstellar disks in the Upper Scorpius OB Association observed with ALMA at submillimeter wavelengths. We fit power-law models to the dust surface density and CO J = 3–2 surface brightness to measure the radial extent of dust and gas in these disks. We found that these disks are extremely compact: the 25 highest signal-to-noise disks have a median dust outer radius of 21 au, assuming an R^(-1) dust surface density profile. Our lack of CO detections in the majority of our sample is consistent with these small disk sizes assuming the dust and CO share the same spatial distribution. Of seven disks in our sample with well-constrained dust and CO radii, four appear to be more extended in CO, although this may simply be due to the higher optical depth of the CO. Comparison of the Upper Sco results with recent analyses of disks in Taurus, Ophiuchus, and Lupus suggests that the dust disks in Upper Sco may be approximately three times smaller in size than their younger counterparts, although we caution that a more uniform analysis of the data across all regions is needed. We discuss the implications of these results for disk evolution.

  12. Body Size, Fecundity, and Sexual Size Dimorphism in the Neotropical Cricket Macroanaxipha macilenta (Saussure) (Orthoptera: Gryllidae).

    PubMed

    Cueva Del Castillo, R

    2015-04-01

    Body size is directly or indirectly correlated with fitness. The body size that conveys maximal fitness often differs between sexes. Sexual size dimorphism (SSD) evolves because body size tends to be related to reproductive success through different pathways in males and females. In general, female insects are larger than males, suggesting that natural selection for high female fecundity could be stronger than sexual selection in males. I assessed the role of body size and fecundity in SSD in the Neotropical cricket Macroanaxipha macilenta (Saussure). This species shows an SSD bias toward males. Females did not present a correlation between number of eggs and body size. Nonetheless, there were fluctuations in the number of eggs carried by females during the sampling period, and females collected carrying eggs were larger than females collected with no eggs. Since mating induces vitellogenesis in some cricket species, differences in female body size might suggest male mate choice. Sexual selection on the body size of M. macilenta males may thus be stronger than selection for female fecundity. Even so, no mating behavior, including audible male calling or courtship songs, was observed in the field, although males may produce ultrasonic calls due to their size. If female body size in M. macilenta is not directly related to fecundity, the lack of a correlated response to selection on female body size could represent an alternate evolutionary pathway in the evolution of body size and SSD in insects.

  13. Measuring larval nematode contamination on cattle pastures: Comparing two herbage sampling methods.

    PubMed

    Verschave, S H; Levecke, B; Duchateau, L; Vercruysse, J; Charlier, J

    2015-06-15

    Assessing levels of pasture larval contamination is frequently used to study the population dynamics of the free-living stages of parasitic nematodes of livestock. Direct quantification of infective larvae (L3) on herbage is the most applied method to measure pasture larval contamination. However, herbage collection remains labour intensive and there is a lack of studies addressing the variation induced by the sampling method and the required sample size. The aim of this study was (1) to compare two different sampling methods in terms of pasture larval count results and time required to sample, (2) to assess the amount of variation in larval counts at the level of sample plot, pasture and season, respectively and (3) to calculate the required sample size to assess pasture larval contamination with a predefined precision using random plots across pasture. Eight young stock pastures of different commercial dairy herds were sampled in three consecutive seasons during the grazing season (spring, summer and autumn). On each pasture, herbage samples were collected through both a double-crossed W-transect with samples taken every 10 steps (method 1) and four random located plots of 0.16 m(2) with collection of all herbage within the plot (method 2). The average (± standard deviation (SD)) pasture larval contamination using sampling methods 1 and 2 was 325 (± 479) and 305 (± 444)L3/kg dry herbage (DH), respectively. Large discrepancies in pasture larval counts of the same pasture and season were often seen between methods, but no significant difference (P = 0.38) in larval counts between methods was found. Less time was required to collect samples with method 2. This difference in collection time between methods was most pronounced for pastures with a surface area larger than 1 ha. 
    The variation in pasture larval counts from samples generated by random plot sampling was mainly due to the repeated measurements on the same pasture in the same season (residual variance component = 6.2), rather than to pasture (variance component = 0.55) or season (variance component = 0.15). Using the observed distribution of L3, the required sample size (i.e. number of plots per pasture) for sampling a pasture through random plots with a particular precision was simulated. For the same sample size, a higher relative precision was achieved when estimating pasture larval contamination on pastures with a high larval contamination and a low level of aggregation than on pastures with a low larval contamination. Herbage sampling through random plots across pasture (method 2) therefore seems a promising method to develop further, as no significant difference in counts between the methods was found and this method was less time-consuming. Copyright © 2015 Elsevier B.V. All rights reserved.
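    The dependence of the required number of plots on aggregation can be approximated analytically rather than by simulation. The sketch below assumes negative-binomially distributed plot counts and uses illustrative values for mean contamination, the aggregation parameter, and the target precision (none of these come from the study):

```python
import math

# Illustrative calculation (assumed parameter values, not the study's):
# how many random plots keep the standard error of the mean larval
# count within a target relative precision?

def required_plots(mean, k, target_rel_se):
    """Plot counts ~ negative binomial with mean `mean` and aggregation
    parameter `k` (smaller k = more aggregated larvae)."""
    var = mean + mean ** 2 / k          # negative binomial variance
    n = 1
    while math.sqrt(var / n) / mean > target_rel_se:
        n += 1
    return n

# More aggregated (k = 0.5) vs less aggregated (k = 2.0) contamination
for k in (0.5, 2.0):
    print(k, required_plots(mean=300, k=k, target_rel_se=0.2))
# 0.5 -> 51 plots, 2.0 -> 13 plots
```

    The same target precision demands roughly four times as many plots on the strongly aggregated pasture, mirroring the study's observation that precision improves at a given sample size when aggregation is low.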

  14. Semi-automatic surface sediment sampling system - A prototype to be implemented in bivalve fishing surveys

    NASA Astrophysics Data System (ADS)

    Rufino, Marta M.; Baptista, Paulo; Pereira, Fábio; Gaspar, Miguel B.

    2018-01-01

    In the current work we propose a new method to sample surface sediment during bivalve fishing surveys. Fishing institutes all around the world carry out regular surveys with the aim of monitoring the stocks of commercial species. These surveys often comprise more than one hundred sampling stations and cover large geographical areas. Although superficial sediment grain sizes are among the main drivers of benthic communities and provide crucial information for studies on coastal dynamics, overall there is a strong lack of this type of data, possibly because traditional surface sediment sampling methods use grabs, which require considerable time and effort to deploy on a regular basis or over large areas. In view of these aspects, we developed a simple and inexpensive method to sample superficial sediments during bivalve fisheries monitoring surveys, without increasing survey time or human resources. The method was successfully evaluated and validated during a typical bivalve survey carried out on the Northwest coast of Portugal, confirming that it did not interfere with the survey objectives. Furthermore, the method was validated by collecting samples using a traditional Van Veen grab (traditional method), which showed a grain size composition similar to the samples collected by the new method at the same localities. We recommend that the procedure be implemented on regular bivalve fishing surveys, together with an image analysis system to analyse the collected samples. The new method will provide a substantial quantity of data on surface sediment in coastal areas in an inexpensive and efficient manner, with high potential for application in different fields of research.

  15. Access to and use of health services as factors associated with neonatal mortality in the North, Northeast, and Vale do Jequitinhonha regions, Brazil.

    PubMed

    Batista, Cristiane B; Carvalho, Márcia L de; Vasconcelos, Ana Glória G

    To analyze the factors associated with neonatal mortality related to health services accessibility and use. Case-control study of live births in 2008 in small- and medium-sized municipalities in the North, Northeast, and Vale do Jequitinhonha regions, Brazil. A probabilistic sample stratified by region, population size, and information adequacy was generated for the choice of municipalities. Of these, all municipalities with 20,000 inhabitants or less were included in the study (36 municipalities), whereas the remainder were selected according to the probability method proportional to population size, totaling 20 cities with 20,001-50,000 inhabitants and 19 municipalities with 50,001-200,000 inhabitants. All deaths of live births in these cities were included. Controls were randomly sampled, considered as four times the number of cases. The sample size comprised 412 cases and 1772 controls. Hierarchical multiple logistic regression was used for data analysis. The risk factors for neonatal death were socioeconomic class D and E (OR=1.28), history of child death (OR=1.74), high-risk pregnancy (OR=4.03), peregrination in antepartum (OR=1.46), lack of prenatal care (OR=2.81), absence of professional for the monitoring of labor (OR=3.34), excessive time waiting for delivery (OR=1.97), borderline preterm birth (OR=4.09) and malformation (OR=13.66). These results suggest multiple causes of neonatal mortality, as well as the need to improve access to good quality maternal-child health care services in the assessed places of study. Copyright © 2017 Sociedade Brasileira de Pediatria. Published by Elsevier Editora Ltda. All rights reserved.

  16. Power and sensitivity of alternative fit indices in tests of measurement invariance.

    PubMed

    Meade, Adam W; Johnson, Emily C; Braddy, Phillip W

    2008-05-01

    Confirmatory factor analytic tests of measurement invariance (MI) based on the chi-square statistic are known to be highly sensitive to sample size. For this reason, G. W. Cheung and R. B. Rensvold (2002) recommended using alternative fit indices (AFIs) in MI investigations. In this article, the authors investigated the performance of AFIs with simulated data known to not be invariant. The results indicate that AFIs are much less sensitive to sample size and are more sensitive to a lack of invariance than chi-square-based tests of MI. The authors suggest reporting differences in the comparative fit index (CFI) and R. P. McDonald's (1989) noncentrality index (NCI) to evaluate whether MI exists. Although a general cutoff for the change in CFI (.002) seemed to perform well in the analyses, condition-specific cutoffs for the change in McDonald's NCI exhibited better performance than a single change-in-NCI value. Tables of these values are provided, as are recommendations for best practices in MI testing. PsycINFO Database Record (c) 2008 APA, all rights reserved.
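    The recommended ΔCFI criterion can be made concrete with a small sketch that computes CFI from the chi-square statistics of two nested invariance models. All the chi-square and degrees-of-freedom values below are hypothetical, not taken from the article:

```python
# Minimal sketch (hypothetical fit statistics, not from the article):
# CFI for two nested invariance models and the change in CFI used as
# an alternative fit index (AFI) criterion.

def cfi(chi2_model, df_model, chi2_null, df_null):
    d_model = max(chi2_model - df_model, 0.0)
    d_null = max(chi2_null - df_null, 0.0)
    return 1.0 - d_model / max(d_model, d_null)

chi2_null, df_null = 2000.0, 45.0            # baseline (independence) model
cfi_configural = cfi(250.0, 80.0, chi2_null, df_null)   # unconstrained
cfi_metric = cfi(262.0, 88.0, chi2_null, df_null)       # loadings equal

delta_cfi = cfi_configural - cfi_metric
print(round(delta_cfi, 4))  # -> 0.002
# A drop in CFI at or beyond the cutoff (.002) would flag non-invariance.
```

    The appeal of this AFI approach is visible in the formula: unlike the chi-square difference test, CFI normalizes the model misfit against the baseline model, which damps the direct dependence on sample size.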

  17. Linear models for airborne-laser-scanning-based operational forest inventory with small field sample size and highly correlated LiDAR data

    USGS Publications Warehouse

    Junttila, Virpi; Kauranne, Tuomo; Finley, Andrew O.; Bradford, John B.

    2015-01-01

    Modern operational forest inventory often uses remotely sensed data that cover the whole inventory area to produce spatially explicit estimates of forest properties through statistical models. The data obtained by airborne light detection and ranging (LiDAR) correlate well with many forest inventory variables, such as tree height, timber volume, and biomass. To construct an accurate model over thousands of hectares, LiDAR data must be supplemented with several hundred field sample measurements of forest inventory variables. This can be costly and time consuming. Different LiDAR-data-based and spatial-data-based sampling designs can reduce the number of field sample plots needed. However, problems arising from the features of the LiDAR data, such as a large number of predictors compared with the sample size (overfitting) or a strong correlation among predictors (multicollinearity), may decrease the accuracy and precision of the estimates and predictions. To overcome these problems, a Bayesian linear model with the singular value decomposition of predictors, combined with regularization, is proposed. The model performance in predicting different forest inventory variables is verified in ten inventory areas from two continents, where the number of field sample plots is reduced using different sampling designs. The results show that, with an appropriate field plot selection strategy and the proposed linear model, the total relative error of the predicted forest inventory variables (summing the error due to both the model noise variance and the model's lack of fit) is only 5%–15% larger using 50 field sample plots than using several hundred.
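    The model's key ingredient, regularization through the singular value decomposition of the predictors, can be sketched as ridge regression computed via the SVD. The simulated LiDAR-style data (many strongly correlated predictors, few field plots) and the λ value are illustrative assumptions, not the authors' model:

```python
import numpy as np

# Sketch of the core idea (not the authors' implementation): regularized
# linear regression via the SVD of a predictor matrix with many strongly
# correlated LiDAR-style predictors and few field plots.

rng = np.random.default_rng(7)
n_plots, n_preds = 50, 30                 # small sample, many predictors

# 30 predictors driven by only 5 latent factors -> strong multicollinearity
base = rng.normal(size=(n_plots, 5))
X = base @ rng.normal(size=(5, n_preds)) + 0.05 * rng.normal(size=(n_plots, n_preds))
beta_true = rng.normal(size=n_preds)
y = X @ beta_true + rng.normal(scale=0.5, size=n_plots)

def ridge_svd(X, y, lam):
    """Ridge solution through the SVD: each singular direction is scaled
    by s / (s^2 + lam), damping ill-conditioned (small-s) directions."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    shrink = s / (s ** 2 + lam)
    return Vt.T @ (shrink * (U.T @ y))

beta_hat = ridge_svd(X, y, lam=1.0)
rel_err = np.linalg.norm(X @ beta_hat - y) / np.linalg.norm(y)
print(f"relative in-sample error: {rel_err:.3f}")
```

    Working in the SVD basis makes the multicollinearity explicit: the few large singular values carry the latent LiDAR signal almost unchanged, while the many near-zero ones, which would blow up an unregularized least-squares fit, are shrunk toward zero.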

  18. A systematic review of the efficacy of venlafaxine for the treatment of fibromyalgia.

    PubMed

    VanderWeide, L A; Smith, S M; Trinkley, K E

    2015-02-01

    Fibromyalgia is a painful disease affecting 1-2% of the United States population. Serotonin and norepinephrine reuptake inhibitors (SNRIs), such as duloxetine and milnacipran, are well studied and frequently used for treating this disorder. However, efficacy data are limited for the SNRI venlafaxine despite its use in nearly a quarter of patients with fibromyalgia. Accordingly, we systematically reviewed the efficacy of venlafaxine for treatment of fibromyalgia. PubMed, Web of Science and the Cochrane Database were searched using the terms 'venlafaxine' and 'fibromyalgia'. Results were classified as primary studies or review articles based on abstract review. References of review articles were evaluated to ensure no primary studies evaluating venlafaxine were overlooked. All clinical studies that investigated venlafaxine for the treatment of fibromyalgia were included and graded on strength of evidence. Five studies met the inclusion criteria, including 4 open-label cohort studies and 1 randomized, controlled trial. Study durations ranged from 6 weeks to 6 months, and study sizes ranged from 11 to 102 participants. Four of the five published studies reported improvement in at least one outcome. Generally consistent improvements were observed in pain-related outcome measures, including the Fibromyalgia Impact Questionnaire (range, 26-29% reduction; n = 2 studies), Visual Analog Scale (range, 36-45% reduction; n = 2 studies), McGill Pain Questionnaire (48% reduction; n = 1 study) and Clinical Global Impression scale (51% had significant score change; n = 1 study). However, the few studies identified were limited by small sample size, inconsistent use of outcomes and methodological concerns. Studies assessing the efficacy of venlafaxine in the treatment of fibromyalgia to date have been limited by small sample size, inconsistent venlafaxine dosing, lack of placebo control and lack of blinding. 
In the context of these limitations, venlafaxine appears to be at least modestly effective in treating fibromyalgia. Larger randomized controlled trials are needed to further elucidate the full benefit of venlafaxine. © 2014 John Wiley & Sons Ltd.

  19. Identification of dust storm origin in South -West of Iran.

    PubMed

    Broomandi, Parya; Dabir, Bahram; Bonakdarpour, Babak; Rashidi, Yousef

    2017-01-01

    Deserts are the main sources of emitted dust and are highly responsive to wind erosion. Low soil moisture content and lack of vegetation cover lead to the release of fine particles. A semi-arid bare land in Khoozestan province, in the South-West of Iran, was selected to investigate sand and dust storm potential. This paper focuses on the meteorological parameters of the sampling site, their changes and the relationship between these changes and dust storm occurrence; estimation of the Reconnaissance Drought Index; the Atterberg limits of soil samples and their relation to soil erodibility; the chemical composition and size distribution of soil and airborne dust samples; and estimation of the vertical mass flux by COMSALT, considering the effect of the Saffman force and interparticle cohesion forces, during the warm period (April-September) of 2010. The chemical compositions were measured with X-ray fluorescence, atomic absorption spectrophotometry and X-ray diffraction. The particle-size distribution analysis was conducted using laser particle-size and sieve techniques. There was a strong negative correlation between dust storm occurrence and annual and seasonal rainfall and relative humidity, and a strong positive correlation between annual and seasonal maximum temperature and dust storm frequency. Estimation of RDIst over the studied period indicated an extremely dry condition. The particle-size distribution and soil consistency results indicated a weak soil structure. X-ray diffraction analyses of soil and dust samples showed that the soil mineralogy was dominated mainly by quartz and calcite. X-ray fluorescence analyses indicated that the most important major oxide components of the soil and airborne dust samples were SiO2, Al2O3, CaO, MgO, Na2O and Fe2O3, with similar percentages in soil and dust samples. Estimation of enrichment factors for all studied trace elements in soil samples showed Br, Cl, Mo, S, Zn and Hg with EF values higher than 10. These findings suggest a possible correlation between the degree of anthropogenic soil pollution and the remains of the Iraq-Iran war. The vertical mass fluxes estimated by COMSALT confirmed the sand and dust storm emission potential of this area.

  20. Monitoring nekton as a bioindicator in shallow estuarine habitats

    USGS Publications Warehouse

    Raposa, K.B.; Roman, C.T.; Heltshe, J.F.

    2003-01-01

    Long-term monitoring of estuarine nekton has many practical and ecological benefits but efforts are hampered by a lack of standardized sampling procedures. This study provides a rationale for monitoring nekton in shallow (< 1 m), temperate, estuarine habitats and addresses some important issues that arise when developing monitoring protocols. Sampling in seagrass and salt marsh habitats is emphasized due to the susceptibility of each habitat to anthropogenic stress and to the abundant and rich nekton assemblages that each habitat supports. Extensive sampling with quantitative enclosure traps that estimate nekton density is suggested. These gears have a high capture efficiency in most habitats and are small enough (e.g., 1 m(2)) to permit sampling in specific microhabitats. Other aspects of nekton monitoring are discussed, including spatial and temporal sampling considerations, station selection, sample size estimation, and data collection and analysis. Developing and initiating long-term nekton monitoring programs will help evaluate natural and human-induced changes in estuarine nekton over time and advance our understanding of the interactions between nekton and the dynamic estuarine environment.
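    For the sample-size estimation step in such a monitoring protocol, a standard first-pass calculation targets the relative error of the mean density across replicate enclosure traps. The CV and precision values below are hypothetical, not from the protocol:

```python
import math

# Hypothetical numbers (not from the monitoring protocol): samples of
# 1 m^2 enclosure traps needed to estimate mean nekton density within
# a chosen relative error.

def n_samples(cv, rel_error, z=1.96):
    """Samples needed so a 95% CI half-width is `rel_error` * mean,
    given a coefficient of variation `cv` among replicate traps."""
    return math.ceil((z * cv / rel_error) ** 2)

print(n_samples(cv=1.0, rel_error=0.5))   # patchy assemblage, CV = 100% -> 16
print(n_samples(cv=0.5, rel_error=0.5))   # more even assemblage -> 4
```

    Pilot sampling in each microhabitat supplies the CV, which is why the protocol's emphasis on spatial and temporal sampling considerations comes before fixing the number of stations.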

  1. Laboratory evaluation of the Sequoia Scientific LISST-ABS acoustic backscatter sediment sensor

    USGS Publications Warehouse

    Snazelle, Teri T.

    2017-12-18

    Sequoia Scientific’s LISST-ABS is an acoustic backscatter sensor designed to measure suspended-sediment concentration at a point source. Three LISST-ABS were evaluated at the U.S. Geological Survey (USGS) Hydrologic Instrumentation Facility (HIF). Serial numbers 6010, 6039, and 6058 were assessed for accuracy in solutions with varying particle-size distributions and for the effect of temperature on sensor accuracy. Certified sediment samples composed of different ranges of particle size were purchased from Powder Technology Inc. These sediment samples were 30–80-micron (µm) Arizona Test Dust; less than 22-µm ISO 12103-1, A1 Ultrafine Test Dust; and 149-µm MIL-STD 810E Silica Dust. The sensor was able to accurately measure suspended-sediment concentration when calibrated with sediment of the same particle-size distribution as the measured. Overall testing demonstrated that sensors calibrated with finer sized sediments overdetect sediment concentrations with coarser sized sediments, and sensors calibrated with coarser sized sediments do not detect increases in sediment concentrations from small and fine sediments. These test results are not unexpected for an acoustic-backscatter device and stress the need for using accurate site-specific particle-size distributions during sensor calibration. When calibrated for ultrafine dust with a less than 22-µm particle size (silt) and with the Arizona Test Dust with a 30–80-µm range, the data from sensor 6039 were biased high when fractions of the coarser (149-µm) Silica Dust were added. Data from sensor 6058 showed similar results with an elevated response to coarser material when calibrated with a finer particle-size distribution and a lack of detection when subjected to finer particle-size sediment. Sensor 6010 was also tested for the effect of dissimilar particle size during the calibration and showed little effect. 
Subsequent testing revealed problems with this sensor, including inadequate temperature compensation, which made its data questionable. The sensor was replaced by Sequoia Scientific with serial number 6039. Results from the extended temperature testing showed proper temperature compensation for sensor 6039, and results from the dissimilar calibration/testing particle-size distribution closely corroborated the results from sensor 6058.

  2. 3-D breast anthropometry of plus-sized women in South Africa.

    PubMed

    Pandarum, Reena; Yu, Winnie; Hunter, Lawrance

    2011-09-01

Exploratory retail studies in South Africa indicate that plus-sized women experience problems and dissatisfaction with poorly fitting bras. The lack of 3-D anthropometric studies for the plus-size women's bra market initiated this research. 3-D body torso measurements were collected from a convenience sample of 176 plus-sized women in South Africa. 3-D breast measurements extracted from the TC(2) NX12-3-D body scanner 'breast module' software were compared with traditional tape measurements. Regression equations show that the two methods of measurement were highly correlated although, on average, the bra-cup-size-determining factor 'bust minus underbust' obtained from the 3-D method is approximately 11% smaller than that of the manual method. It was concluded that the total bust volume, which correlated with the quadrant volume (r = 0.81), cup length, bust length and bust prominence, should be selected as the overall measure of bust size rather than the traditional bust girth and underbust measurements. STATEMENT OF RELEVANCE: This study contributes new data and adds to the knowledge base of anthropometry and consumer ergonomics on bra fit and support, published in this, the Ergonomics Journal, by Chen et al. (2010) on bra fit and White et al. (2009) on breast support during overground running.

  3. Analyzing hidden populations online: topic, emotion, and social network of HIV-related users in the largest Chinese online community.

    PubMed

    Liu, Chuchu; Lu, Xin

    2018-01-05

Traditional survey methods are limited in the study of hidden populations because such groups are hard to access: there is often no sampling frame, topics are sensitive, reporting errors are common, and sample sizes are small. The rapid growth of online communities, whose members interact with others via the Internet, has generated large amounts of data, offering new opportunities for understanding hidden populations with unprecedented sample sizes and richness of information. In this study, we try to understand the multidimensional characteristics of a hidden population by analyzing the massive data generated in an online community. By carefully designing crawlers, we retrieved a complete dataset from the "HIV bar," the largest bar related to HIV on the Baidu Tieba platform, covering all records from January 2005 to August 2016. Through natural language processing and social network analysis, we explored the psychology, behavior and demands of the online HIV population and examined the network community structure. In HIV communities, the average topic similarity among members is positively correlated with network efficiency (r = 0.70, p < 0.001), indicating that the closer the social distance between members of the community, the more similar their topics. The proportion of negative users in each community is around 60%, weakly correlated with community size (r = 0.25, p = 0.002). We find that users who suspect initial HIV infection or have recently engaged in high-risk behaviors tend to seek help and advice on the social networking platform, rather than immediately going to a hospital for blood tests. Online communities have generated copious amounts of data, offering new opportunities for understanding hidden populations with unprecedented sample sizes and richness of information. It is recommended that support through online services for HIV/AIDS consultation and diagnosis be improved to avoid privacy concerns and social discrimination in China.

  4. Pediatric reference intervals for general clinical chemistry components - merging of studies from Denmark and Sweden.

    PubMed

    Ridefelt, Peter; Hilsted, Linda; Juul, Anders; Hellberg, Dan; Rustad, Pål

    2018-05-28

Reference intervals are crucial tools aiding clinicians when making medical decisions. However, for children, such values are often lacking or incomplete. The present study combines data from separate pediatric reference interval studies from Denmark and Sweden in order to increase the sample size and also to include pre-school children, who were lacking from the Danish study. Results from two separate studies including 1,988 healthy children and adolescents aged 6 months to 18 years were merged and recalculated. Eighteen general clinical chemistry components were measured on Abbott and Roche platforms. To facilitate commutability, the NFKK Reference Serum X was used. Age- and gender-specific pediatric reference intervals were defined by calculating the 2.5th and 97.5th percentiles. The data generated are primarily applicable to a Nordic population, but could be used by any laboratory if validated for the local patient population.
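The nonparametric limits described above (2.5th and 97.5th percentiles within age and sex strata) can be sketched as follows. The cohort, analyte distribution, and age band here are invented for illustration only; real reference-interval work also applies formal partitioning criteria and minimum stratum sizes:

```python
import random
import statistics

random.seed(42)
# Hypothetical merged cohort of 1,988 children: (age in years, sex, analyte value)
cohort = [(random.uniform(0.5, 18.0), random.choice("MF"), random.gauss(5.0, 0.8))
          for _ in range(1988)]

def reference_interval(values):
    """Nonparametric reference limits: the 2.5th and 97.5th percentiles."""
    cuts = statistics.quantiles(values, n=40)  # cut points every 2.5%
    return cuts[0], cuts[-1]                   # 2.5th and 97.5th percentiles

# Age- and gender-specific interval, e.g. girls aged 6 to <12 years
subset = [v for age, sex, v in cohort if sex == "F" and 6 <= age < 12]
low, high = reference_interval(subset)
print(len(subset), round(low, 2), round(high, 2))
```

With a Gaussian analyte the limits land near mean ± 1.96 standard deviations, but the percentile approach makes no distributional assumption, which matters for skewed analytes.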

  5. The impact of drugs for multiple sclerosis on sleep.

    PubMed

    Lanza, Giuseppe; Ferri, Raffaele; Bella, Rita; Ferini-Strambi, Luigi

    2017-01-01

    Although there is a growing literature on the presence of sleep disorders in multiple sclerosis (MS), few studies have specifically addressed the impact of drugs on sleep of these patients. Moreover, even when sleep is considered, quantitative assessment by standardized questionnaires or polysomnography is lacking. The studies that have been done highlight that interferon-beta and some symptomatic medications may affect sleep, thus contributing to fatigue, depression, and poor quality of life; conversely, natalizumab and cannabinoids may improve sleep. Common limitations of the literature reviewed here are small sample size, selection bias, and often a lack of objective outcome measures. Clinicians need to remember to ask about sleep in all MS patients and intervene when appropriate. A systematic approach that takes sleep into account is recommended to enhance recognition and appropriate management of sleep disruption, including disorders related to medication. Consideration of the impact on sleep should also be part of the design of trials of new therapies.

  6. First record of Ianiropsis cf. serricaudis in Maryland Coastal Bays, USA (Crustacea, Peracarida, Janiridae)

    PubMed Central

    Morales-Núñez, Andrés G; Chigbu, Paulinus

    2018-01-01

During monthly sampling of benthic invertebrates at 13 stations in the Maryland Coastal Bays (MCBs) from March to December 2012, a total of 29 individuals of Ianiropsis cf. serricaudis were collected. This species is being reported for the first time in MCBs. A detailed illustration and description of an adult male of I. cf. serricaudis from MCBs is presented. An illustrated key to males of Ianiropsis species belonging to the palpalis-group is also presented. The size of the largest male was 3.0 mm and that of the largest female was 2.5 mm. It is possible that I. cf. serricaudis was present in the MCBs but overlooked during previous surveys of marine benthic invertebrates in the area because of its small body size and a lack of taxonomic expertise. PMID:29674907

  7. A community trial of the impact of improved sexually transmitted disease treatment on the HIV epidemic in rural Tanzania: 2. Baseline survey results.

    PubMed

    Grosskurth, H; Mosha, F; Todd, J; Senkoro, K; Newell, J; Klokke, A; Changalucha, J; West, B; Mayaud, P; Gavyole, A

    1995-08-01

    To determine baseline HIV prevalence in a trial of improved sexually transmitted disease (STD) treatment, and to investigate risk factors for HIV. To assess comparability of intervention and comparison communities with respect to HIV/STD prevalence and risk factors. To assess adequacy of sample size. Twelve communities in Mwanza Region, Tanzania: one matched pair of roadside communities, four pairs of rural communities, and one pair of island communities. One community from each pair was randomly allocated to receive the STD intervention following the baseline survey. Approximately 1000 adults aged 15-54 years were randomly sampled from each community. Subjects were interviewed, and HIV and syphilis serology performed. Men with a positive leucocyte esterase dipstick test on urine, or reporting a current STD, were tested for urethral infections. A total of 12,534 adults were enrolled. Baseline HIV prevalences were 7.7% (roadside), 3.8% (rural) and 1.8% (islands). Associations were observed with marital status, injections, education, travel, history of STD and syphilis serology. Prevalence was higher in circumcised men, but not significantly after adjusting for confounders. Intervention and comparison communities were similar in the prevalence of HIV (3.8 versus 4.4%), active syphilis (8.7 versus 8.2%), and most recorded risk factors. Within-pair variability in HIV prevalence was close to the value assumed for sample size calculations. The trial cohort was successfully established. Comparability of intervention and comparison communities at baseline was confirmed for most factors. Matching appears to have achieved a trial of adequate sample size. The apparent lack of a protective effect of male circumcision contrasts with other studies in Africa.
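The abstract notes that within-pair variability in HIV prevalence matched the value assumed in the sample size calculations. For community-randomized trials like this one, a commonly cited formula (after Hayes & Bennett) incorporates between-cluster variation through a coefficient of variation k. The sketch below uses that unmatched-design formula with purely illustrative prevalences, cluster size, and k, not the trial's actual design parameters:

```python
import math
from statistics import NormalDist

def clusters_per_arm(p0, p1, y, k, alpha=0.05, power=0.8):
    """Clusters needed per arm to detect a fall in prevalence from p0 to p1,
    with y persons sampled per cluster and between-cluster coefficient of
    variation k (unmatched design, Hayes & Bennett style formula)."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    variance = (p0 * (1 - p0) + p1 * (1 - p1)) / y + k**2 * (p0**2 + p1**2)
    return math.ceil(1 + z**2 * variance / (p0 - p1) ** 2)

# Illustrative numbers: halving prevalence from 4% to 2%, with 1,000 adults
# sampled per community and k = 0.25 between-community variation
print(clusters_per_arm(0.04, 0.02, 1000, 0.25))
```

Note how the k² term dominates once y is large: adding more people per community soon stops helping, and only more communities reduce the required size, which is why matching to reduce between-community variability is so valuable.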

  8. An overview of the characterization of occupational exposure to nanoaerosols in workplaces

    NASA Astrophysics Data System (ADS)

    Castellano, Paola; Ferrante, Riccardo; Curini, Roberta; Canepari, Silvia

    2009-05-01

Currently, there is a lack of standardized sampling and metric methods that can be applied to measure the level of exposure to nanosized aerosols. Therefore, any attempt to characterize exposure to nanoparticles (NP) in a workplace must involve a multifaceted approach combining different sampling and analytical techniques to measure all relevant characteristics of NP exposure. Furthermore, as NP aerosols are always complex mixtures of multiple origins, sampling and analytical methods need to be improved to selectively evaluate the apportionment from specific sources to the final nanomaterials. A major open question worldwide is how to relate specific toxic effects of NP to one or more of several different parameters (such as particle size, mass, composition, surface area, number concentration, aggregation or agglomeration state, water solubility and surface chemistry). As the evaluation of occupational exposure to NP in workplaces needs dimensional and chemical characterization, the main problem is the choice of the sampling and dimensional separation techniques. Therefore, a convenient approach to a satisfactory risk assessment could be the simultaneous use of different sampling and measuring techniques for particles with known toxicity in selected workplaces. Despite the lack of specific NP exposure limit values, exposure metrics appropriate to nanoaerosols are discussed in Technical Report ISO/TR 27628:2007, with the aim of enabling occupational hygienists to characterize and monitor nanoaerosols in workplaces. Moreover, NIOSH has developed the document Approaches to Safe Nanotechnology (intended as an information exchange with NIOSH) to address current and future research needs for understanding the potential risks that nanotechnology may pose to workers.

  9. Allocation of limited reserves to a clutch: A model explaining the lack of a relationship between clutch size and egg size

    USGS Publications Warehouse

    Flint, Paul L.; Grand, James B.; Sedinger, James S.

    1996-01-01

    Lack (1967, 1968) proposed that clutch size in waterfowl is limited by the nutrients available to females when producing eggs. He suggested that if nutrients available for clutch formation are limited, then species producing small eggs would, on average, lay more eggs than species with large eggs. Rohwer (1988) argues that this model should also apply within species. Thus, the nutrition-limitation hypothesis predicts a tradeoff among females between clutch size and egg size (Rohwer 1988). Field studies of single species consistently have failed to detect a negative relationship between clutch size and egg size (Rohwer 1988, Lessells et al. 1992, Rohwer and Eisenhauer 1989, Flint and Sedinger 1992, Flint and Grand 1996). The absence of such a relationship within species has been regarded as evidence against the hypothesis that nutrient availability limits clutch size (Rohwer 1988, 1991, 1992; Rohwer and Eisenhauer 1989).

  10. Fitting models of continuous trait evolution to incompletely sampled comparative data using approximate Bayesian computation.

    PubMed

    Slater, Graham J; Harmon, Luke J; Wegmann, Daniel; Joyce, Paul; Revell, Liam J; Alfaro, Michael E

    2012-03-01

    In recent years, a suite of methods has been developed to fit multiple rate models to phylogenetic comparative data. However, most methods have limited utility at broad phylogenetic scales because they typically require complete sampling of both the tree and the associated phenotypic data. Here, we develop and implement a new, tree-based method called MECCA (Modeling Evolution of Continuous Characters using ABC) that uses a hybrid likelihood/approximate Bayesian computation (ABC)-Markov-Chain Monte Carlo approach to simultaneously infer rates of diversification and trait evolution from incompletely sampled phylogenies and trait data. We demonstrate via simulation that MECCA has considerable power to choose among single versus multiple evolutionary rate models, and thus can be used to test hypotheses about changes in the rate of trait evolution across an incomplete tree of life. We finally apply MECCA to an empirical example of body size evolution in carnivores, and show that there is no evidence for an elevated rate of body size evolution in the pinnipeds relative to terrestrial carnivores. ABC approaches can provide a useful alternative set of tools for future macroevolutionary studies where likelihood-dependent approaches are lacking. © 2011 The Author(s). Evolution© 2011 The Society for the Study of Evolution.
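MECCA itself couples a likelihood step with ABC-MCMC; as a much simpler illustration of the ABC idea it builds on, the sketch below uses plain rejection sampling to recover a Brownian-motion rate of trait evolution on a hypothetical star phylogeny. The summary statistic, prior, and tolerance are all invented for this toy example:

```python
import random
import statistics

def simulate_tip_variance(sigma2, t=1.0, n_tips=50):
    """Simulate tip values under Brownian motion on a star phylogeny of depth t
    and return their variance (the summary statistic)."""
    tips = [random.gauss(0.0, (sigma2 * t) ** 0.5) for _ in range(n_tips)]
    return statistics.pvariance(tips)

def abc_reject(observed_var, n_sims=20_000, tol=0.05):
    """Rejection ABC: draw rates from the prior, keep those whose simulated
    summary lies within a relative tolerance of the observed summary."""
    accepted = []
    for _ in range(n_sims):
        sigma2 = random.uniform(0.01, 5.0)  # flat prior on the evolutionary rate
        if abs(simulate_tip_variance(sigma2) - observed_var) / observed_var < tol:
            accepted.append(sigma2)
    return accepted

random.seed(1)
observed = simulate_tip_variance(sigma2=1.5)  # pretend data generated at rate 1.5
posterior = abc_reject(observed)
print(len(posterior), round(statistics.mean(posterior), 2))
```

The accepted draws approximate the posterior on the rate without ever evaluating a likelihood, which is exactly what makes ABC attractive when, as the abstract notes, likelihood-dependent approaches are lacking.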

  11. Absence of haemosporidian parasite infections in the long-lived Cory's shearwater: evidence from molecular analyses and review of the literature.

    PubMed

    Campioni, Letizia; Martínez-de la Puente, Josué; Figuerola, Jordi; Granadeiro, José Pedro; Silva, Mónica C; Catry, Paulo

    2018-01-01

The apparent scarcity or absence of blood parasites in some avian groups, such as seabirds, has been related to intrinsic and extrinsic factors including host immunological capacity, host-parasite assemblage, and ecological parameters, but also to the reduced sensitivity of some methods to detect low parasite prevalences/intensities of infection. Here, we examined the haemosporidian parasite prevalence in a breeding population of Cory's shearwater Calonectris diomedea borealis, a long-distance migrant seabird, nesting in the Macaronesian region, in the Eastern Atlantic. Previous studies on the Calonectris diomedea complex were based on small sample sizes, providing weak evidence for a lack of infection by haemoparasites. Here, we investigated both the presence of parasite infections in C. d. borealis and the presence of larvae of potential mosquito vectors in the area. By employing a PCR-based assay, we extensively examined the prevalence of blood parasites belonging to the genera Plasmodium, Haemoproteus, and Leucocytozoon in 286 individuals from different life stages (i.e., chicks, immatures, sabbatical, and breeding adults), facing their specific energetic trade-offs (immunological functions vs. life history activities). We sampled immatures and adult shearwaters, of different sexes, ages, and migratory origins, from two sub-colonies. None of the sampled individuals were infected by these parasites, supporting the hypothesis that there is no in situ or ex situ transmission of vector-borne parasites in marine habitats, irrespective of the host's life stage and in spite of the presence of the potential Plasmodium vector Culiseta longiareolata breeding in the area. These results suggest that the lack of transmission of haemosporidian parasites on Selvagem Grande may be related to the lack of suitable dipteran vectors at the study sites, which may result from the geographic isolation of this area.
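A finding of 0 positives out of 286 birds still bounds the plausible prevalence. A minimal sketch of the exact one-sided upper confidence limit for a zero count, which is the standard way to quantify how much such a negative result rules out (closely related to the "rule of three" approximation 3/n):

```python
def zero_count_upper_bound(n, alpha=0.05):
    """One-sided upper confidence bound on prevalence when 0 of n are positive:
    the largest p with (1 - p)**n >= alpha, i.e. p = 1 - alpha**(1/n)."""
    return 1.0 - alpha ** (1.0 / n)

# 0 infected of 286 Cory's shearwaters
ub = zero_count_upper_bound(286)
print(f"{100 * ub:.2f}%")  # close to the rule-of-three approximation 3/286
```

So even with no infections observed, the data are only strong enough to say the true prevalence is likely below about 1%, which is why the earlier, smaller studies provided "weak evidence" for an absence of infection.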

  12. An image-based skeletal model for the ICRP reference adult male—specific absorbed fractions for neutron-generated recoil protons

    NASA Astrophysics Data System (ADS)

    Jokisch, D. W.; Rajon, D. A.; Bahadori, A. A.; Bolch, W. E.

    2011-11-01

Recoiling hydrogen nuclei are a principal mechanism for energy deposition from incident neutrons. For neutrons incident on the human skeleton, the small sizes of two contrasting media (trabecular bone and marrow) present unique problems due to a lack of charged-particle (proton) equilibrium. Specific absorbed fractions have been computed for protons originating in the human skeletal tissues for use in computing neutron dose response functions. The proton specific absorbed fractions were computed using a pathlength-based range-energy calculation in trabecular skeletal samples of a 40-year-old male cadaver.

  13. Infant formula samples: perinatal sources and breast-feeding outcomes at 1 month postpartum.

    PubMed

    Thurston, Amanda; Bolin, Jocelyn H; Chezem, Jo Carol

    2013-01-01

    The purpose was to describe sources of infant formula samples during the perinatal period and assess their associations with breast-feeding outcomes at 1 month postpartum. Subjects included expectant mothers who anticipated breast-feeding at least 1 month. Infant feeding history and sources of formula samples were obtained at 1 month postpartum. Associations between sources and breast-feeding outcomes were assessed using partial correlation. Of the 61 subjects who initiated breast-feeding, most were white (87%), married (75%), college-educated (75%), and planned exclusive breast-feeding (82%). Forty-two subjects (69%) continued breast-feeding at 1 month postpartum. Subjects received formula samples from the hospital (n = 40; 66%), physician's office (n = 10; 16%), and mail (n = 41; 67%). There were no significant correlations between formula samples from the hospital, physician's office, and/or mail and any or exclusive breast-feeding at 1 month (P > .05). In addition to the hospital, a long-standing source of formula samples, mail was also frequently reported as a route for distribution. The lack of statistically significant associations between formula samples and any or exclusive breast-feeding at 1 month may be related to small sample size and unique characteristics of the group studied.

14. Effect of body size and body mass on δ13C and δ15N in coastal fishes and cephalopods

    NASA Astrophysics Data System (ADS)

    Vinagre, C.; Máguas, C.; Cabral, H. N.; Costa, M. J.

    2011-11-01

Carbon and nitrogen isotopes have been widely used in the investigation of trophic relations, energy pathways, trophic levels and migrations, under the assumption that δ13C is independent of body size and that variation in δ15N occurs exclusively due to ontogenetic changes in diet and not body size increase per se. However, several studies have shown that these assumptions are uncertain. For food webs containing a large number of species, these assumptions lack empirical support because very few species have been tested for δ13C and δ15N variation in captivity. However, if sampling comprises a wide range of body sizes from various species, the variation of δ13C and δ15N with body size can be investigated. While correlation between body size and δ13C and δ15N can be due to ontogenetic diet shifts, stability in these values throughout the size spectrum can be considered an indication that δ13C and δ15N in muscle tissues of such species are independent of body size within that size range, and thus the basic assumptions can be applied in the interpretation of such food webs. The present study investigated the variation in muscle δ13C and δ15N with body size and body mass of coastal fishes and cephalopods. It was concluded that muscle δ13C and δ15N did not vary with body size or mass for all bony fishes with only one exception, the dragonet Callionymus lyra. Muscle δ13C and δ15N also did not vary with body size or mass in cartilaginous fishes and cephalopods, meaning that body size/mass per se have no effect on δ13C or δ15N for most species analysed and within the size ranges sampled. The assumption that δ13C is independent of body size and that variation in δ15N is not affected by body size increase per se was upheld for most organisms and can be applied to the coastal food web studied, taking into account that C. lyra is an exception.

  15. Sizes of abdominal organs in adults with severe short stature due to severe, untreated, congenital GH deficiency caused by a homozygous mutation in the GHRH receptor gene

    PubMed Central

    Oliveira, Carla R. P.; Salvatori, Roberto; Nóbrega, Luciana M. A.; Carvalho, Erick O. M.; Menezes, Menilson; Farias, Catarine T.; Britto, Allan V. O.; Pereira, Rossana M. C.; Aguiar-Oliveira, Manuel H.

    2008-01-01

Objective To assess the sizes of intra-abdominal organs of adult subjects with untreated severe congenital isolated GH deficiency (IGHD) due to lack of a functional GHRH receptor (GHRH-R), and to verify whether there is proportionality between organ size and adult stature and body surface area (BSA). Subjects and methods Using ultrasound, we studied the sizes (absolute and corrected by height, weight and BSA) of the intra-abdominal organs of 18 adult subjects with IGHD (eight females, IGHD group) who had never received GH replacement therapy. They were all homozygous for the same null mutation (IVS1 + 1G → A) in the GHRH receptor gene (GHRH-R). They were compared with normal controls from the same region. Results After correction for BSA, subjects lacking a functional GHRH-R have normal prostate and ovary sizes, a small spleen and uterus, and a large liver, pancreas and kidneys. Conclusions The size of individual abdominal organs is influenced in different ways by severe congenital lack of GH due to a GHRH-R mutation. PMID:18034778

  16. [Application of statistics on chronic-diseases-relating observational research papers].

    PubMed

    Hong, Zhi-heng; Wang, Ping; Cao, Wei-hua

    2012-09-01

To study the application of statistics in chronic-disease-related observational research papers recently published in Chinese Medical Association journals with an impact factor above 0.5. Using a self-developed criterion, two investigators independently assessed the application of statistics in these journals; differences of opinion were resolved through discussion. A total of 352 papers from 6 journals, including the Chinese Journal of Epidemiology, Chinese Journal of Oncology, Chinese Journal of Preventive Medicine, Chinese Journal of Cardiology, Chinese Journal of Internal Medicine and Chinese Journal of Endocrinology and Metabolism, were reviewed. The rates of clear statements of research objectives, target population, sample issues, inclusion criteria and variable definitions were 99.43%, 98.57%, 95.43%, 92.86% and 96.87%, respectively. The correct rates of description of quantitative and qualitative data were 90.94% and 91.46%, respectively. The rates of correctly expressing the results of statistical inference for quantitative data, qualitative data and modeling were 100%, 95.32% and 87.19%, respectively, and 89.49% of the conclusions directly responded to the research objectives. However, 69.60% of the papers did not state the exact name of the study design used, and 11.14% lacked a statement of the exclusion criteria. Only 5.16% of the papers clearly explained the sample size estimation, only 24.21% clearly described the variable value assignment, and only 24.15% described how the statistical analyses and database work were conducted. 18.75% of the papers did not express the statistical inference methods sufficiently, and a quarter did not use 'standardization' appropriately. 
As for statistical inference, only 24.12% of the papers described whether the prerequisites of the statistical tests were met, while 9.94% did not employ the statistical inference methods that should have been used. The main deficiencies in the application of statistics in chronic-disease-related observational research papers were as follows: lack of sample size determination, insufficient description of variable value assignment, statistical methods not introduced clearly or properly, and lack of consideration of the prerequisites for statistical inference.

  17. Comparison of body weight and gene expression in amelogenin null and wild-type mice.

    PubMed

    Li, Yong; Yuan, Zhi-An; Aragon, Melissa A; Kulkarni, Ashok B; Gibson, Carolyn W

    2006-05-01

    Amelogenin (AmelX) null mice develop hypomineralized enamel lacking normal prism structure, but are healthy and fertile. Because these mice are smaller than wild-type mice prior to weaning, we undertook a detailed analysis of the weight of mice and analyzed AmelX expression in non-dental tissues. Wild-type mice had a greater average weight each day within the 3-wk period. Using reverse transcription-polymerase chain reaction (RT-PCR), products of approximately 200 bp in size were generated from wild-type teeth, brain, eye, and calvariae. DNA sequence analysis of RT-PCR products from calvariae indicated that the small amelogenin leucine-rich amelogenin peptide (LRAP), both with and without exon 4, was expressed. No products were obtained from any of the samples from the AmelX null mice. We also isolated mRNAs that included AmelX exons 8 and 9, and identified a duplication within the murine AmelX gene with 91% homology. Our results add additional support to the hypothesis that amelogenins are multifunctional proteins, with potential roles in non-ameloblasts and in non-mineralizing tissues during development. The smaller size of AmelX null mice could potentially be explained by the lack of LRAP expression in some of these tissues, leading to a delay in development.

  18. The contrasting nature of woody plant species in different neotropical forest biomes reflects differences in ecological stability.

    PubMed

    Pennington, R Toby; Lavin, Matt

    2016-04-01

    A fundamental premise of this review is that distinctive phylogenetic and biogeographic patterns in clades endemic to different major biomes illuminate the evolutionary process. In seasonally dry tropical forests (SDTFs), phylogenies are geographically structured and multiple individuals representing single species coalesce. This pattern of monophyletic species, coupled with their old species stem ages, is indicative of maintenance of small effective population sizes over evolutionary timescales, which suggests that SDTF is difficult to immigrate into because of persistent resident lineages adapted to a stable, seasonally dry ecology. By contrast, lack of coalescence in conspecific accessions of abundant and often widespread species is more frequent in rain forests and is likely to reflect large effective population sizes maintained over huge areas by effective seed and pollen flow. Species nonmonophyly, young species stem ages and lack of geographical structure in rain forest phylogenies may reflect more widespread disturbance by drought and landscape evolution causing resident mortality that opens up greater opportunities for immigration and speciation. We recommend full species sampling and inclusion of multiple accessions representing individual species in phylogenies to highlight nonmonophyletic species, which we predict will be frequent in rain forest and savanna, and which represent excellent case studies of incipient speciation. © 2015 The Authors. New Phytologist © 2015 New Phytologist Trust.

  19. Lack of Association Between Maternal or Neonatal Vitamin D Status and Risk of Childhood Type 1 Diabetes: A Scandinavian Case-Cohort Study.

    PubMed

    Thorsen, Steffen U; Mårild, Karl; Olsen, Sjurdur F; Holst, Klaus K; Tapia, German; Granström, Charlotta; Halldorsson, Thorhallur I; Cohen, Arieh S; Haugen, Margaretha; Lundqvist, Marika; Skrivarhaug, Torild; Njølstad, Pål R; Joner, Geir; Magnus, Per; Størdal, Ketil; Svensson, Jannet; Stene, Lars C

    2018-06-01

    Studies on vitamin D status during pregnancy and risk of type 1 diabetes mellitus (T1D) lack consistency and are limited by small sample sizes or single measures of 25-hydroxyvitamin D (25(OH)D). We investigated whether average maternal 25(OH)D plasma concentrations during pregnancy are associated with risk of childhood T1D. In a case-cohort design, we identified 459 children with T1D and a random sample (n = 1,561) from the Danish National Birth Cohort (n = 97,127) and Norwegian Mother and Child Cohort Study (n = 113,053). Participants were born between 1996 and 2009. The primary exposure was the estimated average 25(OH)D concentration, based on serial samples from the first trimester until delivery and on umbilical cord plasma. We estimated hazard ratios using weighted Cox regression adjusting for multiple confounders. The adjusted hazard ratio for T1D per 10-nmol/L increase in the estimated average 25(OH)D concentration was 1.00 (95% confidence interval: 0.90, 1.10). Results were consistent in both cohorts, in multiple sensitivity analyses, and when we analyzed mid-pregnancy or cord blood separately. In conclusion, our large study demonstrated that normal variation in maternal or neonatal 25(OH)D is unlikely to have a clinically important effect on risk of childhood T1D.

  20. Increasing consistency of disease biomarker prediction across datasets.

    PubMed

    Chikina, Maria D; Sealfon, Stuart C

    2014-01-01

    Microarray studies with human subjects often have limited sample sizes which hampers the ability to detect reliable biomarkers associated with disease and motivates the need to aggregate data across studies. However, human gene expression measurements may be influenced by many non-random factors such as genetics, sample preparations, and tissue heterogeneity. These factors can contribute to a lack of agreement among related studies, limiting the utility of their aggregation. We show that it is feasible to carry out an automatic correction of individual datasets to reduce the effect of such 'latent variables' (without prior knowledge of the variables) in such a way that datasets addressing the same condition show better agreement once each is corrected. We build our approach on the method of surrogate variable analysis but we demonstrate that the original algorithm is unsuitable for the analysis of human tissue samples that are mixtures of different cell types. We propose a modification to SVA that is crucial to obtaining the improvement in agreement that we observe. We develop our method on a compendium of multiple sclerosis data and verify it on an independent compendium of Parkinson's disease datasets. In both cases, we show that our method is able to improve agreement across varying study designs, platforms, and tissues. This approach has the potential for wide applicability to any field where lack of inter-study agreement has been a concern.

  1. A feasibility study of colorectal cancer diagnosis via circulating tumor DNA derived CNV detection.

    PubMed

    Molparia, Bhuvan; Oliveira, Glenn; Wagner, Jennifer L; Spencer, Emily G; Torkamani, Ali

    2018-01-01

    Circulating tumor DNA (ctDNA) has shown great promise as a biomarker for early detection of cancer. However, due to the low abundance of ctDNA, especially at early stages, it is hard to detect at high accuracy while keeping sequencing costs low. Here we present a pilot-stage study to detect large-scale somatic copy number variations (CNVs), which contribute more molecules to the ctDNA signal than point mutations, via cell-free DNA sequencing. We show that it is possible to detect somatic CNVs in early-stage colorectal cancer (CRC) patients and subsequently discriminate them from normal samples. With 25 normal and 24 CRC samples, we achieve 100% specificity (95% confidence interval lower bound: 86%) and ~79% sensitivity (95% confidence interval: 63%-95%), though the performance should be considered with caution given the limited sample size. We report a lack of concordance between the CNVs detected via cfDNA sequencing and CNVs identified in parent tissue samples. However, recent findings suggest that a lack of concordance is expected for CNVs in CRC because of their sub-clonal nature. Finally, the CNVs we detect very likely contribute to cancer progression, as they lie in functionally important regions and have been shown to be associated with CRC specifically. This study paves the path for a larger-scale exploration of the potential of CNV detection for both diagnosis and prognosis of cancer.
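    The reported intervals are consistent with standard binomial interval formulas: a normal-approximation (Wald) interval for the sensitivity and an exact Clopper-Pearson lower bound for the 100% specificity. A minimal sketch, assuming 19/24 true positives and 25/25 true negatives (counts inferred from the reported percentages, not stated explicitly in the abstract):

```python
import math

def wald_ci(successes, n, z=1.96):
    """Normal-approximation (Wald) 95% CI for a binomial proportion."""
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p - half, p + half

def clopper_pearson_lower(n):
    """Exact two-sided 95% lower bound when all n trials succeed."""
    return 0.025 ** (1 / n)

# Sensitivity: assume 19 of 24 CRC samples detected (19/24 ~ 79%).
lo, hi = wald_ci(19, 24)
print(f"sensitivity 95% CI: {lo:.2f}-{hi:.2f}")   # ~0.63-0.95

# Specificity: 25 of 25 normal samples correctly classified.
print(f"specificity lower bound: {clopper_pearson_lower(25):.2f}")  # ~0.86
```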

  2. Microplastic distribution in global marine surface waters: results of an extensive citizen science study

    NASA Astrophysics Data System (ADS)

    Barrows, A.; Petersen, C.

    2017-12-01

    Plastic is a major pollutant throughout the world. The majority of the 322 million tons produced annually is used for single-use packaging. The features that make plastic an attractive packaging material (cheap, lightweight, and durable) also make it a common and persistent pollutant. There is a growing body of research on microplastic, particles less than 5 mm in size. Microfibers are the most common microplastic in the marine environment. Global estimates of marine microplastic surface concentrations are based on relatively small sample sizes compared to the vast geographic scale of the ocean. Microplastic residence time and movement along the coast and sea surface outside of the gyres are still not well researched. This five-year project utilized global citizen scientists to collect 1,628 1-liter surface grab samples in every major ocean. The Arctic and Southern Oceans contained the highest average number of particles per liter of surface water. Open-ocean samples (further than 12 nm from land, n = 686) contained a higher particle average (17 pieces L-1) than coastal samples (n = 723; 6 pieces L-1). Particles were predominantly 100 µm to 1.5 mm in length (77%), smaller than what has been captured in the majority of surface studies. Utilizing citizen scientists to collect data both in fairly accessible regions of the world and in hard-to-reach, and therefore undersampled, areas provides a wider perspective on global microplastics occurrence. Our findings confirm global microplastic accumulation zone model predictions. The open ocean and poles have sequestered and trapped plastic for over half a century, showing that not only plastics but also anthropogenic fibers are polluting the environment. Continuing to fill knowledge gaps on microplastic shape, size, and color in remote ocean areas will drive more accurate oceanographic models of plastic accumulation zones. Incorporation of smaller-sized particles in these models, which has previously been lacking, will help to better understand the potential fate and transformation of microplastic and anthropogenic particles in the marine environment.

  3. An evaluation of the ELT-8 hematology analyzer.

    PubMed

    Raik, E; McPherson, J; Barton, L; Hewitt, B S; Powell, E G; Gordon, S

    1982-04-01

    The ELT-8 Hematology Analyzer is a fully automated cell counter which utilizes laser light scattering and hydrodynamic focusing to provide an 8-parameter whole blood count. The instrument consists of a sample handler with ticket printer and a data handler with visual display unit. It accepts 100 microliter samples of venous or capillary blood and prints the values for WCC, RCC, Hb, Hct, MCV, MCH, MCHC and platelet count on to a standard result card. All operational and quality control functions, including graphic display of relative cell size distribution, can be obtained from the visual display unit and can also be printed as a permanent record if required. In a limited evaluation of the ELT-8, precision, linearity, accuracy, lack of sample carry-over and user acceptance were excellent. Reproducible values were obtained for all parameters after overnight storage of samples. Reagent usage and running costs were lower than for the Coulter S and the Coulter S Plus. The ease of processing capillary samples was considered to be a major advantage. The histograms served to alert the operator to a number of abnormalities, some of which were clinically significant.

  4. Evaluating change in bruise colorimetry and the effect of subject characteristics over time.

    PubMed

    Scafide, Katherine R N; Sheridan, Daniel J; Campbell, Jacquelyn; Deleon, Valerie B; Hayat, Matthew J

    2013-09-01

    Forensic clinicians are routinely asked to estimate the age of cutaneous bruises. Unfortunately, existing research on noninvasive methods to date bruises has been mostly limited to relatively small, homogeneous samples or cross-sectional designs. The purpose of this prospective, foundational study was to examine change in bruise colorimetry over time and evaluate the effects of bruise size, skin color, gender, and local subcutaneous fat on that change. Bruises were created by a controlled application of a paintball pellet to 103 adult, healthy volunteers. Daily colorimetry measures were obtained for four consecutive days using the Minolta Chroma-meter(®). The sample was nearly equal by gender and skin color (light, medium, dark). Analysis included general linear mixed modeling (GLMM). Change in bruise colorimetry over time was significant for all three color parameters (L*a*b*), the most notable changes being the decrease in red (a*) and increase in yellow (b*) starting at 24 h. Skin color was a significant predictor for all three colorimetry values, but gender and subcutaneous fat levels were not. Bruise size was a significant predictor and moderator and may have accounted for the lack of effect of gender or subcutaneous fat. Study results demonstrated the ability to model the change in bruise colorimetry over time in a diverse sample of healthy adults. Multiple factors, including skin color and bruise size, must be considered when assessing bruise color in relation to its age. This study supports the need for further research that could build the science to allow more accurate bruise age estimations.

  5. High spatial variation in population size and symbiotic performance of Rhizobium leguminosarum bv. trifolii with white clover in New Zealand pasture soils.

    PubMed

    Wakelin, Steven; Tillard, Guyléne; van Ham, Robert; Ballard, Ross; Farquharson, Elizabeth; Gerard, Emily; Geurts, Rene; Brown, Matthew; Ridgway, Hayley; O'Callaghan, Maureen

    2018-01-01

    Biological nitrogen fixation through the legume-rhizobia symbiosis is important for sustainable pastoral production. In New Zealand, the most widespread and valuable symbiosis occurs between white clover (Trifolium repens L.) and Rhizobium leguminosarum bv. trifolii (Rlt). As variation in the population size (determined by most probable number assays; MPN) and effectiveness of N-fixation (symbiotic potential; SP) of Rlt in soils may affect white clover performance, the extent of variation in these properties was examined at three different spatial scales: (1) from 26 sites across New Zealand, (2) at farm-wide scale, and (3) within single fields. Overall, Rlt populations ranged from 95 to >1 x 10^8 per g soil, with variation similar at the three spatial scales assessed. For almost all samples, there was no relationship between rhizobia population size and the ability of the population to fix N during legume symbiosis (SP). Compared with the commercial inoculant strain, the SP of soils ranged from 14 to 143% efficacy. The N-fixing ability of rhizobia populations varied more between samples collected from within a single hill country field (0.8 ha) than between 26 samples collected from diverse locations across New Zealand. Correlations between SP and calcium and aluminium content were found at all sites, except within a dairy farm field. Given the general lack of association between SP and MPN, and the high spatial variability of SP at single-field scale, provision of advice for treating legume seed with rhizobia based on field-average MPN counts needs to be carefully considered.

  6. High spatial variation in population size and symbiotic performance of Rhizobium leguminosarum bv. trifolii with white clover in New Zealand pasture soils

    PubMed Central

    Tillard, Guyléne; van Ham, Robert; Ballard, Ross; Farquharson, Elizabeth; Gerard, Emily; Geurts, Rene; Brown, Matthew; Ridgway, Hayley; O’Callaghan, Maureen

    2018-01-01

    Biological nitrogen fixation through the legume-rhizobia symbiosis is important for sustainable pastoral production. In New Zealand, the most widespread and valuable symbiosis occurs between white clover (Trifolium repens L.) and Rhizobium leguminosarum bv. trifolii (Rlt). As variation in the population size (determined by most probable number assays; MPN) and effectiveness of N-fixation (symbiotic potential; SP) of Rlt in soils may affect white clover performance, the extent of variation in these properties was examined at three different spatial scales: (1) from 26 sites across New Zealand, (2) at farm-wide scale, and (3) within single fields. Overall, Rlt populations ranged from 95 to >1 x 10^8 per g soil, with variation similar at the three spatial scales assessed. For almost all samples, there was no relationship between rhizobia population size and the ability of the population to fix N during legume symbiosis (SP). Compared with the commercial inoculant strain, the SP of soils ranged from 14 to 143% efficacy. The N-fixing ability of rhizobia populations varied more between samples collected from within a single hill country field (0.8 ha) than between 26 samples collected from diverse locations across New Zealand. Correlations between SP and calcium and aluminium content were found at all sites, except within a dairy farm field. Given the general lack of association between SP and MPN, and the high spatial variability of SP at single-field scale, provision of advice for treating legume seed with rhizobia based on field-average MPN counts needs to be carefully considered. PMID:29489845

  7. Fungal Fragments in Moldy Houses: A Field Study in Homes in New Orleans and Southern Ohio

    PubMed Central

    Reponen, Tiina; Seo, Sung-Chul; Grimsley, Faye; Lee, Taekhee; Crawford, Carlos; Grinshpun, Sergey A.

    2007-01-01

    Smaller-sized fungal fragments (<1 μm) may contribute to mold-related health effects. Previous laboratory-based studies have shown that the number concentration of fungal fragments can be up to 500 times higher than that of fungal spores, but this has not yet been confirmed in a field study due to a lack of suitable methodology. We have recently developed a field-compatible method for the sampling and analysis of airborne fungal fragments. The new methodology was utilized for characterizing fungal fragment exposures in mold-contaminated homes selected in New Orleans, Louisiana and Southern Ohio. Airborne fungal particles were separated into three distinct size fractions: (i) >2.25 μm (spores); (ii) 1.05–2.25 μm (mixture); and (iii) <1.0 μm (submicrometer-sized fragments). Samples were collected in five homes in summer and winter and analyzed for (1→3)-β-D-glucan. The total (1→3)-β-D-glucan varied from 0.2 to 16.0 ng m−3. The ratio of (1→3)-β-D-glucan mass in the fragment size fraction to that in the spore size fraction (F/S) varied from 0.011 to 2.163. The mass ratio was higher in winter (average = 1.017) than in summer (0.227), coinciding with a lower relative humidity in the winter. Assuming a mass-based F/S ratio = 1 and the spore size = 3 μm, the corresponding number-based F/S ratio (fragment number/spore number) would be 10^3 and 10^6 for fragment sizes of 0.3 and 0.03 μm, respectively. These results indicate that the actual (field) contribution of fungal fragments to the overall exposure may be very high, even much greater than that estimated in our earlier laboratory-based studies. PMID:19050738
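    The conversion from a mass-based to a number-based F/S ratio follows from volume scaling: for spheres of equal density, particle mass scales with the cube of the diameter, so at equal total mass the particle number scales with the inverse cube of the diameter ratio. A minimal sketch of that arithmetic (the function name is ours; the spherical-particle, equal-density assumption mirrors the abstract's):

```python
def number_ratio(mass_ratio, d_spore_um, d_fragment_um):
    """Number-based fragment/spore ratio implied by a mass-based ratio,
    assuming spherical particles of equal density (mass ~ diameter**3)."""
    return mass_ratio * (d_spore_um / d_fragment_um) ** 3

# Mass-based F/S ratio = 1, spore diameter = 3 um:
print(number_ratio(1, 3, 0.3))   # ~1e3 for 0.3-um fragments
print(number_ratio(1, 3, 0.03))  # ~1e6 for 0.03-um fragments
```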

  8. Efficacy and safety of Suanzaoren decoction for primary insomnia: a systematic review of randomized controlled trials

    PubMed Central

    2013-01-01

    Background Insomnia is a widespread human health problem, but conventional therapies currently available have limitations. Suanzaoren decoction (SZRD) is a well-known classic Chinese herbal prescription that has been used to treat insomnia for more than a thousand years. The objective of this study was to evaluate the efficacy and safety of SZRD for insomnia. Methods A systematic literature search of 6 databases was performed up to July 2012 to identify randomized controlled trials (RCTs) of SZRD for insomniac patients. The methodological quality of the RCTs was assessed independently using the Cochrane Handbook for Systematic Reviews of Interventions. Results Twelve RCTs with a total of 1376 adult participants were identified. The methodological quality of all included trials scored no more than 3/8. The majority of the RCTs concluded that SZRD was significantly more effective than benzodiazepines for treating insomnia. Despite these positive outcomes, the reviewed studies had many methodological shortcomings, including insufficient information about randomization generation, absence of allocation concealment, lack of blinding and placebo controls, absence of intention-to-treat analysis, lack of follow-up, selective publishing and reporting, and small sample sizes. Clinical heterogeneity in diagnosis, intervention, control, and outcome measures was also noted. Only 3 trials reported adverse events; the other 9 trials provided no safety information. Conclusions Despite the apparent positive findings, there is insufficient evidence to support the efficacy of SZRD for insomnia because of the poor methodological quality and small number of included trials. SZRD seems generally safe, but there is insufficient evidence to draw conclusions on safety because few studies reported adverse events. Further large, well-designed RCTs are needed. PMID:23336848

  9. Differential Item Functioning (DIF) among Spanish-Speaking English Language Learners (ELLs) in State Science Tests

    NASA Astrophysics Data System (ADS)

    Ilich, Maria O.

    Psychometricians and test developers evaluate standardized tests for potential bias against groups of test-takers by using differential item functioning (DIF). English language learners (ELLs) are a diverse group of students whose native language is not English. While they are still learning the English language, they must take their standardized tests for their school subjects, including science, in English. In this study, linguistic complexity was examined as a possible source of DIF that may result in test scores that confound science knowledge with a lack of English proficiency among ELLs. Two years of fifth-grade state science tests were analyzed for evidence of DIF using two DIF methods, Simultaneous Item Bias Test (SIBTest) and logistic regression. The tests presented a unique challenge in that the test items were grouped together into testlets: groups of items referring to a scientific scenario to measure knowledge of different science content or skills. Very large samples of 10,256 students in 2006 and 13,571 students in 2007 were examined. Half of each sample was composed of Spanish-speaking ELLs; the rest were native English speakers. The two DIF methods were in agreement about the items that favored non-ELLs and the items that favored ELLs. Logistic regression effect sizes were all negligible, while SIBTest flagged items with low to high DIF. A decrease in socioeconomic status and Spanish-speaking ELL diversity may have led to inconsistent SIBTest effect sizes for items used in both testing years. The DIF results for the testlets suggested that ELLs lacked sufficient opportunity to learn science content. The DIF results further suggest that constructed-response test items requiring the student to draw a conclusion about a scientific investigation or to plan a new investigation tended to favor ELLs.

  10. C-Sphere Strength-Size Scaling in a Bearing-Grade Silicon Nitride

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wereszczak, Andrew A; Jadaan, Osama M.; Kirkland, Timothy Philip

    2008-01-01

    A C-sphere specimen geometry was used to determine the failure strength distributions of a commercially available bearing-grade silicon nitride (Si3N4) having ball diameters of 12.7 and 25.4 mm. Strengths for both diameters were determined using the combination of failure load, C-sphere geometry, and finite element analysis and fitted using two-parameter Weibull distributions. Effective areas of both diameters were estimated as a function of Weibull modulus and used to explore whether the strength distributions predictably strength-scaled between each size. They did not. That statistical observation suggested that the same flaw type did not limit the strength of both ball diameters, indicating a lack of material homogeneity between the two sizes. Optical fractography confirmed that: it showed there were two distinct strength-limiting flaw types in both ball diameters, that one flaw type was always associated with lower-strength specimens, and that a significantly higher fraction of the 25.4-mm-diameter C-sphere specimens failed from it. Predictable strength-size scaling would therefore not result, because these flaw types were not homogeneously distributed and sampled in both C-sphere geometries.
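    For context, weakest-link (two-parameter Weibull) statistics predict that characteristic strength scales between effective areas A1 and A2 as sigma2/sigma1 = (A1/A2)^(1/m), where m is the Weibull modulus; the reported failure to strength-scale means the measured distributions violated this relation. A sketch with purely illustrative numbers (the strength, areas, and modulus below are hypothetical, not values from the study):

```python
def weibull_scaled_strength(sigma1, area1, area2, m):
    """Predict characteristic strength at effective area area2 from the
    strength at area1, for Weibull modulus m (weakest-link area scaling)."""
    return sigma1 * (area1 / area2) ** (1.0 / m)

# Hypothetical: 800 MPa characteristic strength for the small ball,
# 4x the effective area for the large ball, Weibull modulus m = 10.
sigma2 = weibull_scaled_strength(800.0, 1.0, 4.0, 10)
print(f"{sigma2:.0f} MPa")  # larger effective area -> lower predicted strength
```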

  11. Microsatellite genetic distances between oceanic populations of the humpback whale (Megaptera novaeangliae).

    PubMed

    Valsecchi, E; Palsbøll, P; Hale, P; Glockner-Ferrari, D; Ferrari, M; Clapham, P; Larsen, F; Mattila, D; Sears, R; Sigurjonsson, J; Brown, M; Corkeron, P; Amos, B

    1997-04-01

    Mitochondrial DNA haplotypes of humpback whales show strong segregation between oceanic populations and between feeding grounds within oceans, but this highly structured pattern does not exclude the possibility of extensive nuclear gene flow. Here we present allele frequency data for four microsatellite loci typed across samples from four major oceanic regions: the North Atlantic (two mitochondrially distinct populations), the North Pacific, and two widely separated Antarctic regions, East Australia and the Antarctic Peninsula. Allelic diversity is a little greater in the two Antarctic samples, probably indicating historically greater population sizes. Population subdivision was examined using a wide range of measures, including Fst, various alternative forms of Slatkin's Rst, Goldstein and colleagues' delta mu, and a Monte Carlo approximation to Fisher's exact test. The exact test revealed significant heterogeneity in all but one of the pairwise comparisons between geographically adjacent populations, including the comparison between the two North Atlantic populations, suggesting that gene flow between oceans is minimal and that dispersal patterns may sometimes be restricted even in the absence of obvious barriers, such as land masses, warm water belts, and antitropical migration behavior. The only comparison where heterogeneity was not detected was the one between the two Antarctic population samples. It is unclear whether failure to find a difference here reflects gene flow between the regions or merely lack of statistical power arising from the small size of the Antarctic Peninsula sample. Our comparison between measures of population subdivision revealed major discrepancies between methods, with little agreement about which populations were most and least separated. We suggest that unbiased Rst (URst, see Goodman 1995) is currently the most reliable statistic, probably because, unlike the other methods, it allows for unequal sample sizes. 
However, in view of the fact that these alternative measures often contradict one another, we urge caution in the use of microsatellite data to quantify genetic distance.

  12. A novel synthesis of a new thorium (IV) metal organic framework nanostructure with well controllable procedure through ultrasound assisted reverse micelle method.

    PubMed

    Sargazi, Ghasem; Afzali, Daryoush; Mostafavi, Ali

    2018-03-01

    Reverse micelle (RM) and ultrasound-assisted reverse micelle (UARM) methods were applied to the synthesis of novel thorium nanostructures as metal organic frameworks (MOFs). Characterization with different techniques showed that the Th-MOF sample synthesized by the UARM method had higher thermal stability (354°C), smaller mean particle size (27 nm), and larger surface area (2.02×10^3 m^2/g). Besides, in this novel approach, the nucleation of crystals was found to occur in a shorter time. The synthesis parameters of the UARM method were designed by a 2^(k-1) factorial design and the process control was systematically studied using analysis of variance (ANOVA) and response surface methodology (RSM). ANOVA showed that various factors, including surfactant content, ultrasound duration, temperature, ultrasound power, and interactions between these factors, considerably affected different properties of the Th-MOF samples. According to the 2^(k-1) factorial design, the determination coefficient (R^2) of the model is 0.999, with no significant lack of fit. The F value of 5432 implied that the model was highly significant and adequate to represent the relationship between the responses and the independent variables, and the large adjusted R^2 value indicates a good relationship between the experimental data and the fitted model. RSM predicted that it would be possible to produce Th-MOF samples with a thermal stability of 407°C, mean particle size of 13 nm, and surface area of 2.20×10^3 m^2/g. The mechanism controlling the Th-MOF properties was considerably different from the conventional mechanisms. Moreover, the MOF sample synthesized using UARM exhibited higher capacity for nitrogen adsorption as a result of larger pore sizes. It is believed that the UARM method and the systematic studies developed in the present work can be considered a new strategy for application to other nanoscale MOF samples. Copyright © 2017 Elsevier B.V. All rights reserved.
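    A 2^(k-1) design is a half-fraction of a full two-level factorial: the k-th factor's column is set to the product of the other k-1 columns, halving the run count at the cost of aliasing. A minimal sketch of generating such a design (the factor count is illustrative, not the paper's actual design matrix):

```python
from itertools import product
from math import prod

def half_fraction(k):
    """2^(k-1) half-fraction of a 2^k two-level design: full factorial on
    the first k-1 factors (-1/+1), k-th factor aliased to their product."""
    return [levels + (prod(levels),)
            for levels in product((-1, 1), repeat=k - 1)]

# Five two-level factors (e.g., surfactant content, ultrasound duration,
# temperature, ultrasound power, plus one more) in 16 runs instead of 32:
design = half_fraction(5)
print(len(design))  # 16
```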

  13. Failure to Report Effect Sizes: The Handling of Quantitative Results in Published Health Education and Behavior Research.

    PubMed

    Barry, Adam E; Szucs, Leigh E; Reyes, Jovanni V; Ji, Qian; Wilson, Kelly L; Thompson, Bruce

    2016-10-01

    Given the American Psychological Association's strong recommendation to always report effect sizes in research, scholars have a responsibility to provide complete information regarding their findings. The purposes of this study were to (a) determine the frequencies with which different effect sizes were reported in published, peer-reviewed articles in health education, promotion, and behavior journals and (b) discuss implications for reporting effect size in social science research. Across a 4-year time period (2010-2013), 1,950 peer-reviewed published articles were examined from the following six health education and behavior journals: American Journal of Health Behavior, American Journal of Health Promotion, Health Education & Behavior, Health Education Research, Journal of American College Health, and Journal of School Health. Quantitative features from eligible manuscripts were documented using Qualtrics online survey software. Of the 1,245 articles in the final sample that reported quantitative data analyses, approximately 47.9% (n = 597) reported an effect size. While 16 unique types of effect size were reported across all included journals, many were reported with little frequency in most journals. Overall, odds ratio/adjusted odds ratio (n = 340, 50.1%), Pearson r/r^2 (n = 162, 23.8%), and eta squared/partial eta squared (n = 46, 7.2%) were the most frequently used effect sizes. Quality research practice requires both testing statistical significance and reporting effect size. However, our study shows that a substantial portion of the published literature in health education and behavior lacks consistent reporting of effect size. © 2016 Society for Public Health Education.
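    Since the odds ratio was the most frequently reported effect size, a minimal sketch of computing one from a 2x2 table, with a Wald confidence interval on the log scale (the counts below are hypothetical, chosen only for illustration):

```python
import math

def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table:
        exposed:   a events, b non-events
        unexposed: c events, d non-events"""
    return (a * d) / (b * c)

def or_95ci(a, b, c, d):
    """Wald 95% CI for the odds ratio, computed on the log scale."""
    log_or = math.log(odds_ratio(a, b, c, d))
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return math.exp(log_or - 1.96 * se), math.exp(log_or + 1.96 * se)

# Hypothetical counts: 30/70 events among exposed vs 15/85 among unexposed.
print(round(odds_ratio(30, 70, 15, 85), 2))  # 2.43
```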

  14. DNA vaccination with a plasmid encoding LACK-TSA fusion against Leishmania major infection in BALB/c mice.

    PubMed

    Maspi, N; Ghaffarifar, F; Sharifi, Z; Dalimi, A; Khademi, S Z

    2017-12-01

    Vaccination would be the most important strategy for the prevention and elimination of leishmaniasis. The aim of the present study was to compare the immune responses induced following DNA vaccination with LACK (Leishmania analogue of the receptor kinase C) or TSA (Thiol-specific antioxidant) genes alone or a LACK-TSA fusion against cutaneous leishmaniasis (CL). Cellular and humoral immune responses were evaluated before and after challenge with Leishmania major (L. major). In addition, the mean lesion size was also measured from the 3rd week post-infection. All immunized mice showed partial immunity characterized by higher interferon (IFN)-γ and immunoglobulin G (IgG2a) levels compared to control groups (p<0.05). IFN-γ/interleukin (IL)-4 and IgG2a/IgG1 ratios demonstrated the highest IFN-γ and IgG2a levels in the group receiving the LACK-TSA fusion. Mean lesion sizes were reduced significantly in all immunized mice compared with control groups at the 7th week post-infection (p<0.05). In addition, there was a significant reduction in mean lesion size in the LACK-TSA and TSA groups compared with the LACK group after challenge (p<0.05). In the present study, DNA immunization promoted a Th1 immune response and confirmed the previous observations on the immunogenicity of LACK and TSA antigens against CL. Furthermore, this study demonstrated that a bivalent vaccine can induce stronger immune responses and protection against infectious challenge with L. major.

  15. Pandoraviruses: amoeba viruses with genomes up to 2.5 Mb reaching that of parasitic eukaryotes.

    PubMed

    Philippe, Nadège; Legendre, Matthieu; Doutre, Gabriel; Couté, Yohann; Poirot, Olivier; Lescot, Magali; Arslan, Defne; Seltzer, Virginie; Bertaux, Lionel; Bruley, Christophe; Garin, Jérome; Claverie, Jean-Michel; Abergel, Chantal

    2013-07-19

    Ten years ago, the discovery of Mimivirus, a virus infecting Acanthamoeba, initiated a reappraisal of the upper limits of the viral world, both in terms of particle size (>0.7 micrometers) and genome complexity (>1000 genes), dimensions typical of parasitic bacteria. The diversity of these giant viruses (the Megaviridae) was assessed by sampling a variety of aquatic environments and their associated sediments worldwide. We report the isolation of two giant viruses, one off the coast of central Chile, the other from a freshwater pond near Melbourne (Australia), without morphological or genomic resemblance to any previously defined virus families. Their micrometer-sized ovoid particles contain DNA genomes of at least 2.5 and 1.9 megabases, respectively. These viruses are the first members of the proposed "Pandoravirus" genus, a term reflecting their lack of similarity with previously described microorganisms and the surprises expected from their future study.

  16. Hot super-Earths stripped by their host stars.

    PubMed

    Lundkvist, M S; Kjeldsen, H; Albrecht, S; Davies, G R; Basu, S; Huber, D; Justesen, A B; Karoff, C; Silva Aguirre, V; Van Eylen, V; Vang, C; Arentoft, T; Barclay, T; Bedding, T R; Campante, T L; Chaplin, W J; Christensen-Dalsgaard, J; Elsworth, Y P; Gilliland, R L; Handberg, R; Hekker, S; Kawaler, S D; Lund, M N; Metcalfe, T S; Miglio, A; Rowe, J F; Stello, D; Tingley, B; White, T R

    2016-04-11

    Simulations predict that hot super-Earth-sized exoplanets can have their envelopes stripped by photoevaporation, which would manifest as a dearth of such exoplanets in the observed population. However, this absence has escaped a firm detection. Here we demonstrate, using asteroseismology on a sample of exoplanets and exoplanet candidates observed during the Kepler mission, that while there is an abundance of super-Earth-sized exoplanets with low incident fluxes, none are found with high incident fluxes. We do not find any exoplanets with radii between 2.2 and 3.8 Earth radii with incident flux above 650 times the incident flux on Earth. This gap in the exoplanet population is explained by evaporation of volatile elements and thus supports the predictions. The confirmation of a hot-super-Earth desert caused by evaporation will add an important constraint on simulations of planetary systems, since they must be able to reproduce the dearth of close-in super-Earths.

  17. A portrait of a sucker using landscape genetics: how colonization and life history undermine the idealized dendritic metapopulation.

    PubMed

    Salisbury, Sarah J; McCracken, Gregory R; Keefe, Donald; Perry, Robert; Ruzzante, Daniel E

    2016-09-01

    Dendritic metapopulations have been attributed unique properties by in silico studies, including an elevated genetic diversity relative to a panmictic population of equal total size. These predictions have not been rigorously tested in nature, nor has there been full consideration of the interacting effects among contemporary landscape features, colonization history and life history traits of the target species. We tested for the effects of dendritic structure as well as the relative importance of life history, environmental barriers and historical colonization on the neutral genetic structure of a longnose sucker (Catostomus catostomus) metapopulation in the Kogaluk watershed of northern Labrador, Canada. Samples were collected from eight lakes, genotyped with 17 microsatellites, and aged using opercula. Lakes varied in differentiation, historical and contemporary connectivity, and life history traits. Isolation by distance was detected only by removing two highly genetically differentiated lakes, suggesting a lack of migration-drift equilibrium and the lingering influence of historical factors on genetic structure. Bayesian analyses supported colonization via the Kogaluk's headwaters. The historical concentration of genetic diversity in headwaters inferred by this result was supported by high historical and contemporary effective sizes of the headwater lake, T-Bone. Alternatively, reduced allelic richness in headwaters confirmed the dendritic structure's influence on gene flow, but this did not translate to an elevated metapopulation effective size. A lack of equilibrium and upstream migration may have dampened the effects of dendritic structure. We suggest that interacting historical and contemporary factors prevent the achievement of the idealized traits of a dendritic metapopulation in nature. © 2016 John Wiley & Sons Ltd.

  18. Analysis of the chemical composition of ultrafine particles from two domestic solid biomass fired room heaters under simulated real-world use

    NASA Astrophysics Data System (ADS)

    Ozgen, Senem; Becagli, Silvia; Bernardoni, Vera; Caserini, Stefano; Caruso, Donatella; Corbella, Lorenza; Dell'Acqua, Manuela; Fermo, Paola; Gonzalez, Raquel; Lonati, Giovanni; Signorini, Stefano; Tardivo, Ruggero; Tosi, Elisa; Valli, Gianluigi; Vecchi, Roberta; Marinovich, Marina

    2017-02-01

    Two common types of wood (beech and fir) were burned in commercial pellet (11.1 kW) and wood (8.2 kW) stoves following a combustion cycle simulating the behavior of a real-world user. Ultrafine particulate matter (UFP, dp < 100 nm) was sampled with three parallel multistage impactors and analyzed for metals, main water-soluble ions, anhydrosugars, total carbon, and PAH content. The number concentration and size distribution were also measured with a fourth multistage impactor. UFP mass emission factors averaged 424 mg/kg fuel across all tested stove and wood type (fir, beech) combinations except beech log burning in the wood stove (838 mg/kg fuel). Compositional differences were observed between pellet and wood UFP samples: high TC levels characterize wood log combustion, while potassium salts are dominant in every pellet sample. Crucial aspects determining the UFP composition in the wood stove experiments are critical situations in terms of available oxygen (a lack or an excess of combustion air) and high temperatures, whereas for the automatically controlled pellet stove, local situations (e.g., hindered air-fuel mixing due to heaps of pellets on the burner pot) determine the emission levels and composition. Wood samples contain more potentially carcinogenic PAHs than pellet samples. Diagnostic ratios of PAH isomers and anhydrosugars, compiled from the experimental UFP data in the present study and compared to literature values proposed for emission source discrimination in atmospheric aerosol, extend an evaluation usually limited to larger particle size fractions to UFP as well.

  19. Chi-Squared Test of Fit and Sample Size-A Comparison between a Random Sample Approach and a Chi-Square Value Adjustment Method.

    PubMed

    Bergh, Daniel

    2015-01-01

    Chi-square statistics are commonly used for tests of fit of measurement models. Chi-square is also sensitive to sample size, which is why several approaches to handle large samples in test of fit analysis have been developed. One strategy to handle the sample size problem may be to adjust the sample size in the analysis of fit. An alternative is to adopt a random sample approach. The purpose of this study was to analyze and to compare these two strategies using simulated data. Given an original sample size of 21,000, for reductions of sample sizes down to the order of 5,000, the adjusted sample size function works as well as the random sample approach. In contrast, when applying adjustments to sample sizes of lower order, the adjustment function is less effective at approximating the chi-square value for an actual random sample of the relevant size. Hence, the fit is exaggerated and misfit underestimated using the adjusted sample size function. Although there are big differences in chi-square values between the two approaches at lower sample sizes, the inferences based on the p-values may be the same.
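The two strategies Bergh compares can be sketched numerically. The abstract does not give the exact adjustment function used, so the linear (n − 1) rescaling of the chi-square statistic below is only an assumed, commonly used variant; the category probabilities and the degree of misfit are likewise invented for illustration:

```python
import random

def chisq_gof(counts, expected_probs):
    """Pearson chi-square goodness-of-fit statistic."""
    n = sum(counts)
    return sum((o - n * p) ** 2 / (n * p)
               for o, p in zip(counts, expected_probs))

def adjusted_chisq(chisq_full, n_full, n_target):
    """Rescale a full-sample chi-square to a smaller nominal sample size
    (an assumed linear (n - 1) adjustment, not necessarily Bergh's)."""
    return chisq_full * (n_target - 1) / (n_full - 1)

random.seed(1)
model_probs = [0.5, 0.3, 0.2]      # distribution under test
true_probs = [0.48, 0.31, 0.21]    # population with slight misfit
N, n_small = 21_000, 5_000         # sizes taken from the abstract
data = random.choices(range(3), weights=true_probs, k=N)
counts_full = [data.count(k) for k in range(3)]
chi_full = chisq_gof(counts_full, model_probs)

# Strategy 1: rescale the full-sample statistic down to n = 5,000.
chi_adj = adjusted_chisq(chi_full, N, n_small)

# Strategy 2: recompute the statistic on an actual random subsample.
sub = random.sample(data, n_small)
counts_sub = [sub.count(k) for k in range(3)]
chi_sub = chisq_gof(counts_sub, model_probs)

print(f"full: {chi_full:.1f}  adjusted: {chi_adj:.1f}  subsample: {chi_sub:.1f}")
```

At this reduction (21,000 down to 5,000) the two strategies should track each other reasonably closely, consistent with the abstract's finding; larger reductions are where the adjustment is reported to diverge from an actual random sample.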

  20. A Noninvasive Method For In situ Determination of Mating Success in Female American Lobsters (Homarus americanus)

    PubMed Central

    Goldstein, Jason S; Pugh, Tracy L; Dubofsky, Elizabeth A; Lavalli, Kari L; Clancy, Michael; Watson, Winsor H

    2014-01-01

    Despite being one of the most productive fisheries in the Northwest Atlantic, much remains unknown about the natural reproductive dynamics of American lobsters. Recent work in exploited crustacean populations (crabs and lobsters) suggests that there are circumstances where mature females are unable to achieve their full reproductive potential due to sperm limitation. To examine this possibility in different regions of the American lobster fishery, a reliable and noninvasive method was developed for sampling large numbers of female lobsters at sea. This method involves inserting a blunt-tipped needle into the female's seminal receptacle to determine the presence or absence of a sperm plug and to withdraw a sample that can be examined for the presence of sperm. A series of control studies were conducted at the dock and in the laboratory to test the reliability of this technique. These efforts entailed sampling 294 female lobsters to confirm that the presence of a sperm plug was a reliable indicator of sperm within the receptacle and thus, mating. This paper details the methodology and the results obtained from a subset of the total females sampled. Of the 230 female lobsters sampled from George's Bank and Cape Ann, MA (size range = 71-145 mm in carapace length), 90.3% were positive for sperm. Potential explanations for the absence of sperm in some females include: immaturity (lack of physiological maturity), breakdown of the sperm plug after being used to fertilize a clutch of eggs, and lack of mating activity. The surveys indicate that this technique for examining the mating success of female lobsters is a reliable proxy that can be used in the field to document reproductive activity in natural populations. PMID:24561702

  1. A noninvasive method for in situ determination of mating success in female American lobsters (Homarus americanus).

    PubMed

    Goldstein, Jason S; Pugh, Tracy L; Dubofsky, Elizabeth A; Lavalli, Kari L; Clancy, Michael; Watson, Winsor H

    2014-02-07

    Despite being one of the most productive fisheries in the Northwest Atlantic, much remains unknown about the natural reproductive dynamics of American lobsters. Recent work in exploited crustacean populations (crabs and lobsters) suggests that there are circumstances where mature females are unable to achieve their full reproductive potential due to sperm limitation. To examine this possibility in different regions of the American lobster fishery, a reliable and noninvasive method was developed for sampling large numbers of female lobsters at sea. This method involves inserting a blunt-tipped needle into the female's seminal receptacle to determine the presence or absence of a sperm plug and to withdraw a sample that can be examined for the presence of sperm. A series of control studies were conducted at the dock and in the laboratory to test the reliability of this technique. These efforts entailed sampling 294 female lobsters to confirm that the presence of a sperm plug was a reliable indicator of sperm within the receptacle and thus, mating. This paper details the methodology and the results obtained from a subset of the total females sampled. Of the 230 female lobsters sampled from George's Bank and Cape Ann, MA (size range = 71-145 mm in carapace length), 90.3% were positive for sperm. Potential explanations for the absence of sperm in some females include: immaturity (lack of physiological maturity), breakdown of the sperm plug after being used to fertilize a clutch of eggs, and lack of mating activity. The surveys indicate that this technique for examining the mating success of female lobsters is a reliable proxy that can be used in the field to document reproductive activity in natural populations.

  2. Thermal conductivity measurements of particulate materials under Martian conditions

    NASA Technical Reports Server (NTRS)

    Presley, M. A.; Christensen, P. R.

    1993-01-01

    The mean particle diameter of surficial units on Mars has been approximated by applying thermal inertia determinations from the Mariner 9 Infrared Radiometer and the Viking Infrared Thermal Mapper data together with thermal conductivity measurements. Several studies have used this approximation to characterize surficial units and infer their nature and possible origin. Such interpretations are possible because previous measurements of the thermal conductivity of particulate materials have shown that particle size significantly affects thermal conductivity under martian atmospheric pressures. The transfer of thermal energy due to collisions of gas molecules is the predominant mechanism of thermal conductivity in porous systems for gas pressures above about 0.01 torr. At martian atmospheric pressures the mean free path of the gas molecules becomes greater than the effective distance over which conduction takes place between the particles. Gas molecules are then more likely to collide with the solid particles than with each other. The average heat transfer distance between particles, which is related to particle size, shape and packing, thus determines how fast heat will flow through a particulate material. The derived one-to-one correspondence of thermal inertia to mean particle diameter implies a certain homogeneity in the materials analyzed. Yet the samples used were often characterized by fairly wide ranges of particle sizes, with little information about the possible distribution of sizes within those ranges. Interpretation of thermal inertia data is further limited by the lack of data on other effects on the interparticle spacing relative to particle size, such as particle shape, bimodal or polymodal mixtures of grain sizes, and formation of salt cements between grains. To address these limitations and to provide a more comprehensive set of thermal conductivities vs. particle size, a linear heat source apparatus, similar to that of Cremers, was assembled to provide a means of measuring the thermal conductivity of particulate samples. In order to concentrate on the dependence of thermal conductivity on particle size, initial runs will use spherical glass beads that are precision sieved into relatively small size ranges and thoroughly washed.

  3. Brain Tumor Epidemiology: Consensus from the Brain Tumor Epidemiology Consortium (BTEC)

    PubMed Central

    Bondy, Melissa L.; Scheurer, Michael E.; Malmer, Beatrice; Barnholtz-Sloan, Jill S.; Davis, Faith G.; Il’yasova, Dora; Kruchko, Carol; McCarthy, Bridget J.; Rajaraman, Preetha; Schwartzbaum, Judith A.; Sadetzki, Siegal; Schlehofer, Brigitte; Tihan, Tarik; Wiemels, Joseph L.; Wrensch, Margaret; Buffler, Patricia A.

    2010-01-01

    Epidemiologists in the Brain Tumor Epidemiology Consortium (BTEC) have prioritized areas for further research. Although many risk factors have been examined over the past several decades, there are few consistent findings, possibly due to small sample sizes in individual studies and differences between studies in subjects, tumor types, and methods of classification. Individual studies have generally lacked sufficient sample size to examine interactions. A major priority based on available evidence and technologies includes expanding research in genetics and molecular epidemiology of brain tumors. BTEC has taken an active role in promoting research on understudied areas, such as pediatric brain tumors; the etiology of rare glioma subtypes, such as oligodendroglioma; and meningioma, which, though not uncommon, has only recently been systematically registered in the US. There is also a pressing need to bring more researchers, especially junior investigators, to study brain tumor epidemiology. However, relatively poor funding for brain tumor research has made it difficult to encourage careers in this area. We review the group's consensus on the current state of scientific findings and present a consensus on research priorities that identify the important areas the science should move to address. PMID:18798534

  4. A critical look at national monitoring programs for birds and other wildlife species

    USGS Publications Warehouse

    Sauer, J.R.; O'Shea, T.J.; Bogon, M.A.

    2003-01-01

    Concerns about declines in numerous taxa have created a great deal of interest in survey development. Because birds have traditionally been monitored by a variety of methods, bird surveys form natural models for development of surveys for other taxa. Here I suggest that most bird surveys are not appropriate models for survey design. Most lack important design components associated with estimation of population parameters at sample sites or with sampling over space, leading to estimates that may be biased. I discuss the limitations of national bird monitoring programs designed to monitor population size. Although these surveys are often analyzed, careful consideration must be given to factors that may bias estimates but that cannot be evaluated within the survey. Bird surveys with appropriate designs have generally been developed as part of management programs that have specific information needs. Experiences gained from bird surveys provide important information for development of surveys for other taxa, and statistical developments in estimation of population sizes from counts provide new approaches to overcoming the limitations evident in many bird surveys. Design of surveys is a collaborative effort, requiring input from biologists, statisticians, and the managers who will use the information from the surveys.

  5. In-house validation of a method for determination of silver nanoparticles in chicken meat based on asymmetric flow field-flow fractionation and inductively coupled plasma mass spectrometric detection.

    PubMed

    Loeschner, Katrin; Navratilova, Jana; Grombe, Ringo; Linsinger, Thomas P J; Købler, Carsten; Mølhave, Kristian; Larsen, Erik H

    2015-08-15

    Nanomaterials are increasingly used in food production and packaging, and validated methods for detection of nanoparticles (NPs) in foodstuffs need to be developed both for regulatory purposes and product development. Asymmetric flow field-flow fractionation with inductively coupled plasma mass spectrometric detection (AF4-ICP-MS) was applied for quantitative analysis of silver nanoparticles (AgNPs) in a chicken meat matrix following enzymatic sample preparation. For the first time an analytical validation of nanoparticle detection in a food matrix by AF4-ICP-MS has been carried out, and the results showed repeatable and intermediately reproducible determination of AgNP mass fraction and size. The findings demonstrated the potential of AF4-ICP-MS for quantitative analysis of NPs in complex food matrices for use in food monitoring and control. The accurate determination of AgNP size distribution remained challenging due to the lack of certified size standards. Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. Modular integration of electronics and microfluidic systems using flexible printed circuit boards.

    PubMed

    Wu, Amy; Wang, Lisen; Jensen, Erik; Mathies, Richard; Boser, Bernhard

    2010-02-21

    Microfluidic systems offer an attractive alternative to conventional wet chemical methods with benefits including reduced sample and reagent volumes, shorter reaction times, high-throughput, automation, and low cost. However, most present microfluidic systems rely on external means to analyze reaction products. This substantially adds to the size, complexity, and cost of the overall system. Electronic detection based on sub-millimetre size integrated circuits (ICs) has been demonstrated for a wide range of targets including nucleic and amino acids, but deployment of this technology to date has been limited due to the lack of a flexible process to integrate these chips within microfluidic devices. This paper presents a modular and inexpensive process to integrate ICs with microfluidic systems based on standard printed circuit board (PCB) technology to assemble the independently designed microfluidic and electronic components. The integrated system can accommodate multiple chips of different sizes bonded to glass or PDMS microfluidic systems. Since IC chips and flex PCB manufacturing and assembly are industry standards with low cost, the integrated system is economical for both laboratory and point-of-care settings.

  7. Age differences in the use of serving size information on food labels: numeracy or attention?

    PubMed

    Miller, Lisa M Soederberg; Applegate, Elizabeth; Beckett, Laurel A; Wilson, Machelle D; Gibson, Tanja N

    2017-04-01

    The ability to use serving size information on food labels is important for managing age-related chronic conditions such as diabetes, obesity and cancer. Past research suggests that older adults are at risk for failing to accurately use this portion of the food label due to numeracy skills. However, the extent to which older adults pay attention to serving size information on packages is unclear. We compared the effects of numeracy and attention on age differences in accurate use of serving size information while individuals evaluated product healthfulness. Accuracy and attention were assessed across two tasks in which participants compared nutrition labels of two products to determine which was more healthful if they were to consume the entire package. Participants' eye movements were monitored as a measure of attention while they compared two products presented side-by-side on a computer screen. Numeracy as well as food label habits and nutrition knowledge were assessed using questionnaires. The study was conducted in the Sacramento area, California, USA, in 2013-2014, with a stratified sample of 358 adults aged 20-78 years. Accuracy declined with age among those older adults who paid less attention to serving size information. Although numeracy, nutrition knowledge and self-reported food label use supported accuracy, these factors did not influence age differences in accuracy. The data suggest that older adults are less accurate than younger adults in their use of serving size information. Age differences appear to be more related to lack of attention to serving size information than to numeracy skills.

  8. Mock juror sampling issues in jury simulation research: A meta-analysis.

    PubMed

    Bornstein, Brian H; Golding, Jonathan M; Neuschatz, Jeffrey; Kimbrough, Christopher; Reed, Krystia; Magyarics, Casey; Luecht, Katherine

    2017-02-01

    The advantages and disadvantages of jury simulation research have often been debated in the literature. Critics chiefly argue that jury simulations lack verisimilitude, particularly through their use of student mock jurors, and that this limits the generalizability of the findings. In the present article, the question of sample differences (student vs. nonstudent) in jury research was meta-analyzed for 6 dependent variables: 3 criminal (guilty verdicts, culpability, and sentencing) and 3 civil (liability verdicts, continuous liability, and damages). In total, 53 studies (N = 17,716) were included in the analysis (40 criminal and 13 civil). The results revealed that guilty verdicts, culpability ratings, and damage awards did not vary with sample. Furthermore, the variables that revealed significant or marginally significant differences, sentencing and liability judgments, had small or contradictory effect sizes (e.g., effects on dichotomous and continuous liability judgments were in opposite directions). In addition, with the exception of trial presentation medium, moderator effects were small and inconsistent. These results may help to alleviate concerns regarding the use of student samples in jury simulation research. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  9. Baseline heart rate, sensation seeking, and aggression in young adult women: a two-sample examination.

    PubMed

    Wilson, Laura C; Scarpa, Angela

    2013-01-01

    Although substantial literature discusses sensation seeking as playing a role in the relationship between baseline heart rate and aggression, few published studies have tested the relationships among these variables. Furthermore, most prior studies have focused on risk factors for aggression in men and have largely ignored this issue in women. Two samples (n = 104; n = 99) of young adult women completed measures of resting heart rate, sensation seeking, and aggression. Across the two samples of females there was no evidence for the relationships of baseline heart rate with sensation seeking or with aggression that have been consistently shown in males. The boredom susceptibility and disinhibition subscales of sensation seeking were consistently and significantly correlated with aggression. The lack of significance and the small effect sizes indicate that other mechanisms are also at work in affecting aggression in young adult women. Finally, it is important to consider the type of sensation seeking in relation to aggression, as only boredom susceptibility and disinhibition were consistently replicated across samples. © 2013 Wiley Periodicals, Inc.

  10. The Healthy Immigrant Paradox and Child Maltreatment: A Systematic Review.

    PubMed

    Millett, Lina S

    2016-10-01

    Prior studies suggest that foreign-born individuals have a health advantage, referred to as the Healthy Immigrant Paradox, when compared to native-born persons of the same socio-economic status. This systematic review examined whether the immigrant advantage found in the health literature is mirrored by child maltreatment in general and its forms in particular. The author searched Academic Search Premier, CINAHL, CINAHL PLUS, Family and Society Studies Worldwide, MEDLINE, PsycINFO, Social Work Abstracts, and SocINDEX for published literature through December 2015. The review followed an evidence-based Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) checklist. The author identified 822 unique articles, of which 19 met the inclusion criteria. The reviewed data showed strong support for the healthy immigrant paradox for a general form of maltreatment and physical abuse. The evidence for emotional and sexual abuse was also suggestive of an immigrant advantage, though relatively small sample sizes and a lack of multivariate controls make these findings tentative. The evidence for neglect was mixed: immigrants were less likely to be reported to Child Protective Services; however, they had higher rates of physical neglect and lack of supervision in the community data. The study results warrant confirmation with newer data possessing strong external validity for immigrant samples.

  11. Influence of riffle characteristics, surficial geology, and natural barriers on the distribution of the channel darter, Percina copelandi, in the Lake Ontario basin

    USGS Publications Warehouse

    Reid, S.M.; Carl, L.M.; Lean, J.

    2005-01-01

    The channel darter, Percina copelandi, is a small benthic fish with a wide but disjunct distribution across central North America. The development of conservation and recovery strategies for Canadian populations is limited by a lack of knowledge regarding ecology, population size and other factors that affect its distribution and abundance. We sampled five rivers in the Lake Ontario basin to test whether the distribution of P. copelandi reflected riffle habitat characteristics or landscape-scale factors such as surficial geology and natural barriers (waterfalls). At most sites yielding P. copelandi, riffles flowed into deep sand bottomed run or pool habitats. Despite a lack of association with local surficial geology or riffle habitat characteristics, both the upstream limits of P. copelandi occurrence and distribution of suitable habitats reflected the distribution of waterfalls, chutes and bedrock outcroppings. In contrast to P. copelandi, distributions of Etheostoma flabellare, P. caprodes and Rhinichthys cataractae reflected among site differences in riffle habitat. © Springer 2005.

  12. Severity of traumatic injuries predicting psychological outcomes: A surprising lack of empirical evidence.

    PubMed

    Boals, Adriel; Trost, Zina; Rainey, Evan; Foreman, Michael L; Warren, Ann Marie

    2017-08-01

    Despite widespread beliefs that trauma severity is related to levels of posttraumatic stress symptoms (PTSS), the empirical evidence to support such beliefs is lacking. In the current study we examined Injury Severity Score (ISS), a medical measure of event severity for physical injuries, in a sample of 460 patients admitted to a Level 1 Trauma Center. Results revealed no significant relationship between ISS and PTSS, depression, pain, and general physical and mental health at baseline, three months, and six months post-injury. However, at 12 months post-injury, ISS significantly predicted depression, pain, and physical health, but was unrelated to PTSS. The effect sizes of these relationships were small and would not remain significant if any adjustments for multiple comparisons were employed. We conclude that the relationship between ISS and PTSS is, at best, weak and inconsistent. The results are discussed in the broader picture of event severity and psychological outcomes. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. The Power of Low Back Pain Trials: A Systematic Review of Power, Sample Size, and Reporting of Sample Size Calculations Over Time, in Trials Published Between 1980 and 2012.

    PubMed

    Froud, Robert; Rajendran, Dévan; Patel, Shilpa; Bright, Philip; Bjørkli, Tom; Eldridge, Sandra; Buchbinder, Rachelle; Underwood, Martin

    2017-06-01

    A systematic review of nonspecific low back pain trials published between 1980 and 2012. To explore what proportion of trials have been powered to detect different bands of effect size; whether there is evidence that sample size in low back pain trials has been increasing; what proportion of trial reports include a sample size calculation; and whether the likelihood of reporting sample size calculations has increased. Clinical trials should have a sample size sufficient to detect a minimally important difference for a given power and type I error rate. An underpowered trial is one in which the probability of a type II error is too high. Meta-analyses do not mitigate the problems of underpowered trials. Reviewers independently abstracted data on sample size at point of analysis, whether a sample size calculation was reported, and year of publication. Descriptive analyses were used to explore the ability to detect effect sizes, and regression analyses to explore the relationship between sample size, or reporting of sample size calculations, and time. We included 383 trials. One-third were powered to detect a standardized mean difference of less than 0.5, and 5% were powered to detect less than 0.3. The average sample size was 153 people, which increased only slightly (∼4 people/yr) from 1980 to 2000, and declined slightly (∼4.5 people/yr) from 2005 to 2011 (P < 0.00005). Sample size calculations were reported in 41% of trials. The odds of reporting a sample size calculation (compared to not reporting one) increased until 2005 and then declined (equation is included in the full-text article). Sample sizes in back pain trials and the reporting of sample size calculations may need to be increased. It may be justifiable to power a trial to detect only large effects in the case of novel interventions. Level of Evidence: 3.
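The power framing above can be made concrete with the standard normal-approximation formula for a two-arm trial, n per arm ≈ 2((z₁₋α/₂ + z₁₋β)/d)², where d is the standardized mean difference. This is a generic sketch of that textbook formula, not the review's own calculation:

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(d, power=0.80, alpha=0.05):
    """Approximate per-arm sample size to detect a standardized mean
    difference d with a two-sided test (normal approximation)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)           # ~0.84 for 80% power
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

for d in (0.3, 0.5, 0.8):
    print(f"d = {d}: {n_per_arm(d)} per arm, {2 * n_per_arm(d)} total")
```

Detecting d = 0.5 at 80% power needs roughly 63 participants per arm (about 126 in total), which puts the review's average trial size of 153 in context; d = 0.3 needs roughly 350 in total.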

  14. Evidence for plant-derived xenomiRs based on a large-scale analysis of public small RNA sequencing data from human samples.

    PubMed

    Zhao, Qi; Liu, Yuanning; Zhang, Ning; Hu, Menghan; Zhang, Hao; Joshi, Trupti; Xu, Dong

    2018-01-01

    In recent years, an increasing number of studies have reported the presence of plant miRNAs in human samples, which resulted in a hypothesis asserting the existence of plant-derived exogenous microRNA (xenomiR). However, this hypothesis is not widely accepted in the scientific community due to possible sample contamination and small sample sizes lacking rigorous statistical analysis. This study provides a systematic statistical test that can validate (or invalidate) the plant-derived xenomiR hypothesis by analyzing 388 small RNA sequencing datasets from human samples in 11 types of body fluids/tissues. A total of 166 types of plant miRNAs were found in at least one human sample, of which 14 plant miRNAs represented more than 80% of the total plant miRNA abundance in human samples. Plant miRNA profiles were characterized to be tissue-specific in different human samples. Meanwhile, the plant miRNAs identified from the microbiome have an insignificant abundance compared to those from human samples, while plant miRNA profiles in human samples were significantly different from those in plants, suggesting that sample contamination is an unlikely reason for all the plant miRNAs detected in human samples. This study also provides a set of testable synthetic miRNAs with isotopes that can be detected in situ after being fed to animals.

  15. Estimating snow leopard population abundance using photography and capture-recapture techniques

    USGS Publications Warehouse

    Jackson, R.M.; Roe, J.D.; Wangchuk, R.; Hunter, D.O.

    2006-01-01

    Conservation and management of snow leopards (Uncia uncia) has largely relied on anecdotal evidence and presence-absence data due to their cryptic nature and the difficult terrain they inhabit. These methods generally lack the scientific rigor necessary to accurately estimate population size and monitor trends. We evaluated the use of photography in capture-mark-recapture (CMR) techniques for estimating snow leopard population abundance and density within Hemis National Park, Ladakh, India. We placed infrared camera traps along actively used travel paths, scent-sprayed rocks, and scrape sites within 16- to 30-km2 sampling grids in successive winters (January and March, 2003-2004). We used head-on, oblique, and side-view camera configurations to obtain snow leopard photographs at varying body orientations. We calculated snow leopard abundance estimates using the program CAPTURE. We obtained a total of 66 and 49 snow leopard captures resulting in 8.91 and 5.63 individuals per 100 trap-nights during 2003 and 2004, respectively. We identified snow leopards based on the distinct pelage patterns located primarily on the forelimbs, flanks, and dorsal surface of the tail. Capture probabilities ranged from 0.33 to 0.67. Density estimates ranged from 8.49 (SE = 0.22) individuals per 100 km2 in 2003 to 4.45 (SE = 0.16) in 2004. We believe the density disparity between years is attributable to different trap density and placement rather than to an actual decline in population size. Our results suggest that photographic capture-mark-recapture sampling may be a useful tool for monitoring demographic patterns. However, we believe a larger sample size would be necessary for generating a statistically robust estimate of population density and abundance based on CMR models.
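The study's abundance estimates come from the closed-population models in program CAPTURE; as a minimal illustration of the underlying capture-mark-recapture logic only, the two-occasion Chapman estimator can be sketched as follows (the counts are hypothetical, not the study's data):

```python
def chapman_estimate(n1, n2, m2):
    """Chapman's bias-corrected Lincoln-Petersen estimator of closed
    population size: n1 animals identified on occasion 1, n2 on
    occasion 2, of which m2 were "recaptures" (seen on both)."""
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

# Hypothetical photo-identification counts, for illustration only:
# 6 individuals identified in session 1, 5 in session 2, 4 in both.
print(round(chapman_estimate(6, 5, 4), 1))  # -> 7.4
```

With photographic identification, "marking" is replaced by matching pelage patterns between sessions, which is why individual identifiability (forelimbs, flanks, tail) is central to the method.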

  16. Methodological reporting of randomized clinical trials in respiratory research in 2010.

    PubMed

    Lu, Yi; Yao, Qiuju; Gu, Jie; Shen, Ce

    2013-09-01

    Although randomized controlled trials (RCTs) are considered the highest level of evidence, they are also subject to bias due to inadequately reported randomization, and therefore reporting should be as explicit as possible for readers to determine the significance of the contents. We evaluated the methodological quality of RCTs in respiratory research in high-ranking clinical journals published in 2010. We assessed the methodological quality, including generation of the allocation sequence, allocation concealment, double-blinding, sample-size calculation, intention-to-treat analysis, flow diagrams, number of medical centers involved, diseases, funding sources, types of interventions, trial registration, number of times the papers have been cited, journal impact factor, journal type, and journal endorsement of the CONSORT (Consolidated Standards of Reporting Trials) rules, in RCTs published in 12 top-ranking clinical respiratory journals and 5 top-ranking general medical journals. We included 176 trials, of which 93 (53%) reported adequate generation of the allocation sequence, 66 (38%) reported adequate allocation concealment, 79 (45%) were double-blind, 123 (70%) reported adequate sample-size calculation, 88 (50%) reported intention-to-treat analysis, and 122 (69%) included a flow diagram. Multivariate logistic regression analysis revealed that journal impact factor ≥ 5 was the only variable that significantly influenced adequate allocation sequence generation. Trial registration and journal impact factor ≥ 5 significantly influenced adequate allocation concealment. Medical interventions, trial registration, and journal endorsement of the CONSORT statement influenced adequate double-blinding. Publication in one of the general medical journals influenced adequate sample-size calculation. The methodological quality of RCTs in respiratory research needs improvement. Stricter enforcement of the CONSORT statement should enhance the quality of RCTs.

  17. Intraspecific variability in the life histories of endemic coral-reef fishes between photic and mesophotic depths across the Central Pacific Ocean

    NASA Astrophysics Data System (ADS)

    Winston, M. S.; Taylor, B. M.; Franklin, E. C.

    2017-06-01

    Mesophotic coral ecosystems (MCEs) represent the lowest depth distribution inhabited by many coral reef-associated organisms. Research on fishes associated with MCEs is sparse, leading to a critical lack of knowledge of how reef fish found at mesophotic depths may vary from their shallow reef conspecifics. We investigated intraspecific variability in body condition and growth of three Hawaiian endemics collected from shallow, photic reefs (5-33 m deep) and MCEs (40-75 m) throughout the Hawaiian Archipelago and Johnston Atoll: the detritivorous goldring surgeonfish, Ctenochaetus strigosus, and the planktivorous threespot chromis, Chromis verater, and Hawaiian dascyllus, Dascyllus albisella. Estimates of body condition and size-at-age varied between shallow and mesophotic depths; however, these demographic differences were outweighed by the magnitude of variability found across the latitudinal gradient of locations sampled within the Central Pacific. Body condition and maximum body size were lowest in samples collected from shallow and mesophotic Johnston Atoll sites, with no difference occurring between depths. Samples from the Northwestern Hawaiian Islands tended to have the highest body condition and reached the largest body sizes, with differences between shallow and mesophotic sites highly variable among species. The findings of this study support newly emerging research demonstrating intraspecific variability in the life history of coral-reef fish species whose distributions span shallow and mesophotic reefs. This suggests not only that conservation and fisheries management should take into consideration differences in the life histories of reef-fish populations across spatial scales, but also that information derived from studies of shallow fishes should be applied with caution to conspecific populations in mesophotic coral environments.

  18. Assessing the Application of a Geographic Presence-Only Model for Land Suitability Mapping

    PubMed Central

    Heumann, Benjamin W.; Walsh, Stephen J.; McDaniel, Phillip M.

    2011-01-01

    Recent advances in ecological modeling have focused on novel methods for characterizing the environment that use presence-only data and machine-learning algorithms to predict the likelihood of species occurrence. These novel methods may have great potential for land suitability applications in the developing world, where detailed land cover information is often unavailable or incomplete. This paper assesses the adaptation and application of the presence-only geographic species distribution model, MaxEnt, for agricultural crop suitability mapping in a rural district of Thailand where lowland paddy rice and upland field crops predominate. To assess this modeling approach, three independent crop presence datasets were used: a social-demographic survey of farm households, a remote sensing classification of land use/land cover, and ground control points used for geodetic and thematic reference; the datasets vary in their geographic distribution and sample size. Disparate environmental data were integrated to characterize environmental settings across Nang Rong District, a region of approximately 1,300 sq. km in size. Results indicate that the MaxEnt model is capable of modeling crop suitability for upland and lowland crops, including rice varieties, although model results varied between datasets due to the high sensitivity of the model to the distribution of observed crop locations in geographic and environmental space. Accuracy assessments indicate that model outcomes were influenced by the sample size and the distribution of sample points in geographic and environmental space. The need for further research into accuracy assessments of presence-only models lacking true absence data is discussed. We conclude that the MaxEnt model can provide good estimates of crop suitability, but several issues, including the geographic distribution of input data and the assessment methods, need to be carefully scrutinized to ensure realistic modeling results. PMID:21860606
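
Presence-only models such as MaxEnt are commonly evaluated by asking whether predicted suitability at known presence sites exceeds suitability at randomly drawn background sites, summarized as a rank-based AUC. A minimal sketch of that comparison (the scores below are hypothetical, not data from the study):

```python
import numpy as np

def presence_background_auc(presence, background):
    """Probability that a randomly chosen presence site receives a higher
    suitability score than a randomly chosen background site (ties count
    half). Equivalent to the Mann-Whitney / rank-based AUC."""
    p = np.asarray(presence, dtype=float)[:, None]
    b = np.asarray(background, dtype=float)[None, :]
    return float((p > b).mean() + 0.5 * (p == b).mean())

# A model that perfectly separates presences from background scores 1.0;
# uninformative scores give about 0.5.
auc = presence_background_auc([0.9, 0.8, 0.7], [0.3, 0.2, 0.1])
```

Because background points stand in for true absences, this AUC measures discrimination of presences from the landscape at large, not from confirmed absences, which is one reason such assessments need careful scrutiny.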

  19. Publication Bias in Psychology: A Diagnosis Based on the Correlation between Effect Size and Sample Size

    PubMed Central

    Kühberger, Anton; Fritz, Astrid; Scherndl, Thomas

    2014-01-01

    Background The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent of sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. Methods We investigate whether effect size is independent of sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, calculated the correlation between effect size and sample size, and investigated the distribution of p values. Results We found a negative correlation of r = −.45 [95% CI: −.53; −.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. Conclusion The negative correlation between effect size and sample size, and the biased distribution of p values, indicate pervasive publication bias in the entire field of psychology. PMID:25192357
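
The mechanism behind this diagnostic can be illustrated with a short simulation (a hedged sketch, not the authors' analysis): if the true effect is zero and only nominally significant results are "published", a surviving study's absolute effect size must exceed roughly 1.96·sqrt(2/n), a threshold that falls with n, so published effect sizes and sample sizes become negatively correlated even without any real effects.

```python
import numpy as np

rng = np.random.default_rng(0)
published_abs_d, published_n = [], []

for _ in range(4000):
    n = int(rng.integers(10, 200))        # per-group sample size
    g1 = rng.normal(0.0, 1.0, n)          # true effect is exactly zero
    g2 = rng.normal(0.0, 1.0, n)
    sp = np.sqrt((g1.var(ddof=1) + g2.var(ddof=1)) / 2)
    d = (g1.mean() - g2.mean()) / sp      # Cohen's d
    t = d * np.sqrt(n / 2)                # approximate two-sample t
    if abs(t) > 1.96:                     # "published" only if significant
        published_abs_d.append(abs(d))
        published_n.append(n)

# correlation between published |effect size| and sample size
r = np.corrcoef(published_abs_d, published_n)[0, 1]
```

Under this filter the simulated correlation comes out strongly negative, mirroring in sign the r = −.45 the authors report from real articles.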

  1. When general practitioners don't feel appreciated by their patients: prospective effects on well-being and work-family conflict in a Swiss longitudinal study.

    PubMed

    Meier, Laurenz L; Tschudi, Peter; Meier, Cornelia A; Dvorak, Charles; Zeller, Andreas

    2015-04-01

    Impaired well-being and high work-family conflict are critical issues among GPs. This research examined an understudied psychosocial risk factor for these outcomes, namely GPs' perception that they invest more in the relationship with their patients than they receive in return (i.e. lack of reward in their relationship with patients). To test the effect of lack of reward as a risk factor for poor well-being and work-family conflict among GPs. Longitudinal study (12-month time lag). 272 GPs in Switzerland [mean age 54.5 (SD = 8.3), 73% male] volunteered to participate in the study. 270 participants completed the baseline survey and 252 completed the follow-up survey. Of these, six retired between the baseline and the follow-up survey, resulting in a sample size of 246 participants at t2. Outcome measures were burnout, sleep problems, self-perceived health and work-family conflict. Strength and direction of prospective effects were tested using cross-lagged models. Lack of reward was related to an increase in emotional exhaustion (β = 0.15), sleep problems (β = 0.16) and work-family conflict (β = 0.19) and a decrease in self-perceived health (β = -0.17). Effects on depersonalization and personal accomplishment were not significant. Regarding reversed effects of impaired well-being on lack of reward, emotional exhaustion (β = 0.14) and self-perceived health (β = -0.13) predicted the future level of lack of reward. Lack of reward by patients is a risk factor for GPs' mental health.

  2. Optimum sample size allocation to minimize cost or maximize power for the two-sample trimmed mean test.

    PubMed

    Guo, Jiin-Huarng; Luh, Wei-Ming

    2009-05-01

    When planning a study, sample size determination is one of the most important tasks facing the researcher. The size will depend on the purpose of the study, the cost limitations, and the nature of the data. By specifying the standard deviation ratio and/or the sample size ratio, the present study considers the problem of heterogeneous variances and non-normality for Yuen's two-group test and develops sample size formulas to minimize the total cost or maximize the power of the test. For a given power, the sample size allocation ratio can be manipulated so that the proposed formulas can minimize the total cost, the total sample size, or the sum of total sample size and total cost. On the other hand, for a given total cost, the optimum sample size allocation ratio can maximize the statistical power of the test. After the sample size is determined, the present simulation applies Yuen's test to the sample generated, and then the procedure is validated in terms of Type I errors and power. Simulation results show that the proposed formulas can control Type I errors and achieve the desired power under the various conditions specified. Finally, the implications for determining sample sizes in experimental studies and future research are discussed.
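
The cost-minimizing allocation described above can be sketched with the ordinary two-sample normal approximation (Guo and Luh derive the corresponding formulas for Yuen's trimmed-mean statistic; this simplified, untrimmed version is an assumption for illustration). The classical result is that total cost cost1·n1 + cost2·n2 is minimized at fixed power when n1/n2 = (sd1/sd2)·sqrt(cost2/cost1):

```python
from math import ceil, sqrt
from statistics import NormalDist

def allocated_sizes(delta, sd1, sd2, cost1, cost2, alpha=0.05, power=0.80):
    """Per-group sizes (n1, n2) for detecting a mean difference `delta`
    with a two-sample normal-approximation test, using the allocation
    ratio n1/n2 = (sd1/sd2)*sqrt(cost2/cost1) that minimizes total cost
    at the requested power."""
    z = NormalDist()
    z_total = z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)
    ratio = (sd1 / sd2) * sqrt(cost2 / cost1)      # optimum n1 / n2
    # power condition: delta / sqrt(sd1^2/n1 + sd2^2/n2) = z_total
    n2 = z_total ** 2 * (sd1 ** 2 / ratio + sd2 ** 2) / delta ** 2
    return ceil(ratio * n2), ceil(n2)
```

With equal variances and equal per-observation costs this reduces to the familiar balanced design (about 63 per group for delta = 0.5, alpha = .05, power = .80); doubling sd1 at equal costs shifts observations toward the noisier group.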

  3. Weapon Possession Among College Students: A Study From a Midwestern University.

    PubMed

    Jang, Hyunseok; Kang, Ji Hyon; Dierenfeldt, Rick; Lindsteadt, Greg

    2015-10-01

    Weapon possession on college campuses causes great concern, but there remains a lack of research examining the determinants of this phenomenon. Previous studies addressing weapon possession have primarily focused on either K-12 students or the general adult population. Unlike previous studies, this study examined weapon possession among college students using data collected from a mid-sized university in Missouri, with 451 students participating. Weapon possession and other theoretical factors were measured through a self-administered survey. Logistic regression analysis revealed that weapon socialization was the most significant factor in predicting student weapon carrying. Gender and age were also significant factors in explaining campus-based weapon possession. This research is limited in generalizability because the data were collected from only a single university with convenience sampling. Future studies should cover a wider range of college students from a variety of universities with random sampling.

  4. Lead isotope data bank; 2,624 samples and analyses cited

    USGS Publications Warehouse

    Doe, Bruce R.

    1976-01-01

    The Lead Isotope Data Bank (LIDB) was initiated to facilitate plotting data. The Bank therefore reflects the data most often used in plotting rather than comprising a comprehensive tabulation of lead isotope data. Until now, plotting was done using card decks processed by computer, with tapes plotted by a Gerber plotter and, more recently, a CRT in batch mode. Lack of a uniform format for sample identification was not a great impediment. With the increase in the size of the bank, however, hand sorting is becoming prohibitive, and plans are underway to put the bank into a uniform format on disk with a card backup so that it may be accessed via IRIS on the DEC 10 computer at the U.S.G.S. facility in Denver. Plots will be constructed on a CRT. Entry of the bank into the IRIS accessing program is scheduled for completion in FY 1976.

  5. The effect of childhood trauma on spatial cognition in adults: a possible role of sex.

    PubMed

    Syal, Supriya; Ipser, Jonathan; Phillips, Nicole; Thomas, Kevin G F; van der Honk, Jack; Stein, Dan J

    2014-06-01

    Although animal evidence indicates that early life trauma results in pervasive hippocampal deficits underlying spatial and cognitive impairment, visuo-spatial data from adult humans with early childhood adversity are lacking. We administered 4 tests of visuo-spatial ability from the Cambridge Neuropsychological Test Automated Battery (CANTAB) to adults with a history of childhood trauma (measured by the Childhood Trauma Questionnaire) and a matched sample of healthy controls (trauma/control = 27/28). We observed a significant effect of trauma history on spatial/pattern learning. These effects could not be accounted for by adverse adult experiences, and were sex-specific, with prior adversity improving performance in men but worsening performance in women, relative to controls. Limitations include the small sample size and the reliance of our study design on a retrospective, self-report measure. Our results suggest that early adversity can lead to specific and pervasive deficits in adult cognitive function.

  6. Sequencing of the large dsDNA genome of Oryctes rhinoceros nudivirus using multiple displacement amplification of nanogram amounts of virus DNA.

    PubMed

    Wang, Yongjie; Kleespies, Regina G; Ramle, Moslim B; Jehle, Johannes A

    2008-09-01

    The genomic sequence analysis of many large dsDNA viruses is hampered by a lack of sufficient sample material. Here, we report a whole genome amplification of the Oryctes rhinoceros nudivirus (OrNV) isolate Ma07, starting from as little as about 10 ng of purified viral DNA, by applying the phi29 DNA polymerase- and exonuclease-resistant random hexamer-based multiple displacement amplification (MDA) method. About 60 microg of high molecular weight DNA with fragment sizes of up to 25 kbp was amplified. A genomic DNA clone library was generated using the product DNA. With 8-fold sequencing coverage, the 127,615 bp OrNV genome was sequenced successfully. The results demonstrate that MDA-based whole genome amplification enables rapid access to genomic information from exiguous virus samples.

  7. High-silica glass inclusions in olivine of Luna-24 samples

    NASA Technical Reports Server (NTRS)

    Roedder, E.; Weiblen, P. W.

    1977-01-01

    Optical examination of nine polished grain mounts of Luna-24 drill-core material (0.09-0.50 mm size) revealed melt inclusions in olivine crystals. Two inclusions consist of clear glass with exceptionally high Si, yet contain no visible daughter minerals and show no reaction effects with the olivine walls. Their compositions (one has SiO2 93.8, Al2O3 1.51, FeO 2.32, MgO 1.61, CaO 0.06, Na2O less than 0.05, K2O 0.11, total 99.41%; the other is similar) are unique and quite unlike the high-Si high-K melt of granitic composition that is found as inclusions in late-stage minerals of these (and the Apollo) samples, formed by silicate liquid immiscibility. The host olivines are Fo73 and Fo51. The origin of the melt in the inclusions and the lack of reaction effects are perplexing unsolved problems.

  8. A bacterial method for the nitrogen isotopic analysis of nitrate in seawater and freshwater

    USGS Publications Warehouse

    Sigman, D.M.; Casciotti, K.L.; Andreani, M.; Barford, C.; Galanter, M.; Böhlke, J.K.

    2001-01-01

    We report a new method for measurement of the isotopic composition of nitrate (NO3-) at the natural-abundance level in both seawater and freshwater. The method is based on the isotopic analysis of nitrous oxide (N2O) generated from nitrate by denitrifying bacteria that lack N2O-reductase activity. The isotopic composition of both nitrogen and oxygen from nitrate are accessible in this way. In this first of two companion manuscripts, we describe the basic protocol and results for the nitrogen isotopes. The precision of the method is better than 0.2‰ (1 SD) at concentrations of nitrate down to 1 μM, and the nitrogen isotopic differences among various standards and samples are accurately reproduced. For samples with 1 μM nitrate or more, the blank of the method is less than 10% of the signal size, and various approaches may reduce it further.

  9. [Human milk for neonatal pain relief during ophthalmoscopy].

    PubMed

    Ribeiro, Laiane Medeiros; Castral, Thaíla Corrêa; Montanholi, Liciane Langona; Daré, Mariana Firmino; Silva, Aline Carolina de Araújo; Antonini, Sonir Roberto Rauber; Scochi, Carmen Gracinda Silvan

    2013-10-01

    Ophthalmoscopy performed for the early diagnosis of retinopathy of prematurity (ROP) is painful for preterm infants, thus necessitating interventions for minimizing pain. The present study aimed to establish the effectiveness of human milk, compared with sucrose, for pain relief in premature infants subjected to ophthalmoscopy for the early diagnosis of ROP. This investigation was a pilot, quasi-experimental study conducted with 14 premature infants admitted to the neonatal intensive care unit (NICU) of a university hospital. Comparison between the groups did not yield a statistically significant difference relative to the crying time, salivary cortisol, or heart rate (HR). Human milk appears to be as effective as sucrose in relieving acute pain associated with ophthalmoscopy. The study's limitations included its small sample size and lack of randomization. Experimental investigations with greater sample power should be performed to reinforce the evidence found in the present study.

  10. Interparticle interaction effects on magnetic behaviors of hematite (α-Fe2O3) nanoparticles

    NASA Astrophysics Data System (ADS)

    Can, Musa Mutlu; Fırat, Tezer; Özcan, Şadan

    2011-07-01

    The interparticle magnetic interactions of hematite (α-Fe2O3) nanoparticles were investigated through temperature- and magnetic-field-dependent magnetization curves. The synthesis was performed in two steps: milling metallic iron (Fe) powders in pure water (H2O), known as the mechanical milling technique, and annealing at 600 °C. The crystal and molecular structure of the prepared samples were determined from X-ray powder diffraction (XRD) and Fourier transform infrared (FTIR) spectra. The average particle sizes and size distributions were determined using transmission electron microscopy (TEM) and scanning electron microscopy (SEM). The magnetic behaviors of the α-Fe2O3 nanoparticles were analyzed with a vibrating sample magnetometer (VSM). The analysis showed that the prepared α-Fe2O3 nanoparticles did not exhibit a sharp Morin transition (the characteristic transition of α-Fe2O3), due to the lack of a single, well-defined particle size; instead, the transition was observed over a wide temperature range as “a continuous transition”. Additionally, the effect of interparticle interaction on magnetic behavior was determined from the magnetization versus applied field (σ(M)) curves for 26±2 nm particles dispersed in a sodium oxalate matrix at ratios of 200:1, 300:1, 500:1 and 1000:1. The interparticle interaction fields, recorded at 5 K to avoid thermal interactions, were found to be ∼1082 Oe for the 26±2 nm particles.

  11. The species-area relationship, self-similarity, and the true meaning of the z-value.

    PubMed

    Tjørve, Even; Tjørve, Kathleen M Calf

    2008-12-01

    The power model, S = cA^z (where S is number of species, A is area, and c and z are fitted constants), is the model most commonly fitted to species-area data assessing species diversity. We use the self-similarity properties of this model to reveal patterns implicated by the z parameter. We present the basic arithmetic leading both to the fraction of new species added when two areas are combined and to species overlap between two areas of the same size, given a continuous sampling scheme. The fraction of new species resulting from expansion of an area can be expressed as alpha^z - 1, where alpha is the expansion factor. Consequently, z-values can be converted to a scale-invariant species overlap between two equally sized areas, since the proportion of species in common between the two areas is 2 - 2^z. Calculating overlap when adding areas of the same size reveals the intrinsic effect of distance assumed by the bisectional scheme. We use overlap area relationships from empirical data sets to illustrate how answers to the single large or several small reserves (SLOSS) question vary between data sets and with scale. We conclude that species overlap and the effect of distance between sample areas or isolates should be addressed when discussing species area relationships, and lack of fit to the power model can be caused by its assumption of a scale-invariant overlap relationship.
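
The arithmetic described in the abstract follows directly from the power model S = c * A**z and can be checked numerically:

```python
def new_species_fraction(z, expansion):
    """Fraction of species added when an area is expanded by `expansion`,
    under the power model S = c * A**z:  S(aA)/S(A) - 1 = a**z - 1."""
    return expansion ** z - 1

def overlap_fraction(z):
    """Proportion of species shared by two equal-sized areas: combining
    them doubles the area, so the union holds 2**z times the species of
    one area, and the shared fraction is 2 - 2**z."""
    return 2 - 2 ** z
```

At the extremes, z = 1 implies no shared species and z = 0 implies complete overlap; a commonly observed z of about 0.25 implies that two equal areas share roughly 81% of their species.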

  12. Why it is hard to find genes associated with social science traits: theoretical and empirical considerations.

    PubMed

    Chabris, Christopher F; Lee, James J; Benjamin, Daniel J; Beauchamp, Jonathan P; Glaeser, Edward L; Borst, Gregoire; Pinker, Steven; Laibson, David I

    2013-10-01

    We explain why traits of interest to behavioral scientists may have a genetic architecture featuring hundreds or thousands of loci with tiny individual effects rather than a few with large effects and why such an architecture makes it difficult to find robust associations between traits and genes. We conducted a genome-wide association study at 2 sites, Harvard University and Union College, measuring more than 100 physical and behavioral traits with a sample size typical of candidate gene studies. We evaluated predictions that alleles with large effect sizes would be rare and most traits of interest to social science are likely characterized by a lack of strong directional selection. We also carried out a theoretical analysis of the genetic architecture of traits based on R.A. Fisher's geometric model of natural selection and empirical analyses of the effects of selection bias and phenotype measurement stability on the results of genetic association studies. Although we replicated several known genetic associations with physical traits, we found only 2 associations with behavioral traits that met the nominal genome-wide significance threshold, indicating that physical and behavioral traits are mainly affected by numerous genes with small effects. The challenge for social science genomics is the likelihood that genes are connected to behavioral variation by lengthy, nonlinear, interactive causal chains, and unraveling these chains requires allying with personal genomics to take advantage of the potential for large sample sizes as well as continuing with traditional epidemiological studies.

  13. High-resolution Antibody Array Analysis of Childhood Acute Leukemia Cells*

    PubMed Central

    Kanderova, Veronika; Kuzilkova, Daniela; Stuchly, Jan; Vaskova, Martina; Brdicka, Tomas; Fiser, Karel; Hrusak, Ondrej; Lund-Johansen, Fridtjof

    2016-01-01

    Acute leukemia is a disease pathologically manifested at both genomic and proteomic levels. Molecular genetic technologies are currently widely used in clinical research. In contrast, sensitive and high-throughput proteomic techniques for performing protein analyses in patient samples are still lacking. Here, we used a technology based on size exclusion chromatography followed by immunoprecipitation of target proteins with an antibody bead array (Size Exclusion Chromatography-Microsphere-based Affinity Proteomics, SEC-MAP) to detect hundreds of proteins from a single sample. In addition, we developed semi-automatic bioinformatics tools to adapt this technology for high-content proteomic screening of pediatric acute leukemia patients. To confirm the utility of SEC-MAP in leukemia immunophenotyping, we tested 31 leukemia diagnostic markers in parallel by SEC-MAP and flow cytometry. We identified 28 antibodies suitable for both techniques. Eighteen of them provided excellent quantitative correlation between SEC-MAP and flow cytometry (p < 0.05). Next, SEC-MAP was applied to examine 57 diagnostic samples from patients with acute leukemia. In this assay, we used 632 different antibodies and detected 501 targets. Of those, 47 targets were differentially expressed between at least two of the three acute leukemia subgroups. The CD markers correlated with immunophenotypic categories as expected. From non-CD markers, we found DBN1, PAX5, or PTK2 overexpressed in B-cell precursor acute lymphoblastic leukemias, LAT, SH2D1A, or STAT5A overexpressed in T-cell acute lymphoblastic leukemias, and HCK, GLUD1, or SYK overexpressed in acute myeloid leukemias. In addition, OPAL1 overexpression corresponded to ETV6-RUNX1 chromosomal translocation. In summary, we demonstrated that SEC-MAP technology is a powerful tool for detecting hundreds of proteins in clinical samples obtained from pediatric acute leukemia patients. It provides information about protein size and reveals differences in protein expression between particular leukemia subgroups. Forty-seven of the SEC-MAP-identified targets were validated by another conventional method in this study. PMID:26785729

  14. Photo-oxidation products of α-pinene in coarse, fine and ultrafine aerosol: A new high sensitive HPLC-MS/MS method

    NASA Astrophysics Data System (ADS)

    Feltracco, Matteo; Barbaro, Elena; Contini, Daniele; Zangrando, Roberta; Toscano, Giuseppa; Battistel, Dario; Barbante, Carlo; Gambaro, Andrea

    2018-05-01

    Oxidation products of α-pinene represent a fraction of the organic matter in environmental aerosol. α-pinene is one of the most abundant monoterpenes released into the atmosphere by plants, typically in boreal, temperate and tropical forests. This primary compound reacts with atmospheric oxidants, such as O3, O2, OH radicals and NOx, through the major tropospheric degradation pathway for many monoterpenes under typical atmospheric conditions. Although several studies have identified a series of by-products deriving from α-pinene photo-oxidation in the atmosphere, such as pinic and cis-pinonic acid, knowledge of the mechanism of this process is still incomplete. Thus, investigation of the distribution of these acids across aerosol particle sizes provides additional information in this regard. The aim of this study is twofold. First, we improve the existing analytical methods for the determination of pinic and cis-pinonic acid in aerosol samples, especially in terms of analytical sensitivity and limits of detection (LOD) and quantification (LOQ). Second, we attempt to extend knowledge of the α-pinene photo-oxidation processes by analysing, for the first time, the particle-size distribution of pinic and cis-pinonic acid down to the nanoparticle level. The analysis of aerosol samples was carried out via high-performance liquid chromatography coupled to a triple quadrupole mass spectrometer. The instrumental LOD values for cis-pinonic and pinic acid are 1.6 and 1.2 ng L-1, while LOQ values are 5.4 and 4.1 ng L-1, respectively. Samples were collected with a MOUDI II™ cascade impactor with twelve cut-sizes, from March to May 2016 in the urban area of Mestre-Venice (Italy). Concentrations in the aerosol samples ranged from 0.1 to 0.9 ng m-3 for cis-pinonic acid and from 0.1 to 0.8 ng m-3 for pinic acid.

  15. The LDCE Particle Impact Experiment as flown on STS-46. [limited duration space environment candidate materials exposure (LDCE)

    NASA Technical Reports Server (NTRS)

    Maag, Carl R.; Tanner, William G.; Borg, Janet; Bibring, Jean-Pierre; Alexander, W. Merle; Maag, Andrew J.

    1992-01-01

    Many materials and techniques have been developed by the authors to sample the flux of particles in Low Earth Orbit (LEO). Through regular in-situ sampling of the flux in LEO, these materials and techniques have produced data which complement the data now being amassed by the Long Duration Exposure Facility (LDEF) research activities. Orbital debris models have not been able to describe the flux of particles with d_p ≤ 0.05 cm because of the lack of data. Even though LDEF will provide a much needed baseline flux measurement, the continuous monitoring of micron and sub-micron size particles must be carried out. A flight experiment was conducted on the Space Shuttle as part of the LDCE payload to develop an understanding of the spatial density (concentration) as a function of size (mass) for particle sizes 1 x 10^-6 cm and larger. In addition to the enumeration of particle impacts, it is the intent of the experiment that hypervelocity particles be captured and returned intact. Measurements will be performed post flight to determine the flux density, diameters, and subsequent effects on various optical, thermal control and structural materials. In addition to these principal measurements, the Particle Impact Experiment (PIE) also provides a structure and sample holders for the exposure of passive material samples to the space environment, e.g., thermal cycling and atomic oxygen. The experiment will measure the optical property changes of mirrors and will provide the fluence of the ambient atomic oxygen environment to other payload experimenters. In order to augment the amount of material returned in a form which can be analyzed, the survivability of the experiment as well as of the captured particles will be assessed. Using Sandia National Laboratory's hydrodynamic computer code CTH, hypervelocity impacts on the materials which comprise the experiments have been investigated, and the progress of these studies is reported.

  16. PHOTOMICROGRAPH - SPHERE FRAGMENTS - "ORANGE" SOIL - APOLLO 17 - MSC

    NASA Image and Video Library

    1973-01-04

    S73-15171 (4 Jan. 1973) --- These orange glass spheres and fragments are the finest particles ever brought back from the moon. Ranging in size from 20 to 45 microns (about 1/1000 of an inch), the particles are magnified 160 times in this photomicrograph made in the Lunar Receiving Laboratory at the Manned Spacecraft Center. The orange soil was brought back from the Taurus-Littrow landing site by the Apollo 17 crewmen. Scientist-astronaut Harrison H. "Jack" Schmitt discovered the orange soil at Shorty Crater during the second Apollo 17 extravehicular activity (EVA). This lunar material is being studied and analyzed by scientists in the LRL. The orange particles in this photomicrograph, which are intermixed with black and black-speckled grains, are about the same size as the particles that compose silt on Earth. Chemical analysis of the orange soil material has shown the sample to be similar to some of the samples brought back from the Apollo 11 (Sea of Tranquility) site several hundred miles to the southwest. Like those samples, it is rich in titanium (8%) and iron oxide (22%). But unlike the Apollo 11 samples, the orange soil is unexplainably rich in zinc, an anomaly that has scientists in a quandary. This Apollo 17 sample is not high in volatile elements, nor do the minerals contain substantial amounts of water. These would have provided strong evidence of volcanic activity. On the other hand, the lack of agglutinates (rocks made up of a variety of minerals cemented together) indicates that the orange glass is probably not the product of meteorite impact -- strengthening the argument that the glass was produced by volcanic activity.

  17. Detectability in Audio-Visual Surveys of Tropical Rainforest Birds: The Influence of Species, Weather and Habitat Characteristics.

    PubMed

    Anderson, Alexander S; Marques, Tiago A; Shoo, Luke P; Williams, Stephen E

    2015-01-01

    Indices of relative abundance do not control for variation in detectability, which can bias density estimates such that ecological processes are difficult to infer. Distance sampling methods can be used to correct for detectability, but in rainforest, where dense vegetation and diverse assemblages complicate sampling, information is lacking about factors affecting their application. Rare species present an additional challenge, as data may be too sparse to fit detection functions. We present analyses of distance sampling data collected for a diverse tropical rainforest bird assemblage across broad elevational and latitudinal gradients in North Queensland, Australia. Using audio and visual detections, we assessed the influence of various factors on Effective Strip Width (ESW), an intuitively useful parameter, since it can be used to calculate an estimate of density from count data. Body size and species exerted the most important influence on ESW, with larger species detectable over greater distances than smaller species. Secondarily, wet weather and high shrub density decreased ESW for most species. ESW for several species also differed between summer and winter, possibly due to seasonal differences in calling behavior. Distance sampling proved logistically intensive in these environments, but large differences in ESW between species confirmed the need to correct for detection probability to obtain accurate density estimates. Our results suggest an evidence-based approach to controlling for factors influencing detectability, and avenues for further work including modeling detectability as a function of species characteristics such as body size and call characteristics. Such models may be useful in developing a calibration for non-distance sampling data and for estimating detectability of rare species.
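
    The conversion the authors rely on, from a raw count to a density via Effective Strip Width, can be sketched in a few lines. The half-normal detection function and all parameter values below are illustrative assumptions, not the study's fitted model:

```python
import math

def effective_strip_width(sigma, w, n_steps=10_000):
    """ESW for a half-normal detection function g(x) = exp(-x^2 / (2*sigma^2)),
    integrated numerically (trapezoidal rule) out to truncation distance w.
    sigma and w are in the same units (e.g. metres)."""
    dx = w / n_steps
    total = 0.0
    for i in range(n_steps + 1):
        g = math.exp(-(i * dx) ** 2 / (2.0 * sigma ** 2))
        total += (0.5 if i in (0, n_steps) else 1.0) * g * dx
    return total

def density(n_detected, transect_length, esw):
    """Line-transect density: the effectively surveyed area is
    2 * ESW * transect length (both sides of the line)."""
    return n_detected / (2.0 * esw * transect_length)
```

    Because a large-bodied, far-carrying species has a larger sigma and hence a larger ESW, the same raw count converts to a lower density, which is exactly why uncorrected abundance indices are biased.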

  18. Detectability in Audio-Visual Surveys of Tropical Rainforest Birds: The Influence of Species, Weather and Habitat Characteristics

    PubMed Central

    Anderson, Alexander S.; Marques, Tiago A.; Shoo, Luke P.; Williams, Stephen E.

    2015-01-01

    Indices of relative abundance do not control for variation in detectability, which can bias density estimates such that ecological processes are difficult to infer. Distance sampling methods can be used to correct for detectability, but in rainforest, where dense vegetation and diverse assemblages complicate sampling, information is lacking about factors affecting their application. Rare species present an additional challenge, as data may be too sparse to fit detection functions. We present analyses of distance sampling data collected for a diverse tropical rainforest bird assemblage across broad elevational and latitudinal gradients in North Queensland, Australia. Using audio and visual detections, we assessed the influence of various factors on Effective Strip Width (ESW), an intuitively useful parameter, since it can be used to calculate an estimate of density from count data. Body size and species exerted the most important influence on ESW, with larger species detectable over greater distances than smaller species. Secondarily, wet weather and high shrub density decreased ESW for most species. ESW for several species also differed between summer and winter, possibly due to seasonal differences in calling behavior. Distance sampling proved logistically intensive in these environments, but large differences in ESW between species confirmed the need to correct for detection probability to obtain accurate density estimates. Our results suggest an evidence-based approach to controlling for factors influencing detectability, and avenues for further work including modeling detectability as a function of species characteristics such as body size and call characteristics. Such models may be useful in developing a calibration for non-distance sampling data and for estimating detectability of rare species. PMID:26110433

  19. Prediction of skull fracture risk for children 0-9 months old through validated parametric finite element model and cadaver test reconstruction.

    PubMed

    Li, Zhigang; Liu, Weiguo; Zhang, Jinhuan; Hu, Jingwen

    2015-09-01

    Skull fracture is one of the most common pediatric traumas. However, injury assessment tools for predicting pediatric skull fracture risk are not well established, mainly due to the lack of cadaver tests. Weber conducted 50 pediatric cadaver drop tests for forensic research on child abuse in the mid-1980s (Experimental studies of skull fractures in infants, Z Rechtsmed. 92: 87-94, 1984; Biomechanical fragility of the infant skull, Z Rechtsmed. 94: 93-101, 1985). To our knowledge, these studies contained the largest sample size among pediatric cadaver tests in the literature. However, the lack of injury measurements limited their direct application in investigating pediatric skull fracture risks. In this study, the 50 pediatric cadaver tests from Weber's studies were reconstructed using a parametric pediatric head finite element (FE) model, which was morphed into subjects with the ages, head sizes/shapes, and skull thickness values reported in the tests. Skull fracture risk curves for infants from 0 to 9 months old were developed based on the model-predicted head injury measures through logistic regression analysis. It was found that the model-predicted stress responses in the skull (maximal von Mises stress, maximal shear stress, and maximal first principal stress) were better predictors of pediatric skull fracture than global kinematic-based injury measures (peak head acceleration and head injury criterion (HIC)). This study demonstrated the feasibility of using age- and size/shape-appropriate head FE models to predict pediatric head injuries. Such models can account for the morphological variations among subjects, which cannot be considered by a single FE human model.
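
    The risk curves described above come from logistic regression on model-predicted injury measures. A minimal sketch of that curve shape, with made-up placeholder coefficients rather than the study's fitted values:

```python
import math

def fracture_risk(von_mises_stress, b0=-5.0, b1=0.08):
    """Logistic risk curve: P(fracture) = 1 / (1 + exp(-(b0 + b1 * x))).
    b0 and b1 are illustrative placeholders; in the study such coefficients
    are fitted to the 50 reconstructed cadaver tests via logistic regression."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * von_mises_stress)))
```

    The logistic form guarantees a smooth, monotone S-shaped relationship between the injury measure and fracture probability.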

  20. Does population size affect genetic diversity? A test with sympatric lizard species.

    PubMed

    Hague, M T J; Routman, E J

    2016-01-01

    Genetic diversity is a fundamental requirement for evolution and adaptation. Nonetheless, the forces that maintain patterns of genetic variation in wild populations are not completely understood. Neutral theory posits that genetic diversity will increase with a larger effective population size and the decreasing effects of drift. However, the lack of compelling evidence for a relationship between genetic diversity and population size in comparative studies has generated some skepticism about the degree to which neutral sequence evolution drives overall patterns of diversity. The goal of this study was to measure genetic diversity among sympatric populations of related lizard species that differ in population size and other ecological factors. By sampling related species from a single geographic location, we aimed to reduce nuisance variance in genetic diversity owing to species differences, for example, in mutation rates or historical biogeography. We compared populations of zebra-tailed lizards and western banded geckos, which are abundant and short-lived, to chuckwallas and desert iguanas, which are less common and long-lived. We assessed population genetic diversity at three protein-coding loci for each species. Our results were consistent with the predictions of neutral theory, as the abundant species almost always had higher levels of haplotype diversity than the less common species. Higher population genetic diversity in the abundant species is likely due to a combination of demographic factors, including larger local population sizes (and presumably effective population sizes), faster generation times and high rates of gene flow with other populations.

  1. A screening of persistent organohalogenated contaminants in hair of East Greenland polar bears.

    PubMed

    Jaspers, Veerle L B; Dietz, Rune; Sonne, Christian; Letcher, Robert J; Eens, Marcel; Neels, Hugo; Born, Erik W; Covaci, Adrian

    2010-10-15

    In this pilot study, we report on levels of persistent organohalogenated contaminants (OHCs) in hair of polar bears (Ursus maritimus) from East Greenland sampled between 1999 and 2001. To our knowledge, this is the first study on the validation of polar bear hair as a non-invasive matrix representative of concentrations and profiles in internal organs and blood plasma. Because of low sample weights (13-140mg), only major bioaccumulative OHCs were detected above the limit of quantification: five polychlorinated biphenyl (PCB) congeners (CB 99, 138, 153, 170 and 180), one polybrominated diphenyl ether (PBDE) congener (BDE 47), oxychlordane, trans-nonachlor and β-hexachlorocyclohexane. The PCB profile in hair was similar to that of internal tissues (i.e. adipose, liver, brain and blood), with CB 153 and 180 as the major congeners in all matrices. A gender difference was found for concentrations in hair relative to concentrations in internal tissues. Females (n=6) were found to display negative correlations, while males (n=5) showed positive correlations, although the p-values were not statistically significant. These negative correlations in females may reflect seasonal OHC mobilisation from peripheral adipose tissue due to, for example, lactation and fasting. The lack of significance in most correlations may be due to small sample sizes and seasonal variability of concentrations in soft tissues. Further research with larger sample weights and sizes is therefore necessary to draw more definitive conclusions on the usefulness of hair for biomonitoring OHCs in polar bears and other fur mammals. Copyright © 2010 Elsevier B.V. All rights reserved.

  2. Iron isotope composition of particles produced by UV-femtosecond laser ablation of natural oxides, sulfides, and carbonates.

    PubMed

    d'Abzac, Francois-Xavier; Beard, Brian L; Czaja, Andrew D; Konishi, Hiromi; Schauer, James J; Johnson, Clark M

    2013-12-17

    The need for femtosecond laser ablation (fs-LA) systems coupled to MC-ICP-MS to accurately perform in situ stable isotope analyses remains an open question, because of the lack of knowledge concerning ablation-related isotopic fractionation in this regime. We report the first iron isotope analysis of size-resolved, laser-induced particles of natural magnetite, siderite, pyrrhotite, and pyrite, collected through cascade impaction, followed by analysis by solution nebulization MC-ICP-MS, as well as imaging using electron microscopy. Iron mass distributions are independent of mineralogy, and particle morphology includes both spheres and agglomerates for all ablated phases. X-ray spectroscopy shows elemental fractionation in siderite (C-rich agglomerates) and pyrrhotite/pyrite (S-rich spheres). We find an increase in (56)Fe/(54)Fe ratios of +2‰, +1.2‰, and +0.8‰ with increasing particle size for magnetite, siderite, and pyrrhotite, respectively. Fe isotope differences in size-sorted aerosols from pyrite ablation are not analytically resolvable. Experimental data are discussed using models of particle generation by Hergenröder and elemental/isotopic fractionation by Richter. We interpret the isotopic fractionation to be related to the iron condensation time scale, dependent on its saturation in the gas phase, as a function of mineral composition. Despite the isotopic variations across aerosol size fractions, total aerosol composition, as calculated from mass balance, confirms that fs-LA produces a stoichiometric sampling in terms of isotopic composition. Specifically, both elemental and isotopic fractionation are produced by particle generation processes and not by femtosecond laser-matter interactions. These results provide critical insights into the analytical requirements for laser-ablation-based stable isotope measurements of high-precision and accuracy in geological samples, including the importance of quantitative aerosol transport to the ICP.

  3. Particle mobility size spectrometers: harmonization of technical standards and data structure to facilitate high quality long-term observations of atmospheric particle number size distributions

    NASA Astrophysics Data System (ADS)

    Wiedensohler, A.; Birmili, W.; Nowak, A.; Sonntag, A.; Weinhold, K.; Merkel, M.; Wehner, B.; Tuch, T.; Pfeifer, S.; Fiebig, M.; Fjäraa, A. M.; Asmi, E.; Sellegri, K.; Depuy, R.; Venzac, H.; Villani, P.; Laj, P.; Aalto, P.; Ogren, J. A.; Swietlicki, E.; Roldin, P.; Williams, P.; Quincey, P.; Hüglin, C.; Fierz-Schmidhauser, R.; Gysel, M.; Weingartner, E.; Riccobono, F.; Santos, S.; Grüning, C.; Faloon, K.; Beddows, D.; Harrison, R. M.; Monahan, C.; Jennings, S. G.; O'Dowd, C. D.; Marinoni, A.; Horn, H.-G.; Keck, L.; Jiang, J.; Scheckman, J.; McMurry, P. H.; Deng, Z.; Zhao, C. S.; Moerman, M.; Henzing, B.; de Leeuw, G.

    2010-12-01

    Particle mobility size spectrometers, often referred to as DMPS (Differential Mobility Particle Sizers) or SMPS (Scanning Mobility Particle Sizers), have found wide application in atmospheric aerosol research. However, comparability of measurements conducted world-wide is hampered by a lack of generally accepted technical standards with respect to the instrumental set-up, measurement mode, data evaluation, and quality control. This article results from several instrument intercomparison workshops conducted within the European infrastructure project EUSAAR (European Supersites for Atmospheric Aerosol Research). Under controlled laboratory conditions, the number size distributions from 20 to 200 nm determined by mobility size spectrometers of different design are within an uncertainty range of ±10% after correcting internal particle losses, while below and above this size range the discrepancies increased. Instruments with identical design agreed within ±3% in the peak number concentration when all settings were done carefully. Technical standards were developed for a minimum requirement of mobility size spectrometry for atmospheric aerosol measurements. Technical recommendations are given for atmospheric measurements including continuous monitoring of flow rates, temperature, pressure, and relative humidity for the sheath and sample air in the differential mobility analyser. In cooperation with EMEP (European Monitoring and Evaluation Program), a new uniform data structure was introduced for saving and disseminating the data within EMEP. This structure contains three levels: raw data, processed data, and final particle size distributions. Importantly, we recommend reporting raw measurements including all relevant instrument parameters as well as a complete documentation on all data transformation and correction steps. 
These technical and data structure standards aim to enhance the quality of long-term size distribution measurements, their comparability between different networks and sites, and their transparency and traceability back to raw data.

  4. NREL, Bosch, and Bonneville Power Administration | Energy Systems

    Science.gov Websites

    NREL, Bosch, and the Bonneville Power Administration are analyzing residential energy storage and sizing. The residential energy storage market lacks sizing standards or broad application guidelines; this, combined with uncertainty about battery lifespan, limits uptake. The NREL, Bosch, and Bonneville partnership will establish practical guidance for sizing and use cases.

  5. Manufacturing work and organizational stresses in export processing zones.

    PubMed

    Lu, Jinky Leilanie

    2009-10-01

    In the light of global industrialization, much attention has been focused on occupational factors and their influence on the health and welfare of workers. This was a cross-sectional study using a stratified sampling technique based on industry size. The study sampled 24 industries: 6 small-scale industries and 9 each of medium- and large-scale industries. From the 24 industries, a total of 500 questionnaire respondents was drawn. Compliance with occupational health and safety standards was low among small-scale industries relative to the medium- and large-scale industries. Only one industry had an air cleaning device for cleaning contaminated air prior to emission into the external community. Among the 500 respondents, the majority were female (88.8%), single (69.6%), and worked in the production or assembly-line station (87.4%). Sickness absenteeism was relatively high among the workers in this study, accounting for almost 54% among females and 48% among males. Many of the workers also reported poor performance at work, boredom, tardiness, and absenteeism. The following associations between work factors and personal factors were statistically significant at p=0.05: boredom was associated with lack of skills training, lack of promotion, disincentives for sick leaves, poor relationship with the boss, and poor relationships with employers; poor performance was likewise associated with lack of skills training, lack of promotions, job insecurity, and poor relationship with employers. From the data generated, important issues that must be dealt with in work organizations include the quality of work life and health and safety issues. Based on these findings, we can conclude that there are still issues of occupational health and safety (OHS) at the target site of export processing zones in the Philippines. There must be an active campaign for OHS in industries that produce for the global market, such as the target industries in this study.

  6. Optimal flexible sample size design with robust power.

    PubMed

    Zhang, Lanju; Cui, Lu; Yang, Bo

    2016-08-30

    It is well recognized that sample size determination is challenging because of the uncertainty on the treatment effect size. Several remedies are available in the literature. Group sequential designs start with a sample size based on a conservative (smaller) effect size and allow early stop at interim looks. Sample size re-estimation designs start with a sample size based on an optimistic (larger) effect size and allow sample size increase if the observed effect size is smaller than planned. Different opinions favoring one type over the other exist. We propose an optimal approach using an appropriate optimality criterion to select the best design among all the candidate designs. Our results show that (1) for the same type of designs, for example, group sequential designs, there is room for significant improvement through our optimization approach; (2) optimal promising zone designs appear to have no advantages over optimal group sequential designs; and (3) optimal designs with sample size re-estimation deliver the best adaptive performance. We conclude that to deal with the challenge of sample size determination due to effect size uncertainty, an optimal approach can help to select the best design that provides most robust power across the effect size range of interest. Copyright © 2016 John Wiley & Sons, Ltd.
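
    The fixed-sample baseline that both group sequential and re-estimation designs start from is the standard normal-approximation sample size formula. A minimal sketch of that baseline (not the authors' optimization machinery):

```python
import math
from statistics import NormalDist

def two_sample_n(delta, sigma, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sided two-sample z-test:
    n = 2 * ((z_{1-alpha/2} + z_{power}) * sigma / delta)^2, rounded up.
    delta is the treatment effect to detect; sigma is the common SD."""
    z_a = NormalDist().inv_cdf(1.0 - alpha / 2.0)
    z_b = NormalDist().inv_cdf(power)
    return math.ceil(2.0 * ((z_a + z_b) * sigma / delta) ** 2)
```

    Halving the assumed effect size roughly quadruples n, which is why the conservative and optimistic starting points contrasted in the abstract can differ so dramatically.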

  7. [Effect sizes, statistical power and sample sizes in "the Japanese Journal of Psychology"].

    PubMed

    Suzukawa, Yumi; Toyoda, Hideki

    2012-04-01

    This study analyzed the statistical power of research studies published in the "Japanese Journal of Psychology" in 2008 and 2009. Sample effect sizes and sample statistical powers were calculated for each statistical test and analyzed with respect to the analytical methods and the fields of the studies. The results show that in the fields like perception, cognition or learning, the effect sizes were relatively large, although the sample sizes were small. At the same time, because of the small sample sizes, some meaningful effects could not be detected. In the other fields, because of the large sample sizes, meaningless effects could be detected. This implies that researchers who could not get large enough effect sizes would use larger samples to obtain significant results.
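
    The trade-off the authors describe, small samples detecting only large effects while large samples flag even trivial ones, follows directly from the power formula. A hedged sketch using the normal approximation (illustrative only, not the paper's method):

```python
import math
from statistics import NormalDist

def cohens_d(mean1, mean2, pooled_sd):
    """Standardized effect size (Cohen's d)."""
    return (mean1 - mean2) / pooled_sd

def approx_power(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-sample z-test for a
    standardized effect size d with n observations per group."""
    z_a = NormalDist().inv_cdf(1.0 - alpha / 2.0)
    noncentrality = d * math.sqrt(n_per_group / 2.0)
    return 1.0 - NormalDist().cdf(z_a - noncentrality)
```

    For example, d = 0.5 with 64 per group gives power near 0.81, while d = 0.2 with 20 per group gives well under 15%, so a meaningful small effect would likely go undetected.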

  8. Sample Size Estimation: The Easy Way

    ERIC Educational Resources Information Center

    Weller, Susan C.

    2015-01-01

    This article presents a simple approach to making quick sample size estimates for basic hypothesis tests. Although there are many sources available for estimating sample sizes, methods are not often integrated across statistical tests, levels of measurement of variables, or effect sizes. A few parameters are required to estimate sample sizes and…

  9. The Relationship between Sample Sizes and Effect Sizes in Systematic Reviews in Education

    ERIC Educational Resources Information Center

    Slavin, Robert; Smith, Dewi

    2009-01-01

    Research in fields other than education has found that studies with small sample sizes tend to have larger effect sizes than those with large samples. This article examines the relationship between sample size and effect size in education. It analyzes data from 185 studies of elementary and secondary mathematics programs that met the standards of…

  10. Seven ways to increase power without increasing N.

    PubMed

    Hansen, W B; Collins, L M

    1994-01-01

    Many readers of this monograph may wonder why a chapter on statistical power was included. After all, by now the issue of statistical power is in many respects mundane. Everyone knows that statistical power is a central research consideration, and certainly most National Institute on Drug Abuse grantees or prospective grantees understand the importance of including a power analysis in research proposals. However, there is ample evidence that, in practice, prevention researchers are not paying sufficient attention to statistical power. If they were, the findings observed by Hansen (1992) in a recent review of the prevention literature would not have emerged. Hansen (1992) examined statistical power based on 46 cohorts followed longitudinally, using nonparametric assumptions given the subjects' age at posttest and the numbers of subjects. Results of this analysis indicated that, in order for a study to attain 80-percent power for detecting differences between treatment and control groups, the difference between groups at posttest would need to be at least 8 percent (in the best studies) and as much as 16 percent (in the weakest studies). In order for a study to attain 80-percent power for detecting group differences in pre-post change, 22 of the 46 cohorts would have needed relative pre-post reductions of greater than 100 percent. Thirty-three of the 46 cohorts had less than 50-percent power to detect a 50-percent relative reduction in substance use. These results are consistent with other review findings (e.g., Lipsey 1990) that have shown a similar lack of power in a broad range of research topics. Thus, it seems that, although researchers are aware of the importance of statistical power (particularly of the necessity for calculating it when proposing research), they somehow are failing to end up with adequate power in their completed studies. 
This chapter argues that the failure of many prevention studies to maintain adequate statistical power is due to an overemphasis on sample size (N) as the only, or even the best, way to increase statistical power. It is easy to see how this overemphasis has come about. Sample size is easy to manipulate, has the advantage of being related to power in a straightforward way, and usually is under the direct control of the researcher, except for limitations imposed by finances or subject availability. Another option for increasing power is to increase the alpha used for hypothesis-testing but, as very few researchers seriously consider significance levels much larger than the traditional .05, this strategy seldom is used. Of course, sample size is important, and the authors of this chapter are not recommending that researchers cease choosing sample sizes carefully. Rather, they argue that researchers should not confine themselves to increasing N to enhance power. It is important to take additional measures to maintain and improve power over and above making sure the initial sample size is sufficient. The authors recommend two general strategies. One strategy involves attempting to maintain the effective initial sample size so that power is not lost needlessly. The other strategy is to take measures to maximize the third factor that determines statistical power: effect size.
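
    Hansen's point about minimum detectable differences can be illustrated with the normal approximation for two proportions. This sketch uses a simplified form with the variance evaluated at the control-group rate, not Hansen's nonparametric assumptions:

```python
import math
from statistics import NormalDist

def min_detectable_diff(p_control, n_per_group, alpha=0.05, power=0.80):
    """Smallest absolute difference in proportions detectable at the given
    power, using a normal approximation with the binomial variance
    evaluated at p_control in both groups (a deliberate simplification)."""
    z_a = NormalDist().inv_cdf(1.0 - alpha / 2.0)
    z_b = NormalDist().inv_cdf(power)
    se_unit = math.sqrt(2.0 * p_control * (1.0 - p_control) / n_per_group)
    return (z_a + z_b) * se_unit
```

    With 200 subjects per group and a 30% control rate, only differences of roughly 13 percentage points are detectable at 80% power; quadrupling n merely halves that threshold, which motivates the chapter's search for power gains that do not require more subjects.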

  11. Phylogenetic effective sample size.

    PubMed

    Bartoszek, Krzysztof

    2016-10-21

    In this paper I address the question: how large is a phylogenetic sample? I propose a definition of a phylogenetic effective sample size for Brownian motion and Ornstein-Uhlenbeck processes: the regression effective sample size. I discuss how mutual information can be used to define an effective sample size in the non-normal process case and compare these two definitions to an already present concept of effective sample size (the mean effective sample size). Through a simulation study I find that the AICc is robust if one corrects for the number of species or effective number of species. Lastly, I discuss how the concept of the phylogenetic effective sample size can be useful for biodiversity quantification, identification of interesting clades and deciding on the importance of phylogenetic correlations. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Generation and characterization of koi herpesvirus recombinants lacking viral enzymes of nucleotide metabolism.

    PubMed

    Fuchs, Walter; Fichtner, Dieter; Bergmann, Sven M; Mettenleiter, Thomas C

    2011-06-01

    Koi herpesvirus (KHV) causes a fatal disease in koi and common carp, but no reliable and genetically characterized vaccines are available up to now. Therefore, we generated KHV recombinants possessing deletions within the viral ribonucleotide reductase (RNR), thymidine kinase (TK), dUTPase, or TK and dUTPase genes, and their corresponding rescuants. All KHV mutants were replication competent in cultured cells. Whereas plaque sizes and titers of RNR-negative KHV were reduced, replication of the other mutants was not affected. Experimental infection of carp indicated attenuation of TK- or dUTPase-deleted KHV, and PCR analysis of tissue samples permitted differentiation of mutant from wild-type virus.

  13. Preliminary study of tissue concentrations of penicillin after local administration into the guttural pouches in four healthy horses.

    PubMed

    Kendall, A; Mayhew, I G; Petrovski, K

    2016-08-01

    Treatment of subclinical carriers of Streptococcus equi subsp. equi with a gelatine-penicillin formulation deposited in the guttural pouch has been empirically proposed, but data on local tissue penicillin concentrations after treatment are lacking. We analysed tissue levels of penicillin after administration into the guttural pouches of four healthy horses. Two horses received local treatment with gelatine-penicillin and two horses received local treatment with an intramammary formulation of penicillin. Tissues were harvested for analysis either 12 or 24 h later. Results indicate that local treatment may be effective, but more research on optimal drug formulations in a larger sample size is warranted. © 2016 Australian Veterinary Association.

  14. Structure and dynamics of mixed-species flocks in a Hawaiian rain forest

    USGS Publications Warehouse

    Hart, P.J.; Freed, L.A.

    2003-01-01

    Mixed-species flocks of native and introduced birds were studied for four years in an upper elevation Hawaiian rain forest. Those flocks were characterized by strong seasonality, large size, low species richness, high intraspecific abundance, a lack of migrants, and a general lack of territoriality or any sort of dominance hierarchy. There was high variability among years in patterns of occurrence at the species level, and high variability within years at the individual level. These flocks are loosely structured social groupings with apparently open membership. The fluid, unstable movement patterns, high degree of variability in size and composition, and lack of positive interspecific associations are not consistent with the “foraging enhancement” hypothesis for flocking. Two resident, endangered insectivores, the Akepa (Loxops coccineus) and Hawaii Creeper (Oreomystis mana) served as “nuclear” species. Flock composition was compared between two study sites that differed significantly in density of these two nuclear species. Flock size was similar at the two sites, primarily because the nuclear species were over-represented relative to their density. This observation suggests that birds are attempting to achieve a more optimal flock size at the lower density site.

  15. The endothelial sample size analysis in corneal specular microscopy clinical examinations.

    PubMed

    Abib, Fernando C; Holzchuh, Ricardo; Schaefer, Artur; Schaefer, Tania; Godois, Ronialci

    2012-05-01

    To evaluate endothelial cell sample size and statistical error in corneal specular microscopy (CSM) examinations. One hundred twenty examinations were conducted with 4 types of corneal specular microscopes: 30 each with the Bio-Optics, CSO, Konan, and Topcon instruments. All endothelial image data were analyzed by the respective instrument software and also by the Cells Analyzer software, using a method developed in our lab. A reliability degree (RD) of 95% and a relative error (RE) of 0.05 were used as cut-off values to analyze images of the counted endothelial cells, called samples. The mean sample size was the number of cells evaluated on the images obtained with each device. Only examinations with RE < 0.05 were considered statistically correct and suitable for comparisons with future examinations. The Cells Analyzer software was used to calculate the RE and customized sample size for all examinations. Bio-Optics: sample size, 97 ± 22 cells; RE, 6.52 ± 0.86; only 10% of the examinations had sufficient endothelial cell quantity (RE < 0.05); customized sample size, 162 ± 34 cells. CSO: sample size, 110 ± 20 cells; RE, 5.98 ± 0.98; only 16.6% of the examinations had sufficient endothelial cell quantity (RE < 0.05); customized sample size, 157 ± 45 cells. Konan: sample size, 80 ± 27 cells; RE, 10.6 ± 3.67; none of the examinations had sufficient endothelial cell quantity (RE > 0.05); customized sample size, 336 ± 131 cells. Topcon: sample size, 87 ± 17 cells; RE, 10.1 ± 2.52; none of the examinations had sufficient endothelial cell quantity (RE > 0.05); customized sample size, 382 ± 159 cells. A very high number of CSM examinations had sample errors based on Cells Analyzer software. The endothelial sample size (examinations) needs to include more cells to be reliable and reproducible. The Cells Analyzer tutorial routine will be useful for CSM examination reliability and reproducibility.
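
    One common way to relate sample size to the relative error of a mean is RE = z · CV / sqrt(n), which rearranges to n = (z · CV / RE)^2. The sketch below uses that textbook relation; it is an assumption for illustration, not necessarily the Cells Analyzer's exact method:

```python
import math
from statistics import NormalDist

def required_cells(cv, re_target=0.05, reliability=0.95):
    """Cells needed to estimate the mean cell area within re_target
    relative error at the given reliability degree:
    n = (z * CV / RE)^2, rounded up. cv is the coefficient of
    variation of individual cell areas."""
    z = NormalDist().inv_cdf(1.0 - (1.0 - reliability) / 2.0)
    return math.ceil((z * cv / re_target) ** 2)
```

    Under this relation a CV of 0.25 needs 97 cells and a CV of 0.5 needs 385, the same order of magnitude as the customized sample sizes reported above.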

  16. 7 CFR 993.517 - Identification.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... numerical or nomenclature designation prescribed in § 993.515, which designation shall not be lacking in... additional information describing in numerical terms the average size count, or particular range of size counts, of the prunes in such lot so long as such numerical terms fall within the range of the size...

  17. 7 CFR 993.517 - Identification.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... numerical or nomenclature designation prescribed in § 993.515, which designation shall not be lacking in... additional information describing in numerical terms the average size count, or particular range of size counts, of the prunes in such lot so long as such numerical terms fall within the range of the size...

  18. 7 CFR 993.517 - Identification.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... numerical or nomenclature designation prescribed in § 993.515, which designation shall not be lacking in... additional information describing in numerical terms the average size count, or particular range of size counts, of the prunes in such lot so long as such numerical terms fall within the range of the size...

  19. 7 CFR 993.517 - Identification.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... numerical or nomenclature designation prescribed in § 993.515, which designation shall not be lacking in... additional information describing in numerical terms the average size count, or particular range of size counts, of the prunes in such lot so long as such numerical terms fall within the range of the size...

  20. 7 CFR 993.517 - Identification.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... numerical or nomenclature designation prescribed in § 993.515, which designation shall not be lacking in... additional information describing in numerical terms the average size count, or particular range of size counts, of the prunes in such lot so long as such numerical terms fall within the range of the size...

  1. Accounting for twin births in sample size calculations for randomised trials.

    PubMed

    Yelland, Lisa N; Sullivan, Thomas R; Collins, Carmel T; Price, David J; McPhee, Andrew J; Lee, Katherine J

    2018-05-04

    Including twins in randomised trials leads to non-independence or clustering in the data. Clustering has important implications for sample size calculations, yet few trials take this into account. Estimates of the intracluster correlation coefficient (ICC), or the correlation between outcomes of twins, are needed to assist with sample size planning. Our aims were to provide ICC estimates for infant outcomes, describe the information that must be specified in order to account for clustering due to twins in sample size calculations, and develop a simple tool for performing sample size calculations for trials including twins. ICCs were estimated for infant outcomes collected in four randomised trials that included twins. The information required to account for clustering due to twins in sample size calculations is described. A tool that calculates the sample size based on this information was developed in Microsoft Excel and in R as a Shiny web app. ICC estimates ranged between -0.12, indicating a weak negative relationship, and 0.98, indicating a strong positive relationship between outcomes of twins. Example calculations illustrate how the ICC estimates and sample size calculator can be used to determine the target sample size for trials including twins. Clustering among outcomes measured on twins should be taken into account in sample size calculations to obtain the desired power. Our ICC estimates and sample size calculator will be useful for designing future trials that include twins. Publication of additional ICCs is needed to further assist with sample size planning for future trials. © 2018 John Wiley & Sons Ltd.
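
The clustering adjustment the authors describe can be approximated with the standard design-effect formula. This is a textbook sketch, not the exact calculator of Yelland et al.; the parameter names are ours:

```python
import math

def adjusted_sample_size(n_independent, icc, prop_twins):
    """Inflate a sample size for clustering due to twins.

    n_independent : size required if all outcomes were independent
    icc           : intracluster correlation between co-twins
    prop_twins    : proportion of infants who are twins

    Uses the design effect DE = 1 + (m - 1) * ICC, where (m - 1) is 1
    for a twin pair and 0 for a singleton, weighted by the twin
    proportion. A standard approximation, not the authors' exact tool.
    """
    de = 1 + prop_twins * icc
    return math.ceil(n_independent * de)
```

For example, a trial needing 300 independent infants, with 20% twins and an ICC of 0.5, would target about 330 infants; with an ICC of 0 no inflation is needed.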

  2. Are specialists at risk under environmental change? Neoecological, paleoecological and phylogenetic approaches

    PubMed Central

    Colles, Audrey; Liow, Lee Hsiang; Prinzing, Andreas

    2009-01-01

    The question ‘what renders a species extinction prone’ is crucial to biologists. Ecological specialization has been suggested as a major constraint impeding the response of species to environmental changes. Most neoecological studies indicate that specialists suffer declines under recent environmental changes. This was confirmed by many paleoecological studies investigating longer-term survival. However, phylogeneticists, studying the entire histories of lineages, showed that specialists are not trapped in evolutionary dead ends and could even give rise to generalists. Conclusions from these approaches diverge possibly because (i) of approach-specific biases, such as lack of standardization for sampling efforts (neoecology), lack of direct observations of specialization (paleoecology), or binary coding and prevalence of specialists (phylogenetics); (ii) neoecologists focus on habitat specialization; (iii) neoecologists focus on extinction of populations, phylogeneticists on persistence of entire clades through periods of varying extinction and speciation rates; (iv) many phylogeneticists study species in which specialization may result from a lack of constraints. We recommend integrating the three approaches by studying common datasets, and accounting for range-size variation among species, and we suggest novel hypotheses on why certain specialists may not be particularly at risk and consequently why certain generalists deserve no less attention from conservationists than specialists. PMID:19580588

  3. Sample size determination for mediation analysis of longitudinal data.

    PubMed

    Pan, Haitao; Liu, Suyu; Miao, Danmin; Yuan, Ying

    2018-03-27

    Sample size planning for longitudinal data is crucial when designing mediation studies because sufficient statistical power is not only required in grant applications and peer-reviewed publications, but is essential to reliable research results. However, sample size determination is not straightforward for mediation analysis of longitudinal designs. To facilitate planning the sample size for longitudinal mediation studies with a multilevel mediation model, this article provides the sample sizes required to achieve 80% power, obtained by simulation under various sizes of the mediation effect, within-subject correlations and numbers of repeated measures. The sample size calculation is based on three commonly used mediation tests: Sobel's method, the distribution of the product method and the bootstrap method. Among the three methods of testing mediation effects, Sobel's method required the largest sample size to achieve 80% power. Bootstrapping and the distribution of the product method performed similarly and were more powerful than Sobel's method, as reflected by their relatively smaller sample sizes. For all three methods, the sample size required to achieve 80% power depended on the value of the ICC (i.e., the within-subject correlation): a larger ICC typically required a larger sample size. Simulation results also illustrated the advantage of the longitudinal study design. Sample size tables for the scenarios most often encountered in practice are also provided for convenient use. The extensive simulation study showed that the distribution of the product method and the bootstrapping method are superior to Sobel's method; the product method is recommended for use in practice because of its lower computational load compared with bootstrapping. An R package has been developed for sample size determination by the product method in longitudinal mediation study designs.
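
Of the three tests compared, Sobel's method has the simplest closed form, which helps explain why it is the most conservative. A minimal sketch of the standard Sobel statistic; the example coefficients in the usage note are illustrative, not from the article:

```python
import math

def sobel_z(a, se_a, b, se_b):
    """Sobel test statistic for the indirect (mediated) effect a*b.

    a, se_a : X -> M path coefficient and its standard error
    b, se_b : M -> Y path coefficient and its standard error

    z = a*b / sqrt(a^2 * se_b^2 + b^2 * se_a^2); compare |z| to 1.96
    for a two-sided test at the 5% level.
    """
    se_ab = math.sqrt(a**2 * se_b**2 + b**2 * se_a**2)
    return (a * b) / se_ab
```

For instance, paths a = 0.5 (SE 0.1) and b = 0.4 (SE 0.1) give z ≈ 3.12, a significant indirect effect at the 5% level.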

  4. Public Opinion Polls, Chicken Soup and Sample Size

    ERIC Educational Resources Information Center

    Nguyen, Phung

    2005-01-01

    Cooking and tasting chicken soup in three different pots of very different size serves to demonstrate that it is the absolute sample size that matters the most in determining the accuracy of the findings of the poll, not the relative sample size, i.e. the size of the sample in relation to its population.
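
The chicken-soup point can be checked directly: a poll's margin of error depends on the absolute sample size, and the population size enters only through a finite population correction that vanishes for large populations. A sketch using the standard formulas (the defaults are our choices):

```python
import math

def margin_of_error(n, population=None, p=0.5, z=1.96):
    """95% margin of error for an estimated poll proportion.

    n          : absolute sample size
    population : optional population size; applies the finite
                 population correction (fpc) when given
    p          : assumed proportion (0.5 is the worst case)
    """
    moe = z * math.sqrt(p * (1 - p) / n)
    if population is not None:
        # fpc shrinks the error, but negligibly when population >> n.
        moe *= math.sqrt((population - n) / (population - 1))
    return moe
```

A sample of 1000 gives a margin of error of about ±3.1 percentage points whether the "pot" holds a hundred thousand people or a hundred million.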

  5. Systematic review of the evidence for Trails B cut-off scores in assessing fitness-to-drive.

    PubMed

    Roy, Mononita; Molnar, Frank

    2013-01-01

    Fitness-to-drive guidelines recommend employing the Trail Making B Test (a.k.a. Trails B), but do not provide guidance regarding cut-off scores. There is ongoing debate regarding the optimal cut-off score on the Trails B test. The objective of this study was to address this controversy by systematically reviewing the evidence for specific Trails B cut-off scores (e.g., cut-offs in both time to completion and number of errors) with respect to fitness-to-drive. Systematic review of all prospective cohort, retrospective cohort, case-control, correlation, and cross-sectional studies reporting the ability of the Trails B to predict driving safety that were published in English-language, peer-reviewed journals. Forty-seven articles were reviewed. None of the articles justified sample sizes via formal calculations. Cut-off scores reported based on research include: 90 seconds, 133 seconds, 147 seconds, 180 seconds, and < 3 errors. There is support for the previously published Trails B cut-offs of 3 minutes or 3 errors (the '3 or 3 rule'). Major methodological limitations of this body of research were uncovered including (1) lack of justification of sample size leaving studies open to Type II error (i.e., false negative findings), and (2) excessive focus on associations rather than clinically useful cut-off scores.

  6. Branching, flowering and fruiting of Jatropha curcas treated with ethephon or benzyladenine and gibberellins.

    PubMed

    Costa, Anne P; Vendrame, Wagner; Nietsche, Sílvia; Crane, Jonathan; Moore, Kimberly; Schaffer, Bruce

    2016-05-31

    Jatropha curcas L. has been identified for biofuel production, but it yields poorly in commercial settings due to limited branching and a lack of yield uniformity. The objective of this study was to evaluate the effects of a single application of ethephon or a combination of 6-benzyladenine (BA) with gibberellic acid isomers A4 and A7 (GA4+7) on branch induction, flowering and fruit production in jatropha plants with and without leaves. Plants with and without leaves showed differences in growth and reproductive variables. For all variables except inflorescence set, there were no statistically significant interactions between the presence of leaves and plant growth regulator concentration. The total number of flowers per inflorescence decreased as ethephon concentration increased. As BA + GA4+7 concentration increased, seed dry weight increased. Thus, ethephon and BA + GA4+7 applications appeared to affect flowering and seed production to a greater extent than branching. The inability to discern significant treatment effects for most variables may have been due to the large variability within the plant populations studied, resulting in an insufficient sample size. Therefore, data collected from this study were used for statistical estimation of sample sizes to provide a reference for future studies.

  7. Analysis of work ability and work-related physical activity of employees in a medium-sized business.

    PubMed

    Wilke, Christiane; Ashton, Philip; Elis, Tobias; Biallas, Bianca; Froböse, Ingo

    2015-12-18

    Work-related physical activity (PA) and work ability are of growing importance in modern working society. There is evidence for age- and job-related differences regarding PA and work ability. This study analyses the work ability and work-related PA of employees in a medium-sized business with regard to age and occupation. The total sample consists of 148 employees (116 men, 78.38% of the sample, and 32 women, 21.62%; mean age: 40.85 ± 10.07 years). 100 subjects (67.57%) are white-collar workers (WC), and 48 (32.43%) are blue-collar workers (BC). Work ability is measured using the Work Ability Index, and physical activity is obtained via the Global Physical Activity Questionnaire. Work ability shows significant differences regarding occupation (p = 0.001) but not regarding age. Further, significant differences are found for work-related PA concerning occupation (p < 0.0001), but again not for age. Overall, more than half of all subjects meet the current guidelines for physical activity. Work ability is rated as good; nevertheless, a special focus should lie on its promotion during early and late working life. Also, there is still a lack of evidence on the level of work-related PA. Considering work-related PA could help employees meet current activity recommendations.

  8. Sample size in studies on diagnostic accuracy in ophthalmology: a literature survey.

    PubMed

    Bochmann, Frank; Johnson, Zoe; Azuara-Blanco, Augusto

    2007-07-01

    To assess the sample sizes used in studies on diagnostic accuracy in ophthalmology. Design and sources: a survey of literature published in 2005. The frequency of reported sample size calculations and the sample sizes used were extracted from the published literature. A manual search of the five clinical ophthalmology journals with the highest impact (Investigative Ophthalmology and Visual Science, Ophthalmology, Archives of Ophthalmology, American Journal of Ophthalmology and British Journal of Ophthalmology) was conducted by two independent investigators. A total of 1698 articles were identified, of which 40 were studies of diagnostic accuracy. One study reported that the sample size was calculated before initiating the study. Another study reported consideration of sample size without a calculation. The mean (SD) sample size across all diagnostic studies was 172.6 (218.9). The median prevalence of the target condition was 50.5%. Only a few studies consider sample size in their methods. Inadequate sample sizes in diagnostic accuracy studies may result in misleading estimates of test accuracy. An improvement over the current standards on the design and reporting of diagnostic studies is warranted.
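
The kind of calculation the survey found missing can be sketched with a Buderer-style formula for estimating a test's sensitivity to a given precision; the example numbers are illustrative, not from any reviewed study:

```python
import math

def n_for_sensitivity(sens, precision, prevalence, z=1.96):
    """Participants needed to estimate sensitivity within +/- precision.

    Buderer-style calculation: first the number of diseased subjects
    required for the desired CI half-width around the anticipated
    sensitivity, then scaled up by the disease prevalence in the
    study population. A sketch, not a full diagnostic-design tool.
    """
    n_diseased = (z**2 * sens * (1 - sens)) / precision**2
    return math.ceil(n_diseased / prevalence)
```

For an anticipated sensitivity of 0.90 estimated to within ±0.05 at 50% prevalence, about 277 participants are needed, noticeably more than the mean sample size of 172.6 reported above.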

  9. Hot super-Earths stripped by their host stars

    PubMed Central

    Lundkvist, M. S.; Kjeldsen, H.; Albrecht, S.; Davies, G. R.; Basu, S.; Huber, D.; Justesen, A. B.; Karoff, C.; Silva Aguirre, V.; Van Eylen, V.; Vang, C.; Arentoft, T.; Barclay, T.; Bedding, T. R.; Campante, T. L.; Chaplin, W. J.; Christensen-Dalsgaard, J.; Elsworth, Y. P.; Gilliland, R. L.; Handberg, R.; Hekker, S.; Kawaler, S. D.; Lund, M. N.; Metcalfe, T. S.; Miglio, A.; Rowe, J. F.; Stello, D.; Tingley, B.; White, T. R.

    2016-01-01

    Simulations predict that hot super-Earth sized exoplanets can have their envelopes stripped by photoevaporation, which would present itself as a lack of these exoplanets. However, this absence in the exoplanet population has escaped a firm detection. Here we demonstrate, using asteroseismology on a sample of exoplanets and exoplanet candidates observed during the Kepler mission that, while there is an abundance of super-Earth sized exoplanets with low incident fluxes, none are found with high incident fluxes. We do not find any exoplanets with radii between 2.2 and 3.8 Earth radii with incident flux above 650 times the incident flux on Earth. This gap in the population of exoplanets is explained by evaporation of volatile elements and thus supports the predictions. The confirmation of a hot-super-Earth desert caused by evaporation will add an important constraint on simulations of planetary systems, since they must be able to reproduce the dearth of close-in super-Earths. PMID:27062914

  10. Cerebral cortex astroglia and the brain of a genius: A propos of A. Einstein's

    PubMed Central

    Colombo, Jorge A.; Reisin, Hernán D.; Miguel-Hidalgo, José J.; Rajkowska, Grazyna

    2010-01-01

    The glial fibrillary acidic protein immunoreactive astroglial layout of the cerebral cortex from Albert Einstein and four other age-matched human cases lacking any known neurological disease was analyzed by quantifying mathematically defined geometrical features. Several parameters (parallelism, relative depth, tortuosity) describing the primate-specific interlaminar glial processes did not show individually distinctive characteristics in any of the samples analyzed. However, A. Einstein's astrocytic processes showed larger sizes and higher numbers of interlaminar terminal masses, reaching sizes of 15 μm in diameter. These bulbous endings are of unknown significance, and they have been described as occurring in Alzheimer's disease. These observations are placed in the context of the general discussion regarding the proposal, by other authors, that structural, postmortem characteristics of the aged brain of Albert Einstein may serve as markers of his cognitive performance, a proposal to which the authors of this paper do not subscribe, and argue against. PMID:16675021

  11. Measuring β-diversity with species abundance data.

    PubMed

    Barwell, Louise J; Isaac, Nick J B; Kunin, William E

    2015-07-01

    In 2003, 24 presence-absence β-diversity metrics were reviewed and a number of trade-offs and redundancies identified. We present a parallel investigation into the performance of abundance-based metrics of β-diversity. β-diversity is a multi-faceted concept, central to spatial ecology. There are multiple metrics available to quantify it: the choice of metric is an important decision. We test 16 conceptual properties and two sampling properties of a β-diversity metric: metrics should be 1) independent of α-diversity and 2) cumulative along a gradient of species turnover. Similarity should be 3) probabilistic when assemblages are independently and identically distributed. Metrics should have 4) a minimum of zero and increase monotonically with the degree of 5) species turnover, 6) decoupling of species ranks and 7) evenness differences. However, complete species turnover should always generate greater values of β than extreme 8) rank shifts or 9) evenness differences. Metrics should 10) have a fixed upper limit, 11) symmetry (βA,B = βB,A), 12) double-zero asymmetry for double absences and double presences and 13) not decrease in a series of nested assemblages. Additionally, metrics should be independent of 14) species replication, 15) the units of abundance and 16) differences in total abundance between sampling units. When samples are used to infer β-diversity, metrics should be 1) independent of sample sizes and 2) independent of unequal sample sizes. We test 29 metrics for these properties and five 'personality' properties. Thirteen metrics were outperformed or equalled across all conceptual and sampling properties. Differences in sensitivity to species' abundance lead to a performance trade-off between sample size bias and the ability to detect turnover among rare species. In general, abundance-based metrics are substantially less biased in the face of undersampling, although the presence-absence metric, βsim, performed well overall. Only βBaselga R turn, βBaselga B-C turn and βsim measured purely species turnover and were independent of nestedness. Among the other metrics, sensitivity to nestedness varied >4-fold. Our results indicate large amounts of redundancy among existing β-diversity metrics, whilst the estimation of unseen shared and unshared species is lacking and should be addressed in the design of new abundance-based metrics. © 2015 The Authors. Journal of Animal Ecology published by John Wiley & Sons Ltd on behalf of British Ecological Society.
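
βsim, the presence-absence metric singled out above, measures pure turnover and is insensitive to nestedness. A minimal sketch of its standard definition (the species labels in the usage note are illustrative):

```python
def beta_sim(site1, site2):
    """Simpson dissimilarity (beta-sim) from two species lists.

    a = species shared by both sites; b, c = species unique to each.
    beta_sim = min(b, c) / (a + min(b, c)): captures species turnover
    only, so a perfectly nested pair of assemblages scores 0.
    """
    s1, s2 = set(site1), set(site2)
    a = len(s1 & s2)
    smaller_unique = min(len(s1 - s2), len(s2 - s1))
    if a + smaller_unique == 0:
        return 0.0
    return smaller_unique / (a + smaller_unique)
```

A nested pair such as {A, B, C, D} vs {A, B} scores 0 (no turnover, only a richness difference), while complete turnover such as {A, B} vs {C, D} scores 1.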

  12. Caution regarding the choice of standard deviations to guide sample size calculations in clinical trials.

    PubMed

    Chen, Henian; Zhang, Nanhua; Lu, Xiaosun; Chen, Sophie

    2013-08-01

    The method used to determine choice of standard deviation (SD) is inadequately reported in clinical trials. Underestimations of the population SD may result in underpowered clinical trials. This study demonstrates how using the wrong method to determine population SD can lead to inaccurate sample sizes and underpowered studies, and offers recommendations to maximize the likelihood of achieving adequate statistical power. We review the practice of reporting sample size and its effect on the power of trials published in major journals. Simulated clinical trials were used to compare the effects of different methods of determining SD on power and sample size calculations. Prior to 1996, sample size calculations were reported in just 1%-42% of clinical trials. This proportion increased from 38% to 54% after the initial Consolidated Standards of Reporting Trials (CONSORT) was published in 1996, and from 64% to 95% after the revised CONSORT was published in 2001. Nevertheless, underpowered clinical trials are still common. Our simulated data showed that all minimal and 25th-percentile SDs fell below 44 (the population SD), regardless of sample size (from 5 to 50). For sample sizes 5 and 50, the minimum sample SDs underestimated the population SD by 90.7% and 29.3%, respectively. If only one sample was available, there was less than 50% chance that the actual power equaled or exceeded the planned power of 80% for detecting a medium effect size (Cohen's d = 0.5) when using the sample SD to calculate the sample size. The proportions of studies with actual power of at least 80% were about 95%, 90%, 85%, and 80% when we used the larger SD, 80% upper confidence limit (UCL) of SD, 70% UCL of SD, and 60% UCL of SD to calculate the sample size, respectively. 
When more than one sample was available, the weighted average SD resulted in about 50% of trials being underpowered; the proportion of trials with power of 80% increased from 90% to 100% when the 75th percentile and the maximum SD from 10 samples were used. Greater sample size is needed to achieve a higher proportion of studies having actual power of 80%. This study only addressed sample size calculation for continuous outcome variables. We recommend using the 60% UCL of SD, maximum SD, 80th-percentile SD, and 75th-percentile SD to calculate sample size when 1 or 2 samples, 3 samples, 4-5 samples, and more than 5 samples of data are available, respectively. Using the sample SD or average SD to calculate sample size should be avoided.
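
The underestimation driving these recommendations is easy to reproduce: for normally distributed data, the sample SD falls below the population SD more often than not, so a trial sized from a single pilot SD is underpowered more than half the time. A small stdlib simulation using the article's population SD of 44; the pilot size, replication count and seed are our illustrative choices:

```python
import random
import statistics

random.seed(42)

POP_SD = 44.0   # population SD used in the article's simulations
N_PILOT = 10    # illustrative pilot-study size
REPS = 2000     # number of simulated pilot studies

# Draw many pilot samples and record each sample SD.
sds = [statistics.stdev([random.gauss(0.0, POP_SD) for _ in range(N_PILOT)])
       for _ in range(REPS)]

# Fraction of pilot SDs that underestimate the true SD.
share_under = sum(s < POP_SD for s in sds) / REPS
```

Here share_under comes out above 0.5 (theoretically about 0.56 for n = 10), illustrating why the authors recommend an upper confidence limit or upper-percentile SD rather than a single sample SD.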

  13. Heritabilities of somatotype components in a population from rural Mozambique.

    PubMed

    Saranga, Sílvio Pedro José; Prista, António; Nhantumbo, Leonardo; Beunen, Gaston; Rocha, Jorge; Williams-Blangero, Sarah; Maia, José A

    2008-01-01

    There have been few genetic studies of normal variation in body size and composition conducted in Africa. In particular, the genetic determinants of somatotype remain to be established for an African population. (1) To estimate the heritabilities of somatotype components and (2) to compare the quantitative genetic effects in an African population with those assessed in European and American populations. The sample comprised 329 subjects (173 males and 156 females) aged 7-17 years, belonging to 132 families. The sibships in the sample ranged in size from two to seven individuals. All sampled individuals were residents of the Calanga region, an area located to the north of Maputo in Mozambique. Somatotype was assessed using the Heath-Carter technique. Heritabilities were estimated using SAGE software. Moderate heritabilities were determined for each trait. Between 30 and 40% of the variation in each somatotype measure was attributable to genetic factors. The heritability of ectomorphy was 31%. Mesomorphy was similarly moderately heritable, with approximately 30% of the variation attributable to genetic factors. The heritability of endomorphy was higher in the Calanga population (h(2) = 0.40). Quantitative genetic analyses of somatotype variation among siblings indicate that genetic factors significantly influence endomorphy, mesomorphy, and ectomorphy. However, environmental factors also have significant effects on the variation in physique present in the population of Calanga. Lack of proper nutrition, housing, medical assistance, and primary health care, together with very demanding and sex-specific daily chores, may contribute to the environmental effects on these traits.

  14. Potentially toxic elements contamination in urban soils: a comparison of three European cities.

    PubMed

    Biasioli, M; Grcman, H; Kralj, T; Madrid, F; Díaz-Barrientos, E; Ajmone-Marsan, F

    2007-01-01

    Studies of several cities around the world confirm that urban soils are subject to heavy anthropogenic disturbance. However, these surveys are difficult to compare due to a lack of common sampling and analytical protocols. In this study the soils of Ljubljana (Slovenia), Sevilla (Spain), and Torino (Italy) were extensively sampled and analyzed using common procedures. Results highlighted similarities across the cities, despite their differences in geography, size, climate, etc. Potentially toxic elements (PTE) showed a wide range in concentration, reflecting diffuse contamination. Among the "urban" elements, Pb exceeded the legislation threshold in 45% of Ljubljana, 43% of Torino, and 11% of Sevilla samples, while Zn was above the limits in 20, 43, and 2% of the soils of Ljubljana, Torino, and Sevilla, respectively. The distribution of PTE showed no depth-dependent changes, while general soil properties seemed more responsive to anthropogenic influences. Multivariate statistics revealed similar associations between PTE in the three cities, with Cu, Pb, and Zn in one group, and Ni and Cr in another, suggesting an anthropogenic origin for the former group and a natural one for the latter. Chromium and Ni were unaffected by land use, except for roadside soils, while Cu, Pb, and Zn distributions appeared to be more dependent on the distance from emission sources. Regardless of location, climate, and size, the "urban" factor, integrating the type and intensity of contaminant emission and anthropogenic disturbance, seems to prevail in determining trends of PTE contamination.

  15. Comparative study on serum levels of macro and trace elements in schizophrenia based on supervised learning methods.

    PubMed

    Lin, Tong; Liu, Tiebing; Lin, Yucheng; Yan, Lailai; Chen, Zhongxue; Wang, Jingyu

    2017-09-01

    The etiology and pathophysiology of schizophrenia (SCZ) remain obscure. This study explored the associations between SCZ risk and serum levels of 39 macro and trace elements (MTE). A 1:1 matched case-control study was conducted among 114 schizophrenia patients and 114 healthy controls matched by age, sex and region. Blood samples were collected to determine the concentrations of 39 MTE by ICP-AES and ICP-MS. Both supervised learning methods and classical statistical testing were used to uncover differences in MTE levels between cases and controls. The best prediction accuracies were 99.21%, achieved by support vector machines in the original feature space (without dimensionality reduction), and 98.82%, achieved by Naive Bayes with dimensionality reduction. More than half of the MTE were found to differ significantly between SCZ patients and controls. The investigation showed that there are remarkable differences in concentrations of MTE between SCZ patients and healthy controls. The results might be useful for the diagnosis and prognosis of SCZ, and they also point to promising applications in pharmacy and nutrition. However, the results should be interpreted with caution due to the limited sample size and the lack of data on potential confounding factors, such as alcohol, smoking, body mass index (BMI), use of antipsychotics and dietary intake. Applying these analyses in future designs with larger sample sizes will be useful. Copyright © 2017 Elsevier GmbH. All rights reserved.

  16. Understanding the role of conscientiousness in healthy aging: where does the brain come in?

    PubMed

    Patrick, Christopher J

    2014-05-01

    In reviewing this impressive series of articles, I was struck by 2 points in particular: (a) the fact that the empirically oriented articles focused on analyses of data from very large samples, with the articles by Friedman, Kern, Hampson, and Duckworth (2014) and Kern, Hampson, Goldberg, and Friedman (2014) highlighting an approach to merging existing data sets through use of "metric bridges" to address key questions not addressable through 1 data set alone, and (b) the fact that the articles as a whole included limited mention of neuroscientific (i.e., brain research) concepts, methods, and findings. One likely reason for the lack of reference to brain-oriented work is the persisting gap between smaller-sample lab-experimental and larger-sample multivariate-correlational approaches to psychological research. As a strategy for addressing this gap and bringing a distinct neuroscientific component to the National Institute on Aging's conscientiousness and health initiative, I suggest that the metric bridging approach highlighted by Friedman and colleagues could be used to connect existing large-scale data sets containing both neurophysiological variables and measures of individual difference constructs to other data sets containing richer arrays of nonphysiological variables, including data from longitudinal or twin studies focusing on personality and health-related outcomes (e.g., the Terman Life Cycle study and Hawaii longitudinal studies, as described in the article by Kern et al., 2014). (PsycINFO Database Record (c) 2014 APA, all rights reserved).

  17. Variations in Physicochemical Properties of a Traditional Mercury-Based Nanopowder Formulation: Need for Standard Manufacturing Practices

    PubMed Central

    Kamath, S. U.; Pemiah, B.; Rajan, K. S.; Krishnaswamy, S.; Sethuraman, S.; Krishnan, U. M.

    2014-01-01

    Rasasindura is a mercury-based nanopowder synthesized from natural products through mechanothermal processing. It has been used in the Ayurvedic system of medicine since time immemorial for various therapeutic purposes, such as rejuvenation and the treatment of syphilis and genital disorders. Rasasindura is said to be composed of mercury, sulphur and organic moieties derived from the decoction of plant extracts used during its synthesis. There is little scientific understanding of the preparation process so far. Though metallic mercury is incorporated deliberately for therapeutic purposes, it certainly raises toxicity concerns. The lack of gold-standard manufacturing practices for such drugs leads to variation in the chemical composition of the final product. The objective of the present study was to assess the physicochemical properties of Rasasindura samples from different batches purchased from different manufacturers, assess the extent of deviation, and gauge its impact on human health. Modern characterization techniques were employed to analyze particle size and morphology, surface area, zeta potential, elemental composition, crystallinity, thermal stability and degradation. The average particle size of the samples observed by scanning electron microscopy ranged from 5 to 100 nm. Mercury content was found to be between 84 and 89% by elemental analysis. Despite batch-to-batch and manufacturer-to-manufacturer variations in physicochemical properties, all the samples contained mercury in the form of HgS. These differences in physicochemical properties may ultimately affect the biological outcome. PMID:25593382

  18. Characterization of the Roman curse tablet

    NASA Astrophysics Data System (ADS)

    Liu, Wen; Zhang, Boyang; Fu, Lin

    2017-08-01

    The Roman curse tablet, produced in the ancient Roman period, is a metal plate inscribed with curses. In this research, several techniques were used to determine the physical structure and chemical composition of a Roman curse tablet and to test the hypothesis that the tablet is made of pure lead rather than a lead alloy. A sample of a Roman curse tablet from the Johns Hopkins Archaeological Museum was analyzed using several different characterization techniques: optical microscopy, scanning electron microscopy (SEM), atomic force microscopy (AFM), and differential scanning calorimetry (DSC). Because of the small sample size, X-ray diffraction (XRD), X-ray photoelectron spectroscopy (XPS) and X-ray fluorescence (XRF) could not be applied. Enlarged images of the sample surface obtained by optical microscopy and SEM revealed a rough, non-uniform, grainy surface. AFM provided three-dimensional topography of the sample surface at the atomic level. DSC probed the thermal properties, which indicated that the material is most likely a lead alloy rather than pure lead. However, none of these tests revealed the chemical composition, and future work will be required to determine it. From the characterization techniques above, the Roman curse tablet sample is therefore concluded to consist of a lead alloy, not pure lead.

  19. Characterisation of the organic composition of size segregated atmospheric particulate matter at traffic exposed and background sites in Madrid

    NASA Astrophysics Data System (ADS)

    Mirante, F.; Perez, R.; Alves, C.; Revuelta, M.; Pio, C.; Artiñano, B.; Nunes, T.

    2010-05-01

    The growing awareness of the impact of atmospheric particulate matter (PM) on climate, and the incompletely recognised but serious effects of anthropogenic aerosols on air quality and human health, have led to diverse studies involving almost exclusively the coarse or the fine PM fractions. However, these environmental effects, the PM formation processes and the source assignment depend greatly on the particle size distribution. The innovative character of this study consists in obtaining time series of size-segregated, detailed chemical composition of PM for differently polluted sites. In this perspective, a summer sampling campaign was carried out from 1 June to 1 July 2009. One of the sampling sites was a representative urban monitoring station (Escuelas Aguirre) belonging to the municipal network, located at a heavy-traffic street intersection in downtown Madrid. The other sampling point was positioned within the CIEMAT area, in the NW corner of the city, which can be considered an urban background or suburban site. Particulate matter was sampled with high-volume cascade impactors at four size stages: 10-2.5, 2.5-0.95, 0.95-0.45 and < 0.45 µm. Daily sampling was carried out on quartz fibre filters. Based on meteorological conditions and PM mass concentrations, each of the seven groups of filters collected during the first week was combined with the corresponding filters of the third week; the same procedure was applied to samples of the second and fourth weeks. Filters of the 0.95-0.45 and < 0.45 µm stages were pooled to obtain the PM0.95 organic composition. The size-segregated PM samples were subjected to organic analysis by gas chromatography-mass spectrometry (GC-MS), after solvent extraction of the filters and an appropriate derivatisation technique. Besides the homologous compound series of organic classes (e.g. n-alkanes, n-alkanols and n-alkanoic acids), special attention was given to the determination of specific molecular markers for different sources (e.g. vehicular). Carbon preference indices (CPI) close to unity and the presence of PAHs point to vehicle exhaust as the main emission source of the aliphatic and polycyclic aromatic fractions, especially for the roadside aerosols. Concentration ratios between PAHs were also used to assign emission sources. The abundance and the sources of these carcinogenic pollutants are discussed and compared taking into account the local/regional characteristics. Water-soluble ions in PM were also analysed by ion chromatography. A portion of the same filters was subjected to metal speciation by Inductively Coupled Plasma Mass Spectrometry (ICP-MS) or Instrumental Neutron Activation Analysis (INAA). Receptor-oriented modelling for source apportionment was applied to the size-segregated PM chemical composition data. The results of this work are expected to fill a gap in reliable information on the size-dependent constitution, sources and atmospheric formation processes of particles in this area of the central Iberian Peninsula. Acknowledgements: F. Mirante thanks the Portuguese Science Foundation for financial support of the training period at CIEMAT, as well as for the PhD grant SFRH/BD/45473/2008. M.A. Revuelta acknowledges the Ministry of Science and Innovation for economic support through the FPI predoctoral grant BES-2008-007079.

  20. Development and validation of a knowledge test for health professionals regarding lifestyle modification.

    PubMed

    Talip, Whadi-ah; Steyn, Nelia P; Visser, Marianne; Charlton, Karen E; Temple, Norman

    2003-09-01

    We wanted to develop and validate a test that assesses the knowledge and practices of health professionals (HPs) with regard to the role of nutrition, physical activity, and smoking cessation (lifestyle modification) in chronic diseases of lifestyle. A descriptive cross-sectional validation study was carried out. The validation design consisted of two phases, namely 1) test planning and development and 2) test evaluation. The study sample consisted of five groups of HPs: dietitians, dietetic interns, general practitioners, medical students, and nurses. The overall response rate was 58%, resulting in a sample size of 186 participants. A test was designed to evaluate the knowledge and practices of HPs. The test was first evaluated by an expert group to ensure content, construct, and face validity. Thereafter, the questionnaire was tested on five groups of HPs to test for criterion validity. Internal consistency was evaluated by Cronbach's alpha. An expert panel ensured content, construct, and face validity of the test. Groups with the most training and exposure to nutrition (dietitians and dietetic interns) had the highest group mean score, ranging from 61% to 88%, whereas those with limited nutrition training (general practitioners, medical students, and nurses) had significantly lower scores, ranging from 26% to 80%. This result demonstrated criterion validity. Internal consistency of the overall test demonstrated a Cronbach's alpha of 0.99. Most HPs identified the mass media as their main source of information on lifestyle modification. These HPs also identified lack of time, lack of patient compliance, and lack of knowledge as barriers that prevent them from providing counseling on lifestyle modification. The results of this study showed that this test instrument identifies groups of health professionals with adequate training (knowledge) in lifestyle modification and those who require further training (knowledge).

  1. Latin American immigrants have limited access to health insurance in Japan: a cross sectional study

    PubMed Central

    2012-01-01

    Background Japan provides universal health insurance to all legal residents. Prior research has suggested that immigrants to Japan disproportionately lack health insurance coverage, but no prior study has used rigorous methodology to examine this issue among Latin American immigrants in Japan. The aim of our study, therefore, was to assess the pattern of health insurance coverage and predictors of uninsurance among documented Latin American immigrants in Japan. Methods We used a cross sectional, mixed method approach using a probability proportional to estimated size sampling procedure. Of 1052 eligible Latin American residents mapped through extensive fieldwork in selected clusters, 400 immigrant residents living in Nagahama City, Japan were randomly selected for our study. Data were collected through face-to-face interviews using a structured questionnaire developed from qualitative interviews. Results Our response rate was 70.5% (n = 282). Respondents were mainly from Brazil (69.9%), under 40 years of age (64.5%), and had lived in Japan for a mean of 9.45 years (SE 0.44; median, 8.00). We found a high prevalence of uninsurance (19.8%) in our sample compared with the estimated national average of 1.3% in the general population. Among insured full-time workers (n = 209), 55.5% were not covered by the Employees' Health Insurance. Many immigrants cited financial trade-offs as the main reason for uninsurance. Lack of knowledge that health insurance is mandatory in Japan, not having a chronic disease, and having one or no children were strong predictors of uninsurance. Conclusions Lack of health insurance for immigrants in Japan is a serious concern for this population as well as for the Japanese health care system. Appropriate measures should be taken to facilitate access to health insurance for this vulnerable population. PMID:22443284

  2. Dispersion and sampling of adult Dermacentor andersoni in rangeland in Western North America.

    PubMed

    Rochon, K; Scoles, G A; Lysyk, T J

    2012-03-01

    A fixed precision sampling plan was developed for off-host populations of the adult Rocky Mountain wood tick, Dermacentor andersoni (Stiles), based on data collected by dragging at 13 locations in Alberta, Canada; Washington; and Oregon. In total, 222 site-date combinations were sampled. Each site-date combination was considered a sample, and each sample ranged in size from 86 to 250 10 m² quadrats. Analysis of simulated quadrats ranging in size from 10 to 50 m² indicated that the most precise sample unit was the 10 m² quadrat. Samples taken when abundance was < 0.04 ticks per 10 m² were more likely not to depart significantly from statistical randomness than samples taken when abundance was greater. Data were grouped into ten abundance classes and assessed for fit to the Poisson and negative binomial distributions. The Poisson distribution fit only data in abundance classes < 0.02 ticks per 10 m², while the negative binomial distribution fit data from all abundance classes. A negative binomial distribution with common k = 0.3742 fit data in eight of the ten abundance classes. Both the Taylor and Iwao mean-variance relationships were fit and used to predict sample sizes for a fixed level of precision. Sample sizes predicted using the Taylor model tended to underestimate actual sample sizes, while sample sizes estimated using the Iwao model tended to overestimate them. Using a negative binomial with common k provided estimates of required sample sizes closest to empirically calculated sample sizes.
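
    The fixed-precision predictors compared above can be sketched numerically. Below is a minimal Python illustration (not the authors' code) of two sample-size formulas from the ecological sampling literature: Taylor's power law and a negative binomial with common k. The value k = 0.3742 comes from the abstract; the Taylor coefficients a and b and the precision level D are hypothetical placeholders.

```python
import math

def n_taylor(mean, a, b, D=0.25):
    """Quadrats needed for fixed relative precision D (= SE/mean),
    assuming Taylor's power law: variance = a * mean**b."""
    return math.ceil(a * mean ** (b - 2) / D ** 2)

def n_negbin(mean, k, D=0.25):
    """Same, assuming a negative binomial with common dispersion k:
    variance = mean + mean**2 / k."""
    return math.ceil((1 / mean + 1 / k) / D ** 2)

# k = 0.3742 is the common k reported in the abstract; a, b and D are
# hypothetical. With a = b = 1, Taylor's law reduces to the Poisson case.
print(n_negbin(mean=0.05, k=0.3742, D=0.25))      # -> 363 quadrats
print(n_taylor(mean=0.05, a=1.0, b=1.0, D=0.25))  # -> 320 quadrats
```

    As the abstract notes, required quadrat counts grow quickly at low abundance, since the relative variance (1/mean + 1/k) is dominated by the 1/mean term.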

  3. Simple, Defensible Sample Sizes Based on Cost Efficiency

    PubMed Central

    Bacchetti, Peter; McCulloch, Charles E.; Segal, Mark R.

    2009-01-01

    Summary The conventional approach of choosing sample size to provide 80% or greater power ignores the cost implications of different sample size choices. Costs, however, are often impossible for investigators and funders to ignore in actual practice. Here, we propose and justify a new approach for choosing sample size based on cost efficiency, the ratio of a study’s projected scientific and/or practical value to its total cost. By showing that a study’s projected value exhibits diminishing marginal returns as a function of increasing sample size for a wide variety of definitions of study value, we are able to develop two simple choices that can be defended as more cost efficient than any larger sample size. The first is to choose the sample size that minimizes the average cost per subject. The second is to choose sample size to minimize total cost divided by the square root of sample size. This latter method is theoretically more justifiable for innovative studies, but also performs reasonably well and has some justification in other cases. For example, if projected study value is assumed to be proportional to power at a specific alternative and total cost is a linear function of sample size, then this approach is guaranteed either to produce more than 90% power or to be more cost efficient than any sample size that does. These methods are easy to implement, based on reliable inputs, and well justified, so they should be regarded as acceptable alternatives to current conventional approaches. PMID:18482055
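
    The paper's second rule (minimize total cost divided by the square root of sample size) has a simple closed form when total cost is linear in sample size. A minimal sketch, with hypothetical fixed and per-subject costs:

```python
def n_min_cost_per_sqrt(c0, c1):
    """Sample size minimizing (c0 + c1*n) / sqrt(n) for a linear cost
    function: setting the derivative to zero gives n* = c0 / c1."""
    return c0 / c1

def cost_per_sqrt(n, c0, c1):
    """Total cost divided by the square root of sample size."""
    return (c0 + c1 * n) / n ** 0.5

# Hypothetical study: $100,000 fixed cost, $1,000 per subject.
print(n_min_cost_per_sqrt(100_000, 1_000))  # -> 100.0 subjects
```

    With these numbers the criterion at n = 100 is lower than at either 50 or 200, consistent with an interior optimum; by contrast, average cost per subject under a linear cost function decreases monotonically, so the first rule is most informative when costs are nonlinear in n.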

  4. RnaSeqSampleSize: real data based sample size estimation for RNA sequencing.

    PubMed

    Zhao, Shilin; Li, Chung-I; Guo, Yan; Sheng, Quanhu; Shyr, Yu

    2018-05-30

    One of the most important and often neglected components of a successful RNA sequencing (RNA-Seq) experiment is sample size estimation. A few negative binomial model-based methods have been developed to estimate sample size based on the parameters of a single gene. However, thousands of genes are quantified and tested for differential expression simultaneously in RNA-Seq experiments, so additional issues must be carefully addressed, including the false discovery rate for multiple statistical tests and the widely distributed read counts and dispersions across genes. To address these issues, we developed a sample size and power estimation method named RnaSeqSampleSize, based on the distributions of gene average read counts and dispersions estimated from real RNA-Seq data. Datasets from previous, similar experiments, such as The Cancer Genome Atlas (TCGA), can be used as a point of reference: read counts and their dispersions are estimated from the reference's distribution, and this information is used to estimate and summarize the power and sample size. RnaSeqSampleSize is implemented in R and can be installed from the Bioconductor website. A user-friendly web graphic interface is provided at http://cqs.mc.vanderbilt.edu/shiny/RnaSeqSampleSize/ . RnaSeqSampleSize provides a convenient and powerful way to estimate power and sample size for an RNA-Seq experiment. It is also equipped with several unique features, including estimation for genes or pathways of interest, power curve visualization, and parameter optimization.
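
    RnaSeqSampleSize itself is an R/Bioconductor package. As a language-neutral sketch of the underlying idea, power for a single gene under a negative binomial read-count model can be estimated by simulation. Everything here is illustrative: the parameter values are hypothetical, and a Mann-Whitney test stands in for the package's negative binomial test.

```python
import numpy as np
from scipy import stats

def nb_power(n_per_group, mean, fold, dispersion, alpha=0.05,
             n_sim=1000, seed=0):
    """Simulated power to detect a fold-change for one gene whose counts
    are negative binomial with variance = mean + dispersion * mean**2."""
    rng = np.random.default_rng(seed)
    r = 1.0 / dispersion  # NB "size" parameter
    def draw(m):
        # numpy parameterizes NB by (n, p); p = r / (r + m) gives mean m
        return rng.negative_binomial(r, r / (r + m), size=n_per_group)
    rejections = 0
    for _ in range(n_sim):
        a, b = draw(mean), draw(mean * fold)
        p = stats.mannwhitneyu(a, b, alternative="two-sided").pvalue
        rejections += p < alpha
    return rejections / n_sim

# Hypothetical gene: mean count 100, dispersion 0.1, 2-fold change,
# 10 samples per group.
print(nb_power(10, 100, 2.0, 0.1))
```

    In practice, the package repeats this kind of per-gene calculation across the empirical distribution of read counts and dispersions and applies an FDR-adjusted significance threshold rather than a raw alpha.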

  5. The special case of the 2 × 2 table: asymptotic unconditional McNemar test can be used to estimate sample size even for analysis based on GEE.

    PubMed

    Borkhoff, Cornelia M; Johnston, Patrick R; Stephens, Derek; Atenafu, Eshetu

    2015-07-01

    Aligning the method used to estimate sample size with the planned analytic method ensures the sample size needed to achieve the planned power. When using generalized estimating equations (GEE) to analyze a paired binary primary outcome with no covariates, many investigators use an exact McNemar test to calculate sample size. We reviewed the approaches to sample size estimation for paired binary data and compared the sample size estimates on the same numerical examples. We used the hypothesized sample proportions for the 2 × 2 table to calculate the correlation between the marginal proportions and estimate sample size based on GEE. We solved for the inner cell proportions based on the correlation and the marginal proportions to estimate sample size based on the exact McNemar, asymptotic unconditional McNemar, and asymptotic conditional McNemar tests. The asymptotic unconditional McNemar test is a good approximation of the GEE method of Pan. The exact McNemar test is too conservative and yields unnecessarily large sample size estimates compared with all other methods. In the special case of a 2 × 2 table, even when a GEE approach to binary logistic regression is the planned analytic method, the asymptotic unconditional McNemar test can be used to estimate sample size. We do not recommend using an exact McNemar test. Copyright © 2015 Elsevier Inc. All rights reserved.
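
    The asymptotic unconditional McNemar sample size that the authors recommend has a standard closed form (often attributed to Connor). A minimal sketch, with hypothetical discordant cell proportions p10 and p01 from the 2 × 2 table:

```python
import math
from scipy.stats import norm

def n_mcnemar_unconditional(p10, p01, alpha=0.05, power=0.80):
    """Pairs needed for the asymptotic unconditional McNemar test.
    p10 and p01 are the hypothesized discordant cell proportions."""
    delta = p10 - p01  # difference between the marginal proportions
    psi = p10 + p01    # total discordant proportion
    za = norm.ppf(1 - alpha / 2)
    zb = norm.ppf(power)
    n = (za * math.sqrt(psi) + zb * math.sqrt(psi - delta ** 2)) ** 2 / delta ** 2
    return math.ceil(n)

# Hypothetical discordant proportions: p10 = 0.25, p01 = 0.10.
print(n_mcnemar_unconditional(0.25, 0.10))  # -> 120 pairs
```

    Only the discordant cells enter the formula, which is why the hypothesized 2 × 2 table (or equivalently the marginals plus the correlation) must be fully specified before the calculation.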

  6. The relationship between offspring size and fitness: integrating theory and empiricism.

    PubMed

    Rollinson, Njal; Hutchings, Jeffrey A

    2013-02-01

    How parents divide the energy available for reproduction between size and number of offspring has a profound effect on parental reproductive success. Theory indicates that the relationship between offspring size and offspring fitness is of fundamental importance to the evolution of parental reproductive strategies: this relationship predicts the optimal division of resources between size and number of offspring, it describes the fitness consequences for parents that deviate from optimality, and its shape can predict the most viable type of investment strategy in a given environment (e.g., conservative vs. diversified bet-hedging). Many previous attempts to estimate this relationship and the corresponding value of optimal offspring size have been frustrated by a lack of integration between theory and empiricism. In the present study, we draw from C. Smith and S. Fretwell's classic model to explain how a sound estimate of the offspring size-fitness relationship can be derived with empirical data. We evaluate which measures of fitness can be used to model the offspring size-fitness curve and optimal size, as well as which statistical models should and should not be used to estimate offspring size-fitness relationships. To construct the fitness curve, we recommend that offspring fitness be measured as survival up to the age at which the instantaneous rate of offspring mortality becomes random with respect to initial investment. Parental fitness is then expressed in ecologically meaningful, theoretically defensible, and broadly comparable units: the number of offspring surviving to independence. Although logistic and asymptotic regression have been widely used to estimate offspring size-fitness relationships, the former provides relatively unreliable estimates of optimal size when offspring survival and sample sizes are low, and the latter is unreliable under all conditions. We recommend that the Weibull-1 model be used to estimate this curve because it provides modest improvements in prediction accuracy under experimentally relevant conditions.

  7. Genome-wide Association Study for Ovarian Cancer Susceptibility using Pooled DNA

    PubMed Central

    Lu, Yi; Chen, Xiaoqing; Beesley, Jonathan; Johnatty, Sharon E.; deFazio, Anna; Lambrechts, Sandrina; Lambrechts, Diether; Despierre, Evelyn; Vergotes, Ignace; Chang-Claude, Jenny; Hein, Rebecca; Nickels, Stefan; Wang-Gohrke, Shan; Dörk, Thilo; Dürst, Matthias; Antonenkova, Natalia; Bogdanova, Natalia; Goodman, Marc T.; Lurie, Galina; Wilkens, Lynne R.; Carney, Michael E.; Butzow, Ralf; Nevanlinna, Heli; Heikkinen, Tuomas; Leminen, Arto; Kiemeney, Lambertus A.; Massuger, Leon F.A.G.; van Altena, Anne M.; Aben, Katja K.; Kjaer, Susanne Krüger; Høgdall, Estrid; Jensen, Allan; Brooks-Wilson, Angela; Le, Nhu; Cook, Linda; Earp, Madalene; Kelemen, Linda; Easton, Douglas; Pharoah, Paul; Song, Honglin; Tyrer, Jonathan; Ramus, Susan; Menon, Usha; Gentry-Maharaj, Alexandra; Gayther, Simon A.; Bandera, Elisa V.; Olson, Sara H.; Orlow, Irene; Rodriguez-Rodriguez, Lorna

    2013-01-01

    Recent genome-wide association studies (GWAS) have identified four low-penetrance ovarian cancer susceptibility loci. We hypothesized that further moderate- or low-penetrance variants exist among the subset of SNPs not well tagged by the genotyping arrays used in the previous studies, which would account for some of the remaining risk. We therefore conducted a time- and cost-effective stage 1 GWAS on 342 invasive serous cases and 643 controls genotyped on pooled DNA using the high-density Illumina 1M-Duo array. We followed up 20 of the most significantly associated SNPs that are not well tagged by the lower-density arrays used in the published GWAS, genotyping them on individual DNA. Most of the top 20 SNPs were clearly validated by individually genotyping the samples used in the pools. However, none of the 20 SNPs replicated when tested for association in a much larger stage 2 set of 4,651 cases and 6,966 controls from the Ovarian Cancer Association Consortium. Given that most of the top 20 SNPs from pooling were validated in the same samples by individual genotyping, the lack of replication is likely due to the relatively small sample size of our stage 1 GWAS rather than to problems with the pooling approach. We conclude that there are unlikely to be any moderate or large effects on ovarian cancer risk untagged by the less dense arrays. However, our study lacked power to make clear statements on the existence of hitherto untagged small-effect variants. PMID:22794196

  8. Determination of the optimal sample size for a clinical trial accounting for the population size.

    PubMed

    Stallard, Nigel; Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin

    2017-07-01

    The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach either explicitly, by assuming that the population size is fixed and known, or implicitly, through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size for single- and two-arm clinical trials in the general case of a trial with a primary endpoint whose distribution is of one-parameter exponential family form, optimizing a utility function that quantifies the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or the expected size, N*, in the case of geometric discounting, becomes large, the optimal trial size is O(N^1/2) or O(N*^1/2). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with Bernoulli- and Poisson-distributed responses, showing that the asymptotic approximations can be reasonable even for relatively small sample sizes. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Sustainability and Small to Medium Sized Enterprises--How to Engage Them

    ERIC Educational Resources Information Center

    Condon, Linda

    2004-01-01

    Small and medium sized enterprises (SMEs) have a major advantage over larger organisations in regard to addressing sustainability issues--their size means they are able to react very quickly to changes in the business environment. They are disadvantaged, however, by lack of information on marketplace changes that make sustainability an opportunity…

  10. Size Determination of Aqueous C60 by Asymmetric Flow Field-Flow Fractionation (AF4) and in-Line Dynamic Light Scattering

    EPA Science Inventory

    To date, studies on the environmental behaviour of aggregated aqueous fullerene nanomaterials have used the entire size distribution of fullerene aggregates and do not distinguish between different aggregate size classes. This is a direct result of the lack of analytical methods ...

  11. Acoustic Enrichment of Extracellular Vesicles from Biological Fluids.

    PubMed

    Ku, Anson; Lim, Hooi Ching; Evander, Mikael; Lilja, Hans; Laurell, Thomas; Scheding, Stefan; Ceder, Yvonne

    2018-06-11

    Extracellular vesicles (EVs) have emerged as a rich source of biomarkers providing diagnostic and prognostic information in diseases such as cancer. Large-scale investigations into the contents of EVs in clinical cohorts are warranted, but a major obstacle is the lack of a rapid, reproducible, efficient, and low-cost methodology to enrich EVs. Here, we demonstrate the applicability of an automated acoustic-based technique to enrich EVs, termed acoustic trapping. Using this technology, we have successfully enriched EVs from cell culture conditioned media and from the urine and blood plasma of healthy volunteers. The acoustically trapped samples contained EVs ranging from exosomes to microvesicles in size and contained detectable levels of intravesicular microRNAs. Importantly, this method showed high reproducibility and yielded sufficient quantities of vesicles for downstream analysis. The enrichment could be obtained from a sample volume of 300 μL or less, equivalent to 30 min of enrichment time, depending on the sensitivity of the downstream analysis. Taken together, acoustic trapping provides a rapid, automated, low-volume-compatible, and robust method to enrich EVs from biofluids. Thus, it may serve as a novel tool for EV enrichment from a large number of samples in a clinical setting with minimal sample preparation.

  12. An Immunization Strategy for Hidden Populations.

    PubMed

    Chen, Saran; Lu, Xin

    2017-06-12

    Hidden populations, such as injecting drug users (IDUs), sex workers (SWs) and men who have sex with men (MSM), are considered at high risk of contracting and transmitting infectious diseases such as AIDS, gonorrhea and syphilis. However, public health interventions in such groups are hindered by strong privacy concerns and a lack of global network information, which traditional strategies such as targeted immunization and acquaintance immunization require. In this study, we introduce an innovative intervention strategy to be used in combination with a sampling approach that is widely used for hidden populations, Respondent-driven Sampling (RDS). The RDS strategy is implemented in two steps. First, RDS is used to estimate the average degree (personal network size) and the degree distribution of the target population from sample data. Second, a cut-off threshold is calculated and used to screen the respondents to be immunized. Simulations on model networks and real-world networks reveal that the efficiency of the RDS strategy is close to that of the targeted strategy. As the new strategy can be implemented within the RDS sampling process, it provides a cost-efficient and feasible approach to disease intervention and control for hidden populations.
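
    The two-step strategy can be sketched as follows. The degree estimator is the standard RDS-style one (respondents are sampled roughly in proportion to degree, so each is weighted by 1/degree); the cut-off of twice the estimated average degree is a placeholder assumption, not the paper's actual threshold rule, and the sample data are invented.

```python
def rds_mean_degree(degrees):
    """RDS-style estimate of the population's average degree: sampling
    probability is roughly proportional to degree, so each respondent
    is weighted by 1/degree, giving n / sum(1/d_i)."""
    return len(degrees) / sum(1.0 / d for d in degrees)

def to_immunize(sample, threshold):
    """Step 2: screen respondents, keeping those whose personal
    network size (degree) exceeds the cut-off threshold."""
    return [r for r in sample if r["degree"] > threshold]

# Invented sample; a real cut-off would come from the estimated
# degree distribution, not this simple twice-the-average placeholder.
sample = [{"id": i, "degree": d} for i, d in enumerate([2, 3, 5, 8, 20, 40])]
avg = rds_mean_degree([r["degree"] for r in sample])
print(round(avg, 2), [r["id"] for r in to_immunize(sample, 2 * avg)])  # -> 4.86 [4, 5]
```

    Note how the 1/degree weighting pulls the estimate (about 4.9) well below the raw sample mean (13), correcting for the over-representation of high-degree respondents in RDS chains.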

  13. Requirements for Minimum Sample Size for Sensitivity and Specificity Analysis

    PubMed Central

    Adnan, Tassha Hilda

    2016-01-01

    Sensitivity and specificity analysis is commonly used for screening and diagnostic tests. The main issue researchers face is determining the sample sizes sufficient for screening and diagnostic studies. Although formulas for sample size calculation are available, the majority of researchers are not mathematicians or statisticians, so sample size calculation may not be easy for them. This review paper provides sample size tables for sensitivity and specificity analysis. The tables were derived from the formulation of sensitivity and specificity tests using Power Analysis and Sample Size (PASS) software, based on the desired type I error, power and effect size. Approaches to using the tables are also discussed. PMID:27891446
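
    Tables like these are commonly derived from Buderer's formulas, which size the diseased and non-diseased subgroups so that sensitivity and specificity are estimated to a desired precision. A minimal sketch of that approach (the abstract's own tables come from PASS software, so exact values may differ; the inputs below are hypothetical):

```python
import math
from scipy.stats import norm

def n_for_sensitivity(se, prevalence, d=0.05, alpha=0.05):
    """Total subjects needed so the sensitivity estimate has confidence
    interval half-width d at level 1 - alpha, given disease prevalence
    (Buderer-style calculation)."""
    z = norm.ppf(1 - alpha / 2)
    return math.ceil(z ** 2 * se * (1 - se) / (d ** 2 * prevalence))

def n_for_specificity(sp, prevalence, d=0.05, alpha=0.05):
    """Analogous calculation for specificity (non-diseased subgroup)."""
    z = norm.ppf(1 - alpha / 2)
    return math.ceil(z ** 2 * sp * (1 - sp) / (d ** 2 * (1 - prevalence)))

# Hypothetical inputs: expected Se = Sp = 0.90, prevalence 20%,
# desired precision +/- 5%.
print(n_for_sensitivity(0.90, 0.20), n_for_specificity(0.90, 0.20))  # -> 692 173
```

    The total is driven by whichever subgroup is rarer: at 20% prevalence, the sensitivity requirement dominates because only one subject in five contributes to the diseased subgroup.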

  14. Dysfluent Handwriting in Schizophrenic Outpatients.

    PubMed

    Gawda, Barbara

    2016-04-01

    Taking into account findings in the literature, the author aimed to test whether specific graphical characteristics of handwriting can distinguish patients diagnosed with schizophrenic disorders from healthy controls. Handwriting samples (one per person) from 60 outpatients (29 women, 31 men; age M = 28.5, SD = 5.4) with paranoid schizophrenia were analyzed by three document examiners and compared to samples from 60 controls (30 men, 30 women; age M = 28.0, SD = 3.0) without psychiatric disorders. The document examiners assessed 32 graphical features potentially related to schizophrenia. The comparisons between groups revealed that only 7 of the 32 handwriting properties significantly differed between schizophrenic outpatients and controls, including calligraphic forms of letters, loops in ovals, missing dots, tremor, a sinusoidal baseline, and irregular size of the lower zone. These findings are discussed in terms of motor disturbances in schizophrenia and in relation to previous research on handwriting in other mental disorders. Similarities between the graphical patterns of handwriting of schizophrenic patients and those of other mental disorders and/or mental states have been demonstrated. © The Author(s) 2016.

  15. Iodine and mental development of children 5 years old and under: a systematic review and meta-analysis.

    PubMed

    Bougma, Karim; Aboud, Frances E; Harding, Kimberly B; Marquis, Grace S

    2013-04-22

    Several reviews and meta-analyses have examined the effects of iodine on mental development. None focused on young children, so they were incomplete in summarizing the effects on this important age group. The current systematic review therefore examined the relationship between iodine and the mental development of children 5 years old and under. A systematic review of articles using Medline (1980-November 2011) was carried out. We organized studies according to four designs: (1) randomized controlled trials with iodine supplementation of mothers; (2) non-randomized trials with iodine supplementation of mothers and/or infants; (3) prospective cohort studies stratified by pregnant women's iodine status; (4) prospective cohort studies stratified by newborn iodine status. Average effect sizes for these four designs were 0.68 (2 RCT studies), 0.46 (8 non-RCT studies), 0.52 (9 cohort studies stratified by mothers' iodine status), and 0.54 (4 cohort studies stratified by infants' iodine status). Multiplied by the 15-point standard deviation of IQ, this translates into scores 6.9 to 10.2 IQ points lower in iodine-deficient children compared with iodine-replete children. Thus, regardless of study design, iodine deficiency had a substantial impact on mental development. Methodological concerns included weak study designs, the omission of important confounders, small sample sizes, the lack of cluster analyses, and the lack of separate analyses of verbal and non-verbal subtests. Quantifying more precisely the contribution of iodine deficiency to delayed mental development in young children requires more well-designed randomized controlled trials, including ones on the role of iodized salt.

  16. DNA methylation markers for oral pre-cancer progression: A critical review.

    PubMed

    Shridhar, Krithiga; Walia, Gagandeep Kaur; Aggarwal, Aastha; Gulati, Smriti; Geetha, A V; Prabhakaran, Dorairaj; Dhillon, Preet K; Rajaraman, Preetha

    2016-02-01

    Although oral cancers are generally preceded by a well-established pre-cancerous stage, there is a lack of well-defined clinical and morphological criteria to detect and signal progression from pre-cancer to malignant tumours. We conducted a critical review to summarize the evidence regarding aberrant DNA methylation patterns as a potential diagnostic biomarker predicting progression. We identified all relevant human studies published in English prior to 30th April 2015 that examined DNA methylation (%) in oral pre-cancer by searching PubMed, Web-of-Science and Embase databases using combined key-searches. Twenty-one studies (18-cross-sectional; 3-longitudinal) were eligible for inclusion in the review, with sample sizes ranging from 4 to 156 affected cases. Eligible studies examined promoter region hyper-methylation of tumour suppressor genes in pathways including cell-cycle-control (n=15), DNA-repair (n=7), cell-cycle-signalling (n=4) and apoptosis (n=3). Hyper-methylated loci reported in three or more studies included p16, p14, MGMT and DAPK. Two longitudinal studies reported greater p16 hyper-methylation in pre-cancerous lesions transformed to malignancy compared to lesions that regressed (57-63.6% versus 8-32.1%; p<0.01). The one study that explored epigenome-wide methylation patterns reported three novel hyper-methylated loci (TRHDE; ZNF454; KCNAB3). The majority of reviewed studies were small, cross-sectional studies with poorly defined control groups and lacking validation. Whilst limitations in sample size and study design preclude definitive conclusions, current evidence suggests a potential utility of DNA methylation patterns as a diagnostic biomarker for oral pre-cancer progression. Robust studies such as large epigenome-wide methylation explorations of oral pre-cancer with longitudinal tracking are needed to validate the currently reported signals and identify new risk-loci and the biological pathways of disease progression. 
Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  18. Sample size calculations for randomized clinical trials published in anesthesiology journals: a comparison of 2010 versus 2016.

    PubMed

    Chow, Jeffrey T Y; Turkstra, Timothy P; Yim, Edmund; Jones, Philip M

    2018-06-01

    Although every randomized clinical trial (RCT) needs participants, determining the ideal number of participants that balances limited resources against the ability to detect a real effect is difficult. Focussing on two-arm, parallel-group, superiority RCTs published in six general anesthesiology journals, the objective of this study was to compare the quality of sample size calculations for RCTs published in 2010 vs 2016. Each RCT's full text was searched for the presence of a sample size calculation, and the assumptions made by the investigators were compared with the actual values observed in the results. Analyses were only performed for sample size calculations that were amenable to replication, defined as using a clearly identified continuous or binary outcome in a standard sample size calculation procedure. The percentage of RCTs reporting all sample size calculation assumptions increased from 51% in 2010 to 84% in 2016. For most RCTs, the difference between the values observed in the study and the expected values used in the sample size calculation was > 10% of the expected value, with negligible improvement from 2010 to 2016. While the reporting of sample size calculations improved from 2010 to 2016, the expected values in these calculations often assumed effect sizes larger than those actually observed in the study. Since overly optimistic assumptions may systematically lead to underpowered RCTs, improvements in how sample sizes are calculated and reported in anesthesiology research are needed.
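
    The elements such a calculation requires (treatment effect, variability, alpha, power) can be illustrated with the standard normal-approximation formula for a two-arm trial with a continuous outcome. This is a sketch, not any journal's mandated method; the function name and example numbers are illustrative:

```python
import math
from statistics import NormalDist

def two_arm_sample_size(delta, sigma, alpha=0.05, power=0.80):
    """Per-group n for a two-arm, parallel-group superiority trial with a
    continuous outcome: n = 2 * ((z_{1-alpha/2} + z_power) * sigma / delta)**2,
    rounded up to the next whole participant."""
    z = NormalDist().inv_cdf
    n = 2 * ((z(1 - alpha / 2) + z(power)) * sigma / delta) ** 2
    return math.ceil(n)

# Detecting a 10-point mean difference (SD 20) with 80% power at alpha = 0.05:
print(two_arm_sample_size(delta=10, sigma=20))  # 63 per group
```

    Replicating a reported calculation means recovering all four inputs from the publication; if any one is missing, the computed n cannot be checked.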

  19. Microgravity ignition experiment

    NASA Technical Reports Server (NTRS)

    Motevalli, Vahid; Elliott, William; Garrant, Keith; Marcotte, Ryan

    1992-01-01

    The purpose of this project is to develop a flight-ready apparatus for the microgravity ignition experiment of the GASCAN 2 program. The microgravity ignition experiment is designed to study how a microgravity environment affects the time to ignition of a sample of alpha-cellulose paper. A microgravity environment will result in a decrease in the heat transferred away from the sample due to a lack of convection currents, which would decrease the time to ignition. The lack of convection currents would also prevent the oxygen supply at the sample from being renewed, which could delay or even prevent ignition. When this experiment is conducted aboard GASCAN 2, the dominant effect of the lack of convection on ignition will be determined. The experiment consists of four canisters containing four thermocouples and a sensor to detect ignition of the paper sample. This year the interior of the canister was redesigned and a mathematical model of the heat transfer around the sample was developed. This heat transfer model predicts an ignition time of approximately 5.5 seconds if the decrease in heat loss from the sample is the dominant effect of the lack of convection currents.

  20. Autogeneous Friction Stir Weld Lack-of-Penetration Defect Detection and Sizing Using Directional Conductivity Measurements with MWM Eddy Current Sensor

    NASA Technical Reports Server (NTRS)

    Goldfine, Neil; Zilberstein, Vladimir; Lawson, Ablode; Kinchen, David; Arbegast, William

    2000-01-01

    Al 2195-T8 plate specimens containing Friction Stir Welds (FSW), provided by Lockheed Martin, were inspected using directional conductivity measurements with the MWM sensor. Sensitivity to lack-of-penetration (LOP) defect size has been demonstrated. The feature used to determine defect size was the normalized longitudinal component of the MWM conductivity measurements. This directional conductivity component was insensitive to the presence of a discrete crack. This permitted correlation of MWM conductivity measurements with the LOP defect size as changes in conductivity were apparently associated with metallurgical features within the first 0.020 in. of the LOP defect zone. Transverse directional conductivity measurements also provided an indication of the presence of discrete cracks. Continued efforts are focussed on inspection of a larger set of welded panels and further refinement of LOP characterization tools.

  1. Study of magnetic and electrical properties of nanocrystalline Mn doped NiO.

    PubMed

    Raja, S Philip; Venkateswaran, C

    2011-03-01

    Diluted Magnetic Semiconductors (DMS) have been intensively explored in recent years for their applications in spintronics, which is expected to revolutionize present-day information technology. Nanocrystalline Mn doped NiO samples were prepared using the chemical co-precipitation method with an aim to realize room temperature ferromagnetism. Phase formation of the samples was studied using X-ray diffraction-Rietveld analysis. Scanning electron microscopy and Energy dispersive X-ray analysis results reveal the nanocrystalline nature of the samples, agglomeration of the particles, considerable particle size distribution and near stoichiometry. Thermomagnetic curves confirm the single-phase formation of the samples up to 1% doping of Mn. Vibrating Sample Magnetometer measurements indicate the absence of ferromagnetism at room temperature. This may be due to the low concentration of Mn2+ ions having weak indirect coupling with Ni2+ ions. The lack of free carriers is also expected to contribute to the absence of ferromagnetism, which is in agreement with the results of resistivity measurements using impedance spectroscopy. The Arrhenius plot shows the presence of two thermally activated regions, and the activation energy for the nanocrystalline Mn doped sample was found to be greater than that of undoped NiO. This is attributed to the doping effect of Mn. However, the dielectric constant of the samples was found to be of the same order of magnitude as that of undoped NiO.

  2. Development of a Novel Self-Enclosed Sample Preparation Device for DNA/RNA Isolation in Space

    NASA Technical Reports Server (NTRS)

    Zhang, Ye; Mehta, Satish K.; Pensinger, Stuart J.; Pickering, Karen D.

    2011-01-01

    Modern biology techniques offer potential for a wide range of molecular, cellular, and biochemistry applications in space, including detection of infectious pathogens and environmental contamination, monitoring of drug-resistant microbes and dangerous mutations, and identification of new microbial phenotypes and new species. However, one of the major technological obstacles to enabling these technologies in space is the lack of devices for sample preparation in the space environment. To overcome this obstacle, we constructed a prototype DNA/RNA isolation device based on our novel designs documented in the NASA New Technology Reporting System (MSC-24811-1/3-1). This device is self-enclosed and pipette-free, purposely designed for use in the absence of gravity. Our design can also be easily modified for preparing samples in space for other applications, such as flow cytometry, immunostaining, cell separation, sample purification and separation according to size and charge, and sample chemical labeling. The prototype of our DNA/RNA isolation device was tested for the efficiency of DNA and RNA isolation from various cell types for PCR analysis. The purity and integrity of the purified DNA and RNA were determined as well. Results showed that our DNA/RNA isolation device offers efficiency and quality similar to samples prepared using the standard laboratory protocol.

  3. Use of a new jumbo forceps improves tissue acquisition of Barrett's esophagus surveillance biopsies.

    PubMed

    Komanduri, Sri; Swanson, Garth; Keefer, Laurie; Jakate, Shriram

    2009-12-01

    The major risk factors for the development of esophageal adenocarcinoma remain long-standing GERD and the resultant Barrett's esophagus (BE). Finding an adequate method of tissue sampling for surveillance of dysplasia in BE remains a dilemma. We prospectively compared standard large-capacity biopsy forceps with a new jumbo biopsy forceps for dysplasia detection in BE. Prospective, single-center investigation. We prospectively enrolled 32 patients undergoing surveillance endoscopy for BE. Biopsy samples were obtained in paired fashion, alternating between the experimental (jumbo) and control (large-capacity) forceps. Each sample was assessed for histopathology, specimen size, and adequacy. A total of 712 specimens were available for analysis in this investigation. Six patients were found to have dysplasia, and in 5 of those patients, the dysplasia was detected only with the jumbo forceps. The mean width was significantly greater in the Radial Jaw 4 jumbo group (3.3 mm vs 1.9 mm [P < .005]), as was the mean depth (2.0 mm vs 1.1 mm [P < .005]). Sixteen percent of samples obtained with the standard forceps were adequate, whereas the jumbo forceps provided an adequate sample 79% of the time (P < .05). A limitation was the lack of a validated index for assessing tissue adequacy in BE. The Radial Jaw 4 jumbo biopsy forceps significantly improves dysplasia detection and adequate tissue sampling in patients undergoing endoscopy for BE.

  4. Hospital ergonomics: a qualitative study to explore the organizational and cultural factors.

    PubMed

    Hignett, Sue

    2003-07-15

    The primary objective was to identify the characteristics of the health care industry with respect to organizational and cultural factors and consider how these might impact on the practice of ergonomics. Qualitative methodology was chosen as a suitable approach, supported by a middle-ground philosophical position. Twenty-one interviews were carried out with academics and practitioners using a questionnaire proforma which was developed iteratively over the 18 months of the project. A progressive four-stage sampling strategy was used, starting with purposive sampling to spread the net. Suggested contacts were then followed up (snowball sampling), before the third stage of intensity sampling to focus on participants with specific experience in hospital ergonomics. A final strategy of analysis sampling sought extreme and deviant cases to achieve theoretical saturation. The analysis resulted in three categories: organizational, staff and patient issues. The organizational issues included both the size and complexity of the National Health Service. For example, three hierarchical lines were identified in the management structure: an administrative line, a professional line and a patient-focused clinical management line. One of the surprising findings for the staff issues was the perceived lack of ergonomic information about female workers as a population group and traditional female employment sectors. The patient issues incorporated three dimensions associated with the caring role: the type of work; expectations; and possible outcomes. The work tends to be dirty and emotional, with a professional subculture to allow the handling of other people's bodies. This subculture was linked to a 'coping' attitude where staff put the patients' needs and well-being before their own. The change in patient expectations (from being apologetic through to demanding their rights) is mirrored in a changing model of care from paternalism to partnership. 
A lack of ergonomic research was identified for female workers in the health care industry relating to both the type of work and gender issues.

  5. Sample Size and Statistical Conclusions from Tests of Fit to the Rasch Model According to the Rasch Unidimensional Measurement Model (Rumm) Program in Health Outcome Measurement.

    PubMed

    Hagell, Peter; Westergren, Albert

    Sample size is a major factor in statistical null hypothesis testing, which is the basis for many approaches to testing Rasch model fit. Few sample size recommendations for testing fit to the Rasch model concern the Rasch Unidimensional Measurement Models (RUMM) software, which features chi-square and ANOVA/F-ratio based fit statistics, including Bonferroni and algebraic sample size adjustments. This paper explores the occurrence of Type I errors with RUMM fit statistics, and the effects of algebraic sample size adjustments. Data simulated to fit the Rasch model, for 25-item dichotomous scales with sample sizes ranging from N = 50 to N = 2500, were analysed with and without algebraically adjusted sample sizes. Results suggest the occurrence of Type I errors with N ≤ 500, and that Bonferroni correction as well as downward algebraic sample size adjustment are useful to avoid such errors, whereas upward adjustment of smaller samples falsely signals misfit. Our observations suggest that sample sizes around N = 250 to N = 500 may provide a good balance for the statistical interpretation of the RUMM fit statistics studied here with respect to Type I errors and under the assumption of Rasch model fit within the examined frame of reference (i.e., about 25 item parameters well targeted to the sample).
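
    The underlying issue, that null-hypothesis fit tests flag ever-smaller deviations as N grows, can be illustrated with a generic chi-square goodness-of-fit test. This is not RUMM's actual item-fit statistic, and the 0.52 proportion is an arbitrary small deviation chosen for illustration:

```python
import math

def chisq_p_1df(x):
    """Survival function of chi-square with 1 df: P(X > x) = erfc(sqrt(x/2))."""
    return math.erfc(math.sqrt(x / 2))

def gof_p(n, p_obs=0.52, p_exp=0.5):
    """Two-category goodness-of-fit p-value for a fixed small deviation.
    The chi-square statistic scales linearly with n, so the same trivial
    misfit becomes 'significant' once the sample is large enough."""
    obs = [n * p_obs, n * (1 - p_obs)]
    exp = [n * p_exp, n * (1 - p_exp)]
    x2 = sum((o - e) ** 2 / e for o, e in zip(obs, exp))
    return chisq_p_1df(x2)

for n in (100, 500, 2500):
    print(n, gof_p(n))
```

    The fixed 2-percentage-point deviation is non-significant at n = 500 but significant at n = 2500, which is the behaviour that motivates downward sample size adjustment in RUMM.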

  6. High Resolution Size Analysis of Fetal DNA in the Urine of Pregnant Women by Paired-End Massively Parallel Sequencing

    PubMed Central

    Tsui, Nancy B. Y.; Jiang, Peiyong; Chow, Katherine C. K.; Su, Xiaoxi; Leung, Tak Y.; Sun, Hao; Chan, K. C. Allen; Chiu, Rossa W. K.; Lo, Y. M. Dennis

    2012-01-01

    Background Fetal DNA in maternal urine, if present, would be a valuable source of fetal genetic material for noninvasive prenatal diagnosis. However, the existence of fetal DNA in maternal urine has remained controversial, largely due to the lack of appropriate technology to robustly detect the potentially highly degraded fetal DNA in maternal urine. Methodology We used massively parallel paired-end sequencing to investigate cell-free DNA molecules in maternal urine. Catheterized urine samples were collected from seven pregnant women during the third trimester of pregnancy. We detected fetal DNA by identifying sequenced reads that contained fetal-specific alleles of single nucleotide polymorphisms. The sizes of individual urinary DNA fragments were deduced from the alignment positions of the paired reads. We measured the fractional fetal DNA concentration as well as the size distributions of fetal and maternal DNA in maternal urine. Principal Findings Cell-free fetal DNA was detected in five of the seven maternal urine samples, with fractional fetal DNA concentrations ranging from 1.92% to 4.73%. Fetal DNA became undetectable in maternal urine after delivery. The total urinary cell-free DNA molecules were less intact when compared with plasma DNA. Urinary fetal DNA fragments were very short, and the most dominant fetal sequences were between 29 bp and 45 bp in length. Conclusions With the use of massively parallel sequencing, we have confirmed the existence of transrenal fetal DNA in maternal urine, and have shown that urinary fetal DNA is heavily degraded. PMID:23118982
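
    Deducing fragment size from a paired-end alignment reduces to the span between the outermost mapped coordinates of the two reads. A minimal sketch, assuming 1-based inclusive coordinates (a convention adopted here for illustration):

```python
def fragment_size(read1_start, read2_end):
    """Fragment length inferred from the outermost alignment coordinates
    of a properly oriented read pair (1-based, inclusive).  read1_start is
    the leftmost mapped base of the pair; read2_end the rightmost."""
    return read2_end - read1_start + 1

# A pair whose leftmost read starts at position 1001 and whose rightmost
# mate ends at 1036 implies a 36 bp fragment:
print(fragment_size(1001, 1036))  # 36
```

    Repeating this over all pairs that carry a fetal-specific allele yields the fetal size distribution the authors report.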

  7. A Preliminary Comparison of Motor Learning Across Different Non-invasive Brain Stimulation Paradigms Shows No Consistent Modulations.

    PubMed

    Lopez-Alonso, Virginia; Liew, Sook-Lei; Fernández Del Olmo, Miguel; Cheeran, Binith; Sandrini, Marco; Abe, Mitsunari; Cohen, Leonardo G

    2018-01-01

    Non-invasive brain stimulation (NIBS) has been widely explored as a way to safely modulate brain activity and alter human performance for nearly three decades. Research using NIBS has grown exponentially within the last decade with promising results across a variety of clinical and healthy populations. However, recent work has shown high inter-individual variability and a lack of reproducibility of previous results. Here, we conducted a small preliminary study to explore the effects of three of the most commonly used excitatory NIBS paradigms over the primary motor cortex (M1) on motor learning (Sequential Visuomotor Isometric Pinch Force Tracking Task) and secondarily relate changes in motor learning to changes in cortical excitability (MEP amplitude and SICI). We compared anodal transcranial direct current stimulation (tDCS), paired associative stimulation (PAS25), and intermittent theta burst stimulation (iTBS), along with a sham tDCS control condition. Stimulation was applied prior to motor learning. Participants (n = 28) were randomized into one of the four groups and were trained on a skilled motor task. Motor learning was measured immediately after training (online), 1 day after training (consolidation), and 1 week after training (retention). We did not find consistent differential effects on motor learning or cortical excitability across groups. Within the boundaries of our small sample sizes, we then assessed effect sizes across the NIBS groups that could help power future studies. These results, which require replication with larger samples, are consistent with previous reports of small and variable effect sizes of these interventions on motor learning.

  8. Methylphenidate Has Superior Efficacy Over Parent-Child Interaction Therapy for Preschool Children with Disruptive Behaviors.

    PubMed

    van der Veen-Mulders, Lianne; van den Hoofdakker, Barbara J; Nauta, Maaike H; Emmelkamp, Paul; Hoekstra, Pieter J

    2018-02-01

    To compare the effectiveness of parent-child interaction therapy (PCIT) and methylphenidate in preschool children with attention-deficit/hyperactivity disorder (ADHD) symptoms and disruptive behaviors who had remaining significant behavior problems after previous behavioral parent training. We included 35 preschool children, ranging in age between 3.4 and 6.0 years. Participants were randomized to PCIT (n = 18) or methylphenidate (n = 17). Outcome measures were maternal ratings of the intensity and number of behavior problems and the severity of ADHD symptoms. Changes from pretreatment to directly posttreatment were compared between groups using two-way mixed analysis of variance. We also compared both treatments to a nonrandomized care as usual (CAU) group (n = 17) regarding the intensity and number of behavior problems. All children who started one of the treatments were included in the analyses. Mothers reported a significantly greater decrease in the intensity of behavior problems after methylphenidate (pre-post effect size d = 1.50) compared with PCIT (d = 0.64). ADHD symptoms decreased significantly over time only after methylphenidate treatment (d = 0.48) and not after PCIT. Changes over time of children in the CAU treatment were nonsignificant. Although methylphenidate was more effective than PCIT, both interventions may be effective in the treatment of preschool children with disruptive behaviors. Our findings are preliminary, as our sample size was small and the use of methylphenidate in preschool children lacks profound safety data, as reflected by its off-label status. More empirical support is needed from studies with larger sample sizes.

  9. A Preliminary Comparison of Motor Learning Across Different Non-invasive Brain Stimulation Paradigms Shows No Consistent Modulations

    PubMed Central

    Lopez-Alonso, Virginia; Liew, Sook-Lei; Fernández del Olmo, Miguel; Cheeran, Binith; Sandrini, Marco; Abe, Mitsunari; Cohen, Leonardo G.

    2018-01-01

    Non-invasive brain stimulation (NIBS) has been widely explored as a way to safely modulate brain activity and alter human performance for nearly three decades. Research using NIBS has grown exponentially within the last decade with promising results across a variety of clinical and healthy populations. However, recent work has shown high inter-individual variability and a lack of reproducibility of previous results. Here, we conducted a small preliminary study to explore the effects of three of the most commonly used excitatory NIBS paradigms over the primary motor cortex (M1) on motor learning (Sequential Visuomotor Isometric Pinch Force Tracking Task) and secondarily relate changes in motor learning to changes in cortical excitability (MEP amplitude and SICI). We compared anodal transcranial direct current stimulation (tDCS), paired associative stimulation (PAS25), and intermittent theta burst stimulation (iTBS), along with a sham tDCS control condition. Stimulation was applied prior to motor learning. Participants (n = 28) were randomized into one of the four groups and were trained on a skilled motor task. Motor learning was measured immediately after training (online), 1 day after training (consolidation), and 1 week after training (retention). We did not find consistent differential effects on motor learning or cortical excitability across groups. Within the boundaries of our small sample sizes, we then assessed effect sizes across the NIBS groups that could help power future studies. These results, which require replication with larger samples, are consistent with previous reports of small and variable effect sizes of these interventions on motor learning. PMID:29740271

  10. Analysis of methods commonly used in biomedicine for treatment versus control comparison of very small samples.

    PubMed

    Ristić-Djurović, Jasna L; Ćirković, Saša; Mladenović, Pavle; Romčević, Nebojša; Trbovich, Alexander M

    2018-04-01

    A rough estimate indicated that the use of samples no larger than ten is not uncommon in biomedical research and that many such studies are limited to strong effects because their sample sizes are smaller than six. For data collected from biomedical experiments it is also often unknown whether the mathematical requirements incorporated in the sample comparison methods are satisfied. Computer-simulated experiments were used to examine the performance of methods for qualitative sample comparison and its dependence on the effectiveness of exposure, effect intensity, distribution of the studied parameter values in the population, and sample size. The Type I and Type II errors, their average, as well as the maximal errors were considered. A sample size of 9 with the t-test method at p = 5% ensured an error smaller than 5% even for weak effects. For sample sizes 6-8, the same method enabled detection of weak effects with errors smaller than 20%. If the sample sizes were 3-5, weak effects could not be detected with an acceptable error; however, the smallest maximal error in the most general case that includes weak effects is provided by the standard error of the mean method. Increasing the sample size from 5 to 9 led to seven times more accurate detection of weak effects. Strong effects were detected regardless of the sample size and method used. The minimal recommended sample size for biomedical experiments is 9. Use of smaller sizes and the method of their comparison should be justified by the objective of the experiment. Copyright © 2018 Elsevier B.V. All rights reserved.
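
    A rough sketch of this kind of simulation, assuming a pooled two-sample t-test on normal populations; the 2.120 critical value is the tabulated two-sided 5% point for df = 16, and the 1-SD effect and error bounds here are illustrative, not the paper's definitions:

```python
import math
import random

def pooled_t(a, b):
    """Pooled two-sample t statistic for equal-size groups."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / (sp * math.sqrt(1 / na + 1 / nb))

def error_rates(n=9, effect=1.0, reps=4000, t_crit=2.120, seed=1):
    """Monte Carlo Type I / Type II error rates of the t-test with
    n per group; t_crit is the two-sided 5% value for df = 2n - 2 = 16."""
    rng = random.Random(seed)
    type1 = sum(abs(pooled_t([rng.gauss(0, 1) for _ in range(n)],
                             [rng.gauss(0, 1) for _ in range(n)])) > t_crit
                for _ in range(reps)) / reps
    type2 = sum(abs(pooled_t([rng.gauss(0, 1) for _ in range(n)],
                             [rng.gauss(effect, 1) for _ in range(n)])) <= t_crit
                for _ in range(reps)) / reps
    return type1, type2
```

    With n = 9 the Type I error sits near the nominal 5%, while a 1-SD shift is missed roughly half the time; shrinking `effect` shows how quickly weak effects become undetectable at small n.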

  11. [Reproduction of the pink shrimp Farfantepenaeus notialis (Decapoda: Penaeidae) in the Colombian Caribbean].

    PubMed

    Paramo, Jorge; Pérez, Daniel; Wolff, Matthias

    2014-06-01

    The shallow water pink shrimp (Farfantepenaeus notialis) is among the most socioeconomically important resources of the Caribbean. The lack of biological and fishery information is of great concern for the fisheries management authorities. This study therefore investigated the reproductive cycle, the size composition and the size at first maturity of this species as a basis for the regulation and management of this resource. The study was conducted from June 2012 to May 2013 off the Caribbean coast of Colombia. A total of 5 356 individuals were collected, identified, classified and preserved for subsequent analysis in the laboratory. Size, weight, sex and gonad stage were recorded for each specimen. Significant differences in sex ratio were found in all months sampled, with a clear predominance of females. Mature females were found year-round, but two reproductive peaks were identified during the periods October-December and April-June. The mean catch total length size (MCS) for females and males was 148.00 mm and 122.54 mm, respectively. The mean size at maturity (LT50%) was 129.34 mm for females and 97.77 mm for males. MCS was always above LT50% for both sexes. Considering the large reduction in fishing effort in the Colombian Caribbean Sea over recent years, we expect that the shrimp population is in a rebuilding process or perhaps already restored.
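
    Size at 50% maturity is conventionally read off a fitted logistic maturity ogive. A minimal sketch; the coefficients below are hypothetical, chosen only so that LT50 reproduces the reported 129.34 mm for females (the paper's fitted values are not given here):

```python
import math

def maturity_ogive(length, a, b):
    """Logistic maturity ogive: proportion mature at a given total length,
    p = 1 / (1 + exp(-(a + b * L)))."""
    return 1 / (1 + math.exp(-(a + b * length)))

def lt50(a, b):
    """Length at which 50% are mature: the root of a + b * L = 0."""
    return -a / b

# Hypothetical coefficients matching the reported female LT50:
a, b = -12.934, 0.1
print(round(lt50(a, b), 2))                       # 129.34
print(round(maturity_ogive(lt50(a, b), a, b), 6)) # 0.5 by construction
```

    Comparing LT50 against the mean catch length is then a one-line check of whether the fishery is, on average, taking mature animals.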

  12. Survival of radio-implanted drymarchon couperi (Eastern Indigo Snake) in relation to body size and sex

    USGS Publications Warehouse

    Hyslop, N.L.; Meyers, J.M.; Cooper, R.J.; Norton, Terry M.

    2009-01-01

    Drymarchon couperi (eastern indigo snake) has experienced population declines across its range primarily as a result of extensive habitat loss, fragmentation, and degradation. Conservation efforts for D. couperi have been hindered, in part, by informational gaps regarding the species, including a lack of data on population ecology and estimates of demographic parameters such as survival. We conducted a 2-year radiotelemetry study of D. couperi on Fort Stewart Military Reservation and adjacent private lands located in southeastern Georgia to assess individual characteristics associated with probability of survival. We used known-fate modeling to estimate survival, and an information-theoretic approach, based on a priori hypotheses, to examine intraspecific differences in survival probabilities relative to individual covariates (sex, size, size standardized by sex, and overwintering location). Annual survival in 2003 and 2004 was 0.89 (95% CI = 0.73-0.97, n = 25) and 0.72 (95% CI = 0.52-0.86; n = 27), respectively. Results indicated that body size, standardized by sex, was the most important covariate determining survival of adult D. couperi, suggesting lower survival for larger individuals within each sex. We are uncertain of the mechanisms underlying this result, but possibilities may include greater resource needs for larger individuals within each sex, necessitating larger or more frequent movements, or a population with older individuals. Our results may also have been influenced by analysis limitations because of sample size, other sources of individual variation, or environmental conditions. © 2009 by The Herpetologists' League, Inc.

  13. Enhancement of the recycling of waste Ni-Cd and Ni-MH batteries by mechanical treatment.

    PubMed

    Huang, Kui; Li, Jia; Xu, Zhenming

    2011-06-01

    Waste batteries present a serious environmental problem in China, resulting from a lack of relevant regulations and effective recycling technologies. The present work considered the enhancement of waste Ni-Cd and Ni-MH battery recycling by mechanical treatment. In the characterization process, two types of waste batteries (Ni-Cd and Ni-MH) were selected and their components were characterized in relation to their elemental chemical compositions. In the mechanical separation and recycling process, waste Ni-Cd and Ni-MH batteries were processed by a recycling technology without a negative impact on the environment. The technology comprised mechanical crushing, size classification, gravity separation, and magnetic separation. The results obtained demonstrated that: (1) Mechanical crushing was an effective process to strip the metallic parts from separators and pastes. High liberation efficiency of the metallic parts from separators and pastes was attained in the crushing process once the fractions reached particle sizes smaller than 2 mm. (2) The classified materials mainly consisted of fractions with particle sizes between 0.5 and 2 mm after size classification. (3) The metallic concentrates of the samples were improved from around 75% to 90% by gravity separation. More than 90% of the metallic materials were separated into heavy fractions when the particle sizes were larger than 0.5 mm. (4) Particle sizes between 0.5 and 2 mm and a separator rotational speed between 30 and 60 rpm were suitable for magnetic separation during industrial application, with the recycling efficiency exceeding 95%. Copyright © 2011 Elsevier Ltd. All rights reserved.

  14. Multiclass classification of microarray data samples with a reduced number of genes

    PubMed Central

    2011-01-01

    Background Multiclass classification of microarray data samples with a reduced number of genes is a rich and challenging problem in Bioinformatics research. The problem gets harder as the number of classes is increased. In addition, the performance of most classifiers is tightly linked to the effectiveness of mandatory gene selection methods. Critical to gene selection is the availability of estimates about the maximum number of genes that can be handled by any classification algorithm. Lack of such estimates may lead to either computationally demanding explorations of a search space with thousands of dimensions or classification models based on gene sets of unrestricted size. In the former case, unbiased but possibly overfitted classification models may arise. In the latter case, biased classification models unable to support statistically significant findings may be obtained. Results A novel bound on the maximum number of genes that can be handled by binary classifiers in binary mediated multiclass classification algorithms of microarray data samples is presented. The bound suggests that high-dimensional binary output domains might favor the existence of accurate and sparse binary mediated multiclass classifiers for microarray data samples. Conclusions A comprehensive experimental work shows that the bound is indeed useful to induce accurate and sparse multiclass classifiers for microarray data samples. PMID:21342522

  15. Prospective, randomized, blinded evaluation of donor semen quality provided by seven commercial sperm banks.

    PubMed

    Carrell, Douglas T; Cartmill, Deborah; Jones, Kirtly P; Hatasaka, Harry H; Peterson, C Matthew

    2002-07-01

    To evaluate variability in donor semen quality between seven commercial donor sperm banks, within sperm banks, and between intracervical insemination and intrauterine insemination. Prospective, randomized, blind evaluation of commercially available donor semen samples. An academic andrology laboratory. Seventy-five cryopreserved donor semen samples were evaluated. Samples were coded, then blindly evaluated for semen quality. Standard semen quality parameters, including concentration, motility parameters, World Health Organization criteria morphology, and strict criteria morphology. Significant differences were observed between donor semen banks for most semen quality parameters analyzed in intracervical insemination samples. In general, the greatest variability observed between banks was in percentage progressive sperm motility (range, 8.8 +/- 5.8 to 42.4 +/- 5.5) and normal sperm morphology (strict criteria; range, 10.1 +/- 3.3 to 26.6 +/- 4.7). Coefficients of variation within sperm banks were generally high. These data demonstrate the variability of donor semen quality provided by commercial sperm banks, both between banks and within a given bank. No relationship was observed between the size or type of sperm bank and the degree of variability. The data demonstrate the lack of uniformity in the criteria used to screen potential semen donors and emphasize the need for more stringent screening criteria and strict quality control in processing samples.

  16. Hemoglobin status of non-school going adolescent girls in three districts of Orissa, India.

    PubMed

    Bulliyy, Gandham; Mallick, Gitanjali; Sethy, Girija Sankar; Kar, Santanu Kumar

    2007-01-01

    Anemia is a major public health problem in young children and pregnant women in SouthEast Asia, but there is a paucity of data on anemia in adolescent girls in India, and studies are lacking on the entire non-school going adolescent population. To determine the prevalence of anemia in non-school going adolescent girls and the association between hemoglobin (Hb) concentration and socioeconomic and nutritional factors, a cross-sectional community study was conducted on a sample of 1937 healthy adolescent girls aged 11-19 years from three districts of Orissa, India. Sample size was determined using probability-proportionate-to-size cluster sampling. The adolescent girls were interviewed and anthropometric measurements were collected. Hb estimation was carried out in capillary blood samples using the cyanmethemoglobin method. Anemia and nutritional status were evaluated according to standard procedures. The mean Hb concentration was 9.7 +/- 1.4 g/dL (range, 4.5-13.4 g/dL). Of the total adolescent girls, 1869 (96.5%) were anemic (Hb < 12.0 g/dL), of which 45.2%, 46.9% and 4.4% had mild, moderate, and severe anemia, respectively. A significant curvilinear relation was found between Hb concentration and age, with the nadir of the curve occurring in the 12-14 years age group. Girls from Bargarh district had significantly lower mean Hb levels than those from the Jajpur and Khurda districts. Significant positive associations were found between Hb concentration and pre-menarcheal status, community, education levels of the girls and their parents, family income, body mass index, and mid-upper arm circumference. This study revealed that the prevalence of anemia was extremely high in non-school going adolescent girls (most were moderately anemic) and stressed the need for more research and public health interventions.

  17. Floodplain complexity and surface metrics: influences of scale and geomorphology

    USGS Publications Warehouse

    Scown, Murray W.; Thoms, Martin C.; DeJager, Nathan R.

    2015-01-01

    Many studies of fluvial geomorphology and landscape ecology examine a single river or landscape and thus lack generality, making it difficult to develop a general understanding of the linkages between landscape patterns and larger-scale driving variables. We examined the spatial complexity of eight floodplain surfaces in widely different geographic settings and determined how patterns measured at different scales relate to different environmental drivers. Floodplain surface complexity is defined as having highly variable surface conditions that are also highly organised in space. These two components of floodplain surface complexity were measured across multiple sampling scales from LiDAR-derived DEMs. The surface character and variability of each floodplain were measured using four surface metrics, namely standard deviation, skewness, coefficient of variation, and standard deviation of curvature, from a series of moving window analyses ranging from 50 to 1000 m in radius. The spatial organisation of each floodplain surface was measured using spatial correlograms of the four surface metrics. Surface character, variability, and spatial organisation differed among the eight floodplains; random, fragmented, highly patchy, and simple gradient spatial patterns were exhibited, depending upon the metric and window size. Differences in surface character and variability among the floodplains became statistically stronger with increasing sampling scale (window size), as did their associations with environmental variables. Sediment yield was consistently associated with differences in surface character and variability, as were flow discharge and variability at smaller sampling scales. Floodplain width was associated with differences in the spatial organization of surface conditions at smaller sampling scales, while valley slope was weakly associated with differences in spatial organisation at larger scales.
A comparison of floodplain landscape patterns measured at different scales would improve our understanding of the role that different environmental variables play at different scales and in different geomorphic settings.
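
    The moving-window surface metrics described above can be sketched on a toy DEM. This uses a square window on synthetic data (the study used circular windows of 50-1000 m radius on LiDAR DEMs, so window shape and the data are simplifying assumptions).

```python
import numpy as np
from scipy import ndimage

# Synthetic 50x50 DEM of elevations (m); real inputs would be LiDAR-derived.
rng = np.random.default_rng(0)
dem = rng.normal(loc=100.0, scale=2.0, size=(50, 50))

win = 5  # window edge length in cells (a stand-in for the study's radii)

# Local surface variability: standard deviation within each moving window.
local_sd = ndimage.generic_filter(dem, np.std, size=win)

# Local coefficient of variation: window SD scaled by the window mean.
local_mean = ndimage.generic_filter(dem, np.mean, size=win)
local_cv = local_sd / local_mean
```

    Skewness and curvature-based metrics would follow the same `generic_filter` pattern with a different per-window function.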

  18. Evaluation of hygiene practices and microbiological quality of cooked meat products during slicing and handling at retail.

    PubMed

    Pérez-Rodríguez, F; Castro, R; Posada-Izquierdo, G D; Valero, A; Carrasco, E; García-Gimeno, R M; Zurera, G

    2010-10-01

    Cooked ready-to-eat meat products are known to become contaminated during slicing, which in recent years has been associated with several outbreaks. This work aimed to examine the possible relation between the hygiene practices during slicing of cooked meat products at retail points in small and medium-sized establishments (SMEs) and large-sized establishments (LEs) and the microbiological quality of the sliced products. To that end, a checklist was drawn up and completed, scoring handling practice during slicing in different establishments in Cordoba (southern Spain). In addition, sliced cooked meats were analyzed for different microbiological indicators and investigated for the presence of Listeria spp. and Listeria monocytogenes. Results indicated that SMEs showed more deficient handling practices than LEs. In spite of these differences, microbiological counts indicated similar microbiological quality in cooked meat samples for both types of establishments. Listeria monocytogenes and Listeria innocua were isolated from 7.35% (5/68) and 8.82% (6/68) of analyzed samples, respectively. Positive samples for Listeria spp. were found in establishments with acceptable hygiene levels, though contamination could be associated with the lack of dedicated slicers at retail points. Moreover, Listeria spp. presence could not be statistically linked to any microbiological parameter; however, seasonality significantly influenced (P<0.05) L. monocytogenes presence, with all positive samples found during the warm season (5/5). In conclusion, the results suggest that more effort should be made to adequately educate handlers in food hygiene practices, focusing especially on SMEs. Copyright (c) 2010 Elsevier B.V. All rights reserved.

  19. Aged boreal biomass-burning aerosol size distributions from BORTAS 2011

    NASA Astrophysics Data System (ADS)

    Sakamoto, K. M.; Allan, J. D.; Coe, H.; Taylor, J. W.; Duck, T. J.; Pierce, J. R.

    2015-02-01

    Biomass-burning aerosols contribute to aerosol radiative forcing on the climate system. The magnitude of this effect is partially determined by aerosol size distributions, which are functions of source fire characteristics (e.g. fuel type, MCE) and in-plume microphysical processing. The uncertainties in biomass-burning emission number-size distributions in climate model inventories lead to uncertainties in the CCN (cloud condensation nuclei) concentrations and forcing estimates derived from these models. The BORTAS-B (Quantifying the impact of BOReal forest fires on Tropospheric oxidants over the Atlantic using Aircraft and Satellite) measurement campaign was designed to sample boreal biomass-burning outflow over eastern Canada in the summer of 2011. Using these BORTAS-B data, we implement plume criteria to isolate the characteristic size distribution of aged biomass-burning emissions (aged ~ 1-2 days) from boreal wildfires in northwestern Ontario. The composite median size distribution yields a single dominant accumulation mode with Dpm = 230 nm (number-median diameter) and σ = 1.5, which are comparable to literature values of other aged plumes of a similar type. The organic aerosol enhancement ratios (ΔOA / ΔCO) along the path of Flight b622 show values of 0.09-0.17 μg m-3 ppbv-1 (parts per billion by volume) with no significant trend with distance from the source. This lack of enhancement ratio increase/decrease with distance suggests no detectable net OA (organic aerosol) production/evaporation within the aged plume over the sampling period (plume age: 1-2 days), though it does not preclude OA production/loss at earlier stages. A Lagrangian microphysical model was used to determine an estimate of the freshly emitted size distribution corresponding to the BORTAS-B aged size distributions. The model was restricted to coagulation and dilution processes based on the insignificant net OA production/evaporation derived from the ΔOA / ΔCO enhancement ratios. 
We estimate that the young-plume median diameter was in the range of 59-94 nm with modal widths in the range of 1.7-2.8 (the ranges are due to uncertainty in the entrainment rate). Thus, the size of the freshly emitted particles is relatively unconstrained due to the uncertainties in the plume dilution rates.
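
    The accumulation mode reported above (number-median diameter Dpm = 230 nm, geometric standard deviation sigma = 1.5) can be represented as a single lognormal number-size distribution. The total number N is an arbitrary normalization in this sketch.

```python
import numpy as np

# Lognormal number-size distribution dN/dlnDp for a single mode, using the
# BORTAS-B composite median parameters (Dpm = 230 nm, sigma = 1.5).
def dN_dlnDp(Dp_nm, N=1.0, Dpm=230.0, sigma=1.5):
    ln_sig = np.log(sigma)
    return (N / (np.sqrt(2.0 * np.pi) * ln_sig)
            * np.exp(-((np.log(Dp_nm) - np.log(Dpm)) ** 2) / (2.0 * ln_sig ** 2)))

Dp = np.logspace(1, 3, 500)      # diameters from 10 to 1000 nm
dist = dN_dlnDp(Dp)
peak = Dp[np.argmax(dist)]       # for a lognormal, the mode of dN/dlnDp is Dpm
```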

  20. Sample size determination for estimating antibody seroconversion rate under stable malaria transmission intensity.

    PubMed

    Sepúlveda, Nuno; Drakeley, Chris

    2015-04-03

    In the last decade, several epidemiological studies have demonstrated the potential of using seroprevalence (SP) and seroconversion rate (SCR) as informative indicators of malaria burden in low transmission settings or in populations on the cusp of elimination. However, most studies are designed for statistical inference on parasite rates rather than on these alternative measures of malaria burden. SP is in essence a proportion and, thus, many methods exist for the respective sample size determination. In contrast, designing a study where SCR is the primary endpoint is not an easy task because precision and statistical power are affected by the age distribution of a given population. Two sample size calculators for SCR estimation are proposed. The first one consists of transforming the confidence interval for SP into the corresponding one for SCR given a known seroreversion rate (SRR). The second calculator extends the previous one to the most common situation where SRR is unknown. In this situation, data simulation was used together with linear regression in order to study the expected relationship between sample size and precision. The performance of the first sample size calculator was studied in terms of the coverage of the confidence intervals for SCR. The results pointed to possible under- or over-coverage for sample sizes ≤ 250 in very low and high malaria transmission settings (SCR ≤ 0.0036 and SCR ≥ 0.29, respectively). The correct coverage was obtained for the remaining transmission intensities with sample sizes ≥ 50. Sample size determination was then carried out for cross-sectional surveys using realistic SCRs from past sero-epidemiological studies and typical age distributions from African and non-African populations. For SCR < 0.058, African studies require a larger sample size than their non-African counterparts in order to obtain the same precision. The opposite happens for the remaining transmission intensities. With respect to the second sample size calculator, simulation revealed that there may not be enough information to estimate SRR in low transmission settings (SCR ≤ 0.0108). In that case, the respective estimates tend to underestimate the true SCR. This problem is minimized by sample sizes of no less than 500 individuals. The sample sizes determined by this second method confirmed the prior expectation that, when SRR is not known, sample sizes increase relative to the situation of a known SRR. In contrast to the first sample size calculation, African studies would now require fewer individuals than their counterparts conducted elsewhere, irrespective of the transmission intensity. Although the proposed sample size calculators can be instrumental in designing future cross-sectional surveys, the choice of a particular sample size must be seen as a much broader exercise that involves weighing statistical precision against ethical issues, available human and economic resources, and possible time constraints. Moreover, if the sample size determination is carried out over varying transmission intensities, as done here, the respective sample sizes can also be used in studies comparing sites with different malaria transmission intensities. In conclusion, the proposed sample size calculators are a step towards the design of better sero-epidemiological studies. Their basic ideas show promise for application to the planning of alternative sampling schemes that may target or oversample specific age groups.
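
    The first calculator described above can be sketched as follows: a Wald confidence interval for SP is obtained, then its limits are transformed into SCR values under a reversible catalytic model with known SRR, SP(a) = lambda/(lambda+rho) * (1 - exp(-(lambda+rho)*a)) at age a. Using a single mean age and a Wald interval are simplifying assumptions; the paper works with the full age distribution.

```python
import math
from scipy.optimize import brentq

# Reversible catalytic model: seroprevalence at age a given SCR (lam) and SRR (rho).
def sp_from_scr(lam, rho, age):
    k = lam + rho
    return lam / k * (1.0 - math.exp(-k * age))

# Invert the model numerically: find the SCR that yields a given SP at age a.
def scr_from_sp(sp, rho, age):
    return brentq(lambda lam: sp_from_scr(lam, rho, age) - sp, 1e-8, 10.0)

# Wald sample size for estimating a proportion to within +/- half_width.
def wald_n(sp, half_width, z=1.96):
    return math.ceil(z**2 * sp * (1.0 - sp) / half_width**2)

sp, rho, age = 0.30, 0.01, 20.0          # assumed SP, SRR, and mean age
n = wald_n(sp, 0.05)                     # sample size for SP +/- 0.05
lo = scr_from_sp(sp - 0.05, rho, age)    # SCR interval implied by the SP interval
hi = scr_from_sp(sp + 0.05, rho, age)
```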

  1. Using known populations of pronghorn to evaluate sampling plans and estimators

    USGS Publications Warehouse

    Kraft, K.M.; Johnson, D.H.; Samuelson, J.M.; Allen, S.H.

    1995-01-01

    Although sampling plans and estimators of abundance have good theoretical properties, their performance in real situations is rarely assessed because true population sizes are unknown. We evaluated widely used sampling plans and estimators of population size on 3 known clustered distributions of pronghorn (Antilocapra americana). Our criteria were accuracy of the estimate, coverage of 95% confidence intervals, and cost. Sampling plans were combinations of sampling intensities (16, 33, and 50%), sample selection (simple random sampling without replacement, systematic sampling, and probability proportional to size sampling with replacement), and stratification. We paired sampling plans with suitable estimators (simple, ratio, and probability proportional to size). We used area of the sampling unit as the auxiliary variable for the ratio and probability proportional to size estimators. All estimators were nearly unbiased, but precision was generally low (overall mean coefficient of variation [CV] = 29). Coverage of 95% confidence intervals was only 89% because of the highly skewed distribution of the pronghorn counts and small sample sizes, especially with stratification. Stratification combined with accurate estimates of optimal stratum sample sizes increased precision, reducing the mean CV from 33 without stratification to 25 with stratification; costs increased 23%. Precise results (mean CV = 13) but poor confidence interval coverage (83%) were obtained with simple and ratio estimators when the allocation scheme included all sampling units in the stratum containing most pronghorn. Although areas of the sampling units varied, ratio estimators and probability proportional to size sampling did not increase precision, possibly because of the clumped distribution of pronghorn. Managers should be cautious in using sampling plans and estimators to estimate abundance of aggregated populations.
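
    The ratio estimator evaluated above, with sampling-unit area as the auxiliary variable, estimates the population total as (sum of sampled counts / sum of sampled areas) * total area. A minimal sketch on made-up data, assuming simple random sampling without replacement at roughly 33% intensity:

```python
import numpy as np

# Synthetic population of 100 sampling units with known areas and counts.
rng = np.random.default_rng(1)
N = 100
areas = rng.uniform(5.0, 15.0, N)      # km^2 per sampling unit (auxiliary variable)
counts = rng.poisson(2.0 * areas)      # animal counts roughly proportional to area

# Simple random sample without replacement (~33% intensity, as in the study).
sample_idx = rng.choice(N, size=33, replace=False)

# Ratio estimator of the population total using total area as the auxiliary total.
r_hat = counts[sample_idx].sum() / areas[sample_idx].sum()
total_hat = r_hat * areas.sum()
```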

  2. Effect Size in Efficacy Trials of Women With Decreased Sexual Desire.

    PubMed

    Pyke, Robert E; Clayton, Anita H

    2018-03-22

    Regarding hypoactive sexual desire disorder (HSDD) in women, some reviewers judge the effect size small for medications vs placebo, but substantial for cognitive behavior therapy (CBT) or mindfulness meditation training (MMT) vs wait list. However, we lack comparisons of the effect sizes for the active intervention itself, for the control treatment, and for the differential between the two. For efficacy trials of HSDD in women, compare effect sizes for medications (testosterone/testosterone transdermal system, flibanserin, and bremelanotide) and placebo vs effect sizes for psychotherapy and wait-list control. We conducted a literature search for mean changes and SD on main measures of sexual desire and associated distress in trials of medications, CBT, or MMT. Effect size was used as it measures the magnitude of the intervention without confounding by sample size. Cohen d was used to determine effect sizes. For medications, mean (SD) effect size was 1.0 (0.34); for CBT and MMT, 1.0 (0.36); for placebo, 0.55 (0.16); and for wait list, 0.05 (0.26). Recommendations of psychotherapy over medication for treatment of HSDD are premature and not supported by data on effect sizes. Active participation in treatment conveys considerable non-specific benefits. Caregivers should attend to biological and psychosocial elements, and patient preference, to optimize response. Few clinical trials of psychotherapies were substantial in size or utilized adequate control paradigms. Medications and psychotherapies had similar, large effect sizes. Effect size of placebo was moderate. Effect size of wait-list control was very small, about one quarter that of placebo. Thus, a substantial non-specific therapeutic effect is associated with receiving placebo plus active care and evaluation. The difference in effect size between placebo and wait-list controls distorts the value of the subtraction of effect of the control paradigms to estimate intervention effectiveness. Pyke RE, Clayton AH. 
Effect Size in Efficacy Trials of Women With Decreased Sexual Desire. Sex Med Rev 2018;XX:XXX-XXX. Copyright © 2018 International Society for Sexual Medicine. Published by Elsevier Inc. All rights reserved.
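
    Cohen's d, the effect size used in the review above, can be sketched with the standard pooled-standard-deviation formula. The review compared mean changes per arm, and its exact computation may differ; the numbers below are illustrative only.

```python
import math

# Cohen's d: standardized difference between two group mean changes,
# using a pooled standard deviation weighted by degrees of freedom.
def cohens_d(mean_tx, sd_tx, n_tx, mean_ctl, sd_ctl, n_ctl):
    pooled_sd = math.sqrt(((n_tx - 1) * sd_tx**2 + (n_ctl - 1) * sd_ctl**2)
                          / (n_tx + n_ctl - 2))
    return (mean_tx - mean_ctl) / pooled_sd

# Hypothetical mean changes in a desire score for treatment vs placebo arms.
d = cohens_d(1.8, 1.2, 100, 1.1, 1.2, 100)
```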

  3. Use of alligator hole abundance and occupancy rate as indicators for restoration of a human-altered wetland

    USGS Publications Warehouse

    Fujisaki, Ikuko; Mazzotti, Frank J.; Hart, Kristen M.; Rice, Kenneth G.; Ogurcak, Danielle; Rochford, Michael; Jeffery, Brian M.; Brandt, Laura A.; Cherkiss, Michael S.

    2012-01-01

    Use of indicator species as a measure of ecosystem conditions is an established science application in environmental management. Because of its role in shaping wetland systems, the American alligator (Alligator mississippiensis) is one of the ecological indicators for wetland restoration in south Florida, USA. We conducted landscape-level aerial surveys of alligator holes in two different habitats in a wetland where anthropogenic modification of surface hydrology has altered the natural system. Alligator holes were scarcer in an area where modified hydrology caused draining and frequent dry-downs compared to another area that maintains a functional wetland system. Lower abundance of alligator holes indicates lack of alligator activities, lower overall species diversity, and lack of dry-season aquatic refugia for other organisms. The occupancy rate of alligator holes was lower than the current restoration target for the Everglades, and was variable by size class with large size-class alligators predominantly occupying alligator holes. This may indicate unequal size-class distribution, different habitat selection by size classes, or possibly a lack of recruitment. Our study provides pre-restoration baseline information about one indicator species for the Everglades. Success of the restoration can be assessed via effective synthesis of information derived by collective research efforts on the entire suite of selected ecological indicators.

  4. Pulverized Tejon Lookout Granite: Attempts at Placing Constraints on the Processes

    NASA Astrophysics Data System (ADS)

    Sisk, M.; Dor, O.; Rockwell, T.; Girty, G.; Ben-Zion, Y.

    2007-12-01

    We have described and analyzed pulverized Tejon Lookout granite recovered from several transects of the western segment of the Garlock fault on Tejon Ranch in southern California. Observations and data collected at this location are compared to a sampled transect of the San Andreas fault at Tejon Pass previously studied by Wilson et al. (2005), also exposing the Tejon Lookout granite. The purpose of this study is to characterize the physical and chemical properties of the pervasively pulverized leucocratic rocks at multiple locations and to place constraints on the processes producing them. To accomplish this we performed particle size analysis using both a laser particle analyzer and pipette methodology; major and trace chemistry analyses determined by XRF; clay mineralogy determined by XRD; and we evaluated fabric and texture through the study of thin sections. Recovered samples met the field criteria of pulverization developed by Dor et al., 2006 - that is, the individual 1-2 mm-sized crystals can be recognized in the field but the granite (including quartz and feldspar) can be mashed with one's fingers and exhibits the texture of toothpaste. All samples were analyzed on a Horiba LA930 Laser Particle Analyzer in an attempt to reproduce the earlier results of Wilson et al. (2005) with similar methodology. We also utilized the classic pipette methodology to ensure complete discrimination of particle sizes. Our PSD analysis shows that the dominant particle size falls in the 31-125 micron range, much coarser than previously reported by Wilson et al. (2005), with >90% of the total sample falling in the >31 micron size range. We can reproduce the previously documented results by allowing the samples to circulate for long periods of time at slow circulation speeds in the laser particle size analyzer, during which time the coarse fraction settles out, thereby leaving only the fine fraction for detection.
However, a subsequent increase in the circulation speed leads to a complete recovery of the original PSD. Our XRF and XRD analyses provide evidence of the lack of major weathering products and their inability to skew the PSD results in a significant way. Dor et al. (2007) and Stillings et al. (2007) document evidence that supports theoretical predictions and previous inferences of pulverization occurring in the upper few kilometers, especially along faults of the southern San Andreas system. Geophysical observations of Lewis et al. (2005, 2007) provide evidence that low-velocity fault-parallel layers, which are likely made of pulverized or highly damaged material, are dominant in the upper few kilometers of the crust. Their asymmetric position with respect to the slipping zone, in agreement with asymmetric patterns of small-scale mapped rock damage (Dor et al., 2006), suggests that pulverized rocks are likely the product of a preferred rupture direction during dynamic slip. Our results combined with the above-mentioned works imply that pulverized fault zone rocks at multiple locations are much less damaged than suggested in previous studies.

  5. Method development and validation for measuring the particle size distribution of pentaerythritol tetranitrate (PETN) powders.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Young, Sharissa Gay

    2005-09-01

    Currently, the critical particle properties of pentaerythritol tetranitrate (PETN) that influence deflagration-to-detonation time in exploding bridge wire detonators (EBW) are not known in sufficient detail to allow development of a predictive failure model. The specific surface area (SSA) of many PETN powders has been measured using both permeametry and gas absorption methods and has been found to have a critical effect on EBW detonator performance. The permeametry measure of SSA is a function of particle shape, packed bed pore geometry, and particle size distribution (PSD). Yet there is a general lack of agreement in PSD measurements between laboratories, raising concerns regarding collaboration and complicating efforts to understand changes in EBW performance related to powder properties. Benchmarking of data between laboratories that routinely perform detailed PSD characterization of powder samples and the determination of the most appropriate method to measure each PETN powder are necessary to discern correlations between performance and powder properties and to collaborate with partnering laboratories. To this end, a comparison was made of the PSD measured by three laboratories using their own standard procedures for light scattering instruments. Three PETN powder samples with different surface areas and particle morphologies were characterized. Differences in bulk PSD data generated by each laboratory were found to result from variations in sonication of the samples during preparation. The effect of this sonication was found to depend on particle morphology of the PETN samples, being deleterious to some PETN samples and advantageous for others in moderation. Discrepancies in the submicron-sized particle characterization data were related to an instrument-specific artifact particular to one laboratory.
The type of carrier fluid used by each laboratory to suspend the PETN particles for the light scattering measurement had no consistent effect on the resulting PSD data. Finally, the SSA of the three powders was measured using both permeametry and gas absorption methods, enabling the PSD to be linked to the SSA for these PETN powders. Consistent characterization of other PETN powders can be performed using the appropriate sample-specific preparation method, so that future studies can accurately identify the effect of changes in the PSD on the SSA and ultimately model EBW performance.

  6. Lectin-Magnetic Separation (LMS) for isolation of Toxoplasma gondii oocysts from concentrated water samples prior to detection by microscopy or qPCR

    USDA-ARS?s Scientific Manuscript database

    Although standard methods for analyzing water samples for the protozoan parasites Cryptosporidium spp. and Giardia duodenalis are available and widely used, equivalent methods for analyzing water samples for Toxoplasma oocysts are lacking. This is partly due to the lack of a readily available, relia...

  7. Computation of Effect Size for Moderating Effects of Categorical Variables in Multiple Regression

    ERIC Educational Resources Information Center

    Aguinis, Herman; Pierce, Charles A.

    2006-01-01

    The computation and reporting of effect size estimates is becoming the norm in many journals in psychology and related disciplines. Despite the increased importance of effect sizes, researchers may not report them or may report inaccurate values because of a lack of appropriate computational tools. For instance, Pierce, Block, and Aguinis (2004)…

  8. Relative efficiency and sample size for cluster randomized trials with variable cluster sizes.

    PubMed

    You, Zhiying; Williams, O Dale; Aban, Inmaculada; Kabagambe, Edmond Kato; Tiwari, Hemant K; Cutter, Gary

    2011-02-01

    The statistical power of cluster randomized trials depends on two sample size components, the number of clusters per group and the numbers of individuals within clusters (cluster size). Variable cluster sizes are common and this variation alone may have a significant impact on study power. Previous approaches have taken this into account by either adjusting total sample size using a designated design effect or adjusting the number of clusters according to an assessment of the relative efficiency of unequal versus equal cluster sizes. This article defines a relative efficiency of unequal versus equal cluster sizes using noncentrality parameters, investigates properties of this measure, and proposes an approach for adjusting the required sample size accordingly. We focus on comparing two groups with normally distributed outcomes using a t-test, and use the noncentrality parameter to define the relative efficiency of unequal versus equal cluster sizes and show that statistical power depends only on this parameter for a given number of clusters. We calculate the sample size required for an unequal cluster sizes trial to have the same power as one with equal cluster sizes. Relative efficiency based on the noncentrality parameter is straightforward to calculate and easy to interpret. It connects the required mean cluster size directly to the required sample size with equal cluster sizes. Consequently, our approach first determines the sample size requirements with equal cluster sizes for a pre-specified study power and then calculates the required mean cluster size while keeping the number of clusters unchanged. Our approach allows adjustment in mean cluster size alone or simultaneous adjustment in mean cluster size and number of clusters, and is a flexible alternative to and a useful complement to existing methods. Comparison indicated that we have defined a relative efficiency that is greater than the relative efficiency in the literature under some conditions.
Our measure of relative efficiency might be less than the measure in the literature under some conditions, underestimating the relative efficiency. The relative efficiency of unequal versus equal cluster sizes defined using the noncentrality parameter suggests a sample size approach that is a flexible alternative and a useful complement to existing methods.
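
    A widely used approximation for the design effect under variable cluster sizes (not the paper's noncentrality-based method) inflates the sample size by DEFF = 1 + ((cv^2 + 1)*m_bar - 1)*icc, where m_bar is the mean cluster size, cv its coefficient of variation, and icc the intracluster correlation. A sketch:

```python
import math

# Approximate design effect for a cluster randomized trial with unequal
# cluster sizes (common textbook approximation, assumed here for illustration).
def design_effect(m_bar, cv, icc):
    return 1.0 + ((cv**2 + 1.0) * m_bar - 1.0) * icc

# Number of clusters per group: inflate an individually randomized sample
# size n_ind by the design effect, then divide by the mean cluster size.
def clusters_needed(n_ind, m_bar, cv, icc):
    return math.ceil(n_ind * design_effect(m_bar, cv, icc) / m_bar)

# Example: n_ind = 128 per group, mean cluster size 20, cv = 0.4, icc = 0.05.
k = clusters_needed(128, 20.0, 0.4, 0.05)
```

    Setting cv = 0 recovers the familiar equal-cluster-size design effect 1 + (m_bar - 1)*icc.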

  9. Effects of field plot size on prediction accuracy of aboveground biomass in airborne laser scanning-assisted inventories in tropical rain forests of Tanzania.

    PubMed

    Mauya, Ernest William; Hansen, Endre Hofstad; Gobakken, Terje; Bollandsås, Ole Martin; Malimbwi, Rogers Ernest; Næsset, Erik

    2015-12-01

    Airborne laser scanning (ALS) has recently emerged as a promising tool to acquire auxiliary information for improving aboveground biomass (AGB) estimation in sample-based forest inventories. Under design-based and model-assisted inferential frameworks, the estimation relies on a model that relates the auxiliary ALS metrics to AGB estimated on ground plots. The size of the field plots has been identified as one source of model uncertainty because of the so-called boundary effects, which increase with decreasing plot size. Recent research in tropical forests has aimed to quantify the boundary effects on model prediction accuracy, but evidence of the consequences for the final AGB estimates is lacking. In this study we analyzed the effect of field plot size on model prediction accuracy and its implication when used in a model-assisted inferential framework. The results showed that the prediction accuracy of the model improved as the plot size increased. The adjusted R² increased from 0.35 to 0.74 while the relative root mean square error decreased from 63.6 to 29.2%. Indicators of boundary effects were identified and confirmed to have significant effects on the model residuals. Variance estimates of model-assisted mean AGB, relative to corresponding variance estimates of pure field-based AGB, decreased with increasing plot size in the range from 200 to 3000 m². The variance ratio of field-based estimates relative to model-assisted variance ranged from 1.7 to 7.7. This study showed that the relative improvement in precision of AGB estimation when increasing field-plot size was greater for an ALS-assisted inventory compared to that of a pure field-based inventory.
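
    The two accuracy measures reported above can be computed for any model fit as follows; the toy observed/predicted AGB values are invented for illustration.

```python
import numpy as np

# Adjusted R^2: penalizes R^2 for the number of predictors p given n observations.
def adjusted_r2(y, y_hat, p):
    n = len(y)
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)

# Relative RMSE: root mean square error as a percentage of the observed mean.
def relative_rmse(y, y_hat):
    return 100.0 * np.sqrt(np.mean((y - y_hat) ** 2)) / np.mean(y)

y = np.array([120.0, 95.0, 140.0, 110.0, 100.0, 130.0])    # observed AGB (Mg/ha)
y_hat = np.array([115.0, 100.0, 135.0, 112.0, 98.0, 127.0])  # model predictions
r2_adj = adjusted_r2(y, y_hat, p=2)
rrmse = relative_rmse(y, y_hat)
```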

  10. Molecular Subtypes of Indonesian Breast Carcinomas - Lack of Association with Patient Age and Tumor Size

    PubMed Central

    Rahmawati, Yeni; Setyawati, Yunita; Widodo, Irianiwati; Ghozali, Ahmad; Purnomosari, Dewajani

    2018-01-01

    Objective: Breast carcinoma (BC) is a heterogeneous disease that exhibits variation in biological behaviour, prognosis and response to therapy. Molecular classification is generally into Luminal A, Luminal B, HER2+ and triple negative/basal-like, depending on receptor characteristics. Clinical factors that determine BC prognosis are age and tumor size. Since information on molecular subtypes of Indonesian BCs is limited, the present study was conducted, with attention to subtypes in relation to age and tumor size. Methods: A retrospective cross-sectional study of 247 paraffin-embedded samples of invasive BC from Dr. Sardjito General Hospital Yogyakarta in the years 2012-2015 was performed. Immunohistochemical staining using anti-ER, PR, HER2, Ki-67 and CK 5/6 antibodies was applied to classify molecular subtypes. Associations with age and tumor size were analyzed using the Chi Square Test. Results: Luminal A was the most common subtype of Indonesian BC (41.3%), followed by triple negative (25.5%), HER2 (19.4%) and luminal B (13.8%). Among the triple negative lesions, the basal-like subtype was more frequent than the non-basal-like (58.8% vs 41.2%). Luminal B accounted for the highest percentage of younger age cases (< 40 years old) while HER2+ was most common in older age (> 50 years old) patients. Triple negative/basal-like lesions were commonly large in size. Age (p = 0.080) and tumor size (p = 0.462) were not significantly associated with molecular subtypes of BC. Conclusion: The most common molecular subtype of Indonesian BC is luminal A, followed by triple-negative, HER2+ and luminal B. The majority of triple-negative lesions are basal-like. There is no association of age or tumor size with molecular subtypes of Indonesian BCs. PMID:29373908
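
    The chi-square test of association used above can be sketched on a contingency table of age group by molecular subtype. The counts below are invented for illustration (rows: age < 40, 40-50, > 50; columns: the four subtypes).

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical contingency table: 3 age groups x 4 molecular subtypes
# (Luminal A, Luminal B, HER2+, triple negative).
table = np.array([[20, 12, 10, 18],
                  [35, 15, 20, 25],
                  [47, 7, 18, 20]])

# Test the null hypothesis that subtype distribution is independent of age group.
chi2, p_value, dof, expected = chi2_contingency(table)
```

    A p-value above the chosen significance level, as in the study (p = 0.080 for age), means the null hypothesis of no association is not rejected.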

  11. Molecular Subtypes of Indonesian Breast Carcinomas - Lack of Association with Patient Age and Tumor Size

    PubMed

    Rahmawati, Yeni; Setyawati, Yunita; Widodo, Irianiwati; Ghozali, Ahmad; Purnomosari, Dewajani

    2018-01-27

    Objective: Breast carcinoma (BC) is a heterogeneous disease that exhibits variation in biological behaviour, prognosis and response to therapy. Molecular classification is generally into Luminal A, Luminal B, HER2+ and triple negative/basal-like, depending on receptor characteristics. Clinical factors that determine BC prognosis are age and tumor size. Since information on molecular subtypes of Indonesian BCs is limited, the present study was conducted, with attention to subtypes in relation to age and tumor size. Methods: A retrospective cross-sectional study of 247 paraffin-embedded samples of invasive BC from Dr. Sardjito General Hospital Yogyakarta in the years 2012-2015 was performed. Immunohistochemical staining using anti-ER, PR, HER2, Ki-67 and CK 5/6 antibodies was applied to classify molecular subtypes. Associations with age and tumor size were analyzed using the Chi Square Test. Results: Luminal A was the most common subtype of Indonesian BC (41.3%), followed by triple negative (25.5%), HER2 (19.4%) and luminal B (13.8%). Among the triple negative lesions, the basal-like subtype was more frequent than the non-basal-like (58.8% vs 41.2%). Luminal B accounted for the highest percentage of younger age cases (< 40 years old) while HER2+ was most common in older age (> 50 years old) patients. Triple negative/basal-like lesions were commonly large in size. Age (p = 0.080) and tumor size (p = 0.462) were not significantly associated with molecular subtypes of BC. Conclusion: The most common molecular subtype of Indonesian BC is luminal A, followed by triple-negative, HER2+ and luminal B. The majority of triple-negative lesions are basal-like. There is no association of age or tumor size with molecular subtypes of Indonesian BCs. Creative Commons Attribution License

  12. "TNOs are Cool": A survey of the trans-Neptunian region. XIII. Statistical analysis of multiple trans-Neptunian objects observed with Herschel Space Observatory

    NASA Astrophysics Data System (ADS)

    Kovalenko, I. D.; Doressoundiram, A.; Lellouch, E.; Vilenius, E.; Müller, T.; Stansberry, J.

    2017-11-01

    Context. Gravitationally bound multiple systems provide an opportunity to estimate the mean bulk density of the objects, whereas this characteristic is not available for single objects. Being a primitive population of the outer solar system, binary and multiple trans-Neptunian objects (TNOs) provide unique information about bulk density and internal structure, improving our understanding of their formation and evolution. Aims: The goal of this work is to analyse parameters of multiple trans-Neptunian systems, observed with Herschel and Spitzer space telescopes. Particularly, statistical analysis is done for radiometric size and geometric albedo, obtained from photometric observations, and for estimated bulk density. Methods: We use Monte Carlo simulation to estimate the real size distribution of TNOs. For this purpose, we expand the dataset of diameters by adopting the Minor Planet Center database list with available values of the absolute magnitude therein, and the albedo distribution derived from Herschel radiometric measurements. We use the 2-sample Anderson-Darling non-parametric statistical method for testing whether two samples of diameters, for binary and single TNOs, come from the same distribution. Additionally, we use the Spearman's coefficient as a measure of rank correlations between parameters. Uncertainties of estimated parameters together with lack of data are taken into account. Conclusions about correlations between parameters are based on statistical hypothesis testing. Results: We have found that the difference in size distributions of multiple and single TNOs is biased by small objects. The test on correlations between parameters shows that the effective diameter of binary TNOs strongly correlates with heliocentric orbital inclination and with magnitude difference between components of binary system. The correlation between diameter and magnitude difference implies that small and large binaries are formed by different mechanisms. 
Furthermore, the statistical test indicates, although not significantly at the available sample size, that a moderately strong correlation exists between diameter and bulk density. Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA.
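Both tests named in this abstract are available in SciPy. The sketch below is purely illustrative, using synthetic lognormal diameters rather than the Herschel/Spitzer dataset:

```python
# Illustrative use of the two tests named above, on synthetic data
# (lognormal "diameters"); not the paper's TNO dataset.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
binary_diam = rng.lognormal(mean=5.0, sigma=0.6, size=40)   # hypothetical binaries
single_diam = rng.lognormal(mean=4.6, sigma=0.8, size=120)  # hypothetical singles

# 2-sample Anderson-Darling: do the two diameter samples share a distribution?
ad = stats.anderson_ksamp([binary_diam, single_diam])
print(ad.statistic, ad.significance_level)

# Spearman rank correlation between diameter and, e.g., orbital inclination
incl = rng.uniform(0.0, 30.0, size=40)  # hypothetical inclinations in degrees
rho, p = stats.spearmanr(binary_diam, incl)
print(rho, p)
```

Note that `anderson_ksamp` caps and floors the returned significance level (roughly between 0.1% and 25%), so it should be read as a range rather than an exact p-value.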

  13. Development, primacy, and systems of cities.

    PubMed

    El-shakhs, S

    1972-10-01

    The relationship between the evolutionary changes in the city size distribution of nationally defined urban systems and the process of socioeconomic development is examined. Attention is directed to the problems of defining and measuring changes in city size distributions, using the results to test empirically the relationship of such changes to the development process. Existing theoretical structures and empirical generalizations which have tried to explain or to describe, respectively, the hierarchical relationships of cities are represented by central place theory and rank-size relationships. The problem is not that deviations exist but that an adequate definition of urban systems is lacking on the one hand, and a universal measure of city size distribution, which could be applied to any system irrespective of its level of development, on the other. The problem of measuring changes in city size distributions is further compounded by the lack of sufficient reliable information about different systems of cities for the purposes of empirical comparative analysis. Changes in city size distributions have thus far been viewed largely within the framework of classic equilibrium theory. A more differentiated continuum of the development process should replace the bipolar underdeveloped-developed continuum in relating changes in city size distribution with development. Implicit in this distinction is the view that processes which influence spatial organization during the early formative stages of development are inherently different from those operating during the more advanced stages. 2 approaches were used to examine the relationship between national levels of development and primacy: a comparative analysis of a large number of countries at a given point in time; and a historical analysis of a limited sample of 2 advanced countries, the US and Great Britain. The 75 countries included in this study cover a wide range of characteristics.
The study found a significant association between the degree of primacy of city size distributions and socioeconomic level of development; the form of the primacy curve (or its evolution with development) seemed to follow a consistent pattern in which the peak of primacy is reached during the stages of socioeconomic transition, with countries being less primate in either direction from that peak. This pattern is the result of 2 reverse influences of the development process on the spatial structure of countries: centralization and concentration beginning with the rise of cities, and a decentralization and spread effect accompanying the increasing influence and importance of the periphery and structural changes in the pattern of authority.
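Primacy itself is straightforward to quantify. Two conventional measures (with hypothetical populations below, and not necessarily the exact index used in this 1972 study) compare the largest city with the next-ranked cities:

```python
# Two common urban-primacy measures, on a hypothetical four-city system.
def two_city_index(pops):
    """P1 / P2 for populations sorted largest-first; rank-size rule predicts 2."""
    p = sorted(pops, reverse=True)
    return p[0] / p[1]

def four_city_index(pops):
    """P1 / (P2 + P3 + P4): values above 1 indicate a strongly primate system."""
    p = sorted(pops, reverse=True)
    return p[0] / sum(p[1:4])

cities = [8_900_000, 2_700_000, 1_600_000, 1_500_000]  # hypothetical populations
print(two_city_index(cities))   # ~3.3, well above the rank-size expectation of 2
print(four_city_index(cities))  # ~1.53
```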

  14. Concentrations of selected constituents in surface-water and streambed-sediment samples collected from streams in and near an area of oil and natural-gas development, south-central Texas, 2011-13

    USGS Publications Warehouse

    Opsahl, Stephen P.; Crow, Cassi L.

    2014-01-01

    During collection of streambed-sediment samples, additional samples from a subset of three sites (the SAR Elmendorf, SAR 72, and SAR McFaddin sites) were processed by using a 63-µm sieve on one aliquot and a 2-mm sieve on a second aliquot for PAH and n-alkane analyses. The purpose of analyzing PAHs and n-alkanes on a sample containing sand, silt, and clay versus a sample containing only silt and clay was to provide data that could be used to determine if these organic constituents had a greater affinity for silt- and clay-sized particles relative to sand-sized particles. The greater concentrations of PAHs in the <63-μm size-fraction samples at all three of these sites are consistent with a greater percentage of binding sites associated with fine-grained (<63 μm) sediment versus coarse-grained (<2 mm) sediment. The larger difference in total PAHs between the <2-mm and <63-μm size-fraction samples at the SAR Elmendorf site might be related to the large percentage of sand in the <2-mm size-fraction sample which was absent in the <63-μm size-fraction sample. In contrast, the <2-mm size-fraction sample collected from the SAR McFaddin site contained very little sand and was similar in particle-size composition to the <63-μm size-fraction sample.

  15. HYPERSAMP - HYPERGEOMETRIC ATTRIBUTE SAMPLING SYSTEM BASED ON RISK AND FRACTION DEFECTIVE

    NASA Technical Reports Server (NTRS)

    De Salvo, L. J.

    1994-01-01

    HYPERSAMP is a demonstration of an attribute sampling system developed to determine the minimum sample size required for any preselected value for consumer's risk and fraction of nonconforming. This statistical method can be used in place of MIL-STD-105E sampling plans when a minimum sample size is desirable, such as when tests are destructive or expensive. HYPERSAMP utilizes the Hypergeometric Distribution and can be used for any fraction nonconforming. The program employs an iterative technique that circumvents the obstacle presented by the factorial of a non-whole number. HYPERSAMP provides the required Hypergeometric sample size for any equivalent real number of nonconformances in the lot or batch under evaluation. Many currently used sampling systems, such as the MIL-STD-105E, utilize the Binomial or the Poisson equations as an estimate of the Hypergeometric when performing inspection by attributes. However, this is primarily because of the difficulty in calculation of the factorials required by the Hypergeometric. Sampling plans based on the Binomial or Poisson equations will result in the maximum sample size possible with the Hypergeometric. The difference in the sample sizes between the Poisson or Binomial and the Hypergeometric can be significant. For example, a lot size of 400 devices with an error rate of 1.0% and a confidence of 99% would require a sample size of 400 (all units would need to be inspected) for the Binomial sampling plan and only 273 for a Hypergeometric sampling plan. The Hypergeometric results in a savings of 127 units, a significant reduction in the required sample size. HYPERSAMP is a demonstration program and is limited to sampling plans with zero defectives in the sample (acceptance number of zero). Since it is only a demonstration program, the sample size determination is limited to sample sizes of 1500 or less. 
The Hypergeometric Attribute Sampling System demonstration code is a spreadsheet program written for IBM PC compatible computers running DOS and Lotus 1-2-3 or Quattro Pro. This program is distributed on a 5.25 inch 360K MS-DOS format diskette, and the program price includes documentation. This statistical method was developed in 1992.
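The worked example in this record (lot of 400 units, 1.0% nonconforming, 99% confidence, acceptance number zero) can be reproduced with a short brute-force search. This is a sketch of the underlying arithmetic, not the HYPERSAMP code itself:

```python
# Minimal acceptance-number-zero sample size search; reproduces the
# hypergeometric-vs-binomial comparison quoted above.
from math import ceil, comb, log

def p_accept_zero(N, D, n):
    """P(zero nonconforming in a sample of n from a lot of N containing D)."""
    return comb(N - D, n) / comb(N, n)

def min_n_hypergeometric(N, frac_nonconforming, consumer_risk):
    D = round(N * frac_nonconforming)
    for n in range(1, N + 1):
        if p_accept_zero(N, D, n) <= consumer_risk:
            return n
    return N  # 100% inspection required

def min_n_binomial(frac_nonconforming, consumer_risk):
    # (1 - p)^n <= risk  =>  n >= ln(risk) / ln(1 - p)
    return ceil(log(consumer_risk) / log(1 - frac_nonconforming))

print(min_n_hypergeometric(400, 0.01, 0.01))  # 273, matching the example above
print(min_n_binomial(0.01, 0.01))             # 459 > lot size, so inspect all 400
```

The binomial plan demands 459 samples, which exceeds the lot, forcing 100% inspection; the hypergeometric plan needs only 273, the 127-unit saving cited above.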

  16. Study samples are too small to produce sufficiently precise reliability coefficients.

    PubMed

    Charter, Richard A

    2003-04-01

    In a survey of journal articles, test manuals, and test critique books, the author found that a mean sample size (N) of 260 participants had been used for reliability studies on 742 tests. The distribution was skewed because the median sample size for the total sample was only 90. The median sample sizes for the internal consistency, retest, and interjudge reliabilities were 182, 64, and 36, respectively. The author presented sample size statistics for the various internal consistency methods and types of tests. In general, the author found that the sample sizes that were used in the internal consistency studies were too small to produce sufficiently precise reliability coefficients, which in turn could cause imprecise estimates of examinee true-score confidence intervals. The results also suggest that larger sample sizes have been used in the last decade compared with those that were used in earlier decades.
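The precision argument can be made concrete with Fisher's z transformation, which applies to correlation-type coefficients such as retest and interjudge reliabilities (interval methods for internal-consistency coefficients differ). The reliability value of 0.80 below is illustrative; the Ns echo the median and mean sample sizes reported above:

```python
# Width of a 95% CI for a correlation-type reliability coefficient as a
# function of N, via Fisher's z transformation (illustrative r = 0.80).
from math import atanh, sqrt, tanh

def fisher_ci(r, n, z_crit=1.96):
    z, se = atanh(r), 1.0 / sqrt(n - 3)
    return tanh(z - z_crit * se), tanh(z + z_crit * se)

for n in (36, 64, 260):  # median interjudge, median retest, and mean sample sizes
    lo, hi = fisher_ci(0.80, n)
    print(f"N={n:3d}: 95% CI ({lo:.3f}, {hi:.3f}), width {hi - lo:.3f}")
```

The interval at N = 36 is roughly three times as wide as at N = 260, which is the sense in which small reliability samples yield imprecise coefficients.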

  17. Analysis of Sample Size, Counting Time, and Plot Size from an Avian Point Count Survey on Hoosier National Forest, Indiana

    Treesearch

    Frank R. Thompson; Monica J. Schwalbach

    1995-01-01

    We report results of a point count survey of breeding birds on Hoosier National Forest in Indiana. We determined sample size requirements to detect differences in means and the effects of count duration and plot size on individual detection rates. Sample size requirements ranged from 100 to >1000 points with Type I and II error rates of <0.1 and 0.2. Sample...

  18. Pre-menopausal triple-negative breast cancer at HAM hospital medan

    NASA Astrophysics Data System (ADS)

    Betty; Laksmi, L. I.; Siregar, K. B.

    2018-03-01

    Triple-negative breast cancers (TNBC) are a type of breast cancer that lacks expression of the three receptors: estrogen (ER), progesterone (PR), and human epidermal growth factor receptor 2 (HER-2). This cross-sectional study was performed on TNBC patients at HAM hospital Medan from 2013 to 2016 using immunohistochemical staining. A total of 60 invasive breast cancer samples with TNBC were included. The most frequent groups among TNBC cases were ages 51-60 years (19 cases, 31.66%) and pre-menopause (34 cases, 57%). Tumor sizes T3 and T4 with staging IIIA and IIIB, histology sub-types IC-NOS and ILC, and histologic grades 2 and 3 were more common in TNBC.

  19. Four hygienic-dietary recommendations as add-on treatment in depression: a randomized-controlled trial.

    PubMed

    García-Toro, Mauro; Ibarra, Olga; Gili, Margalida; Serrano, María J; Oliván, Bárbara; Vicens, Enric; Roca, Miguel

    2012-10-01

    Modifying diet, exercise, sunlight exposure and sleep patterns may be useful in the treatment of depression. Eighty nonseasonal depressive outpatients on antidepressant treatment were randomly assigned either to the active or control group. Four hygienic-dietary recommendations were prescribed together. Outcome measures were assessed blind before and after the six-month intervention period. A better evolution of depressive symptoms, higher rates of responders and remitters, and less psychopharmacological prescribing were found in the active group. Limitations include the small sample size and a lack of homogeneity concerning affective disorders (major depression, dysthymia, bipolar depression). This study suggests lifestyle recommendations can be used as an effective complementary antidepressant strategy in daily practice. Copyright © 2012 Elsevier B.V. All rights reserved.

  20. Size analysis of polyglutamine protein aggregates using fluorescence detection in an analytical ultracentrifuge.

    PubMed

    Polling, Saskia; Hatters, Danny M; Mok, Yee-Foong

    2013-01-01

    Defining the aggregation process of proteins formed by poly-amino acid repeats in cells remains a challenging task due to a lack of robust techniques for their isolation and quantitation. Sedimentation velocity methodology using fluorescence detected analytical ultracentrifugation is one approach that can offer significant insight into aggregation formation and kinetics. While this technique has traditionally been used with purified proteins, it is now possible for substantial information to be collected with studies using cell lysates expressing a GFP-tagged protein of interest. In this chapter, we describe protocols for sample preparation and setting up the fluorescence detection system in an analytical ultracentrifuge to perform sedimentation velocity experiments on cell lysates containing aggregates formed by poly-amino acid repeat proteins.

  1. Neuston Trawl and Whole Water Samples: A Comparison of Microplastic Sampling Techniques in The Gulf of Maine and Their Application to Citizen-Driven Science

    NASA Astrophysics Data System (ADS)

    Kautz, M.

    2016-12-01

    Microplastic research in aquatic environments has quickly evolved over the last decade. To have meaningful inter-study comparisons, it is necessary to define methodological criteria for both the sampling and sorting of microplastics. The most common sampling method used for sea surface samples has traditionally been a neuston net (NN) tow. Originally designed for plankton collection, neuston tows allow for a large volume of water to be sampled and can be coupled with phytoplankton monitoring. The widespread use of surface nets allows for easy comparison between data sets, but the units of measurement for calculating microplastic concentration vary from surface area (m² or km²) to volume of water sampled (m³). Contamination by the air, equipment, or sampler is a constant concern in microplastic research. Significant in-field contamination concerns for neuston tow sampling include air exposure time, microplastics in rinse water, sampler contact, and plastic net material. Seeking to overcome the lack of contamination control and the intrinsic instrumental size limitation associated with surface tow nets, we developed an alternative sampling method. The whole water (WW) method is a one-liter grab sample of surface water adapted from College of the Atlantic and Sea Education Association (SEA) student, Marina Garland. This is the only WW method that we are aware of being used to sample microplastic. The method addresses the increasing need to explore smaller size domains, to reduce potential contamination and to incorporate citizen scientists into data collection. Less water is analyzed using the WW method, but it allows for targeted sampling of point-source pollution, intertidal, and shallow areas. The WW methodology can easily be integrated into long-term or citizen science monitoring initiatives due to its simplicity and low equipment demands. 
The aim of our study was to demonstrate a practical and economically feasible method for sampling microplastic abundance at the micro (10⁻⁶ m) and nano (10⁻⁸ m) scale that can be used in a wide variety of environments, and for assessing spatial and temporal distributions. The method has been employed in a multi-year citizen science collaboration with Adventurers and Scientists for Conservation to study microplastic worldwide.

  2. Interannual variability of dust-mass loading and composition of dust deposited on snow cover in the San Juan Mountains, CO, USA: Insights into effects on snow melt

    NASA Astrophysics Data System (ADS)

    Goldstein, H. L.; Reynolds, R. L.; Derry, J.; Kokaly, R. F.; Moskowitz, B. M.

    2017-12-01

    Dust deposited on snow cover (DOS) in the American West can enhance snow-melt rates and advance the timing of melting, which together can result in earlier-than-normal runoff and overall smaller late-season water supplies. Understanding DOS properties and how they affect the absorption of solar radiation can lead to improved snow-melt models by accounting for important dust components. Here, we report on the interannual variability of DOS mass loading, particle size, organic matter, and iron mineralogy, and their correspondences to laboratory-measured reflectance of samples from the Swamp Angel Study Plot in the San Juan Mountains, Colorado, USA. Samples were collected near the end of spring in water year 2009 (WY09) and from WY11-WY16, when dust layers deposited throughout the year had merged into one layer at the snow surface. Dust-mass loading on snow ranged from 2 to 64 g/m², mostly as particles with median sizes of 13-33 µm. Average reflectance values of DOS varied little across total (0.4 to 2.5 µm) and visible (0.4 to 0.7 µm) wavelengths at 0.30-0.45 and 0.19-0.27, respectively. Reflectance values lacked correspondence to particle size. Total reflectance values inversely corresponded to concentrations of (1) organic matter content (4-20 weight %; r² = 0.71), which included forms of black carbon and locally derived material such as pollen, and (2) magnetite (0.05 to 0.13 weight %; r² = 0.44). Magnetite may be a surrogate for related dark, light-absorbing minerals. Concentrations of crystalline ferric oxide minerals (hematite + goethite) based on magnetic properties at room temperature did not show an inverse association with visible reflectance values. These ferric oxide measures, however, did not account for the amounts of nano-sized ferric oxides known to exist in these samples. Quantification of such nano-sized particles is required to evaluate their possible effects on visible reflectance. 
Nonetheless, our results emphasize that reflectance values of year-end DOS layers at this site do not appear to be highly sensitive to variations in some measured DOS properties. These preliminary results cannot be broadly applied to other DOS sites in the American West on the basis of previous and ongoing studies.

  3. Seabed texture and composition changes offshore of Port Royal Sound, South Carolina before and after the dredging for beach nourishment

    NASA Astrophysics Data System (ADS)

    Xu, Kehui; Sanger, Denise; Riekerk, George; Crowe, Stacie; Van Dolah, Robert F.; Wren, P. Ansley; Ma, Yanxia

    2014-08-01

    Beach nourishment has been a strategy widely used to slow down coastal erosion in many beaches around the world. The dredging of sand at the borrow site, however, can have complicated physical, geological and ecological impacts. Our current knowledge is insufficient to make accurate predictions of sediment infilling in many dredging pits due to lack of detailed sediment data. Two sites in the sandy shoal southeast of Port Royal Sound (PRS) of South Carolina, USA, were sampled 8 times from April 2010 to March 2013; one site (defined as 'borrow site') was 2 km offshore and used as the dredging site for beach nourishment of nearby Hilton Head Island in Beaufort County, South Carolina, and the other site (defined as 'reference site') was 10 km offshore and not directly impacted by the dredging. A total of 184 surficial sediment samples were collected randomly at two sites during 8 sampling periods. Most sediments were fine sand, with an average grain size of 2.3 phi and an organic matter content less than 2%. After the dredging in December 2011-January 2012, sediments at the borrow site became finer, changing from 1.0 phi to 2.3 phi, and carbonate content decreased from 10% to 4%; changes in mud content and organic matter were small. Compared with the reference site, the borrow site experienced larger variations in mud and carbonate content. An additional 228 sub-samples were gathered from small cores collected at 5 fixed stations in the borrow site and 1 fixed station at the reference site 0, 3, 6, 9, and 12 months after the dredging; these down-core sub-samples were divided into 1-cm slices and analyzed using a laser diffraction particle size analyzer. Most cores were uniform vertically and consisted of fine sand with well to moderately well sorting and nearly symmetrical averaged skewness. Based on the analysis of grain size populations, 2 phi- and 3 phi-sized sediments were the most dynamic sand fractions in PRS. 
Mud deposition on shoals offshore of PRS presumably occurs when offshore mud transport is prevalent and is followed by rapid sand accumulation that buries the mud. In this borrow site, however, there was very little accumulation of mud, which should allow the site to be used in future nourishment projects, presuming no mud accumulates in the future.

  4. Large-size monodisperse latexes as a commercial space product

    NASA Technical Reports Server (NTRS)

    Kornfeld, D. M.

    1977-01-01

    Proposed spacelab production of large-size (2-40 micron diameter) monodispersed latexes is discussed. Explanations are given for the present lack of monodisperse particles in this size range. The four main topics discussed are: (1) the potential uses of these large particle size latexes, (2) why it is necessary for the particles to have a very narrow size distribution, (3) why large amounts of these monodisperse latexes are needed, and (4) why it is necessary to go to microgravity to prepare these latexes.

  5. 7 CFR 51.1406 - Sample for grade or size determination.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ..., AND STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Sample for Grade Or Size Determination § 51.1406 Sample for grade or size determination. Each sample shall consist of 100 pecans. The...

  6. The quality of the reported sample size calculations in randomized controlled trials indexed in PubMed.

    PubMed

    Lee, Paul H; Tse, Andy C Y

    2017-05-01

    There are limited data on the quality of reporting of the information essential for replicating sample size calculations, or on the accuracy of those calculations. We examined the quality of reporting of sample size calculations in randomized controlled trials (RCTs) published in PubMed-indexed journals and the variation in reporting across study design, study characteristics, and journal impact factor. We also reviewed the targeted sample sizes reported in trial registries. We reviewed and analyzed all RCTs published in December 2014 in journals indexed in PubMed. The 2014 impact factors of the journals were used as proxies for their quality. Of the 451 analyzed papers, 58.1% reported an a priori sample size calculation. Nearly all papers provided the level of significance (97.7%) and desired power (96.6%), and most reported the minimum clinically important effect size (73.3%). The median percentage difference between the reported and recalculated sample sizes was 0.0% (IQR -4.6% to 3.0%). The accuracy of the reported sample size was better for studies published in journals that endorsed the CONSORT statement and journals with an impact factor. A total of 98 papers provided a targeted sample size in trial registries; about two-thirds of these papers (n=62) reported a sample size calculation, but only 25 (40.3%) showed no discrepancy with the number reported in the trial registries. The reporting of sample size calculations in RCTs published in PubMed-indexed journals and trial registries was poor. The CONSORT statement should be more widely endorsed. Copyright © 2016 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.
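The elements the review checks for (significance level, power, minimum clinically important effect size, and variability) combine in the standard two-sample formula for a continuous outcome. The input values below are illustrative, not drawn from any reviewed trial:

```python
# Per-group sample size for a two-sample comparison of means:
# n = 2 * ((z_{1-alpha/2} + z_{power}) * sd / mcid)^2, rounded up.
from math import ceil
from statistics import NormalDist

def n_per_group(alpha, power, mcid, sd):
    z = NormalDist().inv_cdf
    return ceil(2 * ((z(1 - alpha / 2) + z(power)) * sd / mcid) ** 2)

print(n_per_group(alpha=0.05, power=0.80, mcid=1.0, sd=2.0))  # 63 per group
```

Replicating a reported calculation amounts to plugging the four reported elements back into this (or the trial's stated) formula, which is why the review treats all four as required for replication.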

  7. Distribution of the two-sample t-test statistic following blinded sample size re-estimation.

    PubMed

    Lu, Kaifeng

    2016-05-01

    We consider blinded sample size re-estimation based on the simple one-sample variance estimator at an interim analysis. We characterize the exact distribution of the standard two-sample t-test statistic at the final analysis. We describe a simulation algorithm for evaluating the probability of rejecting the null hypothesis at a given treatment effect. We compare the blinded sample size re-estimation method with two unblinded methods with respect to the empirical type I error, the empirical power, and the empirical distribution of the standard deviation estimator and final sample size. We characterize the type I error inflation across the range of standardized non-inferiority margins for non-inferiority trials, and derive the adjusted significance level to ensure type I error control for a given sample size of the internal pilot study. We show that the adjusted significance level increases as the sample size of the internal pilot study increases. Copyright © 2016 John Wiley & Sons, Ltd.
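A minimal sketch of the blinded re-estimation step (assumed details; the paper's exact procedure and adjustments may differ): pool the interim outcomes without unblinding, apply the simple one-sample variance estimator, and recompute the per-group size from the standard formula:

```python
# Sketch of blinded sample size re-estimation: the treatment labels are never
# used, so the trial stays blinded. Interim outcome values are hypothetical.
from math import ceil
from statistics import NormalDist, variance

def reestimated_n(pooled_outcomes, delta, alpha=0.05, power=0.90):
    z = NormalDist().inv_cdf
    s2 = variance(pooled_outcomes)  # one-sample variance, labels ignored
    return ceil(2 * ((z(1 - alpha / 2) + z(power)) * s2**0.5 / delta) ** 2)

interim = [4.1, 5.3, 3.8, 6.0, 4.9, 5.5, 4.4, 6.2, 3.9, 5.1]  # hypothetical
print(reestimated_n(interim, delta=0.8))
```

As the paper notes, naively plugging this blinded variance estimate into the final t-test can inflate the type I error, which is why an adjusted significance level is derived there.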

  8. ENHANCEMENT OF LEARNING ON SAMPLE SIZE CALCULATION WITH A SMARTPHONE APPLICATION: A CLUSTER-RANDOMIZED CONTROLLED TRIAL.

    PubMed

    Ngamjarus, Chetta; Chongsuvivatwong, Virasakdi; McNeil, Edward; Holling, Heinz

    2017-01-01

    Sample size determination is usually taught on a theoretical basis and is difficult to understand. Using a smartphone application to teach sample size calculation ought to be more attractive to students than lectures alone. This study compared levels of understanding of sample size calculations for research studies between participants attending a lecture only versus a lecture combined with a smartphone application for calculating sample sizes, explored factors affecting post-test scores after training in sample size calculation, and investigated participants' attitudes toward a sample size application. A cluster-randomized controlled trial involving a number of health institutes in Thailand was carried out from October 2014 to March 2015. A total of 673 professional participants were enrolled and randomly allocated to one of two groups: 341 participants in 10 workshops to the control group and 332 participants in 9 workshops to the intervention group. Lectures on sample size calculation were given to the control group, while lectures using a smartphone application were supplied to the intervention group. Participants in the intervention group had better learning of sample size calculation (2.7 points out of a maximum 10 points, 95% CI: 2.4 - 2.9) than participants in the control group (1.6 points, 95% CI: 1.4 - 1.8). Participants doing research projects had a higher post-test score than those without plans to conduct research projects (0.9 point, 95% CI: 0.5 - 1.4). The majority of participants had a positive attitude towards the use of the smartphone application for learning sample size calculation.

  9. Analysis of spatial patterns informs community assembly and sampling requirements for Collembola in forest soils

    NASA Astrophysics Data System (ADS)

    Dirilgen, Tara; Juceviča, Edite; Melecis, Viesturs; Querner, Pascal; Bolger, Thomas

    2018-01-01

    The relative importance of niche separation, non-equilibrial and neutral models of community assembly has been a theme in community ecology for many decades, with none appearing to be applicable under all circumstances. In this study, Collembola species abundances were recorded over eleven consecutive years in a spatially explicit grid and used to examine (i) whether observed beta diversity differed from that expected under conditions of neutrality, (ii) whether sampling points differed in their relative contributions to overall beta diversity, and (iii) the number of samples required to provide comparable estimates of species richness across three forest sites. Neutrality could not be rejected for 26 of the forest-by-year combinations. However, there is a trend toward greater structure in the oldest forest, where beta diversity was greater than predicted by neutrality on five of the eleven sampling dates. The lack of difference in individual- and sample-based rarefaction curves also suggests randomness in the system at this particular scale of investigation. It seems that Collembola communities are not spatially aggregated and that assembly is driven primarily by neutral processes, particularly in the two younger sites. Whether this finding is due to small sample size or unaccounted-for environmental variables cannot be determined. Variability between dates and sites illustrates the potential for drawing incorrect conclusions if data are collected at a single site and a single point in time.
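The richness comparison above rests on rarefaction; the individual-based version has a closed form, sketched here with hypothetical abundance counts rather than the Collembola data:

```python
# Individual-based rarefaction: expected species richness in a random
# subsample of n individuals drawn without replacement from pooled counts.
# E[S_n] = sum_i [1 - C(N - N_i, n) / C(N, n)], with N = total individuals.
from math import comb

def rarefied_richness(counts, n):
    N = sum(counts)
    return sum(1 - comb(N - Ni, n) / comb(N, n) for Ni in counts)

counts = [50, 20, 10, 5, 3, 1, 1]  # hypothetical abundances of 7 species
print(round(rarefied_richness(counts, 30), 2))
```

Plotting this expectation against n gives the rarefaction curve; curves for two sites can then be compared at a common n, which is how comparable richness estimates are obtained from unequal sampling effort.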

  10. Impact of intermittent fasting on the lipid profile: Assessment associated with diet and weight loss.

    PubMed

    Santos, Heitor O; Macedo, Rodrigo C O

    2018-04-01

    Intermittent fasting, whose proposed benefits include improvement of the lipid profile and body weight loss, has gained considerable scientific and popular repercussion. This review aimed to consolidate studies that analyzed the lipid profile in humans before and after an intermittent fasting period, and to propose the physiological mechanism, considering the diet and the body weight loss. Normocaloric and hypocaloric intermittent fasting may be a dietary method to aid in the improvement of the lipid profile in healthy, obese and dyslipidemic men and women by reducing total cholesterol, LDL, triglycerides and increasing HDL levels. However, the majority of studies that analyze the impact of intermittent fasting on the lipid profile and body weight loss are observational studies based on Ramadan fasting, which lack large samples and detailed dietary information. Randomized clinical trials with larger sample sizes are needed to evaluate the effects of intermittent fasting, mainly in patients with dyslipidemia. Copyright © 2018 European Society for Clinical Nutrition and Metabolism. Published by Elsevier Ltd. All rights reserved.

  11. Layer Additive Production or Manufacturing of Thick Sections of Ti-6Al-4V by Selective Electron Beam Melting (SEBM)

    NASA Astrophysics Data System (ADS)

    Sun, Y. Y.; Gulizia, S.; Fraser, D.; Oh, C. H.; Lu, S. L.; Qian, M.

    2017-10-01

    Selective electron beam melting (SEBM) is an established layer additive manufacturing or production process for small-to-medium-sized components of Ti-6Al-4V. Current literature data on SEBM of Ti-6Al-4V are, however, based principally on thin-section (<1″; mostly <0.5″) samples or components. In this research, 34-mm-thick (1.34″) Ti-6Al-4V block samples were produced through use of default Arcam SEBM parameters and characterized versus section thickness. High densities (99.4-99.8%) were achieved across different thick sections, but markedly inhomogeneous microstructures also developed. Nonetheless, the tensile properties measured from 27 different thickness-width positions all clearly satisfied the minimum requirements for mill-annealed Ti-6Al-4V. SEBM produced highly dense thick sections of Ti-6Al-4V with good tensile properties. Large lack-of-fusion defects (80-250 µm) were found to be mainly responsible for variations in tensile properties.

  12. New evidence of the effects of education on health in the US: compulsory schooling laws revisited.

    PubMed

    Fletcher, Jason M

    2015-02-01

    Estimating the effects of education on health and mortality has been the subject of intense debate and competing findings and summaries. The original Lleras-Muney (2005) methods utilizing state compulsory schooling laws as instrumental variables for completed education and US data to establish effects of education on mortality have been extended to several countries, with mixed and often null findings. However, additional US studies have lagged behind due to small samples and/or lack of mortality information in many available datasets. This paper uses a large, novel survey from the AARP on several hundred thousand respondents to present new evidence of the effects of education on a variety of health outcomes. Results suggest that education may have a role in improving several dimensions of health, such as self reports, cardiovascular outcomes, and weight outcomes. Other results appear underpowered, suggesting that further use of this methodology may require even larger, and potentially unattainable, sample sizes in the US. Copyright © 2014 Elsevier Ltd. All rights reserved.

  13. Key issues to consider and innovative ideas on fall prevention in the geriatric department of a teaching hospital.

    PubMed

    Chan, Daniel Ky; Sherrington, Cathie; Naganathan, Vasi; Xu, Ying Hua; Chen, Jack; Ko, Anita; Kneebone, Ian; Cumming, Robert

    2018-06-01

    Falls in hospital are common and up to 70% result in injury, leading to increased length of stay and accounting for 10% of patient safety-related deaths. Yet, high-quality evidence guiding best practice is lacking. Fall prevention strategies have worked in some trials but not in others. Differences in study setting (acute, subacute, rehabilitation) and sampling of patients (cognitively intact or impaired) may explain the difference in results. This article discusses these important issues and describes the strategies to prevent falls in the acute hospital setting we have studied, which engage the cognitively impaired who are more likely to fall. We have used video clips rather than verbal instruction to educate patients, and are optimistic that this approach may work. We have also explored the option of co-locating high fall risk patients in a close observation room for supervision, with promising results. Further studies using larger sample sizes are required to confirm our findings. © 2018 AJA Inc.

  14. A New Approach of Juvenile Age Estimation using Measurements of the Ilium and Multivariate Adaptive Regression Splines (MARS) Models for Better Age Prediction.

    PubMed

    Corron, Louise; Marchal, François; Condemi, Silvana; Chaumoître, Kathia; Adalian, Pascal

    2017-01-01

    Juvenile age estimation methods used in forensic anthropology generally lack methodological consistency and/or statistical validity. Considering this, a standard approach using nonparametric Multivariate Adaptive Regression Splines (MARS) models was tested to predict age from iliac biometric variables of male and female juveniles from Marseilles, France, aged 0-12 years. Models using unidimensional (length and width) and bidimensional iliac data (module and surface) were constructed on a training sample of 176 individuals and validated on an independent test sample of 68 individuals. Results show that MARS prediction models using iliac width, module and area give overall better and statistically valid age estimates. These models integrate punctual nonlinearities of the relationship between age and osteometric variables. By constructing valid prediction intervals whose size increases with age, MARS models take into account the normal increase of individual variability. MARS models can qualify as a practical and standardized approach for juvenile age estimation. © 2016 American Academy of Forensic Sciences.
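    The hinge-function basis that underlies MARS can be illustrated with a toy regression. The sketch below is hedged: it assumes a single fixed knot and a single synthetic "iliac width" predictor fitted by ordinary least squares, whereas real MARS searches knot locations adaptively and prunes basis functions; all variable names and values are hypothetical.

```python
import random

def hinge(x, knot):
    # MARS-style hinge basis function: max(0, x - knot)
    return max(0.0, x - knot)

def fit_least_squares(X, y):
    # Solve the normal equations (X^T X) b = X^T y by Gaussian elimination
    # with partial pivoting (fine for a handful of basis functions).
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * k
    for r in range(k - 1, -1, -1):
        coef[r] = (b[r] - sum(A[r][c] * coef[c] for c in range(r + 1, k))) / A[r][r]
    return coef

# Synthetic width/age data whose growth rate changes at width = 60
# (hypothetical units) -- the kind of punctual nonlinearity MARS captures.
random.seed(1)
widths = [random.uniform(30, 120) for _ in range(200)]
ages = [0.10 * w + 0.08 * hinge(w, 60) + random.gauss(0, 0.3) for w in widths]

knot = 60.0  # fixed here; real MARS chooses knots from the data
X = [[1.0, w, hinge(w, knot)] for w in widths]
coef = fit_least_squares(X, ages)

def predict(w):
    return coef[0] + coef[1] * w + coef[2] * hinge(w, knot)
```

    The fitted model is linear below the knot and picks up an extra slope above it, which is how MARS represents a piecewise-linear age/variable relationship.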

  15. Investigating the intermediates in the reaction of ribonucleoside triphosphate reductase from Lactobacillus leichmannii : An application of HF EPR-RFQ technology

    NASA Astrophysics Data System (ADS)

    Manzerova, Julia; Krymov, Vladimir; Gerfen, Gary J.

    2011-12-01

    In this investigation high-frequency electron paramagnetic resonance spectroscopy (HFEPR) in conjunction with innovative rapid freeze-quench (RFQ) technology is employed to study the exchange-coupled thiyl radical-cob(II)alamin system in ribonucleotide reductase from the prokaryote Lactobacillus leichmannii. The size of the exchange coupling (Jex) and the values of the thiyl radical g tensor are refined, while confirming the previously determined (Gerfen et al. (1996) [20]) distance between the paramagnets. Conclusions relevant to ribonucleotide reductase catalysis and the architecture of the active site are presented. A key part of this work has been the development of a unique RFQ apparatus for the preparation of millisecond quench time RFQ samples which can be packed into small (0.5 mm ID) sample tubes used for CW and pulsed HFEPR; the lack of this ability has heretofore precluded such studies. The technology is compatible with a broad range of spectroscopic techniques and can be readily adopted by other laboratories.

  16. Scene-based Shack-Hartmann wavefront sensor for light-sheet microscopy

    NASA Astrophysics Data System (ADS)

    Lawrence, Keelan; Liu, Yang; Dale, Savannah; Ball, Rebecca; VanLeuven, Ariel J.; Sornborger, Andrew; Lauderdale, James D.; Kner, Peter

    2018-02-01

    Light-sheet microscopy is an ideal imaging modality for long-term live imaging in model organisms. However, significant optical aberrations can be present when imaging into an organism that is hundreds of microns or greater in size. To measure and correct optical aberrations, an adaptive optics system must be incorporated into the microscope. Many biological samples lack point sources that can be used as guide stars with conventional Shack-Hartmann wavefront sensors. We have developed a scene-based Shack-Hartmann wavefront sensor for measuring the optical aberrations in a light-sheet microscopy system that does not require a point-source and can measure the aberrations for different parts of the image. The sensor has 280 lenslets inside the pupil, creates an image from each lenslet with a 500 micron field of view and a resolution of 8 microns, and has a resolution for the wavefront gradient of 75 milliradians per lenslet. We demonstrate the system on both fluorescent bead samples and zebrafish embryos.
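    A scene-based Shack-Hartmann sensor replaces the point-source centroid with a cross-correlation between each lenslet sub-image and a reference image; the location of the correlation peak gives the local image shift and hence the wavefront gradient. The sketch below reduces this to one dimension with a synthetic scene and integer-pixel shifts (real sensors interpolate the peak for sub-pixel precision); all values are illustrative.

```python
import random

def cross_correlate_shift(ref, img, max_shift):
    # Estimate the integer-pixel shift of `img` relative to `ref` by
    # maximising the mean-subtracted cross-correlation over candidate shifts.
    n = len(ref)
    best_shift, best_score = 0, float("-inf")
    for s in range(-max_shift, max_shift + 1):
        pairs = [(ref[i], img[i + s]) for i in range(n) if 0 <= i + s < n]
        mr = sum(p[0] for p in pairs) / len(pairs)
        mi = sum(p[1] for p in pairs) / len(pairs)
        score = sum((a - mr) * (b - mi) for a, b in pairs)
        if score > best_score:
            best_score, best_shift = score, s
    return best_shift

# Synthetic extended scene (no point source) and a copy displaced by 3 pixels,
# as one lenslet's sub-image would be displaced by a local wavefront tilt.
random.seed(0)
scene = [random.random() for _ in range(128)]
true_shift = 3
shifted = [scene[(i - true_shift) % 128] for i in range(128)]

est = cross_correlate_shift(scene, shifted, max_shift=10)  # → 3
```

    Repeating this per lenslet yields the gradient map from which the wavefront is reconstructed, without requiring a guide star.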

  17. Population structure and connectivity of tiger sharks (Galeocerdo cuvier) across the Indo-Pacific Ocean basin

    PubMed Central

    Williams, Samuel M.; Otway, Nicholas M.; Nielsen, Einar E.; Maher, Safia L.; Bennett, Mike B.; Ovenden, Jennifer R.

    2017-01-01

    Population genetic structure using nine polymorphic nuclear microsatellite loci was assessed for the tiger shark (Galeocerdo cuvier) at seven locations across the Indo-Pacific, and one location in the southern Atlantic. Genetic analyses revealed considerable genetic structuring (FST > 0.14, p < 0.001) between all Indo-Pacific locations and Brazil. By contrast, no significant genetic differences were observed between locations from within the Pacific or Indian Oceans, identifying an apparent large, single Indo-Pacific population. A lack of differentiation between tiger sharks sampled in Hawaii and other Indo-Pacific locations identified herein is in contrast to an earlier global tiger shark nDNA study. The results of our power analysis provide evidence to suggest that the larger sample sizes used here negated any weak population subdivision observed previously. These results further highlight the need for cross-jurisdictional efforts to manage the sustainable exploitation of large migratory sharks like G. cuvier. PMID:28791159

  18. Population structure and connectivity of tiger sharks (Galeocerdo cuvier) across the Indo-Pacific Ocean basin.

    PubMed

    Holmes, Bonnie J; Williams, Samuel M; Otway, Nicholas M; Nielsen, Einar E; Maher, Safia L; Bennett, Mike B; Ovenden, Jennifer R

    2017-07-01

    Population genetic structure using nine polymorphic nuclear microsatellite loci was assessed for the tiger shark (Galeocerdo cuvier) at seven locations across the Indo-Pacific, and one location in the southern Atlantic. Genetic analyses revealed considerable genetic structuring (FST > 0.14, p < 0.001) between all Indo-Pacific locations and Brazil. By contrast, no significant genetic differences were observed between locations from within the Pacific or Indian Oceans, identifying an apparent large, single Indo-Pacific population. A lack of differentiation between tiger sharks sampled in Hawaii and other Indo-Pacific locations identified herein is in contrast to an earlier global tiger shark nDNA study. The results of our power analysis provide evidence to suggest that the larger sample sizes used here negated any weak population subdivision observed previously. These results further highlight the need for cross-jurisdictional efforts to manage the sustainable exploitation of large migratory sharks like G. cuvier.

  19. Application of SEC-ICP-MS for comparative analyses of metal-containing species in cancerous and healthy human thyroid samples.

    PubMed

    Boulyga, Sergei F; Loreti, Valeria; Bettmer, Jörg; Heumann, Klaus G

    2004-09-01

    Size exclusion chromatography (SEC) was coupled on-line to inductively coupled plasma mass spectrometry (ICP-MS) for a speciation study of trace metals in cancerous thyroid tissues compared with healthy thyroids, aimed at estimating changes in metalloprotein speciation in pathological tissue. The study showed the presence of species binding Cu, Zn, Cd and Pb in healthy thyroid tissue, with good reproducibility of the chromatographic results, whereas the same species could not be detected in cancerous tissues. Thus, remarkable differences with respect to metal-binding species were revealed between healthy and pathological thyroid samples, pointing to a completely different distribution of trace metals in cancerous tissues. The metal-binding species could not be identified within the scope of this work because of a lack of appropriate standards. Nevertheless, the results obtained confirm the suitability of SEC-ICP-MS for monitoring changes in trace metal distribution in cancerous tissue and will help to better understand the role of metal-containing species in thyroid pathology.

  20. Challenges in collecting clinical samples for research from pregnant women of South Asian origin: evidence from a UK study.

    PubMed

    Neelotpol, Sharmind; Hay, Alastair W M; Jolly, A Jim; Woolridge, Mike W

    2016-08-31

    To recruit South Asian pregnant women living in the UK into a clinicoepidemiological study for the collection of lifestyle survey data and antenatal blood, and to retain the women for the later collection of cord blood and meconium samples from their babies for biochemical analysis. A longitudinal study recruiting pregnant women of South Asian and Caucasian origin living in the UK. Recruitment of the participants and collection of clinical samples and survey data took place at two sites within a single UK Northern Hospital Trust. Pregnant women of South Asian origin (study group, n=98) and of Caucasian origin (comparison group, n=38) living in Leeds, UK. Among the participants approached, 81% agreed to take part in the study when a 'direct approach' method was used. The retention rate of the participants was a remarkable 93.4%. The main challenges in recruiting the ethnic minority participants were cultural and religious conservatism, the language barrier, lack of interest and a feeling of extra 'stress' in taking part in research. The chief investigator developed an innovative participant retention method, aligned with the women's cultural and religious practices. The method proved useful in retaining the participants for about 5 months and in enabling successful collection of clinical samples from the same mother-baby pairs. The collection of clinical samples and lifestyle data exceeded the calculated sample size required to give the study sufficient power. The numbers of samples obtained were: maternal blood (n=171), cord blood (n=38), meconium (n=176), lifestyle questionnaire data (n=136) and postnatal records (n=136). Recruitment and retention of participants, according to the calculated sample size, ensured sufficient power and success for a clinicoepidemiological study.
Results suggest that development of trust and confidence between the participant and the researcher is the key to the success of a clinical and epidemiological study involving ethnic minorities. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  1. Developing the Noncentrality Parameter for Calculating Group Sample Sizes in Heterogeneous Analysis of Variance

    ERIC Educational Resources Information Center

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2011-01-01

    Sample size determination is an important issue in planning research. In the context of one-way fixed-effect analysis of variance, the conventional sample size formula cannot be applied for the heterogeneous variance cases. This study discusses the sample size requirement for the Welch test in the one-way fixed-effect analysis of variance with…
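    The power that such sample size formulae target can also be checked by simulation. The sketch below handles the two-group unequal-variance case with Welch's t statistic and a large-sample 1.96 critical value; it is a hedged Monte Carlo illustration, not the paper's analytic noncentrality-parameter derivation, and all parameter values are hypothetical.

```python
import math
import random

def welch_power(n1, n2, mu1, mu2, sd1, sd2, sims=2000, seed=42):
    # Monte Carlo power of Welch's unequal-variance t-test, using the
    # large-sample critical value 1.96 (normal approximation).
    rng = random.Random(seed)
    hits = 0
    for _ in range(sims):
        g1 = [rng.gauss(mu1, sd1) for _ in range(n1)]
        g2 = [rng.gauss(mu2, sd2) for _ in range(n2)]
        m1, m2 = sum(g1) / n1, sum(g2) / n2
        v1 = sum((x - m1) ** 2 for x in g1) / (n1 - 1)
        v2 = sum((x - m2) ** 2 for x in g2) / (n2 - 1)
        t = (m1 - m2) / math.sqrt(v1 / n1 + v2 / n2)
        if abs(t) > 1.96:
            hits += 1
    return hits / sims

# Heterogeneous variances: sd = 1 in one group, 2 in the other.
power = welch_power(n1=50, n2=50, mu1=0.0, mu2=1.0, sd1=1.0, sd2=2.0)
```

    Under the null (mu1 = mu2) the same function recovers a rejection rate near the nominal 5%, which is a quick sanity check on the heterogeneous-variance setup.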

  2. Sample Size Determination for Regression Models Using Monte Carlo Methods in R

    ERIC Educational Resources Information Center

    Beaujean, A. Alexander

    2014-01-01

    A common question asked by researchers using regression models is, What sample size is needed for my study? While there are formulae to estimate sample sizes, their assumptions are often not met in the collected data. A more realistic approach to sample size determination requires more information such as the model of interest, strength of the…
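    The Monte Carlo approach the article describes can be sketched in a few lines: simulate data from the assumed regression model at each candidate n, estimate power as the rejection rate, and take the smallest n that reaches the target. This is an illustrative sketch in Python rather than R, using a large-sample |t| > 1.96 rejection rule; all parameter values are hypothetical.

```python
import math
import random

def slope_power(n, beta, sigma, sims=1000, seed=7):
    # Monte Carlo power for testing H0: slope = 0 in simple linear
    # regression with standard-normal x and noise sd sigma.
    rng = random.Random(seed)
    hits = 0
    for _ in range(sims):
        xs = [rng.gauss(0, 1) for _ in range(n)]
        ys = [beta * x + rng.gauss(0, sigma) for x in xs]
        mx, my = sum(xs) / n, sum(ys) / n
        sxx = sum((x - mx) ** 2 for x in xs)
        b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
        resid = [y - my - b * (x - mx) for x, y in zip(xs, ys)]
        se = math.sqrt(sum(r * r for r in resid) / (n - 2) / sxx)
        if abs(b / se) > 1.96:
            hits += 1
    return hits / sims

def required_n(beta, sigma, target=0.80, grid=range(20, 201, 10)):
    # Smallest n on the grid whose simulated power reaches the target.
    for n in grid:
        if slope_power(n, beta, sigma) >= target:
            return n
    return None

n_needed = required_n(beta=0.3, sigma=1.0)
```

    The appeal of the simulation route is that any model feature (skewed predictors, heteroscedastic errors, multiple covariates) can be built into the data-generating step without new formulae.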

  3. Nomogram for sample size calculation on a straightforward basis for the kappa statistic.

    PubMed

    Hong, Hyunsook; Choi, Yunhee; Hahn, Seokyung; Park, Sue Kyung; Park, Byung-Joo

    2014-09-01

    Kappa is a widely used measure of agreement. However, it may not be straightforward in some situations, such as sample size calculation, due to the kappa paradox: high agreement but low kappa. Hence, it seems reasonable in sample size calculation to consider the level of agreement under a certain marginal prevalence in terms of a simple proportion of agreement rather than a kappa value. Therefore, sample size formulae and nomograms using a simple proportion of agreement rather than kappa under certain marginal prevalences are proposed. A sample size formula was derived using the kappa statistic under the common correlation model and a goodness-of-fit statistic. The nomogram for the sample size formula was developed using SAS 9.3. Sample size formulae using a simple proportion of agreement instead of the kappa statistic, and nomograms to eliminate the inconvenience of using a mathematical formula, were produced. A nomogram for sample size calculation with a simple proportion of agreement should be useful in the planning stages when the focus of interest is on testing the hypothesis of interobserver agreement involving two raters and nominal outcome measures. Copyright © 2014 Elsevier Inc. All rights reserved.
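    The general idea of sizing a study on a simple proportion of agreement can be illustrated with the standard normal-approximation formula for estimating a proportion to a given precision. This is a generic sketch, not the paper's common-correlation-model formula, and the numbers are hypothetical.

```python
import math

def n_for_agreement(p0, half_width, z=1.96):
    # Normal-approximation sample size so that a two-sided confidence
    # interval for an anticipated proportion of agreement p0 has the
    # requested half-width: n = z^2 * p0 * (1 - p0) / half_width^2.
    return math.ceil(z ** 2 * p0 * (1 - p0) / half_width ** 2)

# Anticipated 85% agreement, estimated to within +/- 5 percentage points:
n = n_for_agreement(p0=0.85, half_width=0.05)  # → 196
```

    Note how the sample size is driven directly by the anticipated proportion of agreement and the desired precision, with no kappa-paradox behaviour: higher agreement (p0 closer to 1) shrinks the required n.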

  4. Automated particle identification through regression analysis of size, shape and colour

    NASA Astrophysics Data System (ADS)

    Rodriguez Luna, J. C.; Cooper, J. M.; Neale, S. L.

    2016-04-01

    Rapid point-of-care diagnostic tests and tests to provide therapeutic information are now available for a range of specific conditions, from the measurement of blood glucose levels for diabetes to card agglutination tests for parasitic infections. Due to a lack of specificity, these tests are often backed up by more conventional lab-based diagnostic methods; for example, a card agglutination test may be carried out for a suspected parasitic infection in the field and, if positive, a blood sample can then be sent to a lab for confirmation. The eventual diagnosis is often achieved by microscopic examination of the sample. In this paper we propose a computerized vision system for aiding in the diagnostic process; this system uses a novel particle recognition algorithm to improve specificity and speed during the diagnostic process. We show the detection and classification of different types of cells in a diluted blood sample using regression analysis of their size, shape and colour. The first step is to define the objects to be tracked, using a Gaussian mixture model for background subtraction and binary opening and closing for noise suppression. After subtracting the objects of interest from the background, the next challenge is to predict whether a given object belongs to a certain category or not. This is a classification problem, and the output of the algorithm is a Boolean value (true/false); as such, the computer program should be able to "predict" with a reasonable level of confidence whether a given particle belongs to the kind we are looking for. We show the use of a binary logistic regression analysis with three continuous predictors: size, shape and colour histogram. The results suggest these variables could be very useful in a logistic regression equation, as they proved to have a relatively high predictive value on their own.
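    The classification step can be sketched as a binary logistic regression on the three continuous predictors. The example below uses pure-Python gradient descent on synthetic, well-separated "cell" vs "debris" particles (all feature values are hypothetical, in standardised units) and produces the Boolean output the text describes:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.1, epochs=300):
    # Batch gradient descent on the logistic log-loss; w[0] is the intercept.
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        grad = [0.0] * len(w)
        for row, label in zip(X, y):
            p = sigmoid(w[0] + sum(wi * xi for wi, xi in zip(w[1:], row)))
            err = p - label
            grad[0] += err
            for j, xj in enumerate(row):
                grad[j + 1] += err * xj
        w = [wi - lr * g / len(X) for wi, g in zip(w, grad)]
    return w

def predict(w, row):
    # Boolean (true/false) classification, as in the text.
    return sigmoid(w[0] + sum(wi * xi for wi, xi in zip(w[1:], row))) >= 0.5

# Synthetic particles: (size, shape circularity, mean colour), standardised.
random.seed(3)
cells = [[random.gauss(1.0, 0.3), random.gauss(0.9, 0.1), random.gauss(0.8, 0.2)]
         for _ in range(100)]
debris = [[random.gauss(-1.0, 0.3), random.gauss(-0.5, 0.3), random.gauss(-0.6, 0.3)]
          for _ in range(100)]
X = cells + debris
y = [1] * 100 + [0] * 100
w = train_logistic(X, y)
acc = sum(predict(w, r) == (lbl == 1) for r, lbl in zip(X, y)) / len(X)
```

    In practice one would fit on a labelled training set and report held-out accuracy; the fitted weights also indicate how much each of size, shape and colour contributes to the decision.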

  5. An Exploratory Analysis of Projection-Standard Variables (Screen Size, Image Size and Image Contrast) in Terms of Their Effects on the Speed and Accuracy of Discrimination. Final Report.

    ERIC Educational Resources Information Center

    Metcalf, Richard M.

    Although there has been previous research concerned with image size, brightness, and contrast in projection standards, the work has lacked careful conceptualization. In this study, size was measured in terms of the visual angle subtended by the material, brightness was stated in foot-lamberts, and contrast was defined as the ratio of the…

  6. The Impact of Brain-Based Strategies: One School's Perspective

    ERIC Educational Resources Information Center

    Hodges, Jane Allen

    2013-01-01

    Research has shown student inattention, off-task behaviors, and lack of listening skills in the classroom can impact progress in reading, math, and language development. Lack of verbal interaction in home environments, variations in learning and teaching modalities, and larger class sizes contribute to the difficulties students have in developing…

  7. Sample size calculation in cost-effectiveness cluster randomized trials: optimal and maximin approaches.

    PubMed

    Manju, Md Abu; Candel, Math J J M; Berger, Martijn P F

    2014-07-10

    In this paper, the optimal sample sizes at the cluster and person levels for each of two treatment arms are obtained for cluster randomized trials where the cost-effectiveness of treatments on a continuous scale is studied. The optimal sample sizes maximize the efficiency or power for a given budget or minimize the budget for a given efficiency or power. Optimal sample sizes require information on the intra-cluster correlations (ICCs) for effects and costs, the correlations between costs and effects at individual and cluster levels, the ratio of the variance of effects translated into costs to the variance of the costs (the variance ratio), sampling and measuring costs, and the budget. When planning a study, information on the model parameters is usually not available. To overcome this local optimality problem, the current paper also presents maximin sample sizes. The maximin sample sizes turn out to be rather robust against misspecifying the correlation between costs and effects at the cluster and individual levels but may lose much efficiency when misspecifying the variance ratio. The robustness of the maximin sample sizes against misspecifying the ICCs depends on the variance ratio. The maximin sample sizes are robust under misspecification of the ICC for costs for realistic values of the variance ratio greater than one but not robust under misspecification of the ICC for effects. Finally, we show how to calculate optimal or maximin sample sizes that yield sufficient power for a test on the cost-effectiveness of an intervention.
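    A simpler, widely used relative of these calculations is the design-effect inflation, which shows how the ICC and cluster size drive the required sample size in a cluster randomized trial. This is a generic sketch, not the paper's optimal or maximin cost-effectiveness solution, and the input figures are hypothetical.

```python
import math

def cluster_trial_n(n_individual, cluster_size, icc):
    # Inflate an individually randomised sample size by the design effect
    # 1 + (m - 1) * ICC, then round up to whole clusters per arm.
    deff = 1 + (cluster_size - 1) * icc
    n_total = n_individual * deff
    clusters = math.ceil(n_total / cluster_size)
    return clusters, clusters * cluster_size

# Suppose 128 participants per arm would suffice under individual
# randomisation; with 20 per cluster and ICC = 0.05:
clusters, n_total = cluster_trial_n(128, 20, 0.05)  # → (13, 260)
```

    Even a modest ICC of 0.05 nearly doubles the required sample here (design effect 1.95), which is why the ICC assumptions stressed in the abstract matter so much.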

  8. Consistent assignment of nursing staff to residents in nursing homes: a critical review of conceptual and methodological issues.

    PubMed

    Roberts, Tonya; Nolet, Kimberly; Bowers, Barbara

    2015-06-01

    Consistent assignment of nursing staff to residents is promoted by a number of national organizations as a strategy for improving nursing home quality and is included in pay for performance schedules in several states. However, research has shown inconsistent effects of consistent assignment on quality outcomes. In order to advance the state of the science of research on consistent assignment and inform current practice and policy, a literature review was conducted to critique conceptual and methodological understandings of consistent assignment. Twenty original research reports of consistent assignment in nursing homes were found through a variety of search strategies. Consistent assignment was conceptualized and operationalized in multiple ways with little overlap from study to study. There was a lack of established methods to measure consistent assignment. Methodological limitations included a lack of control and statistical analyses of group differences in experimental-level studies, small sample sizes, lack of attention to confounds in multicomponent interventions, and outcomes that were not theoretically linked. Future research should focus on developing a conceptual understanding of consistent assignment focused on definition, measurement, and links to outcomes. To inform current policies, testing consistent assignment should include attention to contexts within and levels at which it is most effective. Published by Oxford University Press on behalf of the Gerontological Society of America 2013.

  9. Sample size determination in group-sequential clinical trials with two co-primary endpoints

    PubMed Central

    Asakura, Koko; Hamasaki, Toshimitsu; Sugimoto, Tomoyuki; Hayashi, Kenichi; Evans, Scott R; Sozu, Takashi

    2014-01-01

    We discuss sample size determination in group-sequential designs with two endpoints as co-primary. We derive the power and sample size within two decision-making frameworks. One is to claim the test intervention’s benefit relative to control when superiority is achieved for the two endpoints at the same interim timepoint of the trial. The other is when the superiority is achieved for the two endpoints at any interim timepoint, not necessarily simultaneously. We evaluate the behaviors of sample size and power with varying design elements and provide a real example to illustrate the proposed sample size methods. In addition, we discuss sample size recalculation based on observed data and evaluate the impact on the power and Type I error rate. PMID:24676799
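    The power behaviour with co-primary endpoints can be sketched by simulation for a single analysis (a deliberate simplification of the group-sequential setting): superiority must be achieved on both endpoints, whose test statistics are modelled as correlated standard normals shifted by their effect sizes. All parameter values below are hypothetical.

```python
import math
import random

def coprimary_power(n_per_arm, d1, d2, rho, alpha_z=1.96, sims=3000, seed=11):
    # Monte Carlo power to show superiority on BOTH endpoints at one
    # analysis. Each z statistic has mean d * sqrt(n/2); the pair is
    # bivariate normal with correlation rho (the endpoint correlation).
    rng = random.Random(seed)
    m1 = d1 * math.sqrt(n_per_arm / 2)
    m2 = d2 * math.sqrt(n_per_arm / 2)
    wins = 0
    for _ in range(sims):
        z1 = rng.gauss(0, 1)
        z2 = rho * z1 + math.sqrt(1 - rho ** 2) * rng.gauss(0, 1)
        if m1 + z1 > alpha_z and m2 + z2 > alpha_z:
            wins += 1
    return wins / sims

power = coprimary_power(n_per_arm=100, d1=0.5, d2=0.5, rho=0.3)
```

    The joint requirement makes the smaller of the two effects the binding constraint, which is why co-primary designs typically need larger samples than single-endpoint trials.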

  10. Copper NPs decorated titania: A novel synthesis by high energy US with a study of the photocatalytic activity under visible light.

    PubMed

    Stucchi, Marta; Bianchi, Claudia L; Pirola, Carlo; Cerrato, Giuseppina; Morandi, Sara; Argirusis, Christos; Sourkouni, Georgia; Naldoni, Alberto; Capucci, Valentino

    2016-07-01

    The most important drawback of the use of TiO2 as a photocatalyst is its lack of activity under visible light. To overcome this problem, the surface modification of commercial micro-sized TiO2 by means of high-energy ultrasound (US), employing CuCl2 as the precursor molecule to obtain both metallic copper and copper oxide species at the TiO2 surface, is presented here. We have prepared samples with different copper contents, in order to evaluate the impact on the photocatalytic performance of the semiconductor, and studied in particular the photodegradation in the gas phase of some volatile organic compounds (VOCs), namely acetone and acetaldehyde. We used a LED lamp in order to have only the contribution of the visible wavelengths to the TiO2 activation (typical LED lights have no emission in the UV region). We employed several techniques (i.e., HR-TEM, XRD, FT-IR and UV-Vis) to characterize the prepared samples, evidencing different sample morphologies as a function of the copper content, with a coherent correlation between them and the photocatalytic results. Firstly, we demonstrated the possibility of using US to modify TiO2, even commercial and micro-sized material; secondly, by avoiding UV irradiation completely, we confirmed that pure TiO2 is not activated by visible light. On the other hand, we showed that copper metal and metal oxide nanoparticles strongly and positively affect its photocatalytic activity. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. Genome-wide association analysis accounting for environmental factors through propensity-score matching: application to stressful live events in major depressive disorder.

    PubMed

    Power, Robert A; Cohen-Woods, Sarah; Ng, Mandy Y; Butler, Amy W; Craddock, Nick; Korszun, Ania; Jones, Lisa; Jones, Ian; Gill, Michael; Rice, John P; Maier, Wolfgang; Zobel, Astrid; Mors, Ole; Placentino, Anna; Rietschel, Marcella; Aitchison, Katherine J; Tozzi, Federica; Muglia, Pierandrea; Breen, Gerome; Farmer, Anne E; McGuffin, Peter; Lewis, Cathryn M; Uher, Rudolf

    2013-09-01

    Stressful life events are an established trigger for depression and may contribute to the heterogeneity within genome-wide association analyses. With depression cases showing an excess of exposure to stressful events compared to controls, there is difficulty in distinguishing between "true" cases and a "normal" response to a stressful environment. This potential contamination of cases, and that from genetically at-risk controls that have not yet experienced environmental triggers for onset, may reduce the power of studies to detect causal variants. In the RADIANT sample of 3,690 European individuals, we used propensity score matching to pair cases and controls on exposure to stressful life events. In 805 case-control pairs matched on stressful life events, we tested the influence of 457,670 common genetic variants on the propensity to depression under a comparable level of adversity with a sign test. While this analysis produced no significant findings after genome-wide correction for multiple testing, we outline a novel methodology and perspective for providing environmental context in genetic studies. We recommend contextualizing depression by incorporating environmental exposure into genome-wide analyses as a complementary approach to testing gene-environment interactions. Possible explanations for negative findings include a lack of statistical power due to small sample size and conditional effects, resulting from the low rate of adequate matching. Our findings underscore the importance of collecting information on environmental risk factors in studies of depression and other complex phenotypes, so that sufficient sample sizes are available to investigate their effect in genome-wide association analysis. Copyright © 2013 Wiley Periodicals, Inc.
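    The matching step of such a design can be sketched as greedy 1:1 nearest-neighbour matching on the propensity score with a caliper. The scores themselves would come from a fitted model (e.g. logistic regression of exposure on covariates); the values below are hypothetical, and this greedy scheme is one simple option rather than the paper's exact procedure.

```python
def caliper_match(case_scores, control_scores, caliper=0.05):
    # Greedy 1:1 nearest-neighbour matching of cases to controls on the
    # propensity score, discarding candidate pairs further apart than the
    # caliper. Each control is used at most once.
    available = dict(enumerate(control_scores))
    pairs = []
    # Match hardest-to-place (highest-score) cases first.
    for ci, cs in sorted(enumerate(case_scores), key=lambda t: -t[1]):
        if not available:
            break
        j = min(available, key=lambda k: abs(available[k] - cs))
        if abs(available[j] - cs) <= caliper:
            pairs.append((ci, j))
            del available[j]
    return pairs

# Hypothetical propensity scores for 4 cases and 5 controls:
cases = [0.81, 0.42, 0.40, 0.65]
controls = [0.39, 0.44, 0.80, 0.10, 0.63]
pairs = caliper_match(cases, controls)
```

    Cases with no control inside the caliper go unmatched, which is exactly the "low rate of adequate matching" trade-off the abstract raises: tighter calipers improve comparability but shrink the analysable sample.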

  12. The 11 micron Silicon Carbide Feature in Carbon Star Shells

    NASA Technical Reports Server (NTRS)

    Speck, A. K.; Barlow, M. J.; Skinner, C. J.

    1996-01-01

    Silicon carbide (SiC) is known to form in circumstellar shells around carbon stars. SiC can come in two basic types - hexagonal alpha-SiC or cubic beta-SiC. Laboratory studies have shown that both types of SiC exhibit an emission feature in the 11-11.5 micron region, the size and shape of the feature varying with type, size and shape of the SiC grains. Such a feature can be seen in the spectra of carbon stars. Silicon carbide grains have also been found in meteorites. The aim of the current work is to identify the type(s) of SiC found in circumstellar shells and how they might relate to meteoritic SiC samples. We have used the CGS3 spectrometer at the 3.8 m UKIRT to obtain 7.5-13.5 micron spectra of 31 definite or proposed carbon stars. After flux-calibration, each spectrum was fitted using a χ²-minimisation routine equipped with the published laboratory optical constants of six different samples of small SiC particles, together with the ability to fit the underlying continuum using a range of grain emissivity laws. It was found that the majority of observed SiC emission features could only be fitted by alpha-SiC grains. The lack of beta-SiC is surprising, as this is the form most commonly found in meteorites. Included in the sample were four sources, all of which have been proposed to be carbon stars, that appear to show the SiC feature in absorption.
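    The feature-plus-continuum fitting described above can be sketched as a χ² minimisation: for each candidate continuum emissivity index, the two linear amplitudes (feature strength and continuum level) have a closed-form weighted least-squares solution, and the index with the lowest χ² wins. The sketch below uses a synthetic noise-free spectrum with a Gaussian stand-in for the SiC feature; all values are hypothetical, not the CGS3 data or the laboratory optical constants.

```python
import math

def fit_feature(waves, flux, err, feature):
    # Fit flux = a * feature + b * lambda**(-p) by chi-squared minimisation:
    # for each continuum index p on a grid, solve the 2x2 weighted normal
    # equations for (a, b) and keep the best chi-squared.
    best = None
    for p in [i / 10 for i in range(0, 31)]:
        cont = [w ** -p for w in waves]
        s_ff = sum((f / e) ** 2 for f, e in zip(feature, err))
        s_cc = sum((c / e) ** 2 for c, e in zip(cont, err))
        s_fc = sum(f * c / e ** 2 for f, c, e in zip(feature, cont, err))
        s_fy = sum(f * y / e ** 2 for f, y, e in zip(feature, flux, err))
        s_cy = sum(c * y / e ** 2 for c, y, e in zip(cont, flux, err))
        det = s_ff * s_cc - s_fc ** 2
        a = (s_fy * s_cc - s_cy * s_fc) / det
        b = (s_cy * s_ff - s_fy * s_fc) / det
        chi2 = sum(((y - a * f - b * c) / e) ** 2
                   for y, f, c, e in zip(flux, feature, cont, err))
        if best is None or chi2 < best[0]:
            best = (chi2, a, b, p)
    return best

# Synthetic 7.5-13.5 micron spectrum: Gaussian 11.3-micron feature on a
# lambda**-1.2 continuum, with constant flux uncertainties.
waves = [7.5 + 0.1 * i for i in range(61)]
feature = [math.exp(-0.5 * ((w - 11.3) / 0.4) ** 2) for w in waves]
flux = [2.0 * f + 5.0 * w ** -1.2 for f, w in zip(feature, waves)]
err = [0.05] * 61
chi2, a, b, p = fit_feature(waves, flux, err, feature)
```

    In the real analysis the single Gaussian template is replaced by the six laboratory SiC emissivity curves, and the fit is repeated per candidate sample to decide which grain type reproduces each observed feature.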

  13. Novel intervention with acupuncture for anorexia and cachexia in patients with gastrointestinal tract cancers: a feasibility study.

    PubMed

    Yoon, Saunjoo L; Grundmann, Oliver; Williams, Joseph J; Carriere, Gwen

    2015-03-01

    To investigate the feasibility of using acupuncture as a complementary intervention to existing treatments and to evaluate the efficacy of acupuncture in improving appetite and slowing weight loss with patients with gastrointestinal (GI) tract cancers. 
 One-group pre- and postintervention feasibility study. 
 Outpatient clinic for patients with cancer and a community setting, both in Florida. 
 A convenience sample of seven adults with GI cancer.
 Eight acupuncture sessions were provided during eight weeks. Data were collected using the visual analog scale (VAS) for appetite, Simplified Nutritional Appetite Questionnaire (SNAQ), Karnofsky Performance Status, and bioelectrical impedance analysis. 
 Appetite, weight, attrition rate.
 Seven patients with a mean age of 61 years completed the intervention. Acupuncture was well accepted, feasible, and safe without any reported side effects. Appetite showed improvement, with an average score of 3.04 on the VAS and 4.14 on SNAQ compared to the preintervention scores. The average weight loss was 1.32% compared to the baseline during an eight-week period. 
 The acupuncture intervention was feasible and indicated positive outcomes. Because of the small sample size and lack of a control group, statistical significance of effectiveness was not determined. Acupuncture seemed to improve appetite and slow weight loss in patients with GI cancers, so additional studies with a larger sample size and a variety of cancers are warranted. 
 Oncology nurses are uniquely able to equip patients with information about complementary therapy modalities, such as acupuncture, which is a promising way to improve appetite and slow weight loss in patients with GI cancers.


  14. Detection of defects in laser powder deposition (LPD) components by pulsed laser transient thermography

    NASA Astrophysics Data System (ADS)

    Santospirito, S. P.; Słyk, Kamil; Luo, Bin; Łopatka, Rafał; Gilmour, Oliver; Rudlin, John

    2013-05-01

    Detection of defects in Laser Powder Deposition (LPD) produced components has been achieved by laser thermography. An automatic in-process NDT defect detection software system has been developed for the analysis of laser thermography data, to automatically detect, reliably measure and then sentence defects in individual beads of LPD components. A deposition path profile definition has been introduced so that all laser powder deposition beads can be modeled, and the inspection system has been developed to automatically generate an optimized inspection plan in which sampling images follow the deposition track, and to automatically control and communicate with robot arms, the source laser and cameras to implement image acquisition. Algorithms were developed so that defect sizes can be correctly evaluated, and these have been confirmed using test samples. Individual inspection images can also be stitched together for a single bead, a layer of beads or multiple layers of beads, so that defects can be mapped through the additive process. A mathematical model was built to analyze and evaluate the movement of heat through the inspected bead. Inspection processes were developed, and positional and temporal gradient algorithms have been used to measure flaw sizes. Defect analysis is then performed to determine whether the defect(s) can be further classified (crack, lack of fusion, porosity), and the sentencing engine then compares the most significant defect or group of defects against the acceptance criteria, independent of human decisions. Testing on manufactured defects from the EC-funded INTRAPID project has successfully detected and correctly sentenced all samples.

  15. The Quality of Reporting of Measures of Precision in Animal Experiments in Implant Dentistry: A Methodological Study.

    PubMed

    Faggion, Clovis Mariano; Aranda, Luisiana; Diaz, Karla Tatiana; Shih, Ming-Chieh; Tu, Yu-Kang; Alarcón, Marco Antonio

    2016-01-01

    Information on precision of treatment-effect estimates is pivotal for understanding research findings. In animal experiments, which provide important information for supporting clinical trials in implant dentistry, inaccurate information may lead to biased clinical trials. The aim of this methodological study was to determine whether sample size calculation, standard errors, and confidence intervals for treatment-effect estimates are reported accurately in publications describing animal experiments in implant dentistry. MEDLINE (via PubMed), Scopus, and SciELO databases were searched to identify reports involving animal experiments with dental implants published from September 2010 to March 2015. Data from publications were extracted into a standardized form with nine items related to precision of treatment estimates and experiment characteristics. Data selection and extraction were performed independently and in duplicate, with disagreements resolved by discussion-based consensus. The chi-square and Fisher exact tests were used to assess differences in reporting according to study sponsorship type and impact factor of the journal of publication. The sample comprised reports of 161 animal experiments. Sample size calculation was reported in five (3%) publications. P values and confidence intervals were reported in 152 (94%) and 13 (8%) of these publications, respectively. Standard errors were reported in 19 (12%) publications. Confidence intervals were better reported in publications describing industry-supported animal experiments (P = .03) and in journals with a higher impact factor (P = .02). Information on precision of estimates is rarely reported in publications describing animal experiments in implant dentistry. This lack of information makes it difficult to evaluate whether the translation of animal research findings to clinical trials is adequate.

  16. Lack of Set Size Effects in Spatial Updating: Evidence for Offline Updating

    ERIC Educational Resources Information Center

    Hodgson, Eric; Waller, David

    2006-01-01

    Four experiments required participants to keep track of the locations of (i.e., update) 1, 2, 3, 4, 6, 8, 10, or 15 target objects after rotating. Across all conditions, updating was unaffected by set size. Although some traditional set size effects (i.e., a linear increase of latency with memory load) were observed under some conditions, these…

  17. Approximate sample size formulas for the two-sample trimmed mean test with unequal variances.

    PubMed

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2007-05-01

    Yuen's two-sample trimmed mean test statistic is one of the most robust methods to apply when variances are heterogeneous. The present study develops formulas for the sample size required for the test. The formulas are applicable to the cases of unequal variances, non-normality and unequal sample sizes. Given the specified alpha and power (1-beta), the minimum sample size needed by the proposed formulas under various conditions is less than that given by the conventional formulas. Moreover, given a sample size calculated by the proposed formulas, simulation results show that Yuen's test can achieve statistical power which is generally superior to that of the approximate t test. A numerical example is provided.
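The conventional normal-approximation calculation that such formulas refine can be sketched as follows. This is a generic illustration for a two-sample test with unequal variances using ordinary means (not the authors' trimmed-mean formulas), and the function name and parameter values are illustrative:

```python
from math import ceil
from statistics import NormalDist

def welch_sample_size(delta, sd1, sd2, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for a two-sample
    test with unequal variances and equal group sizes."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided critical value
    z_b = NormalDist().inv_cdf(power)           # quantile for desired power
    n = (z_a + z_b) ** 2 * (sd1 ** 2 + sd2 ** 2) / delta ** 2
    return ceil(n)

# Detect a mean difference of 0.5 when both SDs are 1.0
print(welch_sample_size(delta=0.5, sd1=1.0, sd2=1.0))  # 63 per group
```

Trimmed-mean versions replace the raw variances with Winsorized variances and adjust the effective sample size for the trimming proportion, which is what drives the smaller sample sizes the abstract reports.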

  18. Three-Dimensional Grain Shape-Fabric from Unconsolidated Pyroclastic Density Current Deposits: Implications for Extracting Flow Direction and Insights on Rheology

    NASA Astrophysics Data System (ADS)

    Hawkins, T. T.; Brand, B. D.; Sarrochi, D.; Pollock, N.

    2016-12-01

    One of the greatest challenges volcanologists face is the ability to extrapolate information about eruption dynamics and emplacement conditions from deposits. Pyroclastic density current (PDC) deposits are particularly challenging given the wide range of initial current conditions (e.g., granular, fluidized, concentrated, dilute) and rapid flow transformations due to interaction with evolving topography. Analysis of particle shape-fabric can be used to determine flow direction, and may help to understand the rheological characteristics of the flows. However, extracting shape-fabric information from outcrop (2D) apparent fabric is limited, especially when outcrop exposure is incomplete or lacks context. To better understand and quantify the complex flow dynamics reflected in PDC deposits, we study the complete shape-fabric data in 3D using oriented samples. In the field, the prospective sample is carved from the unconsolidated deposit in blocks, the dimensions of which depend on the average clast size in the sample. The sample is saturated in situ with a water-based sodium silicate solution, then wrapped in plaster-soaked gauze to form a protective cast. The orientation of the sample is recorded on the block faces. The samples dry for five days and are then extracted in intact blocks. In the lab, the sample is vacuum impregnated with sodium silicate and cured in an oven. The fully lithified sample is first cut along the plan view to identify orientations of the long axes of the grains (flow direction), and then cut in the two planes perpendicular to grain elongation. 3D fabric analysis is performed using high resolution images of the cut faces using computer-assisted image analysis software devoted to shape-fabric analysis. Here we present the results of samples taken from the 18 May 1980 PDC deposit facies, including massive, diffuse-stratified and cross-stratified lapilli tuff.
We show a relationship between the strength of iso-orientation of the elongated particles and different facies architectures, which is used to interpret rheological conditions of the flow. We chose the 18 May PDC deposits because their well-exposed and well-studied outcrops provide context, allowing us to test the method and extract information useful for interpreting ancient deposits that lack context.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jomekian, A.; Faculty of Chemical Engineering, Iran University of Science and Technology; Behbahani, R.M., E-mail: behbahani@put.ac.ir

    Ultra-porous ZIF-8 particles were synthesized using PEO/PA6-based poly(ether-block-amide) (Pebax 1657) as a structure-directing agent. Structural properties of ZIF-8 samples prepared under different synthesis parameters were investigated by laser particle size analysis, XRD, N₂ adsorption analysis, BJH and BET tests. The overall results showed that: (1) the mean pore size of all ZIF-8 samples increased remarkably (from 0.34 nm to 1.1–2.5 nm) compared to conventionally synthesized ZIF-8 samples; (2) an exceptional BET surface area of 1869 m²/g was obtained for a ZIF-8 sample with a mean pore size of 2.5 nm; (3) applying high concentrations of Pebax 1657 to the synthesis solution led to higher surface area, larger pore size and smaller particle size for ZIF-8 samples; (4) both an increase in temperature and a decrease in the MeIM/Zn²⁺ molar ratio had an increasing effect on ZIF-8 particle size, pore size, pore volume, crystallinity and BET surface area of all investigated samples. - Highlights: • The pore size of ZIF-8 samples synthesized with Pebax 1657 increased remarkably. • A BET surface area of 1869 m²/g was obtained for a ZIF-8 sample synthesized with Pebax. • Increase in temperature had an increasing effect on textural properties of ZIF-8 samples. • Decrease in MeIM/Zn²⁺ had an increasing effect on textural properties of ZIF-8 samples.

  20. Study Design Rigor in Animal-Experimental Research Published in Anesthesia Journals.

    PubMed

    Hoerauf, Janine M; Moss, Angela F; Fernandez-Bustamante, Ana; Bartels, Karsten

    2018-01-01

    Lack of reproducibility of preclinical studies has been identified as an impediment to the translation of basic mechanistic research into effective clinical therapies. Indeed, the National Institutes of Health has revised its grant application process to require more rigorous study design, including sample size calculations, blinding procedures, and randomization steps. We hypothesized that the reporting of such metrics of study design rigor has increased over time for animal-experimental research published in anesthesia journals. PubMed was searched for animal-experimental studies published in 2005, 2010, and 2015 in primarily English-language anesthesia journals. A total of 1466 publications were graded on the performance of sample size estimation, randomization, and blinding. The Cochran-Armitage test was used to assess linear trends over time for the primary outcome of whether or not a metric was reported. Interrater agreement for each of the 3 metrics (power, randomization, and blinding) was assessed using the weighted κ coefficient in a 10% random sample of articles rerated by a second investigator blinded to the ratings of the first investigator. Reporting for all 3 metrics of experimental design rigor increased over time (2005 to 2010 to 2015): for power analysis, from 5% (27/516), to 12% (59/485), to 17% (77/465); for randomization, from 41% (213/516), to 50% (243/485), to 54% (253/465); and for blinding, from 26% (135/516), to 38% (186/485), to 47% (217/465). The weighted κ coefficients and 98.3% confidence intervals indicate almost perfect agreement between the 2 raters beyond that which occurs by chance alone (power, 0.93 [0.85, 1.0], randomization, 0.91 [0.85, 0.98], and blinding, 0.90 [0.84, 0.96]). Our hypothesis that reported metrics of rigor in animal-experimental studies in anesthesia journals have increased during the past decade was confirmed.
More consistent reporting, or explicit justification for absence, of sample size calculations, blinding techniques, and randomization procedures could better enable readers to evaluate potential sources of bias in animal-experimental research manuscripts. Future studies should assess whether such steps lead to improved translation of animal-experimental anesthesia research into successful clinical trials.
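The trend analysis described above can be reproduced in outline from the reported counts. The sketch below implements the standard Cochran-Armitage statistic (without continuity correction) applied to the power-analysis counts; the study's exact implementation may differ:

```python
from math import sqrt
from statistics import NormalDist

def cochran_armitage(successes, totals, scores):
    """Cochran-Armitage test for a linear trend in proportions across
    ordered groups; returns (Z statistic, two-sided p-value)."""
    N = sum(totals)
    p = sum(successes) / N                       # pooled proportion
    T = (sum(s * r for s, r in zip(scores, successes))
         - p * sum(s * n for s, n in zip(scores, totals)))
    var = p * (1 - p) * (
        sum(n * s * s for s, n in zip(scores, totals))
        - sum(s * n for s, n in zip(scores, totals)) ** 2 / N)
    z = T / sqrt(var)
    return z, 2 * (1 - NormalDist().cdf(abs(z)))

# Reported power-analysis counts for 2005, 2010, 2015 with scores 0, 1, 2
z, p = cochran_armitage([27, 59, 77], [516, 485, 465], [0, 1, 2])
print(round(z, 2), p)  # z ≈ 5.7: a strongly significant increasing trend
```

The large positive Z is consistent with the paper's conclusion that reporting of power analyses rose significantly from 2005 to 2015.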

  1. Systematic review of the evidence for Trails B cut-off scores in assessing fitness-to-drive

    PubMed Central

    Roy, Mononita; Molnar, Frank

    2013-01-01

    Background Fitness-to-drive guidelines recommend employing the Trail Making B Test (a.k.a. Trails B), but do not provide guidance regarding cut-off scores. There is ongoing debate regarding the optimal cut-off score on the Trails B test. The objective of this study was to address this controversy by systematically reviewing the evidence for specific Trails B cut-off scores (e.g., cut-offs in both time to completion and number of errors) with respect to fitness-to-drive. Methods Systematic review of all prospective cohort, retrospective cohort, case-control, correlation, and cross-sectional studies reporting the ability of the Trails B to predict driving safety that were published in English-language, peer-reviewed journals. Results Forty-seven articles were reviewed. None of the articles justified sample sizes via formal calculations. Cut-off scores reported in this research include 90 seconds, 133 seconds, 147 seconds, 180 seconds, and < 3 errors. Conclusions There is support for the previously published Trails B cut-offs of 3 minutes or 3 errors (the ‘3 or 3 rule’). Major methodological limitations of this body of research were uncovered, including (1) lack of justification of sample size, leaving studies open to Type II error (i.e., false negative findings), and (2) excessive focus on associations rather than clinically useful cut-off scores. PMID:23983828

  2. Statistical Design for Biospecimen Cohort Size in Proteomics-based Biomarker Discovery and Verification Studies

    PubMed Central

    Skates, Steven J.; Gillette, Michael A.; LaBaer, Joshua; Carr, Steven A.; Anderson, N. Leigh; Liebler, Daniel C.; Ransohoff, David; Rifai, Nader; Kondratovich, Marina; Težak, Živana; Mansfield, Elizabeth; Oberg, Ann L.; Wright, Ian; Barnes, Grady; Gail, Mitchell; Mesri, Mehdi; Kinsinger, Christopher R.; Rodriguez, Henry; Boja, Emily S.

    2014-01-01

    Protein biomarkers are needed to deepen our understanding of cancer biology and to improve our ability to diagnose, monitor and treat cancers. Important analytical and clinical hurdles must be overcome to allow the most promising protein biomarker candidates to advance into clinical validation studies. Although contemporary proteomics technologies support the measurement of large numbers of proteins in individual clinical specimens, sample throughput remains comparatively low. This problem is amplified in typical clinical proteomics research studies, which routinely suffer from a lack of proper experimental design, resulting in analysis of too few biospecimens to achieve adequate statistical power at each stage of a biomarker pipeline. To address this critical shortcoming, a joint workshop was held by the National Cancer Institute (NCI), National Heart, Lung and Blood Institute (NHLBI), and American Association for Clinical Chemistry (AACC), with participation from the U.S. Food and Drug Administration (FDA). An important output from the workshop was a statistical framework for the design of biomarker discovery and verification studies. Herein, we describe the use of quantitative clinical judgments to set statistical criteria for clinical relevance, and the development of an approach to calculate biospecimen sample size for proteomic studies in discovery and verification stages prior to clinical validation stage. This represents a first step towards building a consensus on quantitative criteria for statistical design of proteomics biomarker discovery and verification research. PMID:24063748

  3. Statistical design for biospecimen cohort size in proteomics-based biomarker discovery and verification studies.

    PubMed

    Skates, Steven J; Gillette, Michael A; LaBaer, Joshua; Carr, Steven A; Anderson, Leigh; Liebler, Daniel C; Ransohoff, David; Rifai, Nader; Kondratovich, Marina; Težak, Živana; Mansfield, Elizabeth; Oberg, Ann L; Wright, Ian; Barnes, Grady; Gail, Mitchell; Mesri, Mehdi; Kinsinger, Christopher R; Rodriguez, Henry; Boja, Emily S

    2013-12-06

    Protein biomarkers are needed to deepen our understanding of cancer biology and to improve our ability to diagnose, monitor, and treat cancers. Important analytical and clinical hurdles must be overcome to allow the most promising protein biomarker candidates to advance into clinical validation studies. Although contemporary proteomics technologies support the measurement of large numbers of proteins in individual clinical specimens, sample throughput remains comparatively low. This problem is amplified in typical clinical proteomics research studies, which routinely suffer from a lack of proper experimental design, resulting in analysis of too few biospecimens to achieve adequate statistical power at each stage of a biomarker pipeline. To address this critical shortcoming, a joint workshop was held by the National Cancer Institute (NCI), National Heart, Lung, and Blood Institute (NHLBI), and American Association for Clinical Chemistry (AACC) with participation from the U.S. Food and Drug Administration (FDA). An important output from the workshop was a statistical framework for the design of biomarker discovery and verification studies. Herein, we describe the use of quantitative clinical judgments to set statistical criteria for clinical relevance and the development of an approach to calculate biospecimen sample size for proteomic studies in discovery and verification stages prior to clinical validation stage. This represents a first step toward building a consensus on quantitative criteria for statistical design of proteomics biomarker discovery and verification research.

  4. Clinimetric evaluation of shoulder disability questionnaires: a systematic review of the literature

    PubMed Central

    Bot, S; Terwee, C; van der Windt, D A W M; Bouter, L; Dekker, J; de Vet, H C W

    2004-01-01

    Methods: Systematic literature searches were performed to identify self-administered shoulder disability questionnaires. A checklist was developed to evaluate and compare the clinimetric quality of the instruments. Results: Two reviewers identified and evaluated 16 questionnaires using our checklist. Most studies were found for the Disability of the Arm, Shoulder, and Hand scale (DASH), the Shoulder Pain and Disability Index (SPADI), and the American Shoulder and Elbow Surgeons Standardised Shoulder Assessment Form (ASES). None of the questionnaires demonstrated satisfactory results for all properties. Most questionnaires claim to measure several domains (for example, pain, physical, emotional, and social functioning), yet dimensionality was studied in only three instruments. The internal consistency was calculated for seven questionnaires and only one received an adequate rating. Twelve questionnaires received positive ratings for construct validity, although depending on the population studied, four of these questionnaires also received poor ratings. Seven questionnaires were shown to have adequate test-retest reliability (ICC >0.70), but five questionnaires were tested inadequately. In most clinimetric studies only small sample sizes (n<43) were used. Nearly all publications lacked information on the interpretation of scores. Conclusion: The DASH, SPADI, and ASES have been studied most extensively, and yet even published validation studies of these instruments have limitations in study design, sample sizes, or evidence for dimensionality. Overall, the DASH received the best ratings for its clinimetric properties. PMID:15020324

  5. Estimating Dermal Transfer of Copper Particles from the ...

    EPA Pesticide Factsheets

    Lumber pressure-treated with micronized copper was examined for the release of copper and copper micro/nanoparticles using a surface wipe method to simulate dermal transfer. In 2003, the wood industry began replacing CCA-treated lumber products for residential use with copper-based formulations. Micronized copper (nano- to micron-sized particles) has become the preferred treatment formulation. There is a lack of information on the release of copper, the fate of the particles during dermal contact, and the copper exposure level to children from hand-to-mouth transfer. For the current study, three treated lumber products, two micronized copper and one ionic copper, were purchased from commercial retailers. The boards were left to weather outdoors for approximately 1 year. Over this period, hand wipe samples were collected periodically to determine copper transfer from the wood surfaces. The two micronized formulations and the ionic formulation released similar levels of total copper. The amount of copper released was high initially, but decreased to a constant level (~1.5 mg m-2) after the first month of outdoor exposure. Copper particles were identified on the sampling cloths during the first two months of the experiment, after which the levels of copper were insufficient to collect interpretable data. After 1 month, the particles exhibited minimal changes in shape and size. At the end of 2 months, significant deterioration of the particles was observed.

  6. Effects of Calibration Sample Size and Item Bank Size on Ability Estimation in Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Sahin, Alper; Weiss, David J.

    2015-01-01

    This study aimed to investigate the effects of calibration sample size and item bank size on examinee ability estimation in computerized adaptive testing (CAT). For this purpose, a 500-item bank pre-calibrated using the three-parameter logistic model with 10,000 examinees was simulated. Calibration samples of varying sizes (150, 250, 350, 500,…

  7. The significance of size change of soft tissue sarcoma during preoperative radiotherapy.

    PubMed

    Miki, Y; Ngan, S; Clark, J C M; Akiyama, T; Choong, P F M

    2010-07-01

    To assess the significance of change in tumour size during preoperative radiotherapy in patients with soft tissue sarcoma (STS). A retrospective review of 91 cases with STS was performed. Inclusion criteria were localised extremity and truncal STS with measurable disease, age older than 18 years, treated with preoperative radiotherapy and wide local excision, in the period between January 1966 and December 2005. Patients with head and neck STS, or who received neoadjuvant chemotherapy, were excluded. A difference in excess of 10% of the greatest tumour diameter between the pre-radiotherapy and post-radiotherapy MRI scans was considered a change in tumour size. Increase in tumour size was noted in 28 patients (31%) (Group 1). No change or decrease in size was observed in 63 patients (Group 2). There were no significant differences in local control or overall survival rates between the 2 groups. The estimated overall actuarial local recurrence-free, event-free and overall survival rates were 90.5%, 64.4%, 62.9% in Group 1, and 85.7%, 60.8%, 68.9% in Group 2, respectively. Increase in tumour size during preoperative radiotherapy for soft tissue sarcoma does not appear to be associated with inferior local tumour control or compromised survival. Lack of reduction in tumour size is not necessarily a sign of lack of response to preoperative radiotherapy.

  8. Sample size calculations for case-control studies

    Cancer.gov

    This R package can be used to calculate the required sample size for unconditional multivariate analyses of unmatched case-control studies. The sample sizes are for a scalar exposure effect, such as binary, ordinal or continuous exposures. The sample sizes can also be computed for scalar interaction effects. The analyses account for the effects of potential confounder variables that are also included in the multivariate logistic model.
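For intuition about what such a calculation involves, the classical two-proportion formula for an unmatched case-control study with a single binary exposure can be sketched as below. This is a textbook approximation, not the package's multivariate logistic method, and the parameter values are illustrative:

```python
from math import ceil, sqrt
from statistics import NormalDist

def case_control_n(p0, odds_ratio, alpha=0.05, power=0.80):
    """Number of cases (= number of controls) needed to detect a given
    odds ratio for a binary exposure with control-group exposure
    prevalence p0, using the classical two-proportion formula."""
    p1 = odds_ratio * p0 / (1 + p0 * (odds_ratio - 1))  # exposure in cases
    pbar = (p0 + p1) / 2
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    num = (z_a * sqrt(2 * pbar * (1 - pbar))
           + z_b * sqrt(p1 * (1 - p1) + p0 * (1 - p0))) ** 2
    return ceil(num / (p1 - p0) ** 2)

# 20% exposure among controls, target odds ratio of 2
print(case_control_n(p0=0.20, odds_ratio=2.0))  # 172 cases and 172 controls
```

Adjusting for confounders in a multivariate logistic model, as the package does, generally inflates this baseline figure.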

  9. Sequential sampling: a novel method in farm animal welfare assessment.

    PubMed

    Heath, C A E; Main, D C J; Mullan, S; Haskell, M J; Browne, W J

    2016-02-01

    Lameness in dairy cows is an important welfare issue. As part of a welfare assessment, herd level lameness prevalence can be estimated from scoring a sample of animals, where higher levels of accuracy are associated with larger sample sizes. As the financial cost is related to the number of cows sampled, smaller samples are preferred. Sequential sampling schemes have been used for informing decision making in clinical trials. Sequential sampling involves taking samples in stages, where sampling can stop early depending on the estimated lameness prevalence. When welfare assessment is used for a pass/fail decision, a similar approach could be applied to reduce the overall sample size. The sampling schemes proposed here apply the principles of sequential sampling within a diagnostic testing framework. This study develops three sequential sampling schemes of increasing complexity to classify 80 fully assessed UK dairy farms, each with known lameness prevalence. Using the Welfare Quality herd-size-based sampling scheme, the first 'basic' scheme involves two sampling events. At the first sampling event half the Welfare Quality sample size is drawn, and then depending on the outcome, sampling either stops or is continued and the same number of animals is sampled again. In the second 'cautious' scheme, an adaptation is made to ensure that correctly classifying a farm as 'bad' is done with greater certainty. The third scheme is the only scheme to go beyond lameness as a binary measure and investigates the potential for increasing accuracy by incorporating the number of severely lame cows into the decision. The three schemes are evaluated with respect to accuracy and average sample size by running 100 000 simulations for each scheme, and a comparison is made with the fixed size Welfare Quality herd-size-based sampling scheme. All three schemes performed almost as well as the fixed size scheme but with much smaller average sample sizes. 
For the third scheme, an overall association between lameness prevalence and the proportion of lame cows that were severely lame on a farm was found. However, as this association was found to not be consistent across all farms, the sampling scheme did not prove to be as useful as expected. The preferred scheme was therefore the 'cautious' scheme for which a sampling protocol has also been developed.
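The 'basic' two-stage idea described above can be sketched in simulation: score half the sample, stop early if the prevalence estimate is clearly on one side of the pass/fail threshold, otherwise score the second half. The threshold, margin, and sample sizes below are illustrative placeholders, not the Welfare Quality herd-size-based values used in the study:

```python
import random

def two_stage_assess(true_prev, n_full=60, threshold=0.20, margin=0.10, rng=None):
    """One farm assessment. Returns (classified_bad, cows_sampled):
    stop after the first half-sample if the estimate is at least
    `margin` away from the threshold, otherwise sample the rest."""
    rng = rng or random
    n1 = n_full // 2
    lame1 = sum(rng.random() < true_prev for _ in range(n1))
    p_hat = lame1 / n1
    if abs(p_hat - threshold) > margin:          # confident: stop early
        return p_hat >= threshold, n1
    lame2 = sum(rng.random() < true_prev for _ in range(n_full - n1))
    p_hat = (lame1 + lame2) / n_full
    return p_hat >= threshold, n_full

rng = random.Random(42)
# A farm with low true lameness prevalence (5%) against a 20% threshold
results = [two_stage_assess(0.05, rng=rng) for _ in range(10_000)]
avg_n = sum(n for _, n in results) / len(results)
accuracy = sum(not bad for bad, _ in results) / len(results)
print(round(avg_n, 1), round(accuracy, 3))
```

For farms far from the threshold most assessments stop at the first stage, which is how the schemes achieve a much smaller average sample size with near-identical accuracy.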

  10. Novel joint selection methods can reduce sample size for rheumatoid arthritis clinical trials with ultrasound endpoints.

    PubMed

    Allen, John C; Thumboo, Julian; Lye, Weng Kit; Conaghan, Philip G; Chew, Li-Ching; Tan, York Kiat

    2018-03-01

    To determine whether novel methods of selecting joints through (i) ultrasonography (individualized-ultrasound [IUS] method), or (ii) ultrasonography and clinical examination (individualized-composite-ultrasound [ICUS] method) translate into smaller rheumatoid arthritis (RA) clinical trial sample sizes when compared to existing methods utilizing predetermined joint sites for ultrasonography. Cohen's effect size (ES) was estimated (ES^) and a 95% CI (ES^_L, ES^_U) calculated on the mean change in 3-month total inflammatory score for each method. Corresponding 95% CIs [n_L(ES^_U), n_U(ES^_L)] were obtained on a post hoc sample size reflecting the uncertainty in ES^. Sample size calculations were based on a one-sample t-test as the patient numbers needed to provide 80% power at α = 0.05 to reject a null hypothesis H0: ES = 0 versus alternative hypotheses H1: ES = ES^, ES = ES^_L and ES = ES^_U. We aimed to provide point and interval estimates of projected sample sizes for future studies reflecting the uncertainty in our study ES^. Twenty-four treated RA patients were followed up for 3 months. Utilizing the 12-joint approach and existing methods, the post hoc sample size (95% CI) was 22 (10-245). Corresponding sample sizes using ICUS and IUS were 11 (7-40) and 11 (6-38), respectively. Utilizing a seven-joint approach, the corresponding sample sizes using ICUS and IUS methods were nine (6-24) and 11 (6-35), respectively. Our pilot study suggests that sample size for RA clinical trials with ultrasound endpoints may be reduced using the novel methods, providing justification for larger studies to confirm these observations. © 2017 Asia Pacific League of Associations for Rheumatology and John Wiley & Sons Australia, Ltd.
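The sample-size calculation described, a one-sample t-test with 80% power at α = 0.05 for a given effect size, can be sketched by iterating over n with the noncentral t distribution. This is a generic reconstruction under stated assumptions, not the authors' code, and the effect size used in the example is illustrative:

```python
from math import sqrt
from scipy.stats import t, nct

def one_sample_t_n(es, alpha=0.05, power=0.80):
    """Smallest n giving at least `power` for a one-sample t-test of
    H0: ES = 0 against a two-sided alternative with true effect `es`."""
    n = 2
    while True:
        df = n - 1
        t_crit = t.ppf(1 - alpha / 2, df)       # two-sided critical value
        nc = es * sqrt(n)                       # noncentrality parameter
        achieved = (1 - nct.cdf(t_crit, df, nc)) + nct.cdf(-t_crit, df, nc)
        if achieved >= power:
            return n
        n += 1

print(one_sample_t_n(0.6))  # patients needed for a moderate effect size
```

Running this over the lower and upper CI limits of the estimated effect size, as the study does, yields the wide interval estimates on n (e.g., 22 with 95% CI 10-245 for the 12-joint approach).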

  11. Effects of tree-to-tree variations on sap flux-based transpiration estimates in a forested watershed

    NASA Astrophysics Data System (ADS)

    Kume, Tomonori; Tsuruta, Kenji; Komatsu, Hikaru; Kumagai, Tomo'omi; Higashi, Naoko; Shinohara, Yoshinori; Otsuki, Kyoichi

    2010-05-01

    To estimate forest stand-scale water use, we assessed how sample sizes affect confidence of stand-scale transpiration (E) estimates calculated from sap flux (Fd) and sapwood area (AS_tree) measurements of individual trees. In a Japanese cypress plantation, we measured Fd and AS_tree in all trees (n = 58) within a 20 × 20 m study plot, which was divided into four 10 × 10 subplots. We calculated E from stand AS_tree (AS_stand) and mean stand Fd (JS) values. Using Monte Carlo analyses, we examined potential errors associated with sample sizes in E, AS_stand, and JS by using the original AS_tree and Fd data sets. Consequently, we defined optimal sample sizes of 10 and 15 for AS_stand and JS estimates, respectively, in the 20 × 20 m plot. Sample sizes greater than the optimal sample sizes did not decrease potential errors. The optimal sample sizes for JS changed according to plot size (e.g., 10 × 10 m and 10 × 20 m), while the optimal sample sizes for AS_stand did not. As well, the optimal sample sizes for JS did not change in different vapor pressure deficit conditions. In terms of E estimates, these results suggest that the tree-to-tree variations in Fd vary among different plots, and that plot size to capture tree-to-tree variations in Fd is an important factor. This study also discusses planning balanced sampling designs to extrapolate stand-scale estimates to catchment-scale estimates.
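The Monte Carlo approach to relating sample size and estimation error can be sketched as follows, with synthetic per-tree sap flux values standing in for the measured 58-tree data set (the distribution parameters are illustrative assumptions):

```python
import random
import statistics

def subsample_error(values, k, reps=2000, rng=None):
    """Monte Carlo estimate of the relative error (CV across replicates)
    of a plot mean computed from k randomly sampled trees."""
    rng = rng or random
    means = [statistics.mean(rng.sample(values, k)) for _ in range(reps)]
    return statistics.stdev(means) / statistics.mean(means)

rng = random.Random(1)
# Synthetic per-tree sap flux densities for a 58-tree plot (illustrative only)
fd = [rng.lognormvariate(0, 0.4) for _ in range(58)]
errors = {k: subsample_error(fd, k, rng=rng) for k in (5, 10, 15, 30)}
print({k: round(e, 3) for k, e in errors.items()})
```

Plotting this error against k and finding where it flattens is one way to identify an "optimal" sample size beyond which extra sensors no longer reduce uncertainty, which is the pattern the study reports.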

  12. Barriers to Research Activities from the Perspective of the Students of Isfahan University of Medical Sciences.

    PubMed

    Ashrafi-Rizi, Hasan; Fateme, Zarmehr; Khorasgani, Zahra Ghazavi; Kazempour, Zahra; Imani, Sona Taebi

    2015-06-01

    The need to establish a coherent and targeted research context for the development of any country is increasingly important. A basic step in creating an effective research context is to strengthen the motivation of researchers, especially students, and to resolve barriers to research. Therefore, the purpose of this study was to determine barriers to research activities from the perspective of students of Isfahan University of Medical Sciences. This was a descriptive survey study. Data were collected with a researcher-made questionnaire. The study sample consisted of students from Isfahan University of Medical Sciences; the sample size, based on the Krejcie and Morgan table, was 357, and stratified random sampling was used. The validity of the questionnaire was confirmed by library and information professionals, and its reliability (Cronbach's alpha) was 0.933. Descriptive statistics (percentage, frequency, and mean) and inferential statistics (t-test, ANOVA, one-sample tests) were computed with SPSS software. Results showed that the mean score for barriers to research activities among students of Isfahan University of Medical Sciences was 3.89 ± 0.483. The highest mean was related to the density of students' curricula (4.22 ± 0.968) and the lowest mean to lack of access to appropriate library resources. By domain, the mean for individual barriers (4.06 ± 0.635) was higher than for the other domains: social and cultural (4.01 ± 0.661), economic (4.04 ± 0.787), and organizational barriers (3.78 ± 0.503), with organizational barriers lowest. No differences in mean barrier scores were found with regard to gender, level of education, or college.
According to the results, although the main barriers among students were individual ones, such as insufficient familiarity with research methods, insufficient research experience, and lack of familiarity with the requirements for publishing articles, the economic, cultural, social, and organizational domains were also in poor condition. It is therefore suggested that workshops on research methodology, such as proposal writing and article writing, be held at the university, especially for students, and that administrators effectively support students' research activities.

  13. Imaging the Hydrogen Absorption Dynamics of Individual Grains in Polycrystalline Palladium Thin Films in 3D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yau, Allison; Harder, Ross J.; Kanan, Matthew W.

    Defects such as dislocations and grain boundaries often control the properties of polycrystalline materials. In nanocrystalline materials, investigating this structure-function relationship while preserving the sample remains challenging because of the short length scales and buried interfaces involved. Here we use Bragg coherent diffractive imaging to investigate the role of structural inhomogeneity on the hydriding phase transformation dynamics of individual Pd grains in polycrystalline films in three-dimensional detail. In contrast to previous reports on single- and polycrystalline nanoparticles, we observe no evidence of a hydrogen-rich surface layer and consequently no size dependence in the hydriding phase transformation pressure over a 125-325 nm size range. We do observe interesting grain boundary dynamics, including reversible rotations of grain lattices while the material remains in the hydrogen-poor phase. The mobility of the grain boundaries, combined with the lack of a hydrogen-rich surface layer, suggests that the grain boundaries are acting as fast diffusion sites for the hydrogen atoms. Such hydrogen-enhanced plasticity in the hydrogen-poor phase provides insight into the switch from the size-dependent behavior of single-crystal nanoparticles to the lower transformation pressures of polycrystalline materials and may play a role in hydrogen embrittlement.

  14. Location of Biomarkers and Reagents within Agarose Beads of a Programmable Bio-nano-chip

    PubMed Central

    Jokerst, Jesse V.; Chou, Jie; Camp, James P.; Wong, Jorge; Lennart, Alexis; Pollard, Amanda A.; Floriano, Pierre N.; Christodoulides, Nicolaos; Simmons, Glennon W.; Zhou, Yanjie; Ali, Mehnaaz F.

    2012-01-01

    The slow development of cost-effective medical microdevices with strong analytical performance characteristics is due to a lack of selective and efficient analyte capture and signaling. The recently developed programmable bio-nano-chip (PBNC) is a flexible detection device with analytical behavior rivaling established macroscopic methods. The PBNC system employs ≈300 μm-diameter bead sensors composed of agarose “nanonets” that populate a microelectromechanical support structure with integrated microfluidic elements. The beads are an efficient and selective protein-capture medium suitable for the analysis of complex fluid samples. Microscopy and computational studies probe the 3D interior of the beads. The relative contributions that the capture and detection of moieties, analyte size, and bead porosity make to signal distribution and intensity are reported. Agarose pore sizes ranging from 45 to 620 nm are examined and those near 140 nm provide optimal transport characteristics for rapid (<15 min) tests. The system exhibits efficient (99.5%) detection of bead-bound analyte along with low (≈2%) nonspecific immobilization of the detection probe for carcinoembryonic antigen assay. Furthermore, the role analyte dimensions play in signal distribution is explored, and enhanced methods for assay building that consider the unique features of biomarker size are offered. PMID:21290601

  15. Exploring effective sampling design for monitoring soil organic carbon in degraded Tibetan grasslands.

    PubMed

    Chang, Xiaofeng; Bao, Xiaoying; Wang, Shiping; Zhu, Xiaoxue; Luo, Caiyun; Zhang, Zhenhua; Wilkes, Andreas

    2016-05-15

The effects of climate change and human activities on grassland degradation and soil carbon stocks have become a focus of both research and policy. However, lack of research on appropriate sampling design prevents accurate assessment of soil carbon stocks and stock changes at community and regional scales. Here, we conducted an intensive survey with 1196 sampling sites over an area of 190 km² of degraded alpine meadow. Compared to lightly degraded meadow, soil organic carbon (SOC) stocks in moderately, heavily and extremely degraded meadow were reduced by 11.0%, 13.5% and 17.9%, respectively. Our field survey sampling design was more intensive than necessary to estimate SOC status with a tolerable uncertainty of 10%. Power analysis showed that the optimal sampling density to achieve the desired accuracy would be 2, 3, 5 and 7 sites per 10 km² for lightly, moderately, heavily and extremely degraded meadows, respectively. If a subsequent paired sampling design with the optimum sample size were performed, assuming stock change rates predicted by experimental and modeling results, we estimate that about 5-10 years would be necessary to detect expected trends in SOC in the top 20 cm soil layer. Our results highlight the utility of conducting preliminary surveys to estimate the appropriate sampling density and avoid wasting resources due to over-sampling, and to estimate the sampling interval required to detect an expected sequestration rate. Future studies will be needed to evaluate spatial and temporal patterns of SOC variability. Copyright © 2016. Published by Elsevier Ltd.
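The preliminary-survey logic described above, choosing a sampling density that keeps the uncertainty of the estimated mean within a tolerance, can be sketched with the standard normal-approximation formula n = (z · CV / d)². The coefficient of variation used below is an illustrative assumption, not a figure from the study.

```python
import math
from statistics import NormalDist

def sites_needed(cv, rel_tolerance, confidence=0.95):
    """Sampling sites needed to estimate a mean within +/- rel_tolerance
    (as a fraction of the mean), via the normal approximation
    n = (z * CV / d)^2, where CV is the coefficient of variation."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    return math.ceil((z * cv / rel_tolerance) ** 2)

# Illustrative CV of 0.4 for SOC stocks; 10% tolerable uncertainty as in the study.
print(sites_needed(cv=0.4, rel_tolerance=0.10))  # -> 62 sites
```

Dividing the resulting n by the surveyed area gives a sampling density comparable in form to the per-10 km² figures reported in the abstract.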

  16. IMPACTS OF PATCH SIZE AND LAND COVER HETEROGENEITY ON THEMATIC IMAGE CLASSIFICATION ACCURACY

    EPA Science Inventory


Landscape characteristics such as small patch size and land cover heterogeneity have been hypothesized to increase the likelihood of misclassifying pixels during thematic image classification. However, there has been a lack of empirical evidence to support these hypotheses,...

  17. A qualitative study of psychological, social and behavioral barriers to appropriate food portion size control

    PubMed Central

    2013-01-01

    Background Given the worldwide prevalence of overweight and obesity, there is a clear need for meaningful practical healthy eating advice - not only in relation to food choice, but also on appropriate food portion sizes. As the majority of portion size research to date has been overwhelmingly quantitative in design, there is a clear need to qualitatively explore consumers’ views in order to fully understand how food portion size decisions are made. Using qualitative methodology this present study aimed to explore consumers’ views about factors influencing their portion size selection and consumption and to identify barriers to appropriate portion size control. Methods Ten focus groups with four to nine participants in each were formed with a total of 66 persons (aged 19–64 years) living on the island of Ireland. The semi-structured discussions elicited participants’ perceptions of suggested serving size guidance and explored the influence of personal, social and environmental factors on their food portion size consumption. Audiotapes of the discussions were professionally transcribed verbatim, loaded into NVivo 9, and analysed using an inductive thematic analysis procedure. Results The rich descriptive data derived from participants highlight that unhealthy portion size behaviors emanate from various psychological, social and behavioral factors. These bypass reflective and deliberative control, and converge to constitute significant barriers to healthy portion size control. Seven significant barriers to healthy portion size control were apparent: (1) lack of clarity and irrelevance of suggested serving size guidance; (2) guiltless eating; (3) lack of self-control over food cues; (4) distracted eating; (5) social pressures; (6) emotional eating rewards; and (7) quantification habits ingrained from childhood. 
Conclusions Portion size control strategies should empower consumers to overcome these effects so that the consumption of appropriate food portion sizes becomes automatic and habitual. PMID:23915381

  18. A qualitative study of psychological, social and behavioral barriers to appropriate food portion size control.

    PubMed

    Spence, Michelle; Livingstone, M Barbara E; Hollywood, Lynsey E; Gibney, Eileen R; O'Brien, Sinéad A; Pourshahidi, L Kirsty; Dean, Moira

    2013-08-01

    Given the worldwide prevalence of overweight and obesity, there is a clear need for meaningful practical healthy eating advice - not only in relation to food choice, but also on appropriate food portion sizes. As the majority of portion size research to date has been overwhelmingly quantitative in design, there is a clear need to qualitatively explore consumers' views in order to fully understand how food portion size decisions are made. Using qualitative methodology this present study aimed to explore consumers' views about factors influencing their portion size selection and consumption and to identify barriers to appropriate portion size control. Ten focus groups with four to nine participants in each were formed with a total of 66 persons (aged 19-64 years) living on the island of Ireland. The semi-structured discussions elicited participants' perceptions of suggested serving size guidance and explored the influence of personal, social and environmental factors on their food portion size consumption. Audiotapes of the discussions were professionally transcribed verbatim, loaded into NVivo 9, and analysed using an inductive thematic analysis procedure. The rich descriptive data derived from participants highlight that unhealthy portion size behaviors emanate from various psychological, social and behavioral factors. These bypass reflective and deliberative control, and converge to constitute significant barriers to healthy portion size control. Seven significant barriers to healthy portion size control were apparent: (1) lack of clarity and irrelevance of suggested serving size guidance; (2) guiltless eating; (3) lack of self-control over food cues; (4) distracted eating; (5) social pressures; (6) emotional eating rewards; and (7) quantification habits ingrained from childhood. Portion size control strategies should empower consumers to overcome these effects so that the consumption of appropriate food portion sizes becomes automatic and habitual.

  19. Synthesis and Characterization of Polyol-Assisted Nano Cu0.2Ni0.2Sn0.2Ba0.4 Fe2O4 by a Wet Hydroxyl Route

    NASA Astrophysics Data System (ADS)

    Pavithradevi, S.; Suriyanarayanan, N.; Boobalan, T.; Velumani, S.; Chandramohan, M.; Manivel Raja, M.

    2017-08-01

Nanocrystalline spinel ferrite of composition Cu0.2Ni0.2Sn0.2Ba0.4 Fe2O4 has been synthesized by a wet hydroxyl chemical route with ethylene glycol as chelating agent and sodium hydroxide as precipitator at pH 8. Ethylene glycol serves both as the solvent medium and as a complexing agent. The synthesized particles are annealed at 350°C, 700°C, and 1050°C. Thermogravimetric (TG) analysis confirms that ethylene glycol has evaporated completely at 240°C and that a stable phase forms above 670°C. Fourier transform infrared (FT-IR) spectra of the mixed Cu0.2Ni0.2Sn0.2Ba0.4 ferrite nanoparticles, both as-synthesized and annealed at 1050°C, are recorded between 400 cm⁻¹ and 4000 cm⁻¹; FT-IR confirms the structural formation of Cu0.2Ni0.2Sn0.2Ba0.4 Fe2O4 in both samples. Structural characterization of all the samples is carried out by the x-ray diffraction (XRD) technique, which reveals that particle size increases with annealing temperature. Transmission electron microscopy (TEM) and scanning electron microscopy (SEM) confirm that the particles are flaky and spherical, with crystallite sizes in the range of 11-27 nm. The decrease of dielectric properties, such as dielectric constant and dielectric loss, with increasing frequency, as seen in all the samples, is the usual dielectric behavior of spinel ferrites. Net magnetization vanishes as soon as the applied magnetic field is removed, indicating superparamagnetic behavior in all the samples.

  20. Sample size and power calculations for detecting changes in malaria transmission using antibody seroconversion rate.

    PubMed

    Sepúlveda, Nuno; Paulino, Carlos Daniel; Drakeley, Chris

    2015-12-30

Several studies have highlighted the use of serological data in detecting a reduction in malaria transmission intensity. These studies have typically used serology as an adjunct measure and no formal examination of sample size calculations for this approach has been conducted. A sample size calculator is proposed for cross-sectional surveys using data simulation from a reverse catalytic model assuming a reduction in seroconversion rate (SCR) at a given change point before sampling. This calculator is based on logistic approximations for the underlying power curves to detect a reduction in SCR in relation to the hypothesis of a stable SCR for the same data. Sample sizes are illustrated for a hypothetical cross-sectional survey from an African population assuming a known or unknown change point. Overall, data simulation demonstrates that power is strongly affected by assuming a known or unknown change point. Small sample sizes are sufficient to detect strong reductions in SCR, but invariably lead to poor precision of estimates for current SCR. In this situation, sample size is better determined by controlling the precision of SCR estimates. Conversely, larger sample sizes are required for detecting more subtle reductions in malaria transmission, but these invariably increase precision whilst reducing putative estimation bias. The proposed sample size calculator, although based on data simulation, shows promise of being easily applicable to a range of populations and survey types. Since the change point is a major source of uncertainty, obtaining or assuming prior information about this parameter might reduce both the sample size and the chance of generating biased SCR estimates.
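The reverse catalytic model with a change point that underlies this calculator can be sketched as follows. The parameter values are illustrative assumptions, not the paper's, and the snippet simulates a single cross-sectional survey rather than the full simulation-based power calculation.

```python
import math
import random

def seroprev(age, lam1, lam2, rho, tau):
    """Expected seroprevalence at `age` under a reverse catalytic model
    (dp/dt = lam*(1-p) - rho*p) in which the seroconversion rate dropped
    from lam1 to lam2 `tau` years before the survey."""
    s2 = lam2 / (lam2 + rho)
    if age <= tau:  # this individual lived entirely under the new, lower SCR
        return s2 * (1.0 - math.exp(-(lam2 + rho) * age))
    s1 = lam1 / (lam1 + rho)
    p_at_change = s1 * (1.0 - math.exp(-(lam1 + rho) * (age - tau)))
    return s2 + (p_at_change - s2) * math.exp(-(lam2 + rho) * tau)

# Illustrative parameters (not from the paper): SCR falls from 0.05 to 0.01
# per year, seroreversion 0.02 per year, change point 10 years before survey.
random.seed(3)
ages = [random.uniform(1, 60) for _ in range(1000)]
positives = sum(random.random() < seroprev(a, 0.05, 0.01, 0.02, 10.0) for a in ages)
print(f"simulated cross-sectional seroprevalence: {positives / 1000:.2f}")
```

Repeating such draws many times, and fitting the stable-SCR and reduced-SCR models to each simulated survey, is the kind of machinery a simulation-based power curve is built from.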

  1. Small sample sizes in the study of ontogenetic allometry; implications for palaeobiology

    PubMed Central

    Vavrek, Matthew J.

    2015-01-01

    Quantitative morphometric analyses, particularly ontogenetic allometry, are common methods used in quantifying shape, and changes therein, in both extinct and extant organisms. Due to incompleteness and the potential for restricted sample sizes in the fossil record, palaeobiological analyses of allometry may encounter higher rates of error. Differences in sample size between fossil and extant studies and any resulting effects on allometric analyses have not been thoroughly investigated, and a logical lower threshold to sample size is not clear. Here we show that studies based on fossil datasets have smaller sample sizes than those based on extant taxa. A similar pattern between vertebrates and invertebrates indicates this is not a problem unique to either group, but common to both. We investigate the relationship between sample size, ontogenetic allometric relationship and statistical power using an empirical dataset of skull measurements of modern Alligator mississippiensis. Across a variety of subsampling techniques, used to simulate different taphonomic and/or sampling effects, smaller sample sizes gave less reliable and more variable results, often with the result that allometric relationships will go undetected due to Type II error (failure to reject the null hypothesis). This may result in a false impression of fewer instances of positive/negative allometric growth in fossils compared to living organisms. These limitations are not restricted to fossil data and are equally applicable to allometric analyses of rare extant taxa. No mathematically derived minimum sample size for ontogenetic allometric studies is found; rather results of isometry (but not necessarily allometry) should not be viewed with confidence at small sample sizes. PMID:25780770
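The subsampling experiment described above can be illustrated with a toy simulation: fit log-log regressions to shrinking samples and count how often the allometric slope is distinguished from isometry (slope = 1). The slope, noise level, and the normal approximation used in place of an exact t-test are all illustrative assumptions, not values from the Alligator dataset.

```python
import math
import random
from statistics import NormalDist

def detects_allometry(n, true_slope=1.1, noise=0.08, alpha=0.05, rng=random):
    """Simulate n log-size/log-trait pairs with allometric slope `true_slope`,
    fit OLS, and test H0: slope = 1 (isometry), using a normal approximation
    in place of the exact t-test."""
    x = [rng.uniform(0.0, 1.0) for _ in range(n)]              # log body size
    y = [true_slope * xi + rng.gauss(0.0, noise) for xi in x]  # log trait size
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    resid = [yi - (my + slope * (xi - mx)) for xi, yi in zip(x, y)]
    se = math.sqrt(sum(r * r for r in resid) / (n - 2) / sxx)
    return abs(slope - 1.0) / se > NormalDist().inv_cdf(1 - alpha / 2)

random.seed(1)
for n in (8, 16, 32, 64):
    power = sum(detects_allometry(n) for _ in range(500)) / 500
    print(n, round(power, 2))
```

At small n the allometric signal is usually missed (Type II error), which is exactly the false impression of isometry in fossil samples that the abstract warns about.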

  2. High-resolution non-destructive three-dimensional imaging of integrated circuits

    NASA Astrophysics Data System (ADS)

    Holler, Mirko; Guizar-Sicairos, Manuel; Tsai, Esther H. R.; Dinapoli, Roberto; Müller, Elisabeth; Bunk, Oliver; Raabe, Jörg; Aeppli, Gabriel

    2017-03-01

    Modern nanoelectronics has advanced to a point at which it is impossible to image entire devices and their interconnections non-destructively because of their small feature sizes and the complex three-dimensional structures resulting from their integration on a chip. This metrology gap implies a lack of direct feedback between design and manufacturing processes, and hampers quality control during production, shipment and use. Here we demonstrate that X-ray ptychography—a high-resolution coherent diffractive imaging technique—can create three-dimensional images of integrated circuits of known and unknown designs with a lateral resolution in all directions down to 14.6 nanometres. We obtained detailed device geometries and corresponding elemental maps, and show how the devices are integrated with each other to form the chip. Our experiments represent a major advance in chip inspection and reverse engineering over the traditional destructive electron microscopy and ion milling techniques. Foreseeable developments in X-ray sources, optics and detectors, as well as adoption of an instrument geometry optimized for planar rather than cylindrical samples, could lead to a thousand-fold increase in efficiency, with concomitant reductions in scan times and voxel sizes.

  3. The potential influence of morphology on the evolutionary divergence of an acoustic signal

    PubMed Central

    Pitchers, W. R.; Klingenberg, C.P.; Tregenza, Tom; Hunt, J.; Dworkin, I.

    2014-01-01

    The evolution of acoustic behaviour and that of the morphological traits mediating its production are often coupled. Lack of variation in the underlying morphology of signalling traits has the potential to constrain signal evolution. This relationship is particularly likely in field crickets, where males produce acoustic advertisement signals to attract females by stridulating with specialized structures on their forewings. In this study, we characterise the size and geometric shape of the forewings of males from six allopatric populations of the black field cricket (Teleogryllus commodus) known to have divergent advertisement calls. We sample from each of these populations using both wild-caught and common-garden reared cohorts, allowing us to test for multivariate relationships between wing morphology and call structure. We show that the allometry of shape has diverged across populations. However, there was a surprisingly small amount of covariation between wing shape and call structure within populations. Given the importance of male size for sexual selection in crickets, the divergence we observe among populations has the potential to influence the evolution of advertisement calls in this species. PMID:25223712

  4. Gas adsorption and capillary condensation in nanoporous alumina films.

    PubMed

    Casanova, Fèlix; Chiang, Casey E; Li, Chang-Peng; Roshchin, Igor V; Ruminski, Anne M; Sailor, Michael J; Schuller, Ivan K

    2008-08-06

    Gas adsorption and capillary condensation of organic vapors are studied by optical interferometry, using anodized nanoporous alumina films with controlled geometry (cylindrical pores with diameters in the range of 10-60 nm). The optical response of the film is optimized with respect to the geometric parameters of the pores, for potential performance as a gas sensor device. The average thickness of the adsorbed film at low relative pressures is not affected by the pore size. Capillary evaporation of the liquid from the nanopores occurs at the liquid-vapor equilibrium described by the classical Kelvin equation with a hemispherical meniscus. Due to the almost complete wetting, we can quantitatively describe the condensation for isopropanol using the Cohan model with a cylindrical meniscus in the Kelvin equation. This model describes the observed hysteresis and allows us to use the adsorption branch of the isotherm to calculate the pore size distribution of the sample in good agreement with independent structural measurements. The condensation for toluene lacks reproducibility due to incomplete surface wetting. This exemplifies the relevant role of the fluid-solid (van der Waals) interactions in the hysteretic behavior of capillary condensation.
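The Kelvin/Cohan relations used here can be evaluated numerically. The surface tension and molar volume below are approximate handbook values for isopropanol near room temperature, not figures from the study, and complete wetting is assumed.

```python
import math

R = 8.314  # J/(mol*K), gas constant

def kelvin_ratio(radius_m, gamma, v_molar, temp, meniscus="hemispherical"):
    """Relative vapour pressure p/p0 for a cylindrical pore of given radius,
    assuming complete wetting.
    Hemispherical meniscus (evaporation, Kelvin): ln(p/p0) = -2*gamma*Vm/(r*R*T)
    Cylindrical meniscus (condensation, Cohan):   ln(p/p0) =   -gamma*Vm/(r*R*T)"""
    factor = 2.0 if meniscus == "hemispherical" else 1.0
    return math.exp(-factor * gamma * v_molar / (radius_m * R * temp))

# Approximate handbook values for isopropanol near 298 K (illustrative).
gamma = 0.0217     # N/m, surface tension
v_molar = 7.65e-5  # m^3/mol, molar volume
for d_nm in (10, 30, 60):
    r = d_nm * 1e-9 / 2
    cond = kelvin_ratio(r, gamma, v_molar, 298.0, "cylindrical")
    evap = kelvin_ratio(r, gamma, v_molar, 298.0, "hemispherical")
    print(f"pore {d_nm} nm: condenses near p/p0 ~ {cond:.2f}, evaporates near ~ {evap:.2f}")
```

The gap between the condensation and evaporation pressures is the hysteresis loop described in the abstract; it narrows as the pore diameter grows.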

  5. A global synthesis of animal phenological responses to climate change

    NASA Astrophysics Data System (ADS)

    Cohen, Jeremy M.; Lajeunesse, Marc J.; Rohr, Jason R.

    2018-03-01

    Shifts in phenology are already resulting in disruptions to the timing of migration and breeding, and asynchronies between interacting species1-5. Recent syntheses have concluded that trophic level1, latitude6 and how phenological responses are measured7 are key to determining the strength of phenological responses to climate change. However, researchers still lack a comprehensive framework that can predict responses to climate change globally and across diverse taxa. Here, we synthesize hundreds of published time series of animal phenology from across the planet to show that temperature primarily drives phenological responses at mid-latitudes, with precipitation becoming important at lower latitudes, probably reflecting factors that drive seasonality in each region. Phylogeny and body size are associated with the strength of phenological shifts, suggesting emerging asynchronies between interacting species that differ in body size, such as hosts and parasites and predators and prey. Finally, although there are many compelling biological explanations for spring phenological delays, some examples of delays are associated with short annual records that are prone to sampling error. Our findings arm biologists with predictions concerning which climatic variables and organismal traits drive phenological shifts.

  6. High-resolution non-destructive three-dimensional imaging of integrated circuits.

    PubMed

    Holler, Mirko; Guizar-Sicairos, Manuel; Tsai, Esther H R; Dinapoli, Roberto; Müller, Elisabeth; Bunk, Oliver; Raabe, Jörg; Aeppli, Gabriel

    2017-03-15

Modern nanoelectronics has advanced to a point at which it is impossible to image entire devices and their interconnections non-destructively because of their small feature sizes and the complex three-dimensional structures resulting from their integration on a chip. This metrology gap implies a lack of direct feedback between design and manufacturing processes, and hampers quality control during production, shipment and use. Here we demonstrate that X-ray ptychography, a high-resolution coherent diffractive imaging technique, can create three-dimensional images of integrated circuits of known and unknown designs with a lateral resolution in all directions down to 14.6 nanometres. We obtained detailed device geometries and corresponding elemental maps, and show how the devices are integrated with each other to form the chip. Our experiments represent a major advance in chip inspection and reverse engineering over the traditional destructive electron microscopy and ion milling techniques. Foreseeable developments in X-ray sources, optics and detectors, as well as adoption of an instrument geometry optimized for planar rather than cylindrical samples, could lead to a thousand-fold increase in efficiency, with concomitant reductions in scan times and voxel sizes.

  7. A few good reasons why species-area relationships do not work for parasites.

    PubMed

    Strona, Giovanni; Fattorini, Simone

    2014-01-01

Several studies failed to find strong relationships between the biological and ecological features of a host and the number of parasite species it harbours. In particular, host body size and geographical range are generally only weak predictors of parasite species richness, especially when host phylogeny and sampling effort are taken into account. These results, however, have been recently challenged by a meta-analytic study that suggested a prominent role of host body size and range extent in determining parasite species richness (species-area relationships). Here we argue that, in general, results from meta-analyses should not discourage researchers from investigating the reasons for the lack of clear patterns, and we propose a few tentative explanations for the fact that species-area relationships are infrequent, or at least difficult to detect, in most host-parasite systems. The peculiar structure of host-parasite networks, the enemy release hypothesis, the possible discrepancy between host and parasite ranges, and the evolutionary tendency of parasites towards specialization may explain why the observed patterns often do not fit those predicted by species-area relationships.

  8. Improving the accuracy of livestock distribution estimates through spatial interpolation.

    PubMed

    Bryssinckx, Ward; Ducheyne, Els; Muhwezi, Bernard; Godfrey, Sunday; Mintiens, Koen; Leirs, Herwig; Hendrickx, Guy

    2012-11-01

Animal distribution maps serve many purposes such as estimating transmission risk of zoonotic pathogens to both animals and humans. The reliability and usability of such maps is highly dependent on the quality of the input data. However, decisions on how to perform livestock surveys are often based on previous work without considering possible consequences. A better understanding of the impact of using different sample designs and processing steps on the accuracy of livestock distribution estimates was acquired through iterative experiments using detailed survey data. The importance of sample size, sample design and aggregation is demonstrated and spatial interpolation is presented as a potential way to improve cattle number estimates. As expected, results show that an increasing sample size increased the precision of cattle number estimates but these improvements were mainly seen when the initial sample size was relatively low (e.g. a median relative error decrease of 0.04% per sampled parish for sample sizes below 500 parishes). For higher sample sizes, the added value of further increasing the number of samples declined rapidly (e.g. a median relative error decrease of 0.01% per sampled parish for sample sizes above 500 parishes). When a two-stage stratified sample design was applied to yield more evenly distributed samples, accuracy levels were higher for low sample densities and stabilised at lower sample sizes compared to one-stage stratified sampling. Aggregating the resulting cattle number estimates yielded significantly more accurate results because of averaging under- and over-estimates (e.g. when aggregating cattle number estimates from subcounty to district level, P <0.009 based on a sample of 2,077 parishes using one-stage stratified samples). During aggregation, area-weighted mean values were assigned to higher administrative unit levels.
However, when this step is preceded by a spatial interpolation to fill in missing values in non-sampled areas, accuracy is improved remarkably. This holds especially for low sample sizes and spatially evenly distributed samples (e.g. P <0.001 for a sample of 170 parishes using one-stage stratified sampling and aggregation at district level). Whether the same observations apply at a lower spatial scale should be further investigated.
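The diminishing returns of larger sample sizes reported above follow from the roughly 1/√n scaling of the sampling error of a mean. A toy Monte Carlo with a synthetic skewed population (illustrative, not the actual survey data) shows the pattern.

```python
import random
import statistics

random.seed(7)
# Synthetic "parish" cattle counts: a skewed log-normal population,
# standing in for the real survey data.
population = [int(random.lognormvariate(6.0, 1.0)) for _ in range(2000)]
true_mean = statistics.mean(population)

def median_relative_error(n, reps=300):
    """Median |sample mean - true mean| / true mean over `reps` simple
    random samples of n parishes drawn without replacement."""
    errs = []
    for _ in range(reps):
        sample = random.sample(population, n)
        errs.append(abs(statistics.mean(sample) - true_mean) / true_mean)
    return statistics.median(errs)

for n in (50, 200, 500, 1000):
    print(n, round(median_relative_error(n), 3))
```

The error shrinks quickly at first and then flattens, mirroring the study's observation that gains per additional sampled parish drop sharply beyond roughly 500 parishes.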

  9. Biostatistics Series Module 5: Determining Sample Size

    PubMed Central

    Hazra, Avijit; Gogtay, Nithya

    2016-01-01

Determining the appropriate sample size for a study, whatever be its type, is a fundamental aspect of biomedical research. An adequate sample ensures that the study will yield reliable information, regardless of whether the data ultimately suggests a clinically important difference between the interventions or elements being studied. The probability of Type 1 and Type 2 errors, the expected variance in the sample and the effect size are the essential determinants of sample size in interventional studies. Any method for deriving a conclusion from experimental data carries with it some risk of drawing a false conclusion. Two types of false conclusion may occur, called Type 1 and Type 2 errors, whose probabilities are denoted by the symbols α and β. A Type 1 error occurs when one concludes that a difference exists between the groups being compared when, in reality, it does not. This is akin to a false positive result. A Type 2 error occurs when one concludes that a difference does not exist when, in reality, a difference does exist, and it is equal to or larger than the effect size defined by the alternative to the null hypothesis. This may be viewed as a false negative result. When considering the risk of Type 2 error, it is more intuitive to think in terms of power of the study or (1 − β). Power denotes the probability of detecting a difference when a difference does exist between the groups being compared. Smaller α or larger power will increase sample size. Conventional acceptable values for power and α are 80% or above and 5% or below, respectively, when calculating sample size. Increasing variance in the sample tends to increase the sample size required to achieve a given power level. The effect size is the smallest clinically important difference that is sought to be detected and, rather than statistical convention, is a matter of past experience and clinical judgment. Larger samples are required if smaller differences are to be detected.
Although the principles are long known, historically, sample size determination has been difficult, because of relatively complex mathematical considerations and numerous different formulas. However, of late, there has been remarkable improvement in the availability, capability, and user-friendliness of power and sample size determination software. Many can execute routines for determination of sample size and power for a wide variety of research designs and statistical tests. With the drudgery of mathematical calculation gone, researchers must now concentrate on determining appropriate sample size and achieving these targets, so that study conclusions can be accepted as meaningful. PMID:27688437
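The determinants listed in this module (α, power, variance, effect size) combine into a short routine for the common two-group comparison of means. This is the standard normal-approximation formula, so exact t-based software will return a slightly larger n.

```python
import math
from statistics import NormalDist

def n_per_group(effect, sd, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for a two-sample comparison
    of means: n = 2 * ((z_{1-alpha/2} + z_{power}) * sd / effect)^2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    return math.ceil(2 * ((z_alpha + z_beta) * sd / effect) ** 2)

# Detect a difference of 0.5 SD with 80% power at alpha = 0.05.
print(n_per_group(effect=0.5, sd=1.0))  # -> 63 per group
```

Halving the detectable difference roughly quadruples the required sample, which is the "larger samples for smaller differences" rule stated above.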

  10. Sample size and power for cost-effectiveness analysis (part 1).

    PubMed

    Glick, Henry A

    2011-03-01

    Basic sample size and power formulae for cost-effectiveness analysis have been established in the literature. These formulae are reviewed and the similarities and differences between sample size and power for cost-effectiveness analysis and for the analysis of other continuous variables such as changes in blood pressure or weight are described. The types of sample size and power tables that are commonly calculated for cost-effectiveness analysis are also described and the impact of varying the assumed parameter values on the resulting sample size and power estimates is discussed. Finally, the way in which the data for these calculations may be derived are discussed.

  11. Estimation of sample size and testing power (Part 4).

    PubMed

    Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo

    2012-01-01

Sample size estimation is necessary for any experimental or survey research. An appropriate estimation of sample size based on known information and statistical knowledge is of great significance. This article introduces methods of sample size estimation for difference tests with a one-factor, two-level design, including the sample size estimation formulas and their realization, both directly from the formulas and via the POWER procedure of SAS software, for quantitative and qualitative data. In addition, this article presents worked examples, which can guide researchers in implementing the repetition principle during the research design phase.

  12. [Formal sample size calculation and its limited validity in animal studies of medical basic research].

    PubMed

    Mayer, B; Muche, R

    2013-01-01

Animal studies are highly relevant for basic medical research, although their use is a matter of public controversy. Thus, from a biometrical point of view, an optimal sample size should be sought for such projects. Statistical sample size calculation is usually the appropriate methodology in planning medical research projects. However, the required information is often not valid, or only becomes available during the course of an animal experiment. This article critically discusses the validity of formal sample size calculation for animal studies. Within the discussion, some requirements are formulated to fundamentally regulate the process of sample size determination for animal experiments.

  13. An Effective Approach to Biomedical Information Extraction with Limited Training Data

    ERIC Educational Resources Information Center

    Jonnalagadda, Siddhartha

    2011-01-01

    In the current millennium, extensive use of computers and the internet caused an exponential increase in information. Few research areas are as important as information extraction, which primarily involves extracting concepts and the relations between them from free text. Limitations in the size of training data, lack of lexicons and lack of…

  14. 77 FR 9916 - California State Motor Vehicle and Nonroad Engine Pollution Control Standards; Mobile Cargo...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-21

    ... safety factors (including the potential increased risk of burn or fire) associated with compliance with... that improper sizing of VDECS with engines may be occurring. This coupled with a lack of concrete... available, etc.). Based on the lack of concrete evidence from the commenters that it has incurred...

  15. A sequential bioequivalence design with a potential ethical advantage.

    PubMed

    Fuglsang, Anders

    2014-07-01

    This paper introduces a two-stage approach for evaluation of bioequivalence, where, in contrast to the designs of Diane Potvin and co-workers, two stages are mandatory regardless of the data obtained at stage 1. The approach is derived from Potvin's method C. It is shown that under circumstances with relatively high variability and relatively low initial sample size, this method has an advantage over Potvin's approaches in terms of sample sizes while controlling type I error rates at or below 5% with a minute occasional trade-off in power. Ethically and economically, the method may thus be an attractive alternative to the Potvin designs. It is also shown that when using the method introduced here, average total sample sizes are rather independent of initial sample size. Finally, it is shown that when a futility rule in terms of sample size for stage 2 is incorporated into this method, i.e., when a second stage can be abolished due to sample size considerations, there is often an advantage in terms of power or sample size as compared to the previously published methods.

  16. Sample Size Determination for One- and Two-Sample Trimmed Mean Tests

    ERIC Educational Resources Information Center

    Luh, Wei-Ming; Olejnik, Stephen; Guo, Jiin-Huarng

    2008-01-01

    Formulas to determine the necessary sample sizes for parametric tests of group comparisons are available from several sources and appropriate when population distributions are normal. However, in the context of nonnormal population distributions, researchers recommend Yuen's trimmed mean test, but formulas to determine sample sizes have not been…

  17. Illusory Changes in Body Size Modulate Body Satisfaction in a Way That Is Related to Non-Clinical Eating Disorder Psychopathology

    PubMed Central

    Preston, Catherine; Ehrsson, H. Henrik

    2014-01-01

    Historically, body size overestimation has been linked to abnormal levels of body dissatisfaction found in eating disorders. However, recently this relationship has been called into question. Indeed, despite a link between how we perceive and how we feel about our body seeming intuitive, until now the lack of an experimental method to manipulate body size has meant that a causal link, even in healthy participants, has remained elusive. Recent developments in body perception research demonstrate that the perceptual experience of the body can be readily manipulated using multisensory illusions. The current study exploits such illusions to modulate perceived body size in an attempt to influence body satisfaction. Participants were presented with stereoscopic video images of slimmer and wider mannequin bodies viewed through head-mounted displays from a first-person perspective. Illusory ownership was induced by synchronously stroking the seen mannequin body with the unseen real body. Pre- and post-illusion affective and perceptual measures captured changes in perceived body size and body satisfaction. Illusory ownership of a slimmer body resulted in participants perceiving their actual body as slimmer and giving higher ratings of body satisfaction, demonstrating a direct link between perceptual and affective body representations. Change in body satisfaction following illusory ownership of a wider body, however, was related to the degree of (non-clinical) eating disorder psychopathology, which can be linked to fluctuating body representations found in clinical samples. The results suggest that body perception is linked to body satisfaction and may be of importance for eating disorder symptomology. PMID:24465698

  18. Relative tooth size at birth in primates: Life history correlates.

    PubMed

    Smith, Timothy D; Muchlinski, Magdalena N; Bucher, Wade R; Vinyard, Christopher J; Bonar, Christopher J; Evans, Sian; Williams, Lawrence E; DeLeon, Valerie B

    2017-11-01

    Dental eruption schedules have been closely linked to life history variables. Here we examine a sample of 50 perinatal primates (28 species) to determine whether life history traits correlate with relative tooth size at birth. Newborn primates were studied using serial histological sectioning. Volumes of deciduous premolars (dp2–dp4), replacement teeth (if any), and permanent molars (M1–2/3) of the upper jaw were measured and residuals from cranial length were calculated with least squares regressions to obtain relative dental volumes (RDVs). Relative dental volumes of deciduous or permanent teeth have an unclear relationship with relative neonatal mass in all primates. Relative palatal length (RPL), used as a proxy for midfacial size, is significantly, positively correlated with larger deciduous and permanent postcanine teeth. However, when strepsirrhines alone are examined, larger RPL is correlated with smaller RDV of permanent teeth. In the full sample, RDVs of deciduous premolars are significantly negatively correlated with relative gestation length (RGL), but have no clear relationship with relative weaning age. RDVs of molars lack a clear relationship with RGL; later weaning is associated with larger molar RDV, although correlations are not significant. When strepsirrhines alone are analyzed, clearer trends are present: longer gestations or later weaning are associated with smaller deciduous and larger permanent postcanine teeth (only gestational length correlations are significant). Our results indicate a broad trend that primates with the shortest RGLs precociously develop deciduous teeth; in strepsirrhines, the opposite trend is seen for permanent molars. Anthropoids delay growth of permanent teeth, while strepsirrhines with short RGLs are growing replacement teeth concurrently.
A comparison of neonatal volumes with existing information on extent of cusp mineralization indicates that growth of tooth germs and cusp mineralization may be selected for independently. © 2017 Wiley Periodicals, Inc.

  19. The cost of large numbers of hypothesis tests on power, effect size and sample size.

    PubMed

    Lazzeroni, L C; Ray, A

    2012-01-01

    Advances in high-throughput biology and computer science are driving an exponential increase in the number of hypothesis tests in genomics and other scientific disciplines. Studies using current genotyping platforms frequently include a million or more tests. In addition to the monetary cost, this increase imposes a statistical cost owing to the multiple testing corrections needed to avoid large numbers of false-positive results. To safeguard against the resulting loss of power, some have suggested sample sizes on the order of tens of thousands that can be impractical for many diseases or may lower the quality of phenotypic measurements. This study examines the relationship between the number of tests on the one hand, and power, detectable effect size or required sample size on the other. We show that once the number of tests is large, power can be maintained at a constant level, with comparatively small increases in the effect size or sample size. For example, at the 0.05 significance level, a 13% increase in sample size is needed to maintain 80% power for ten million tests compared with one million tests, whereas a 70% increase in sample size is needed for 10 tests compared with a single test. Relative costs are less when measured by increases in the detectable effect size. We provide an interactive Excel calculator to compute power, effect size or sample size when comparing study designs or genome platforms involving different numbers of hypothesis tests. The results are reassuring in an era of extreme multiple testing.
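
    The scaling described above can be reproduced with a short calculation. A minimal sketch, assuming a Bonferroni correction and a normal approximation in which the required sample size is proportional to (z_alpha + z_beta)^2:

```python
# Sketch of the scaling described above: relative sample size required to
# keep 80% power when the significance level is Bonferroni-corrected for
# m tests (normal approximation: n is proportional to (z_alpha + z_beta)^2).
from statistics import NormalDist

def relative_n(m, alpha=0.05, power=0.80):
    z = NormalDist().inv_cdf
    z_beta = z(power)
    corrected = (z(1 - alpha / (2 * m)) + z_beta) ** 2
    single = (z(1 - alpha / 2) + z_beta) ** 2
    return corrected / single

ten_vs_one = relative_n(10)                                 # ~1.70
ten_million_vs_million = relative_n(1e7) / relative_n(1e6)  # ~1.13
```

    The two ratios match the ~70% and ~13% increases quoted in the abstract.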

  20. The Statistics and Mathematics of High Dimension Low Sample Size Asymptotics.

    PubMed

    Shen, Dan; Shen, Haipeng; Zhu, Hongtu; Marron, J S

    2016-10-01

    The aim of this paper is to establish several deep theoretical properties of principal component analysis for multiple-component spike covariance models. Our new results reveal an asymptotic conical structure in critical sample eigendirections under the spike models with distinguishable (or indistinguishable) eigenvalues, when the sample size and/or the number of variables (or dimension) tend to infinity. The consistency of the sample eigenvectors relative to their population counterparts is determined by the ratio between the dimension and the product of the sample size with the spike size. When this ratio converges to a nonzero constant, the sample eigenvector converges to a cone, with a certain angle to its corresponding population eigenvector. In the High Dimension, Low Sample Size case, the angle between the sample eigenvector and its population counterpart converges to a limiting distribution. Several generalizations of the multi-spike covariance models are also explored, and additional theoretical results are presented.

  1. Influence of item distribution pattern and abundance on efficiency of benthic core sampling

    USGS Publications Warehouse

    Behney, Adam C.; O'Shaughnessy, Ryan; Eichholz, Michael W.; Stafford, Joshua D.

    2014-01-01

    Core sampling is a commonly used method to estimate benthic item density, but little information exists about factors influencing the accuracy and time-efficiency of this method. We simulated core sampling in a Geographic Information System framework by generating points (benthic items) and polygons (core samplers) to assess how sample size (number of core samples), core sampler size (cm2), distribution of benthic items, and item density affected the bias and precision of estimates of density, the detection probability of items, and the time-costs. When items were distributed randomly versus clumped, bias decreased and precision increased with increasing sample size and increased slightly with increasing core sampler size. Bias and precision were only affected by benthic item density at very low values (500–1,000 items/m2). Detection probability (the probability of capturing ≥ 1 item in a core sample if it is available for sampling) was substantially greater when items were distributed randomly as opposed to clumped. Taking more small diameter core samples was always more time-efficient than taking fewer large diameter samples. We are unable to present a single, optimal sample size, but provide information for researchers and managers to derive optimal sample sizes dependent on their research goals and environmental conditions.
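
    A toy version of the simulation idea above, showing why clumping lowers detection probability. All parameter values (item count, core radius, patch count) are illustrative, not the study's:

```python
# Toy version of the simulation idea above: Monte Carlo estimate of the
# probability that one circular core sample captures >= 1 benthic item,
# for randomly distributed vs. clumped items in a unit square. All parameter
# values (item density, core radius, patch count) are illustrative.
import random

def detection_prob(clumped, n_items=200, core_radius=0.05, trials=2000):
    rng = random.Random(42)  # fixed seed for reproducibility
    hits = 0
    for _ in range(trials):
        if clumped:
            # items concentrated around a few patch centers
            centers = [(rng.random(), rng.random()) for _ in range(5)]
            pts = [(cx + rng.gauss(0, 0.02), cy + rng.gauss(0, 0.02))
                   for cx, cy in centers for _ in range(n_items // 5)]
        else:
            pts = [(rng.random(), rng.random()) for _ in range(n_items)]
        x0, y0 = rng.random(), rng.random()  # random core placement
        if any((x - x0) ** 2 + (y - y0) ** 2 <= core_radius ** 2
               for x, y in pts):
            hits += 1
    return hits / trials

p_random = detection_prob(clumped=False)
p_clumped = detection_prob(clumped=True)  # clumping lowers detection
```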

  2. Moment and maximum likelihood estimators for Weibull distributions under length- and area-biased sampling

    Treesearch

    Jeffrey H. Gove

    2003-01-01

    Many of the most popular sampling schemes used in forestry are probability proportional to size methods. These methods are also referred to as size biased because sampling is actually from a weighted form of the underlying population distribution. Length- and area-biased sampling are special cases of size-biased sampling where the probability weighting comes from a...

  3. Optimal spatial sampling techniques for ground truth data in microwave remote sensing of soil moisture

    NASA Technical Reports Server (NTRS)

    Rao, R. G. S.; Ulaby, F. T.

    1977-01-01

    The paper examines optimal sampling techniques for obtaining accurate spatial averages of soil moisture, at various depths and for cell sizes in the range 2.5-40 acres, with a minimum number of samples. Both simple random sampling and stratified sampling procedures are used to reach a set of recommended sample sizes for each depth and for each cell size. Major conclusions from statistical sampling test results are that (1) the number of samples required decreases with increasing depth; (2) when the total number of samples cannot be prespecified or the moisture in only one single layer is of interest, then a simple random sample procedure should be used which is based on the observed mean and SD for data from a single field; (3) when the total number of samples can be prespecified and the objective is to measure the soil moisture profile with depth, then stratified random sampling based on optimal allocation should be used; and (4) decreasing the sensor resolution cell size leads to fairly large decreases in sample sizes with stratified sampling procedures, whereas only a moderate decrease is obtained in simple random sampling procedures.
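
    The "optimal allocation" in conclusion (3) is classically Neyman allocation: sampling effort is assigned in proportion to stratum size times stratum standard deviation. A minimal sketch, with made-up strata (the depth-layer sizes and moisture SDs below are not from the paper):

```python
# Minimal sketch of Neyman (optimal) allocation for stratified sampling:
# allocate the total sample n in proportion to stratum size times stratum SD.
# The strata below (made-up sizes and moisture SDs) are illustrative only.
def neyman_allocation(total_n, strata):
    """strata: list of (N_h, sd_h) pairs; returns sample size per stratum."""
    weights = [N_h * sd_h for N_h, sd_h in strata]
    total = sum(weights)
    return [round(total_n * w / total) for w in weights]

# more variable strata receive more of the sampling effort
alloc = neyman_allocation(60, [(100, 8.0), (100, 4.0), (100, 2.0)])  # [34, 17, 9]
```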

  4. The role of perceived barriers and objectively measured physical activity in adults aged 65-100.

    PubMed

    Gellert, Paul; Witham, Miles D; Crombie, Iain K; Donnan, Peter T; McMurdo, Marion E T; Sniehotta, Falko F

    2015-05-01

    To test the predictive utility of perceived barriers to objectively measured physical activity levels in a stratified sample of older adults when accounting for social-cognitive determinants proposed by the Theory of Planned Behaviour (TPB), and economic and demographic factors. Data were analysed from the Physical Activity Cohort Scotland survey, a representative and stratified (65-80 and 80+ years; deprived and affluent) sample of 584 community-dwelling older people, resident in Tayside, Scotland. Physical activity was measured objectively by accelerometry. Perceived barriers clustered around the areas of poor health, lack of interest, lack of safety and lack of access. Perceived poor health and lack of interest, but not lack of access or concerns about personal safety, predicted physical activity after controlling for demographic, economic and TPB variables. Perceived person-related barriers (poor health and lack of interest) seem to be more strongly associated with physical activity levels than perceived environmental barriers (safety and access) in a large sample of older adults. Perceived barriers are modifiable and may be a target for future interventions. © The Author 2015. Published by Oxford University Press on behalf of the British Geriatrics Society. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  5. Effects of sample size on estimates of population growth rates calculated with matrix models.

    PubMed

    Fiske, Ian J; Bruna, Emilio M; Bolker, Benjamin M

    2008-08-28

    Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda-Jensen's Inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated if sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more-realistic inverse J-shaped population structure exacerbated this bias. However our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities.
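
    The propagation of vital-rate sampling variance into lambda can be illustrated with a small simulation. This is a sketch with invented vital rates, not the authors' plant data; it shows the spread of lambda estimates shrinking with sample size (the bias discussed above is a second-order consequence of the same nonlinearity):

```python
# Illustrative sketch (invented vital rates, not the authors' data): sampling
# variance in an estimated survival rate propagates nonlinearly into lambda,
# the dominant eigenvalue of a 2-stage projection matrix, and shrinks as the
# number of sampled individuals grows.
import math
import random

rng = random.Random(0)

def lam(survival, growth=0.3, fecundity=1.2, adult_survival=0.5):
    # dominant eigenvalue of [[s(1-g), f], [s*g, adult_survival]], closed form
    a, b = survival * (1 - growth), survival * growth
    half_trace = (a + adult_survival) / 2
    return half_trace + math.sqrt(((a - adult_survival) / 2) ** 2 + fecundity * b)

true_lambda = lam(0.5)  # lambda at the true survival rate

def spread(n, reps=1000, true_s=0.5):
    # estimate survival from n individuals, recompute lambda, repeat;
    # return the SD of the lambda estimates across replicates
    ests = [lam(sum(rng.random() < true_s for _ in range(n)) / n)
            for _ in range(reps)]
    mean = sum(ests) / reps
    return math.sqrt(sum((e - mean) ** 2 for e in ests) / reps)

spread_small = spread(10)    # noisy lambda estimates
spread_large = spread(1000)  # sampling variance largely gone
```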

  6. Grizzly bear density in Glacier National Park, Montana

    USGS Publications Warehouse

    Kendall, K.C.; Stetz, J.B.; Roon, David A.; Waits, L.P.; Boulanger, J.B.; Paetkau, David

    2008-01-01

    We present the first rigorous estimate of grizzly bear (Ursus arctos) population density and distribution in and around Glacier National Park (GNP), Montana, USA. We used genetic analysis to identify individual bears from hair samples collected via 2 concurrent sampling methods: 1) systematically distributed, baited, barbed-wire hair traps and 2) unbaited bear rub trees found along trails. We used Huggins closed mixture models in Program MARK to estimate total population size and developed a method to account for heterogeneity caused by unequal access to rub trees. We corrected our estimate for lack of geographic closure using a new method that utilizes information from radiocollared bears and the distribution of bears captured with DNA sampling. Adjusted for closure, the average number of grizzly bears in our study area was 240.7 (95% CI = 202–303) in 1998 and 240.6 (95% CI = 205–304) in 2000. Average grizzly bear density was 30 bears/1,000 km2, with 2.4 times more bears detected per hair trap inside than outside GNP. We provide baseline information important for managing one of the few remaining populations of grizzlies in the contiguous United States.

  7. Mineral particles content in recent snow at Summit (Greenland)

    NASA Astrophysics Data System (ADS)

    Drab, E.; Gaudichet, A.; Jaffrezo, J. L.; Colin, J. L.

    The mineral insoluble fraction of snowpit samples collected at Summit is investigated, representing deposition from summer 1987 to summer 1991. We attempt to describe the particles observed in the series, which show very large seasonal variations. Elemental, mineralogical and size distribution studies are carried out on four samples selected according to the chemical profile of the snowpit (two samples from spring and two from winter) using X-ray fluorescence spectrometry and analytical transmission electron microscopy. Results indicate a large predominance of soil-derived particles originating from arid or semi-arid regions of the Northern Hemisphere. The mineralogy clearly indicates a high contribution of muscovite-illite associated with a low kaolinite/chlorite ratio, together with a near absence of smectite. This supports the hypothesis of an Asian source. Several other factors are consistent with this Asian source, such as the recent climatology and the close timing between the Asian dust-storm period and the peak of dust concentration in the ice. The mineralogy of the insoluble particles in the snow is similar between winter and spring, suggesting that the change of concentration between the seasons is more strongly linked to changes in atmospheric parameters than to changes in the source regions.

  8. 76 FR 56141 - Notice of Intent To Request New Information Collection

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-12

    ... level surveys of similar scope and size. The sample for each selected community will be strategically... of 2 hours per sample community. Full Study: The maximum sample size for the full study is 2,812... questionnaires. The initial sample size for this phase of the research is 100 respondents (10 respondents per...

  9. Determining Sample Size for Accurate Estimation of the Squared Multiple Correlation Coefficient.

    ERIC Educational Resources Information Center

    Algina, James; Olejnik, Stephen

    2000-01-01

    Discusses determining sample size for estimation of the squared multiple correlation coefficient and presents regression equations that permit determination of the sample size for estimating this parameter for up to 20 predictor variables. (SLD)

  10. When Can Clades Be Potentially Resolved with Morphology?

    PubMed Central

    Bapst, David W.

    2013-01-01

    Morphology-based phylogenetic analyses are the only option for reconstructing relationships among extinct lineages, but often find support for conflicting hypotheses of relationships. The resulting lack of phylogenetic resolution is generally explained in terms of data quality and methodological issues, such as character selection. A previous suggestion is that sampling ancestral morphotaxa or sampling multiple taxa descended from a long-lived, unchanging lineage can also yield clades which have no opportunity to share synapomorphies. This lack of character information leads to a lack of ‘intrinsic’ resolution, an issue that cannot be solved with additional morphological data. It is unclear how often we should expect clades to be intrinsically resolvable in realistic circumstances, as intrinsic resolution must increase as taxonomic sampling decreases. Using branching simulations, I quantify intrinsic resolution across several models of morphological differentiation and taxonomic sampling. Intrinsically unresolvable clades are found to be relatively frequent in simulations of both extinct and living taxa under realistic sampling scenarios, implying that intrinsic resolution is an issue for morphology-based analyses of phylogeny. Simulations which vary the rates of sampling and differentiation were tested for their agreement to observed distributions of durations from well-sampled fossil records and also having high intrinsic resolution. This combination only occurs in those datasets when differentiation and sampling rates are both unrealistically high relative to branching and extinction rates. Thus, the poor phylogenetic resolution occasionally observed in morphological phylogenetics may result from a lack of intrinsic resolvability within groups. PMID:23638034

  11. [Practical aspects regarding sample size in clinical research].

    PubMed

    Vega Ramos, B; Peraza Yanes, O; Herrera Correa, G; Saldívar Toraya, S

    1996-01-01

    Knowing the right sample size lets us judge whether the results published in medical papers come from a suitable design and whether their conclusions are supported by the statistical analysis. To estimate the sample size we must consider the type I error, type II error, variance, the size of the effect, and the significance level and power of the test. The choice of formula depends on the type of study: a prevalence study, an estimation of mean values, or a comparative study. In this paper we explain some basic statistical concepts and present four simple examples of sample size estimation.
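
    For the comparative case with binary outcomes, the elements listed above (type I and II errors, effect size) combine into a standard formula. A hedged sketch using a normal approximation; the proportions are invented examples, not the paper's:

```python
# Hedged sketch for a comparative study with binary outcomes: sample size
# per group to detect a difference between two proportions (normal
# approximation). The proportions below are invented examples.
from math import ceil
from statistics import NormalDist

def n_two_proportions(p1, p2, alpha=0.05, power=0.80):
    z = NormalDist().inv_cdf
    z_a, z_b = z(1 - alpha / 2), z(power)  # type I and type II error quantiles
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_a + z_b) ** 2 * variance / (p1 - p2) ** 2)

# e.g. detect an increase in prevalence from 30% to 50%
n = n_two_proportions(0.30, 0.50)  # 91 per group
```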

  12. Breaking Free of Sample Size Dogma to Perform Innovative Translational Research

    PubMed Central

    Bacchetti, Peter; Deeks, Steven G.; McCune, Joseph M.

    2011-01-01

    Innovative clinical and translational research is often delayed or prevented by reviewers’ expectations that any study performed in humans must be shown in advance to have high statistical power. This supposed requirement is not justifiable and is contradicted by the reality that increasing sample size produces diminishing marginal returns. Studies of new ideas often must start small (sometimes even with an N of 1) because of cost and feasibility concerns, and recent statistical work shows that small sample sizes for such research can produce more projected scientific value per dollar spent than larger sample sizes. Renouncing false dogma about sample size would remove a serious barrier to innovation and translation. PMID:21677197
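
    The "diminishing marginal returns" claim can be made concrete with a power curve. A minimal sketch assuming a two-sample z-test and an illustrative standardized effect size of 0.5 (neither is from the article):

```python
# Sketch of the diminishing-returns point: the power gained by doubling a
# study shrinks as n grows (two-sample z-test, illustrative effect size 0.5).
from statistics import NormalDist

def power(n_per_group, effect=0.5, alpha=0.05):
    dist = NormalDist()
    z_a = dist.inv_cdf(1 - alpha / 2)
    noncentrality = effect * (n_per_group / 2) ** 0.5
    return 1 - dist.cdf(z_a - noncentrality)

gain_small = power(40) - power(20)    # doubling a small study buys a lot
gain_large = power(400) - power(200)  # doubling a large study buys almost nothing
```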

  13. Meta-analyzing dependent correlations: an SPSS macro and an R script.

    PubMed

    Cheung, Shu Fai; Chan, Darius K-S

    2014-06-01

    The presence of dependent correlation is a common problem in meta-analysis. Cheung and Chan (2004, 2008) have shown that samplewise-adjusted procedures perform better than the more commonly adopted simple within-sample mean procedures. However, samplewise-adjusted procedures have rarely been applied in meta-analytic reviews, probably due to the lack of suitable ready-to-use programs. In this article, we compare the samplewise-adjusted procedures with existing procedures to handle dependent effect sizes, and present the samplewise-adjusted procedures in a way that will make them more accessible to researchers conducting meta-analysis. We also introduce two tools, an SPSS macro and an R script, that researchers can apply to their meta-analyses; these tools are compatible with existing meta-analysis software packages.

  14. Accurate condensed history Monte Carlo simulation of electron transport. II. Application to ion chamber response simulations.

    PubMed

    Kawrakow, I

    2000-03-01

    In this report the condensed history Monte Carlo simulation of electron transport and its application to the calculation of ion chamber response is discussed. It is shown that the strong step-size dependencies and lack of convergence to the correct answer previously observed are the combined effect of the following artifacts caused by the EGS4/PRESTA implementation of the condensed history technique: dose underprediction due to PRESTA's pathlength correction and lateral correlation algorithm; dose overprediction due to the boundary crossing algorithm; and dose overprediction due to the breakdown of the fictitious cross-section method for sampling distances between discrete interactions and the inaccurate evaluation of energy-dependent quantities. These artifacts are now understood quantitatively, and analytical expressions for their effect are given.

  15. Reading and Language Disorders: The Importance of Both Quantity and Quality

    PubMed Central

    Newbury, Dianne F.; Monaco, Anthony P.; Paracchini, Silvia

    2014-01-01

    Reading and language disorders are common childhood conditions that often co-occur with each other and with other neurodevelopmental impairments. There is strong evidence that disorders, such as dyslexia and Specific Language Impairment (SLI), have a genetic basis, but we expect the contributing genetic factors to be complex in nature. To date, only a few genes have been implicated in these traits. Their functional characterization has provided novel insight into the biology of neurodevelopmental disorders. However, the lack of biological markers and clear diagnostic criteria have prevented the collection of the large sample sizes required for well-powered genome-wide screens. One of the main challenges of the field will be to combine careful clinical assessment with high throughput genetic technologies within multidisciplinary collaborations. PMID:24705331

  16. Sexual minorities and selection of a primary care physician in a midwestern U.S. city.

    PubMed

    Labig, Chalmer E; Peterson, Tim O

    2006-01-01

    How and why sexual minorities select a primary care physician is critical to the development of methods for attracting these clients to a physician's practice. Data obtained from a sample of sexual minorities in a mid-size city in our nation's heartland would indicate that these patients are loyal when the primary care physician has a positive attitude toward their sexual orientation. The data also confirms that most sexual minorities select same sex physicians but not necessarily same sexual orientation physicians because of lack of knowledge of physicians' sexual orientation. Family practice physicians and other primary care physicians can reach out to this population by encouraging word of mouth advertising and by displaying literature on health issues for all sexual orientations in their offices.

  17. Measurement of tree canopy architecture

    NASA Technical Reports Server (NTRS)

    Martens, S. N.; Ustin, S. L.; Norman, J. M.

    1991-01-01

    The lack of accurate extensive geometric data on tree canopies has retarded development and validation of radiative transfer models. A stratified sampling method was devised to measure the three-dimensional geometry of 16 walnut trees which had received irrigation treatments of either 100 or 33 per cent of evapotranspirational (ET) demand for the previous two years. Graphic reconstructions of the three-dimensional geometry were verified by 58 independent measurements. The distributions of stem- and leaf-size classes, lengths, and angle classes were determined and used to calculate leaf area index (LAI), stem area, and biomass. Reduced irrigation trees have lower biomass of stems, leaves and fruit, lower LAI, steeper leaf angles and altered biomass allocation to large stems. These data can be used in ecological models that link canopy processes with remotely sensed measurements.

  18. What is the optimum sample size for the study of peatland testate amoeba assemblages?

    PubMed

    Mazei, Yuri A; Tsyganov, Andrey N; Esaulov, Anton S; Tychkov, Alexander Yu; Payne, Richard J

    2017-10-01

    Testate amoebae are widely used in ecological and palaeoecological studies of peatlands, particularly as indicators of surface wetness. To ensure data are robust and comparable it is important to consider methodological factors which may affect results. One significant question which has not been directly addressed in previous studies is how sample size (expressed here as number of Sphagnum stems) affects data quality. In three contrasting locations in a Russian peatland we extracted samples of differing size, analysed testate amoebae and calculated a number of widely-used indices: species richness, Simpson diversity, compositional dissimilarity from the largest sample and transfer function predictions of water table depth. We found that there was a trend for larger samples to contain more species across the range of commonly-used sample sizes in ecological studies. Smaller samples sometimes failed to produce counts of testate amoebae often considered minimally adequate. It seems likely that analyses based on samples of different sizes may not produce consistent data. Decisions about sample size need to reflect trade-offs between logistics, data quality, spatial resolution and the disturbance involved in sample extraction. For most common ecological applications we suggest that samples of more than eight Sphagnum stems are likely to be desirable. Copyright © 2017 Elsevier GmbH. All rights reserved.
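
    The trend reported above (larger samples contain more species) can be illustrated with rarefaction, the expected number of taxa seen in a random subsample of n individuals. A sketch with an invented community; the counts are not the study's data:

```python
# Hedged illustration of the sample-size effect on species richness via
# rarefaction: expected number of taxa detected in a random subsample of n
# individuals. The community counts below are invented, not the study's data.
from math import comb

def expected_richness(counts, n):
    """counts: individuals per taxon in the full sample; n: subsample size."""
    N = sum(counts)
    # each taxon is detected unless the subsample misses all of its individuals
    return sum(1 - comb(N - c, n) / comb(N, n) for c in counts)

# a few common taxa plus a tail of rare ones
community = [50, 30, 10, 5, 2, 1, 1, 1]
small = expected_richness(community, 10)  # ~3.6 taxa expected
large = expected_richness(community, 80)  # ~7.4 taxa expected
```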

  19. Quantitative characterization of gold nanoparticles by size-exclusion and hydrodynamic chromatography, coupled to inductively coupled plasma mass spectrometry and quasi-elastic light scattering.

    PubMed

    Pitkänen, Leena; Montoro Bustos, Antonio R; Murphy, Karen E; Winchester, Michael R; Striegel, André M

    2017-08-18

    The physicochemical characterization of nanoparticles (NPs) is of paramount importance for tailoring and optimizing the properties of these materials as well as for evaluating the environmental fate and impact of the NPs. Characterizing the size and chemical identity of disperse NP sample populations can be accomplished by coupling size-based separation methods to physical and chemical detection methods. Informed decisions regarding the NPs can only be made, however, if the separations themselves are quantitative, i.e., if all or most of the analyte elutes from the column within the course of the experiment. We undertake here the size-exclusion chromatographic characterization of Au NPs spanning a six-fold range in mean size. The main problem which has plagued the size-exclusion chromatography (SEC) analysis of Au NPs, namely lack of quantitation accountability due to generally poor NP recovery from the columns, is overcome by carefully matching eluent formulation with the appropriate stationary phase chemistry, and by the use of on-line inductively coupled plasma mass spectrometry (ICP-MS) detection. Here, for the first time, we demonstrate the quantitative analysis of Au NPs by SEC/ICP-MS, including the analysis of a ternary NP blend. The SEC separations are contrasted to HDC/ICP-MS (HDC: hydrodynamic chromatography) separations employing the same stationary phase chemistry. Additionally, analysis of Au NPs by HDC with on-line quasi-elastic light scattering (QELS) allowed for continuous determination of NP size across the chromatographic profiles, circumventing issues related to the shedding of fines from the SEC columns. The use of chemically homogeneous reference materials with well-defined size range allowed for better assessment of the accuracy and precision of the analyses, and for a more direct interpretation of results, than would be possible employing less rigorously characterized analytes. Published by Elsevier B.V.

  20. A meta-analytic review of overgeneral memory: The role of trauma history, mood, and the presence of posttraumatic stress disorder.

    PubMed

    Ono, Miyuki; Devilly, Grant J; Shum, David H K

    2016-03-01

A number of studies suggest that a history of trauma, depression, and posttraumatic stress disorder (PTSD) are associated with autobiographical memory deficits, notably overgeneral memory (OGM). However, whether there are any group differences in the nature and magnitude of OGM has not been evaluated. Thus, a meta-analysis was conducted to quantify group differences in OGM. Effect sizes were pooled from studies examining the effect on OGM of a history of trauma (e.g., childhood sexual abuse) and of the presence of PTSD or current depression (e.g., major depressive disorder). Using multiple search engines, 13 trauma studies and 12 depression studies were included in this review. A depression effect on OGM was observed with a large effect size, most evident in the lack of specific memories, especially to positive cues. An effect of trauma history on OGM was observed with a medium effect size, most evident in the presence of overgeneral responses to negative cues. The results also suggested an amplified memory deficit in the presence of PTSD. That is, the effect sizes of OGM among individuals with PTSD were very large and relatively equal across different types of OGM. Future studies that directly compare the differences of OGM among 4 samples (i.e., controls, current depression without trauma history, trauma history without depression, and trauma history and depression) would be warranted to verify the current findings. (c) 2016 APA, all rights reserved.

  1. Sample Size and Allocation of Effort in Point Count Sampling of Birds in Bottomland Hardwood Forests

    Treesearch

    Winston P. Smith; Daniel J. Twedt; Robert J. Cooper; David A. Wiedenfeld; Paul B. Hamel; Robert P. Ford

    1995-01-01

    To examine sample size requirements and optimum allocation of effort in point count sampling of bottomland hardwood forests, we computed minimum sample sizes from variation recorded during 82 point counts (May 7-May 16, 1992) from three localities containing three habitat types across three regions of the Mississippi Alluvial Valley (MAV). Also, we estimated the effect...
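The minimum sample sizes described here are derived from the variability recorded across point counts. A minimal sketch of one common criterion (a normal approximation with a relative-precision target; the paper's exact formula is not given in the abstract, so the function below is illustrative):

```python
from math import ceil
from statistics import NormalDist

def min_point_counts(cv: float, rel_error: float, confidence: float = 0.95) -> int:
    """Smallest number of point counts such that the mean count is
    estimated to within +/- rel_error (a fraction of the mean) at the
    given confidence level: n = (z * CV / r)**2, rounded up."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # two-sided critical value
    return ceil((z * cv / rel_error) ** 2)

# e.g. a CV of 0.60 among counts and a +/-20% precision target
n = min_point_counts(cv=0.60, rel_error=0.20)
```

Higher among-count variation or a tighter precision target drives the required number of counts up quadratically, which is why allocation of effort across habitats matters.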

  2. Monitoring Species of Concern Using Noninvasive Genetic Sampling and Capture-Recapture Methods

    DTIC Science & Technology

    2016-11-01

ABBREVIATIONS AICc Akaike’s Information Criterion with small sample size correction AZGFD Arizona Game and Fish Department BMGR Barry M. Goldwater...MNKA Minimum Number Known Alive N Abundance Ne Effective Population Size NGS Noninvasive Genetic Sampling NGS-CR Noninvasive Genetic...parameter estimates from capture-recapture models require sufficient sample sizes, capture probabilities and low capture biases. For NGS-CR, sample

  3. On Using a Pilot Sample Variance for Sample Size Determination in the Detection of Differences between Two Means: Power Consideration

    ERIC Educational Resources Information Center

    Shieh, Gwowen

    2013-01-01

    The a priori determination of a proper sample size necessary to achieve some specified power is an important problem encountered frequently in practical studies. To establish the needed sample size for a two-sample "t" test, researchers may conduct the power analysis by specifying scientifically important values as the underlying population means…
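The a priori power analysis described here is commonly carried out with a normal approximation; a minimal sketch (the exact t-based sample size runs slightly larger for small groups, and the pilot-variance issue the paper studies is not modeled here):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(delta: float, sigma: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Per-group sample size for a two-sided, two-sample comparison of
    means: n = 2 * ((z_{1-alpha/2} + z_{power}) * sigma / delta)**2,
    rounded up (normal approximation to the t test)."""
    nd = NormalDist()
    z_a = nd.inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_b = nd.inv_cdf(power)          # power quantile
    return ceil(2 * ((z_a + z_b) * sigma / delta) ** 2)

# detect a mean difference of 5 units when sigma = 10, alpha = .05, power = .80
n = n_per_group(delta=5, sigma=10)
```

When sigma comes from a pilot sample, as in this paper, the calculated n inherits the pilot variance's sampling error, so the realized power can fall short of the nominal target.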

  4. Sampling for area estimation: A comparison of full-frame sampling with the sample segment approach

    NASA Technical Reports Server (NTRS)

    Hixson, M.; Bauer, M. E.; Davis, B. J. (Principal Investigator)

    1979-01-01

The author has identified the following significant results. Full-frame classifications of wheat and non-wheat for eighty counties in Kansas were repetitively sampled to simulate alternative sampling plans. Evaluation of four sampling schemes involving different numbers of samples and different sizes of sampling units shows that the precision of the wheat estimates increased as the segment size decreased and the number of segments was increased. Although the average bias associated with the various sampling schemes was not significantly different, the maximum absolute bias was directly related to the size of the sampling unit.

  5. Electrical and magnetic properties of nano-sized magnesium ferrite

    NASA Astrophysics Data System (ADS)

    T, Smitha; X, Sheena; J, Binu P.; Mohammed, E. M.

    2015-02-01

Nano-sized magnesium ferrite was synthesized using the sol-gel technique. Structural characterization was done using an X-ray diffractometer and a Fourier transform infrared (FTIR) spectrometer. A vibrating sample magnetometer was used to record the magnetic measurements. XRD analysis reveals that the prepared sample is single-phase without any impurity. Particle size calculation shows that the average crystallite size of the sample is 19 nm. FTIR analysis confirmed the spinel structure of the prepared samples. The magnetic measurements show that the sample is ferromagnetic with a high degree of isotropy. Hysteresis loops were traced at 100 K and 300 K. DC electrical resistivity measurements show the semiconducting nature of the sample.

  6. Comparison of Sample Size by Bootstrap and by Formulas Based on Normal Distribution Assumption.

    PubMed

    Wang, Zuozhen

    2018-01-01

Bootstrapping is distribution-independent and thus provides an indirect way to estimate the sample size for a clinical trial based on a relatively small sample. In this paper, sample size estimation to compare two parallel-design arms for continuous data by a bootstrap procedure is presented for various test types (inequality, non-inferiority, superiority, and equivalence). Meanwhile, sample size calculation by mathematical formulas (under the normal distribution assumption) for the identical data is also carried out. The power difference between the two calculation methods is acceptably small for all test types, which shows that the bootstrap procedure is a credible technique for sample size estimation. We then compared the powers determined using the two methods on data that violate the normal distribution assumption. To accommodate this feature of the data, the nonparametric Wilcoxon test was applied to compare the two groups during bootstrap power estimation. As a result, the power estimated by the normal distribution-based formula is far larger than that estimated by bootstrap for each specific sample size per group. Hence, for this type of data, it is preferable that the bootstrap method be applied for sample size calculation at the outset, and that the same statistical method as used in the subsequent statistical analysis be employed for each bootstrap sample during bootstrap sample size estimation, provided historical data are available that are well representative of the population to which the proposed trial plans to extrapolate.
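The bootstrap procedure described here can be sketched as follows: resample the historical data at a candidate per-group size, apply the planned test to each resample, and take the rejection fraction as the power estimate. The function below is an illustrative assumption, not the paper's code; a normal critical value stands in for whatever test the real analysis would use:

```python
import random
from math import sqrt
from statistics import NormalDist, mean, variance

def bootstrap_power(pilot, effect, n, alpha=0.05, n_boot=2000, seed=0):
    """Estimate power at per-group size n: resample the pilot data with
    replacement for arm A, shift a second resample by the assumed
    treatment effect for arm B, and count how often a two-sided test
    rejects.  A normal critical value stands in for the test used in
    the real analysis (e.g. the Wilcoxon test for non-normal data)."""
    rng = random.Random(seed)
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    hits = 0
    for _ in range(n_boot):
        a = rng.choices(pilot, k=n)
        b = [x + effect for x in rng.choices(pilot, k=n)]
        se = sqrt(variance(a) / n + variance(b) / n)  # unpooled standard error
        if abs((mean(b) - mean(a)) / se) > z_crit:
            hits += 1
    return hits / n_boot

gen = random.Random(1)
pilot = [gen.gauss(50, 10) for _ in range(30)]    # stand-in for historical data
power = bootstrap_power(pilot, effect=8.0, n=25)  # raise n until power >= target
```

The per-group size would then be increased until the estimated power reaches the target (e.g. 0.80), using the same test statistic as the planned analysis.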

  7. Sample Size in Qualitative Interview Studies: Guided by Information Power.

    PubMed

    Malterud, Kirsti; Siersma, Volkert Dirk; Guassora, Ann Dorrit

    2015-11-27

Sample sizes must be ascertained in qualitative studies, as in quantitative studies, but not by the same means. The prevailing concept for sample size in qualitative studies is "saturation." Saturation is closely tied to a specific methodology, and the term is inconsistently applied. We propose the concept "information power" to guide adequate sample size for qualitative studies. Information power indicates that the more information the sample holds that is relevant for the actual study, the fewer participants are needed. We suggest that the size of a sample with sufficient information power depends on (a) the aim of the study, (b) sample specificity, (c) use of established theory, (d) quality of dialogue, and (e) analysis strategy. We present a model in which these elements of information and their relevant dimensions are related to information power. Application of this model in the planning and during data collection of a qualitative study is discussed. © The Author(s) 2015.

  8. Three potato centromeres are associated with distinct haplotypes with or without megabase-sized satellite repeat arrays.

    PubMed

    Wang, Linsheng; Zeng, Zixian; Zhang, Wenli; Jiang, Jiming

    2014-02-01

    We report discoveries of different haplotypes associated with the centromeres of three potato chromosomes, including haplotypes composed of long arrays of satellite repeats and haplotypes lacking the same repeats. These results are in favor of the hypothesis that satellite repeat-based centromeres may originate from neocentromeres that lack repeats.

  9. Enhancing Student Motivation as Evidenced by Improved Academic Growth and Increased Work Completion.

    ERIC Educational Resources Information Center

    Belcher, Gay; Macari, Nancy

    This project evaluated a program for enhancing student motivation as evidenced by improved academic growth and increased work completion. The targeted population consisted of fifth graders in a small school in a medium-sized rural community in the Midwest. The problem of lack of achievement motivation and lack of student concern about academic…

  10. Private Pre-University Education in Romania: Mixing Control with Lack of Strategy

    ERIC Educational Resources Information Center

    Stanus, Cristina

    2014-01-01

    This paper approaches private provision of pre-university education in Romania, exploring available data on the sector's size and main characteristics and evaluating the extent to which the current regulatory framework enables positive effects in terms of freedom of choice, quality, equity, and social cohesion. The paper argues that the lack of a…

  11. The Critical Importance of Data Reduction Calibrations in the Interpretability of S-type Asteroid Spectra

    NASA Technical Reports Server (NTRS)

    Gaffey, Michael J.

    2005-01-01

There is significant dispute concerning the interpretation and meteoritic affinities of S-type asteroids. Some of this arises from the use of inappropriate analysis methods and the derivation of conclusions which cannot be supported by those interpretive methodologies [1]. The most frequently applied inappropriate technique is curve matching. Whether matching spectra from a spectral library or mixing end-member spectra to match the asteroid spectrum, curve matching for S-type spectra suffers from a suite of weaknesses that are virtually impossible to overcome. Chief among these is the lack of a comprehensive comparison set. Lacking a complete library that includes both the mineralogical variations and the spectrally significant physical variations (e.g., particle size, petrographic relationships, etc.), curve matches are plagued with potential unresolved ambiguities. The other major weakness of virtually all curve matching efforts is that equal weight is given to matching all portions of the spectrum. In actuality, some portions of the spectrum (e.g., centers of absorption features) must be matched very accurately while other portions of the spectrum (e.g., continuum regions and overall slopes) do not require good matches since they are strongly affected by parameters unrelated to the mineralogy of the sample.

  12. Diversity and Spatiotemporal Distribution of Larval Odonate Assemblages in Temperate Neotropical Farm Ponds

    PubMed Central

    Pires, Mateus Marques; Kotzian, Carla Bender; Spies, Marcia Regina

    2014-01-01

Farm ponds help maintain diversity in altered landscapes. However, studies on the features that drive diversity in this type of habitat in the Neotropics are still lacking, especially for the insect fauna. We analyzed the spatial and temporal distribution of odonate larval assemblages in farm ponds. Odonates were sampled monthly at four farm ponds from March 2008 to February 2009 in a temperate montane region of southern Brazil. A small number of genera were frequent and accounted for most of the dominant fauna. The composition of dominant genera differed among ponds. Local spatial drivers such as area, hydroperiod, and margin vegetation structure likely explain these results better than spatial predictors, owing to the small size of the study area. Circular analysis detected a seasonal effect on assemblage abundance but not on richness. Seasonality in abundance was related to the life cycles of a few dominant genera; this result was explained by temperature rather than rainfall, given the temperate climate of the region studied. The persistence of dominant genera and the sparse occurrence of many taxa over time probably led to the lack of a seasonal pattern in assemblage richness. PMID:25527585

  13. Use of national clinical databases for informing and for evaluating health care policies.

    PubMed

    Black, Nick; Tan, Stefanie

    2013-02-01

    Policy-makers and analysts could make use of national clinical databases either to inform or to evaluate meso-level (organisation and delivery of health care) and macro-level (national) policies. Reviewing the use of 15 of the best established databases in England, we identify and describe four published examples of each use. These show that policy-makers can either make use of the data itself or of research based on the database. For evaluating policies, the major advantages are the huge sample sizes available, the generalisability of the data, its immediate availability and historic information. The principal methodological challenges involve the need for risk adjustment and time-series analysis. Given their usefulness in the policy arena, there are several reasons why national clinical databases have not been used more, some due to a lack of 'push' by their custodians and some to the lack of 'pull' by policy-makers. Greater exploitation of these valuable resources would be facilitated by policy-makers' and custodians' increased awareness, minimisation of legal restrictions on data use, improvements in the quality of databases and a library of examples of applications to policy. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  14. Lack of gender effects on gray matter volumes in adolescent generalized anxiety disorder.

    PubMed

    Liao, Mei; Yang, Fan; Zhang, Yan; He, Zhong; Su, Linyan; Li, Lingjiang

    2014-02-01

    Previous epidemiological and clinical studies have reported gender differences in prevalence and clinical features of generalized anxiety disorder (GAD). Such gender differences in clinical phenomenology suggest that the underlying neural circuitry of GAD could also be different in males and females. This study aimed to explore the possible gender effect on gray matter volumes in adolescents with GAD. Twenty-six adolescent GAD patients and 25 healthy controls participated and underwent high-resolution structural magnetic resonance scans. Voxel-based morphometry (VBM) was used to investigate gray matter alterations. Our study revealed a significant diagnosis main effect in the right putamen, with larger gray matter volumes in GAD patients compared to healthy controls, and a significant gender main effect in the left precuneus/posterior cingulate cortex, with larger gray matter volumes in males compared to females. No gender-by-diagnosis interaction effect was found in this study. The relatively small sample size in this study might result in a lack of power to demonstrate gender effects on brain structure in GAD. The results suggested that there are differences in gray matter volumes between males and females, but gray matter volumes in GAD are not influenced by gender. © 2013 Published by Elsevier B.V.

  15. Synchrotron-Based X-ray Microtomography Characterization of the Effect of Processing Variables on Porosity Formation in Laser Power-Bed Additive Manufacturing of Ti-6Al-4V

    NASA Astrophysics Data System (ADS)

    Cunningham, Ross; Narra, Sneha P.; Montgomery, Colt; Beuth, Jack; Rollett, A. D.

    2017-03-01

    The porosity observed in additively manufactured (AM) parts is a potential concern for components intended to undergo high-cycle fatigue without post-processing to remove such defects. The morphology of pores can help identify their cause: irregularly shaped lack of fusion or key-holing pores can usually be linked to incorrect processing parameters, while spherical pores suggest trapped gas. Synchrotron-based x-ray microtomography was performed on laser powder-bed AM Ti-6Al-4V samples over a range of processing conditions to investigate the effects of processing parameters on porosity. The process mapping technique was used to control melt pool size. Tomography was also performed on the powder to measure porosity within the powder that may transfer to the parts. As observed previously in experiments with electron beam powder-bed fabrication, significant variations in porosity were found as a function of the processing parameters. A clear connection between processing parameters and resulting porosity formation mechanism was observed in that inadequate melt pool overlap resulted in lack-of-fusion pores whereas excess power density produced keyhole pores.

  16. Effects of inbreeding on potential and realized immune responses in Tenebrio molitor.

    PubMed

    Rantala, Markus J; Viitaniemi, Heidi; Roff, Derek A

    2011-06-01

Although numerous studies on vertebrates suggest that inbreeding reduces their resistance against parasites and pathogens, studies in insects have found contradictory evidence. In this study we tested the effect of 1 generation of brother-sister mating (inbreeding) on potential and realized immune responses and other life-history traits in Tenebrio molitor. We found that inbreeding reduced adult mass and pre-adult survival and increased development time, suggesting that inbreeding reduced the condition of the adults and thus potentially made them more susceptible to physiological stress. However, we found no significant effect of inbreeding on the potential immune response (encapsulation response), but inbreeding reduced the realized immune response (resistance against the entomopathogenic fungus Beauveria bassiana). There was a significant family effect on encapsulation response, but no family effect on resistance against the fungus. Given that this latter trait showed significant inbreeding depression and that the sample size for the family-effect analysis was small, it is likely that the lack of a significant family effect is due to reduced statistical power, rather than the lack of a heritable basis to the trait. Our study highlights the importance of using pathogens and parasites in immunoecological studies.

  17. Reexamining Sample Size Requirements for Multivariate, Abundance-Based Community Research: When Resources are Limited, the Research Does Not Have to Be.

    PubMed

    Forcino, Frank L; Leighton, Lindsey R; Twerdy, Pamela; Cahill, James F

    2015-01-01

Community ecologists commonly perform multivariate techniques (e.g., ordination, cluster analysis) to assess patterns and gradients of taxonomic variation. A critical requirement for a meaningful statistical analysis is accurate information on the taxa found within an ecological sample. However, oversampling (too many individuals counted per sample) also comes at a cost, particularly for ecological systems in which identification and quantification are substantially more resource-consuming than the field expedition itself. In such systems, an increasingly larger sample size will eventually result in diminishing returns in improving any pattern or gradient revealed by the data, but will also lead to continually increasing costs. Here, we examine 396 datasets: 44 previously published and 352 created datasets. Using meta-analytic and simulation-based approaches, this paper seeks (1) to determine minimal sample sizes required to produce robust multivariate statistical results when conducting abundance-based, community ecology research, and (2) to determine the dataset parameters (i.e., evenness, number of taxa, number of samples) that require larger sample sizes, regardless of resource availability. We found that in the 44 previously published and the 220 created datasets with randomly chosen abundances, a conservative estimate of a sample size of 58 produced the same multivariate results as all larger sample sizes. However, this minimal number varies as a function of evenness, where increased evenness resulted in increased minimal sample sizes. Sample sizes as small as 58 individuals are sufficient for a broad range of multivariate abundance-based research. In cases when resource availability is the limiting factor for conducting a project (e.g., small university, time to conduct the research project), statistically viable results can still be obtained with less of an investment.

  18. Frictional behaviour of sandstone: A sample-size dependent triaxial investigation

    NASA Astrophysics Data System (ADS)

    Roshan, Hamid; Masoumi, Hossein; Regenauer-Lieb, Klaus

    2017-01-01

    Frictional behaviour of rocks from the initial stage of loading to final shear displacement along the formed shear plane has been widely investigated in the past. However the effect of sample size on such frictional behaviour has not attracted much attention. This is mainly related to the limitations in rock testing facilities as well as the complex mechanisms involved in sample-size dependent frictional behaviour of rocks. In this study, a suite of advanced triaxial experiments was performed on Gosford sandstone samples at different sizes and confining pressures. The post-peak response of the rock along the formed shear plane has been captured for the analysis with particular interest in sample-size dependency. Several important phenomena have been observed from the results of this study: a) the rate of transition from brittleness to ductility in rock is sample-size dependent where the relatively smaller samples showed faster transition toward ductility at any confining pressure; b) the sample size influences the angle of formed shear band and c) the friction coefficient of the formed shear plane is sample-size dependent where the relatively smaller sample exhibits lower friction coefficient compared to larger samples. We interpret our results in terms of a thermodynamics approach in which the frictional properties for finite deformation are viewed as encompassing a multitude of ephemeral slipping surfaces prior to the formation of the through going fracture. The final fracture itself is seen as a result of the self-organisation of a sufficiently large ensemble of micro-slip surfaces and therefore consistent in terms of the theory of thermodynamics. This assumption vindicates the use of classical rock mechanics experiments to constrain failure of pressure sensitive rocks and the future imaging of these micro-slips opens an exciting path for research in rock failure mechanisms.

  19. Body size evolution in an old insect order: No evidence for Cope's Rule in spite of fitness benefits of large size.

    PubMed

    Waller, John T; Svensson, Erik I

    2017-09-01

    We integrate field data and phylogenetic comparative analyses to investigate causes of body size evolution and stasis in an old insect order: odonates ("dragonflies and damselflies"). Fossil evidence for "Cope's Rule" in odonates is weak or nonexistent since the last major extinction event 65 million years ago, yet selection studies show consistent positive selection for increased body size among adults. In particular, we find that large males in natural populations of the banded demoiselle (Calopteryx splendens) over several generations have consistent fitness benefits both in terms of survival and mating success. Additionally, there was no evidence for stabilizing or conflicting selection between fitness components within the adult life-stage. This lack of stabilizing selection during the adult life-stage was independently supported by a literature survey on different male and female fitness components from several odonate species. We did detect several significant body size shifts among extant taxa using comparative methods and a large new molecular phylogeny for odonates. We suggest that the lack of Cope's rule in odonates results from conflicting selection between fitness advantages of large adult size and costs of long larval development. We also discuss competing explanations for body size stasis in this insect group. © 2017 The Author(s). Evolution © 2017 The Society for the Study of Evolution.

  20. A Note on Sample Size and Solution Propriety for Confirmatory Factor Analytic Models

    ERIC Educational Resources Information Center

    Jackson, Dennis L.; Voth, Jennifer; Frey, Marc P.

    2013-01-01

    Determining an appropriate sample size for use in latent variable modeling techniques has presented ongoing challenges to researchers. In particular, small sample sizes are known to present concerns over sampling error for the variances and covariances on which model estimation is based, as well as for fit indexes and convergence failures. The…

  1. Regulation of radial glial survival by signals from the meninges

    PubMed Central

    Radakovits, Randor; Barros, Claudia S.; Belvindrah, Richard; Patton, Bruce; Müller, Ulrich

    2009-01-01

Radial glial cells (RGCs) in the developing cerebral cortex are progenitors for neurons and glia, and their processes serve as guideposts for migrating neurons. So far, it has remained unclear whether RGC processes also control the function of RGCs more directly. Here we show that RGC numbers and cortical size are reduced in mice lacking β1 integrins in RGCs. TUNEL stainings and time-lapse video recordings demonstrate that β1-deficient RGC processes detach from the meningeal basement membrane (BM), followed by apoptotic death of RGCs. Apoptosis is also induced by surgical removal of the meninges. Finally, mice lacking the BM components laminin α2 and α4 show defects in the attachment of RGC processes at the meninges, a reduction in cortical size, and enhanced apoptosis of RGCs. Our findings demonstrate that attachment of RGC processes at the meninges is important for RGC survival and the control of cortical size. PMID:19535581

  2. Regulation of radial glial survival by signals from the meninges.

    PubMed

    Radakovits, Randor; Barros, Claudia S; Belvindrah, Richard; Patton, Bruce; Müller, Ulrich

    2009-06-17

Radial glial cells (RGCs) in the developing cerebral cortex are progenitors for neurons and glia, and their processes serve as guideposts for migrating neurons. So far, it has remained unclear whether RGC processes also control the function of RGCs more directly. Here, we show that RGC numbers and cortical size are reduced in mice lacking beta1 integrins in RGCs. TUNEL stainings and time-lapse video recordings demonstrate that beta1-deficient RGC processes detach from the meningeal basement membrane (BM), followed by apoptotic death of RGCs. Apoptosis is also induced by surgical removal of the meninges. Finally, mice lacking the BM components laminin alpha2 and alpha4 show defects in the attachment of RGC processes at the meninges, a reduction in cortical size, and enhanced apoptosis of RGCs. Our findings demonstrate that attachment of RGC processes at the meninges is important for RGC survival and the control of cortical size.

  3. Life-history variation of a neotropical thrush challenges food limitation theory

    USGS Publications Warehouse

    Ferretti, V.; Llambias, P.E.; Martin, T.E.

    2005-01-01

Since David Lack first proposed that birds rear as many young as they can nourish, food limitation has been accepted as the primary explanation for variation in clutch size and other life-history traits in birds. The importance of food limitation in life-history variation, however, was recently questioned on theoretical grounds. Here, we show that clutch size differences between two populations of a neotropical thrush were contrary to expectations under Lack's food limitation hypothesis. Larger clutch sizes were found in a population with higher nestling starvation rate (i.e. greater food limitation). We experimentally equalized clutches between populations to verify this difference in food limitation. Our experiment confirmed greater food limitation in the population with larger mean clutch size. In addition, incubation bout length and nestling growth rate were also contrary to predictions of food limitation theory. Our results demonstrate the inability of food limitation to explain differences in several life-history traits: clutch size, incubation behaviour, parental feeding rate and nestling growth rate. These life-history traits were better explained by inter-population differences in nest predation rates. Food limitation may be less important to life history evolution in birds than suggested by traditional theory. © 2005 The Royal Society.

  4. Evolution of gigantism in nine-spined sticklebacks.

    PubMed

    Herczeg, Gábor; Gonda, Abigél; Merilä, Juha

    2009-12-01

    The relaxation of predation and interspecific competition are hypothesized to allow evolution toward "optimal" body size in island environments, resulting in the gigantism of small organisms. We tested this hypothesis by studying a small teleost (nine-spined stickleback, Pungitius pungitius) from four marine and five lake (diverse fish community) and nine pond (impoverished fish community) populations. In line with theory, pond fish tended to be larger than their marine or lake conspecifics, sometimes reaching giant sizes. In two geographically independent cases when predatory fish had been introduced into ponds, fish were smaller than those in nearby ponds lacking predators. Pond fish were also smaller when found in sympatry with three-spined stickleback (Gasterosteus aculeatus) than those in ponds lacking competitors. Size-at-age analyses demonstrated that larger size in ponds was achieved by both increased growth rates and extended longevity of pond fish. Results from a common garden experiment indicate that the growth differences had a genetic basis: pond fish developed two to three times higher body mass than marine fish during 36 weeks of growth under similar conditions. Hence, reduced risk of predation and interspecific competition appear to be chief forces driving insular body size evolution toward gigantism.

  5. A computer program for sample size computations for banding studies

    USGS Publications Warehouse

    Wilson, K.R.; Nichols, J.D.; Hines, J.E.

    1989-01-01

    Sample sizes necessary for estimating survival rates of banded birds, adults and young, are derived based on specified levels of precision. The banding study can be new or ongoing. The desired coefficient of variation (CV) for annual survival estimates, the CV for mean annual survival estimates, and the length of the study must be specified to compute sample sizes. A computer program is available for computation of the sample sizes, and a description of the input and output is provided.
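The abstract does not reproduce the program's formulas. As a minimal sketch, if the annual survival estimate is treated as a simple binomial proportion, its coefficient of variation is CV(Ŝ) = sqrt((1 − S)/(nS)), which can be inverted for the number of bandings required (the actual program uses band-recovery models, so this is only an illustrative lower bound):

```python
from math import ceil

def birds_needed(survival: float, target_cv: float) -> int:
    """Bandings needed so that a binomial survival estimate S-hat,
    with CV = sqrt((1 - S) / (n * S)), meets the desired CV:
    n = (1 - S) / (S * CV**2), rounded up."""
    return ceil((1 - survival) / (survival * target_cv ** 2))

# anticipated annual survival of 0.5 and a target CV of 10%
n = birds_needed(0.5, 0.10)
```

Lower anticipated survival or a tighter CV target both inflate the required number of banded birds, which is why the program asks for both quantities plus the study length up front.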

  6. Probability of coincidental similarity among the orbits of small bodies - I. Pairing

    NASA Astrophysics Data System (ADS)

    Jopek, Tadeusz Jan; Bronikowska, Małgorzata

    2017-09-01

The probability of coincidental clustering among the orbits of comets, asteroids and meteoroids depends on many factors, such as the size of the orbital sample searched for clusters and the size of the identified group; it differs for groups of 2, 3, 4, … members. The probability of coincidental clustering is assessed by numerical simulation; therefore, it also depends on the method used to generate the synthetic orbits. We have tested the impact of some of these factors. For a given size of the orbital sample, we have assessed the probability of random pairing among several orbital populations of different sizes. We have found how these probabilities vary with the size of the orbital samples. Finally, keeping the size of the orbital sample fixed, we have shown that the probability of random pairing can be significantly different for orbital samples obtained by different observation techniques. Also, for the user's convenience, we have obtained several formulae which, for a given size of the orbital sample, can be used to calculate the similarity threshold corresponding to a small probability of coincidental similarity between two orbits.
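The simulation described here can be illustrated with a toy stand-in: generate synthetic "orbits" as random points in a parameter space, call two of them similar when their distance falls below a threshold, and count the trials containing at least one chance pair. Real studies use an orbital similarity D-criterion (e.g. Southworth-Hawkins) rather than the Euclidean distance assumed below:

```python
import random
from math import dist

def p_coincidental_pair(sample_size: int, threshold: float,
                        n_trials: int = 500, dims: int = 3, seed: int = 42) -> float:
    """Monte Carlo estimate of the probability that a sample of randomly
    generated points (synthetic 'orbits' in a toy parameter space)
    contains at least one pair closer than the similarity threshold."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        pts = [tuple(rng.random() for _ in range(dims)) for _ in range(sample_size)]
        # any() short-circuits at the first chance pair in this trial
        if any(dist(pts[i], pts[j]) < threshold
               for i in range(len(pts)) for j in range(i + 1, len(pts))):
            hits += 1
    return hits / n_trials

# the chance of a coincidental "pair" grows quickly with the sample size
p_small = p_coincidental_pair(sample_size=20, threshold=0.1)
p_large = p_coincidental_pair(sample_size=200, threshold=0.1)
```

Inverting this relationship — fixing a small acceptable probability and solving for the threshold — is what the paper's formulae provide for a given sample size.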

  7. Designing a two-rank acceptance sampling plan for quality inspection of geospatial data products

    NASA Astrophysics Data System (ADS)

    Tong, Xiaohua; Wang, Zhenhua; Xie, Huan; Liang, Dan; Jiang, Zuoqin; Li, Jinchao; Li, Jun

    2011-10-01

    To address the disadvantages of classical sampling plans designed for traditional industrial products, we propose a two-rank acceptance sampling plan (TRASP) for the inspection of geospatial data outputs based on the acceptance quality level (AQL). The first rank sampling plan is to inspect the lot consisting of map sheets, and the second is to inspect the lot consisting of features in an individual map sheet. The TRASP design is formulated as an optimization problem with respect to sample size and acceptance number, which covers two lot size cases. The first case is for a small lot size with nonconformities being modeled by a hypergeometric distribution function, and the second is for a larger lot size with nonconformities being modeled by a Poisson distribution function. The proposed TRASP is illustrated through two empirical case studies. Our analysis demonstrates that: (1) the proposed TRASP provides a general approach for quality inspection of geospatial data outputs consisting of non-uniform items and (2) the proposed acceptance sampling plan based on TRASP performs better than other classical sampling plans. It overcomes the drawbacks of percent sampling, i.e., "strictness for large lot size, toleration for small lot size," and those of a national standard used specifically for industrial outputs, i.e., "lots with different sizes corresponding to the same sampling plan."
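The two lot-size cases can be illustrated by their acceptance probabilities, i.e. the operating-characteristic values of a plan with sample size n and acceptance number c. This is a generic textbook sketch of the two distributional models named in the abstract, not the TRASP optimization itself.

```python
import math

def accept_prob_hypergeom(N, D, n, c):
    """P(accept a lot of N items containing D nonconforming) when at
    most c nonconforming items may appear in a sample of n drawn
    without replacement (the small-lot case)."""
    total = math.comb(N, n)
    return sum(math.comb(D, k) * math.comb(N - D, n - k)
               for k in range(min(c, D) + 1)) / total

def accept_prob_poisson(n, p, c):
    """Large-lot case: the nonconformity count is modeled as Poisson
    with mean n*p."""
    lam = n * p
    return sum(math.exp(-lam) * lam ** k / math.factorial(k)
               for k in range(c + 1))
```

For large lots the Poisson form closely approximates the hypergeometric one; e.g. with N = 10000, D = 100, n = 50 and c = 1 the two agree to within about 0.001.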

  8. A geochemical sampling technique for use in areas of active alpine glaciation: an application from the central Alaska Range

    USGS Publications Warehouse

    Stephens, G.C.; Evenson, E.B.; Detra, D.E.

    1990-01-01

    In mountainous regions containing extensive glacier systems there is a lack of suitable material for conventional geochemical sampling. As a result, in most geochemical sampling programs a few stream-sediment samples collected at, or near, the terminus of valley glaciers are used to evaluate the mineral potential of the glaciated area. We have developed and tested a technique which utilizes the medial moraines of valley glaciers for systematic geochemical exploration of the glacial catchment area. Moraine sampling provides geochemical information that is site-specific in that geochemical anomalies can be traced directly up-ice to bedrock sources. Traverses were made across the Trident and Susitna glaciers in the central Alaska Range where fine-grained (clay to sand size) samples were collected from each medial moraine. These samples were prepared and chemically analyzed to determine the concentration of specific elements. Fifty pebbles were collected at each moraine for archival purposes and for subsequent lithologic identification. Additionally, fifty cobbles and fifty boulders were examined and described at each sample site to determine the nature and abundance of lithologies present in the catchment area, the extent and nature of visible mineralization, the presence and intensity of hydrothermal alteration and the existence of veins, dikes and other minor structural features. Results from the central Alaska Range have delineated four distinct multi-element anomalies which are a response to potential mineralization up-ice from the medial moraine traverse. By integrating the lithologic, mineralogical and geochemical data the probable geological setting of the geochemical anomalies is determined. © 1990.

  9. Human breath metabolomics using an optimized noninvasive exhaled breath condensate sampler

    PubMed Central

    Zamuruyev, Konstantin O.; Aksenov, Alexander A.; Pasamontes, Alberto; Brown, Joshua F.; Pettit, Dayna R.; Foutouhi, Soraya; Weimer, Bart C.; Schivo, Michael; Kenyon, Nicholas J.; Delplanque, Jean-Pierre; Davis, Cristina E.

    2017-01-01

    Exhaled breath condensate (EBC) analysis is a developing field with tremendous promise to advance personalized, non-invasive health diagnostics as new analytical instrumentation platforms and detection methods are developed. Multiple commercially-available and researcher-built experimental samplers are reported in the literature. However, there is very limited information available to determine an effective breath sampling approach, especially regarding the dependence of breath sample metabolomic content on the collection device design and sampling methodology. This lack of an optimal standard procedure results in a range of reported results that are sometimes contradictory. Here, we present a design of a portable human EBC sampler optimized for collection and preservation of the rich metabolomic content of breath. The performance of the engineered device is compared to two commercially available breath collection devices: the RTube™ and TurboDECCS. A number of design and performance parameters are considered, including: condenser temperature stability during sampling, collection efficiency, condenser material choice, and saliva contamination in the collected breath samples. The significance of the biological content of breath samples, collected with each device, is evaluated with a set of mass spectrometry methods and was the primary factor for evaluating device performance. The design includes an adjustable mass-size threshold for aerodynamic filtering of saliva droplets from the breath flow. Engineering an inexpensive device that allows efficient collection of metabolomic-rich breath samples is intended to aid further advancement in the field of breath analysis for non-invasive health diagnostics. EBC sampling from human volunteers was performed under UC Davis IRB protocol 63701-3 (09/30/2014-07/07/2017). PMID:28004639

  10. Human breath metabolomics using an optimized non-invasive exhaled breath condensate sampler.

    PubMed

    Zamuruyev, Konstantin O; Aksenov, Alexander A; Pasamontes, Alberto; Brown, Joshua F; Pettit, Dayna R; Foutouhi, Soraya; Weimer, Bart C; Schivo, Michael; Kenyon, Nicholas J; Delplanque, Jean-Pierre; Davis, Cristina E

    2016-12-22

    Exhaled breath condensate (EBC) analysis is a developing field with tremendous promise to advance personalized, non-invasive health diagnostics as new analytical instrumentation platforms and detection methods are developed. Multiple commercially-available and researcher-built experimental samplers are reported in the literature. However, there is very limited information available to determine an effective breath sampling approach, especially regarding the dependence of breath sample metabolomic content on the collection device design and sampling methodology. This lack of an optimal standard procedure results in a range of reported results that are sometimes contradictory. Here, we present a design of a portable human EBC sampler optimized for collection and preservation of the rich metabolomic content of breath. The performance of the engineered device is compared to two commercially available breath collection devices: the RTube™ and TurboDECCS. A number of design and performance parameters are considered, including: condenser temperature stability during sampling, collection efficiency, condenser material choice, and saliva contamination in the collected breath samples. The significance of the biological content of breath samples, collected with each device, is evaluated with a set of mass spectrometry methods and was the primary factor for evaluating device performance. The design includes an adjustable mass-size threshold for aerodynamic filtering of saliva droplets from the breath flow. Engineering an inexpensive device that allows efficient collection of metabolomic-rich breath samples is intended to aid further advancement in the field of breath analysis for non-invasive health diagnostics. EBC sampling from human volunteers was performed under UC Davis IRB protocol 63701-3 (09/30/2014-07/07/2017).

  11. Sample size considerations using mathematical models: an example with Chlamydia trachomatis infection and its sequelae pelvic inflammatory disease.

    PubMed

    Herzog, Sereina A; Low, Nicola; Berghold, Andrea

    2015-06-19

    The success of an intervention to prevent the complications of an infection is influenced by the natural history of the infection. Assumptions about the temporal relationship between infection and the development of sequelae can affect the predicted effect size of an intervention and the sample size calculation. This study investigates how a mathematical model can be used to inform sample size calculations for a randomised controlled trial (RCT) using the example of Chlamydia trachomatis infection and pelvic inflammatory disease (PID). We used a compartmental model to imitate the structure of a published RCT. We considered three different processes for the timing of PID development, in relation to the initial C. trachomatis infection: immediate, constant throughout, or at the end of the infectious period. For each process we assumed that, of all women infected, the same fraction would develop PID in the absence of an intervention. We examined two sets of assumptions used to calculate the sample size in a published RCT that investigated the effect of chlamydia screening on PID incidence. We also investigated the influence of the natural history parameters of chlamydia on the required sample size. The assumed event rates and effect sizes used for the sample size calculation implicitly determined the temporal relationship between chlamydia infection and PID in the model. Even small changes in the assumed PID incidence and relative risk (RR) led to considerable differences in the hypothesised mechanism of PID development. The RR and the sample size needed per group also depend on the natural history parameters of chlamydia. Mathematical modelling helps to understand the temporal relationship between an infection and its sequelae and can show how uncertainties about natural history parameters affect sample size calculations when planning an RCT.
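The kind of calculation whose assumptions the model interrogates can be sketched with the standard two-proportion formula for a parallel-group trial. This is a generic textbook formula, not the compartmental-model approach, and the event rates in the example are illustrative, not those of the published screening RCT.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.8):
    """Sample size per arm to detect a fall in event risk from p1 to p2
    (normal approximation, two-sided alpha). A generic textbook formula;
    the illustrative rates below are hypothetical."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    term = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
            + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5)
    return ceil(term ** 2 / (p1 - p2) ** 2)

# e.g. a hypothetical PID risk of 3% under control vs 1.5% under screening
n_arm = n_per_group(0.03, 0.015)
```

The paper's point is that the pair (assumed incidence, assumed RR) fed into such a formula implicitly fixes the timing of PID development, so small changes in either input imply quite different natural histories.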

  12. Unequal cluster sizes in stepped-wedge cluster randomised trials: a systematic review

    PubMed Central

    Morris, Tom; Gray, Laura

    2017-01-01

    Objectives To investigate the extent to which cluster sizes vary in stepped-wedge cluster randomised trials (SW-CRT) and whether any variability is accounted for during the sample size calculation and analysis of these trials. Setting Any, not limited to healthcare settings. Participants Any taking part in an SW-CRT published up to March 2016. Primary and secondary outcome measures The primary outcome is the variability in cluster sizes, measured by the coefficient of variation (CV) in cluster size. Secondary outcomes include the difference between the cluster sizes assumed during the sample size calculation and those observed during the trial, any reported variability in cluster sizes and whether the methods of sample size calculation and methods of analysis accounted for any variability in cluster sizes. Results Of the 101 included SW-CRTs, 48% mentioned that the included clusters were known to vary in size, yet only 13% of these accounted for this during the calculation of the sample size. However, 69% of the trials did use a method of analysis appropriate for when clusters vary in size. Full trial reports were available for 53 trials. The CV was calculated for 23 of these: the median CV was 0.41 (IQR: 0.22–0.52). Actual cluster sizes could be compared with those assumed during the sample size calculation for 14 (26%) of the trial reports; the cluster sizes were between 29% and 480% of that which had been assumed. Conclusions Cluster sizes often vary in SW-CRTs. Reporting of SW-CRTs also remains suboptimal. The effect of unequal cluster sizes on the statistical power of SW-CRTs needs further exploration and methods appropriate to studies with unequal cluster sizes need to be employed. PMID:29146637
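The review's primary outcome, and the way cluster-size variability typically enters a sample size calculation, can be sketched as follows. The design-effect formula shown, DE = 1 + ((CV² + 1)·m̄ − 1)·ICC, is the standard adjustment for parallel-group cluster trials, not something specific to this review, and, as the review notes, stepped-wedge designs need further methodological work.

```python
from statistics import mean, pstdev

def cluster_cv(sizes):
    """Coefficient of variation of cluster sizes (SD / mean), the
    review's primary outcome."""
    return pstdev(sizes) / mean(sizes)

def design_effect(sizes, icc):
    """Standard unequal-cluster-size design effect for a parallel CRT:
    DE = 1 + ((CV**2 + 1) * m_bar - 1) * ICC."""
    m_bar = mean(sizes)
    return 1 + ((cluster_cv(sizes) ** 2 + 1) * m_bar - 1) * icc
```

With equal cluster sizes the formula collapses to the familiar 1 + (m − 1)·ICC; a CV of 0.41 (the review's median) inflates the required sample size accordingly.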

  13. Challenges faced by primary care physicians when prescribing for patients with chronic diseases in a teaching hospital in Malaysia: a qualitative study.

    PubMed

    Sellappans, Renukha; Lai, Pauline Siew Mei; Ng, Chirk Jenn

    2015-08-27

    The aim of this study was to identify the challenges faced by primary care physicians (PCPs) when prescribing medications for patients with chronic diseases in a teaching hospital in Malaysia. 3 focus group discussions were conducted between July and August 2012 in a teaching primary care clinic in Malaysia. A topic guide was used to facilitate the discussions which were audio-recorded, transcribed verbatim and analysed using a thematic approach. PCPs affiliated to the primary care clinic were purposively sampled to include a range of clinical experience. Sample size was determined by thematic saturation of the data. 14 family medicine trainees and 5 service medical officers participated in this study. PCPs faced difficulties in prescribing for patients with chronic diseases due to a lack of communication among different healthcare providers. Medication changes made by hospital specialists, for example, were often not communicated to the PCPs leading to drug duplications and interactions. The use of paper-based medical records and electronic prescribing created a dual record system for patients' medications and became a problem when the 2 records did not tally. Patients sometimes visited different doctors and pharmacies for their medications and this resulted in the lack of continuity of care. PCPs also faced difficulties in addressing patients' concerns, and dealing with patients' medication requests and adherence issues. Some PCPs lacked time and knowledge to advise patients about their medications and faced difficulties in managing side effects caused by the patients' complex medication regimen. PCPs faced prescribing challenges related to patients, their own practice and the local health system when prescribing for patients with chronic diseases. These challenges must be addressed in order to improve chronic disease management in primary care and, more importantly, patient safety. Published by the BMJ Publishing Group Limited. 

  14. Drying step optimization to obtain large-size transparent magnesium-aluminate spinel samples

    NASA Astrophysics Data System (ADS)

    Petit, Johan; Lallemant, Lucile

    2017-05-01

    In the transparent ceramics processing, the green body elaboration step is probably the most critical one. Among the known techniques, wet shaping processes are particularly interesting because they enable the particles to find an optimum position on their own. Nevertheless, the presence of water molecules leads to drying issues. During water removal, the concentration gradient induces cracks that limit the sample size: laboratory samples are generally less damaged because of their small size, but upscaling the samples for industrial applications leads to an increased cracking probability. Thanks to optimization of the drying step, large-size spinel samples were obtained.

  15. The relationship between national-level carbon dioxide emissions and population size: an assessment of regional and temporal variation, 1960-2005.

    PubMed

    Jorgenson, Andrew K; Clark, Brett

    2013-01-01

    This study examines the regional and temporal differences in the statistical relationship between national-level carbon dioxide emissions and national-level population size. The authors analyze panel data from 1960 to 2005 for a diverse sample of nations, and employ descriptive statistics and rigorous panel regression modeling techniques. Initial descriptive analyses indicate that all regions experienced overall increases in carbon emissions and population size during the 45-year period of investigation, but with notable differences. For carbon emissions, the sample of countries in Asia experienced the largest percent increase, followed by countries in Latin America, Africa, and lastly the sample of relatively affluent countries in Europe, North America, and Oceania combined. For population size, the sample of countries in Africa experienced the largest percent increase, followed by countries in Latin America, Asia, and the combined sample of countries in Europe, North America, and Oceania. Findings for two-way fixed effects panel regression elasticity models of national-level carbon emissions indicate that the estimated elasticity coefficient for population size is much smaller for nations in Africa than for nations in other regions of the world. Regarding potential temporal changes, from 1960 to 2005 the estimated elasticity coefficient for population size decreased by 25% for the sample of African countries, 14% for the sample of Asian countries, and 6.5% for the sample of Latin American countries, but remained the same in size for the sample of countries in Europe, North America, and Oceania. Overall, while population size continues to be the primary driver of total national-level anthropogenic carbon dioxide emissions, the findings for this study highlight the need for future research and policies to recognize that the actual impacts of population size on national-level carbon emissions differ across both time and region.

  16. Research gaps identified during systematic reviews of clinical trials: glass-ionomer cements.

    PubMed

    Mickenautsch, Steffen

    2012-06-29

    To report the results of an audit concerning research gaps in clinical trials that were accepted for appraisal in authored and published systematic reviews regarding the application of glass-ionomer cements (GIC) in dental practice. Information concerning research gaps in trial precision was extracted, following a framework that included classification of the research gap reasons: 'imprecision of information (results)', 'biased information', 'inconsistency or unknown consistency' and 'not the right information', as well as research gap characterization using PICOS elements: population (P), intervention (I), comparison (C), outcomes (O) and setting (S). Internal trial validity assessment was based on the understanding that successful control for systematic error cannot be assured on the basis of inclusion of adequate methods alone, but also requires empirical evidence about whether such an attempt was successful. A comprehensive and interconnected coverage of GIC-related clinical topics was established. The most common reasons found for gaps in trial precision were a lack of sufficient trials and a lack of sufficiently large sample sizes. Only a few research gaps were ascribed to 'lack of information' caused by a focus on mainly surrogate trial outcomes. According to the chosen assessment criteria, a lack of adequate randomisation, allocation concealment and blinding/masking was noted in trials covering all reviewed GIC topics (selection- and detection/performance-bias risk). Trial results appear to be less affected by loss to follow-up (attrition-bias risk). This audit represents an adjunct to the systematic review articles it has covered. Its results do not change the systematic reviews' conclusions but highlight in detail existing research gaps concerning the precision and internal validity of the reviewed trials. These gaps should be addressed in future GIC-related clinical research.

  17. Sample size calculation for a proof of concept study.

    PubMed

    Yin, Yin

    2002-05-01

    Sample size calculation is vital for a confirmatory clinical trial since the regulatory agencies require the probability of making Type I error to be significantly small, usually less than 0.05 or 0.025. However, the importance of the sample size calculation for studies conducted by a pharmaceutical company for internal decision making, e.g., a proof of concept (PoC) study, has not received enough attention. This article introduces a Bayesian method that identifies the information required for planning a PoC and the process of sample size calculation. The results will be presented in terms of the relationships between the regulatory requirements, the probability of reaching the regulatory requirements, the goalpost for PoC, and the sample size used for PoC.
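A frequentist slice of this idea can be sketched by simulation: the probability that the observed effect in a PoC study clears an internal goalpost, for one assumed true effect. The paper's Bayesian method additionally places a prior on the true effect and links the goalpost to the regulatory requirement; all names and parameters here are hypothetical.

```python
import random

def poc_success_prob(n_per_arm, true_effect, sd, goalpost,
                     n_sims=4000, seed=7):
    """Monte Carlo probability that the observed mean difference in a
    two-arm PoC study clears an internal goalpost, for one assumed true
    effect (hypothetical Gaussian setup; the paper's Bayesian approach
    additionally averages over a prior on the true effect)."""
    rng = random.Random(seed)
    se = sd * (2.0 / n_per_arm) ** 0.5  # SE of a difference in means
    wins = sum(rng.gauss(true_effect, se) > goalpost
               for _ in range(n_sims))
    return wins / n_sims
```

Plotting this probability against n for a candidate goalpost is one way to trade off the PoC sample size against the chance of a correct go/no-go decision.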

  18. Sensitivity and specificity of normality tests and consequences on reference interval accuracy at small sample size: a computer-simulation study.

    PubMed

    Le Boedec, Kevin

    2016-12-01

    According to international guidelines, parametric methods must be chosen for RI construction when the sample size is small and the distribution is Gaussian. However, normality tests may not be accurate at small sample size. The purpose of the study was to evaluate normality test performance to properly identify samples extracted from a Gaussian population at small sample sizes, and assess the consequences on RI accuracy of applying parametric methods to samples that falsely identified the parent population as Gaussian. Samples of n = 60 and n = 30 values were randomly selected 100 times from simulated Gaussian, lognormal, and asymmetric populations of 10,000 values. The sensitivity and specificity of 4 normality tests were compared. Reference intervals were calculated using 6 different statistical methods from samples that falsely identified the parent population as Gaussian, and their accuracy was compared. Shapiro-Wilk and D'Agostino-Pearson tests were the best performing normality tests. However, their specificity was poor at sample size n = 30 (specificity for P < .05: .51 and .50, respectively). The best significance levels identified when n = 30 were 0.19 for the Shapiro-Wilk test and 0.18 for the D'Agostino-Pearson test. Using parametric methods on samples extracted from a lognormal population but falsely identified as Gaussian led to clinically relevant inaccuracies. At small sample size, normality tests may lead to erroneous use of parametric methods to build RIs. Using nonparametric methods (or alternatively Box-Cox transformation) on all samples regardless of their distribution, or adjusting the significance level of normality tests depending on sample size, would limit the risk of constructing inaccurate RIs. © 2016 American Society for Veterinary Clinical Pathology.
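The mechanism behind the reported inaccuracy can be sketched without any normality test at all: draw one sample from a lognormal population and compare the parametric interval (mean ± 1.96 SD), which is only valid under Gaussianity, with the nonparametric percentile interval. Parameters are illustrative, not those of the simulation study.

```python
import random
from statistics import mean, stdev

def reference_intervals(n=30, seed=3):
    """Parametric (mean +/- 1.96 SD) vs nonparametric (2.5th-97.5th
    percentile) 95% reference intervals on one sample drawn from a
    lognormal population (illustrative parameters, not the study's)."""
    rng = random.Random(seed)
    xs = sorted(rng.lognormvariate(0.0, 0.5) for _ in range(n))
    m, s = mean(xs), stdev(xs)
    parametric = (m - 1.96 * s, m + 1.96 * s)
    nonparametric = (xs[int(0.025 * (n - 1))], xs[int(0.975 * (n - 1))])
    return parametric, nonparametric
```

On right-skewed data the parametric lower limit drifts toward (or below) zero even though the analyte is strictly positive, which is the clinically relevant kind of error the study quantifies.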

  19. Methodological reporting quality of randomized controlled trials: A survey of seven core journals of orthopaedics from Mainland China over 5 years following the CONSORT statement.

    PubMed

    Zhang, J; Chen, X; Zhu, Q; Cui, J; Cao, L; Su, J

    2016-11-01

    In recent years, the number of randomized controlled trials (RCTs) in the field of orthopaedics has been increasing in Mainland China. However, RCTs are prone to bias if they lack methodological quality. We therefore performed a survey of RCTs to assess: (1) the quality of RCTs in the field of orthopaedics in Mainland China and (2) whether there is a difference between the core journals of the Chinese department of orthopaedics and Orthopaedics Traumatology Surgery & Research (OTSR). This research aimed to evaluate the methodological reporting quality, according to the CONSORT statement, of RCTs in seven key orthopaedic journals published in Mainland China over the 5 years from 2010 to 2014. All of the articles were hand searched in the Chongqing VIP database between 2010 and 2014. Studies were considered eligible if the words "random", "randomly", "randomization" or "randomized" were employed to describe the allocation method. Trials including animals or cadavers, trials published as abstracts or case reports, trials dealing with subgroup analyses, and trials without outcomes were excluded. In addition, eight articles selected from Orthopaedics Traumatology Surgery & Research (OTSR) between 2010 and 2014 were included in this study for comparison. The identified RCTs were analyzed using a modified version of the Consolidated Standards of Reporting Trials (CONSORT) checklist, covering sample size calculation, allocation sequence generation, allocation concealment, blinding and handling of dropouts. A total of 222 RCTs were identified in the seven core orthopaedic journals. No trials reported an adequate sample size calculation, 74 (33.4%) reported adequate allocation generation, 8 (3.7%) reported adequate allocation concealment, 18 (8.1%) reported adequate blinding and 16 (7.2%) reported handling of dropouts. In OTSR, 1 (12.5%) trial reported an adequate sample size calculation, 4 (50.0%) reported adequate allocation generation, 1 (12.5%) reported adequate allocation concealment, 2 (25.0%) reported adequate blinding and 5 (62.5%) reported handling of dropouts. There were statistical differences in sample size calculation and handling of dropouts between papers from Mainland China and OTSR (P<0.05). The findings of this study show that the methodological reporting quality of RCTs in the seven core orthopaedic journals from Mainland China is far from satisfactory and needs further improvement to meet the standards of the CONSORT statement. Level III, case-control. Copyright © 2016 Elsevier Masson SAS. All rights reserved.
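The reporting-frequency comparison (0/222 vs 1/8 for an adequate sample size calculation) is the kind of sparse 2×2 table for which an exact test is natural. A one-sided hypergeometric tail (the Fisher exact p-value) can be sketched as follows; the survey does not state which test it used, so treating the comparison this way is an assumption.

```python
import math

def p_fewer_events(a, n1, c, n2):
    """One-sided Fisher exact p-value: probability of observing at most
    a 'events' among n1 trials in group 1, given a + c events in total
    across n1 + n2 trials (hypergeometric tail)."""
    N, K = n1 + n2, a + c
    return sum(math.comb(K, k) * math.comb(N - K, n1 - k) / math.comb(N, n1)
               for k in range(a + 1))

# 0/222 Mainland China trials vs 1/8 OTSR trials reporting an adequate
# sample size calculation
p_value = p_fewer_events(0, 222, 1, 8)  # 8/230, about 0.035
```

This exact tail probability of 8/230 ≈ 0.035 is consistent with the reported P < 0.05 for the sample size calculation comparison.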

  20. Variation in aluminum, iron, and particle concentrations in oxic groundwater samples collected by use of tangential-flow ultrafiltration with low-flow sampling

    NASA Astrophysics Data System (ADS)

    Szabo, Zoltan; Oden, Jeannette H.; Gibs, Jacob; Rice, Donald E.; Ding, Yuan

    2002-02-01

    Particulates that move with ground water and those that are artificially mobilized during well purging could be incorporated into water samples during collection and could cause trace-element concentrations to vary in unfiltered samples, and possibly in filtered samples (typically 0.45-um (micron) pore size) as well, depending on the particle-size fractions present. Therefore, measured concentrations may not be representative of those in the aquifer. Ground water may contain particles of various sizes and shapes that are broadly classified as colloids, which do not settle from water, and particulates, which do. In order to investigate variations in trace-element concentrations in ground-water samples as a function of particle concentrations and particle-size fractions, the U.S. Geological Survey, in cooperation with the U.S. Air Force, collected samples from five wells completed in the unconfined, oxic Kirkwood-Cohansey aquifer system of the New Jersey Coastal Plain. Samples were collected by purging with a portable pump at low flow (0.2-0.5 liters per minute and minimal drawdown, ideally less than 0.5 foot). Unfiltered samples were collected in the following sequence: (1) within the first few minutes of pumping, (2) after initial turbidity declined and about one to two casing volumes of water had been purged, and (3) after turbidity values had stabilized at less than 1 to 5 Nephelometric Turbidity Units. Filtered samples were split concurrently through (1) a 0.45-um pore size capsule filter, (2) a 0.45-um pore size capsule filter and a 0.0029-um pore size tangential-flow filter in sequence, and (3), in selected cases, a 0.45-um and a 0.05-um pore size capsule filter in sequence. Filtered samples were collected concurrently with the unfiltered sample that was collected when turbidity values stabilized. Quality-assurance samples consisted of sequential duplicates (about 25 percent) and equipment blanks. Concentrations of particles were determined by light scattering.

  1. Deliberate self-harm in older adults: a review of the literature from 1995 to 2004.

    PubMed

    Chan, Jenifer; Draper, Brian; Banerjee, Sube

    2007-08-01

    The prevention of suicide is a national and international policy priority. Old age is an important predictor of completed suicide. Suicide rates in old age differ markedly from country to country but there is a general trend towards increasing rates with increasing age. In 1996 Draper reviewed critically the evidence on attempted suicide in old age in the 10 years between 1985 and 1994. The review highlighted a need for prospective controlled studies in older people with more representative samples as well as studies examining the interaction of risk factors, precipitants, motivations, psychopathology and response to treatment. The aim of this paper is to update this review and to summarise the advances in our understanding of deliberate self-harm (DSH) in later life. We have critically reviewed relevant studies published between 1995 and 2004 to summarise the advances in our understanding of factors associated with DSH in later life. The main advances in understanding have been to clarify the effect of personality and cultural factors, service utilisation pre- and post-attempt, and the (lesser) impact of socio-economic status and physical illness. Methodological weaknesses continue to include studies with inadequate sample sizes performed in highly selected populations, inconsistent age criteria, and a lack of informant data in studies on the role of personality. Future studies should include prospective, cross-cultural, population-based research with adequate sample sizes. Such approaches might confirm or refute the results generated to date and improve knowledge on factors such as the biological correlates of deliberate self-harm, service utilisation, costs and barriers to health care, and the interaction of these factors. Intervention studies to elucidate the impact of modifying these factors and of specific treatment packages are also needed.

  2. Comet Dust: The Diversity of "Primitive" Particles and Implications

    NASA Technical Reports Server (NTRS)

    Wooden, Diane H.; Ishii, Hope A.; Bradley, John P.; Zolensky, Michael E.

    2016-01-01

    Comet dust is primitive and shows significant diversity. Our knowledge of the properties of primitive particles has expanded significantly through microscale investigations of cosmic dust samples (IDPs (Interplanetary Dust Particles) and AMMs (Antarctic Micrometeorites)) and of comet dust samples (Stardust and Rosetta's COSIMA), as well as through remote sensing (spectroscopy and imaging) via Spitzer and via spacecraft encounters with 103P/Hartley 2 and 67P/Churyumov-Gerasimenko. Microscale investigations show that comet dust and cosmic dust are particles of unequilibrated materials, including aggregates of materials unequilibrated at submicron scales. We call unequilibrated materials "primitive" and we deduce they were incorporated into ice-rich (H2O-, CO2-, and CO-ice) parent bodies that remained cold, i.e., into comets, because of the lack of aqueous or thermal alteration since particle aggregation; yet some Stardust olivines suggest mild thermal metamorphism. Primitive particles exhibit a diverse range of: structure and typology; size and size distribution of constituents; concentration and form of carbonaceous and organic matter; D-, N-, and O- isotopic enhancements over solar; Mg-, Fe-contents of the silicate minerals; the compositions and concentrations of sulfides, and of less abundant mineral species such as chondrules, CAIs and carbonates. The uniformity within a group of samples points to: aerodynamic sorting of particles and/or particle constituents; the inclusion of a limited range of oxygen fugacities; the inclusion or exclusion of chondrules; a selection of organics. The properties of primitive particles imply there were disk processes that resulted in different comets having particular selections of primitive materials. The diversity of primitive particles has implications for the diversity of materials in the protoplanetary disk present at the time and in the region where the comets formed.

  3. Comet Dust: The Diversity of Primitive Particles and Implications

    NASA Technical Reports Server (NTRS)

    John Bradley; Zolensky, Michael E.

    2016-01-01

    Comet dust is primitive and shows significant diversity. Our knowledge of the properties of primitive particles has expanded significantly through microscale investigations of cosmic dust samples (IDPs and AMMs) and of comet dust samples (Stardust and Rosetta's COSIMA), as well as through remote sensing (spectroscopy and imaging) via Spitzer and via spacecraft encounters with 103P/Hartley 2 and 67P/Churyumov-Gerasimenko. Microscale investigations show that comet dust and cosmic dust are particles of unequilibrated materials, including aggregates of materials unequilibrated at submicron scales. We call unequilibrated materials "primitive" and we deduce they were incorporated into ice-rich (H2O-, CO2-, and CO-ice) parent bodies that remained cold, i.e., into comets, because of the lack of aqueous or thermal alteration since particle aggregation; yet some Stardust olivines suggest mild thermal metamorphism. Primitive particles exhibit a diverse range of: structure and typology; size and size distribution of constituents; concentration and form of carbonaceous and organic matter; D-, N-, and O- isotopic enhancements over solar; Mg-, Fe-contents of the silicate minerals; the compositions and concentrations of sulfides, and of less abundant mineral species such as chondrules, CAIs and carbonates. The uniformity within a group of samples points to: aerodynamic sorting of particles and/or particle constituents; the inclusion of a limited range of oxygen fugacities; the inclusion or exclusion of chondrules; a selection of organics. The properties of primitive particles imply there were disk processes that resulted in different comets having particular selections of primitive materials. The diversity of primitive particles has implications for the diversity of materials in the protoplanetary disk present at the time and in the region where the comets formed.

  4. Research profile of physiotherapy undergraduates in Nigeria.

    PubMed

    Adeniyi, Ade F; Ekechukwu, Nelson E; Umar, Lawan; Ogwumike, Omoyemi O

    2013-01-01

    Physiotherapy training in Nigeria is almost 50 years old, with no history of appraisal of the research projects produced by physiotherapy students. Physiotherapy students complete research projects in partial fulfilment of the requirements for graduation. An appraisal will reveal areas of strength and weakness in the research requirement for students, potentially leading to better research capacity and promoting evidence-based clinical practice among graduates. This study describes issues related to the study design, scope, statistical analysis and supervision of physiotherapy undergraduates in Nigerian universities. This retrospective study analysed 864 projects undertaken by Nigerian physiotherapy students between 2000 and 2010. A maximum of 20 projects per academic year were randomly selected from each of the seven physiotherapy institutions in Nigeria. Data were obtained using a self-designed data retrieval form and analysed using descriptive and inferential statistics. Cross-sectional surveys constituted 47.6% of the research projects, with mainly non-probability sampling (57.7%) and a lack of objective sample size determination in 91.6% of the projects. Most projects (56.4%) did not report any ethical approval. The particular university attended (χ2 = 109.5, P = 0.0001), type of degree offered (χ2 = 47.24, P = 0.00001) and the academic qualification of supervisors (χ2 = 21.99, P = 0.001) were significantly related to the strength of the research design executed by students. Most research projects carried out by Nigerian physiotherapy students were cross-sectional and characterised by arbitrary sample sizes, and although most involved human subjects, ethical approval was rarely reported. Efforts to improve research methodology, documentation and the exploration of a wider range of research areas are needed to strengthen this educational experience for students.

  5. Simulation Studies as Designed Experiments: The Comparison of Penalized Regression Models in the “Large p, Small n” Setting

    PubMed Central

    Chaibub Neto, Elias; Bare, J. Christopher; Margolin, Adam A.

    2014-01-01

    New algorithms are continuously proposed in computational biology. Performance evaluation of novel methods is important in practice. Nonetheless, the field lacks a rigorous methodology for systematically and objectively evaluating competing approaches. Simulation studies are frequently used to show that a particular method outperforms another. Oftentimes, however, simulation studies are not well designed, and it is hard to characterize the particular conditions under which different methods perform better. In this paper we propose the adoption of well-established techniques in the design of computer and physical experiments for developing effective simulation studies. By following best practices in the planning of experiments we are better able to understand the strengths and weaknesses of competing algorithms, leading to more informed decisions about which method to use for a particular task. We illustrate the application of our proposed simulation framework with a detailed comparison of the ridge-regression, lasso and elastic-net algorithms in a large-scale study investigating the effects on predictive performance of sample size, number of features, true model sparsity, signal-to-noise ratio, and feature correlation, in situations where the number of covariates is usually much larger than the sample size. Analysis of data sets containing tens of thousands of features but only a few hundred samples is nowadays routine in computational biology, where “omics” features such as gene expression, copy number variation and sequence data are frequently used in the predictive modeling of complex phenotypes such as anticancer drug response. The penalized regression approaches investigated in this study are popular choices in this setting, and our simulations corroborate well-established results concerning the conditions under which each one of these methods is expected to perform best, while providing several novel insights. PMID:25289666
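    The "designed experiment" view of a simulation study amounts to crossing every factor level with every other and running each competing algorithm in every cell. A minimal sketch of such a full factorial grid over the factors named in the abstract (the levels shown are hypothetical placeholders, not the paper's actual settings):

```python
import itertools

# Simulation factors from the study; the levels below are hypothetical.
factors = {
    "sample_size": [100, 300],
    "n_features": [1000, 10000],
    "sparsity": [0.01, 0.1],
    "snr": [1, 3],
    "feature_correlation": [0.0, 0.5],
}

# Full factorial design: one cell per combination of factor levels.
design = [dict(zip(factors, levels))
          for levels in itertools.product(*factors.values())]

print(len(design))  # 2**5 = 32 cells; each cell is run once per algorithm
```

    Treating the grid as a designed experiment makes it possible to attribute performance differences to specific factors and their interactions, rather than to an arbitrary choice of a single simulation setting.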

  6. The impact of sample size on the reproducibility of voxel-based lesion-deficit mappings.

    PubMed

    Lorca-Puls, Diego L; Gajardo-Vidal, Andrea; White, Jitrachote; Seghier, Mohamed L; Leff, Alexander P; Green, David W; Crinion, Jenny T; Ludersdorfer, Philipp; Hope, Thomas M H; Bowman, Howard; Price, Cathy J

    2018-07-01

    This study investigated how sample size affects the reproducibility of findings from univariate voxel-based lesion-deficit analyses (e.g., voxel-based lesion-symptom mapping and voxel-based morphometry). Our effect of interest was the strength of the mapping between brain damage and speech articulation difficulties, as measured in terms of the proportion of variance explained. First, we identified a region of interest by searching on a voxel-by-voxel basis for brain areas where greater lesion load was associated with poorer speech articulation, using a large sample of 360 right-handed English-speaking stroke survivors. We then randomly drew thousands of bootstrap samples from this data set that included either 30, 60, 90, 120, 180, or 360 patients. For each resample, we recorded effect size estimates and p values after conducting exactly the same lesion-deficit analysis within the previously identified region of interest and holding all procedures constant. The results show (1) how often small effect sizes in a heterogeneous population fail to be detected; (2) how effect size and its statistical significance vary with sample size; (3) how low-powered studies (due to small sample sizes) can greatly over-estimate as well as under-estimate effect sizes; and (4) how large sample sizes (N ≥ 90) can yield highly significant p values even when effect sizes are so small that they become trivial in practical terms. The implications of these findings for interpreting the results from univariate voxel-based lesion-deficit analyses are discussed. Copyright © 2018 The Author(s). Published by Elsevier Ltd. All rights reserved.
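    The resampling scheme described above is easy to reproduce in outline. A minimal sketch on synthetic data (not the study's stroke sample) showing how effect-size estimates from small bootstrap samples scatter far more widely than those from large ones:

```python
import random
from statistics import mean, stdev

def r_squared(pairs):
    # proportion of variance explained by a simple linear fit
    xs, ys = zip(*pairs)
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    return cov * cov / (sum((x - mx) ** 2 for x in xs)
                        * sum((y - my) ** 2 for y in ys))

def simulate_population(n=360, true_r=0.2, seed=1):
    # synthetic "lesion load vs. deficit" data with a small true effect
    rng = random.Random(seed)
    xs = [rng.gauss(0, 1) for _ in range(n)]
    return [(x, true_r * x + rng.gauss(0, (1 - true_r ** 2) ** 0.5))
            for x in xs]

def bootstrap_r2(data, size, reps=500, seed=2):
    # resample with replacement and re-estimate the effect each time
    rng = random.Random(seed)
    return [r_squared(rng.choices(data, k=size)) for _ in range(reps)]

data = simulate_population()
small = bootstrap_r2(data, 30)   # N = 30 per resample
large = bootstrap_r2(data, 360)  # N = 360 per resample
# small samples give far more dispersed effect-size estimates
print(stdev(small) > stdev(large))  # True
```

    The wide spread of the N = 30 estimates is exactly the mechanism behind points (2) and (3) of the abstract: a low-powered resample can land far above or far below the population effect.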

  7. Sample size determination for equivalence assessment with multiple endpoints.

    PubMed

    Sun, Anna; Dong, Xiaoyu; Tsong, Yi

    2014-01-01

    Equivalence assessment between a reference and test treatment is often conducted by two one-sided tests (TOST). The corresponding power function and sample size determination can be derived from a joint distribution of the sample mean and sample variance. When an equivalence trial is designed with multiple endpoints, it often involves several sets of two one-sided tests. A naive approach for sample size determination in this case would select the largest of the sample sizes required for the individual endpoints. However, such a method ignores the correlation among endpoints. With the objective to reject all endpoints, and when the endpoints are uncorrelated, the power function is the product of the power functions for the individual endpoints. With correlated endpoints, the sample size and power should be adjusted for such a correlation. In this article, we propose the exact power function for the equivalence test with multiple endpoints adjusted for correlation under both crossover and parallel designs. We further discuss the differences in sample size between the naive method and the correlation-adjusted method, and illustrate with an in vivo bioequivalence crossover study with area under the curve (AUC) and maximum concentration (Cmax) as the two endpoints.
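    For uncorrelated endpoints, the "product of powers" rule above can be sketched directly. The normal-approximation TOST sample size below is a generic textbook version assuming a true difference of zero, not necessarily the paper's exact formula:

```python
import math
from statistics import NormalDist

def tost_n_per_arm(sigma, margin, alpha=0.05, power=0.80):
    # Normal-approximation per-arm n for TOST equivalence on a single
    # endpoint, assuming the true treatment difference is zero
    # (illustrative simplification of the exact power function).
    z = NormalDist().inv_cdf
    return math.ceil(2 * (sigma / margin) ** 2
                     * (z(1 - alpha) + z(1 - (1 - power) / 2)) ** 2)

# With k independent endpoints that must ALL pass, joint power is the
# product of per-endpoint powers, so each endpoint must be over-powered:
k = 2
per_endpoint_power = 0.80 ** (1 / k)   # ≈ 0.894 for joint power 0.80

naive = tost_n_per_arm(1.0, 0.5)                              # per-endpoint 0.80
adjusted = tost_n_per_arm(1.0, 0.5, power=per_endpoint_power)
print(naive < adjusted)  # the naive "largest n" rule understates n here
```

    Positive correlation between endpoints pulls the required sample size back down toward the naive value, which is why the article's correlation-adjusted power function matters.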

  8. Sample size calculations for cluster randomised crossover trials in Australian and New Zealand intensive care research.

    PubMed

    Arnup, Sarah J; McKenzie, Joanne E; Pilcher, David; Bellomo, Rinaldo; Forbes, Andrew B

    2018-06-01

    The cluster randomised crossover (CRXO) design provides an opportunity to conduct randomised controlled trials to evaluate low risk interventions in the intensive care setting. Our aim is to provide a tutorial on how to perform a sample size calculation for a CRXO trial, focusing on the meaning of the elements required for the calculations, with application to intensive care trials. We use all-cause in-hospital mortality from the Australian and New Zealand Intensive Care Society Adult Patient Database clinical registry to illustrate the sample size calculations. We show sample size calculations for a two-intervention, two 12-month period, cross-sectional CRXO trial. We provide the formulae, and examples of their use, to determine the number of intensive care units required to detect a risk ratio (RR) with a designated level of power between two interventions for trials in which the elements required for sample size calculations remain constant across all ICUs (unstratified design); and in which there are distinct groups (strata) of ICUs that differ importantly in the elements required for sample size calculations (stratified design). The CRXO design markedly reduces the sample size requirement compared with the parallel-group, cluster randomised design for the example cases. The stratified design further reduces the sample size requirement compared with the unstratified design. The CRXO design enables the evaluation of routinely used interventions that can bring about small, but important, improvements in patient care in the intensive care setting.
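    The sample size reduction described above can be illustrated with standard design-effect approximations (generic formulae standing in for the tutorial's exact ones): a parallel cluster design inflates an individually randomized sample size by 1 + (m - 1)ρ, while the cross-sectional CRXO design is credited for the within-cluster between-period correlation:

```python
def de_parallel(m, icc):
    # design effect for a parallel-group cluster randomized trial with
    # m subjects per cluster and intracluster correlation icc
    return 1 + (m - 1) * icc

def de_crxo(m, wpc, bpc):
    # cross-sectional cluster randomized crossover design effect:
    # wpc = within-cluster within-period correlation,
    # bpc = within-cluster between-period correlation (bpc <= wpc)
    return 1 + (m - 1) * wpc - m * bpc

m = 100  # patients per ICU per period (hypothetical)
print(round(de_parallel(m, icc=0.05), 2))        # 5.95
print(round(de_crxo(m, wpc=0.05, bpc=0.04), 2))  # 1.95: far less inflation
```

    The closer the between-period correlation is to the within-period correlation, the more each cluster acts as its own control and the larger the saving relative to the parallel cluster design.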

  9. Magnetic properties of the upper mantle beneath the continental United States

    NASA Astrophysics Data System (ADS)

    Friedman, S. A.; Ferre, E. C.; Demory, F.; Rochette, P.; Martin Hernandez, F.; Conder, J. A.

    2012-12-01

    The interpretation of long wavelength satellite magnetic data (Magsat, Oersted, CHAMP, SWARM) requires an understanding of magnetic mineralogy in the lithospheric mantle and reliable models of induced and remanent magnetic sources in the lithospheric mantle and the crust. Blakely et al. (2005) proposed the hypothesis of a magnetic lithospheric mantle in subduction zones. This prompted us to reexamine magnetic sources in the lithospheric mantle in different tectonic settings where unaltered mantle xenoliths have been reported since the 1990s. Xenoliths from the upper mantle beneath the continental United States show different magnetic properties depending on the tectonic setting in which they equilibrated. Three localities in the South Central United States (San Carlos, AZ; Kilbourne Hole, NM; Knippa, TX) produced lherzolite and harzburgite xenoliths, while the Bearpaw Mountains in Montana (subduction zone) produced dunite and phlogopite-rich dunite xenoliths. Paleomagnetic data on these samples show the lack of secondary alteration that is commonly caused by post-eruption serpentinization, as well as the lack of basalt contamination. The main magnetic carrier is pure magnetite. The ascent of mantle xenoliths to the surface of the Earth generally takes only a few hours. Numerical modelling shows that nucleation of magnetite during ascent would form superparamagnetic grains and therefore cannot explain the observed magnetic grain sizes. This implies that the ferromagnetic phases present in the studied samples formed at mantle depth. The samples from the South Central United States exhibit a small range in low-field magnetic susceptibility (+/- 0.00003 [SI]) and Natural Remanent Magnetization (NRM) between 0.001 and 0.100 A/m. By contrast, samples from the Bearpaw Mountains exhibit a wider range of low-field susceptibilities (0.00001 to 0.0015 [SI]) and NRM (0.01 to 9.00 A/m).
These samples were serpentinized in situ by metasomatic fluids related to the Farallon plate (Facer et al., 2009). Hence, the magnetic properties of the lithospheric mantle beneath the continental United States differ significantly depending on tectonic setting. The combination of the low geotherm observed in the Bearpaw Mountains with the stronger induced and remanent magnetization of mantle rocks in this area may produce a detectable long-wavelength magnetic anomaly (LWMA).

  10. Ultrasound vs. Computed Tomography for Severity of Hydronephrosis and Its Importance in Renal Colic.

    PubMed

    Leo, Megan M; Langlois, Breanne K; Pare, Joseph R; Mitchell, Patricia; Linden, Judith; Nelson, Kerrie P; Amanti, Cristopher; Carmody, Kristin A

    2017-06-01

    Supporting an "ultrasound-first" approach to evaluating renal colic in the emergency department (ED) remains important for improving patient care and decreasing healthcare costs. Our primary objective was to compare emergency physician (EP) ultrasound to computed tomography (CT) detection of hydronephrosis severity in patients with suspected renal colic. We calculated test characteristics of hydronephrosis on EP-performed ultrasound for detecting ureteral stones or ureteral stone size >5mm. We then analyzed the association of hydronephrosis on EP-performed ultrasound, stone size >5mm, and proximal stone location with 30-day events. This was a prospective observational study of ED patients with suspected renal colic undergoing CT. Subjects had an EP-performed ultrasound evaluating for the severity of hydronephrosis. A chart review and follow-up phone call were performed. We enrolled 302 subjects who had an EP-performed ultrasound. CT and EP ultrasound results were comparable in detecting severity of hydronephrosis (χ2 = 51.7, p < 0.001). Hydronephrosis on EP-performed ultrasound was predictive of a ureteral stone on CT (PPV 88%; LR+ 2.91), but lack of hydronephrosis did not rule it out (NPV 65%). Lack of hydronephrosis on EP-performed ultrasound makes larger stone size >5mm less likely (NPV 89%; LR- 0.39). Larger stone size >5mm was associated with 30-day events (OR 2.30, p=0.03). Using an ultrasound-first approach to detect hydronephrosis may help physicians identify patients with renal colic. The lack of hydronephrosis on ultrasound makes the presence of a larger ureteral stone less likely. Stone size >5mm may be a useful predictor of 30-day events.

  11. Ultrasound vs. Computed Tomography for Severity of Hydronephrosis and Its Importance in Renal Colic

    PubMed Central

    Leo, Megan M.; Langlois, Breanne K.; Pare, Joseph R.; Mitchell, Patricia; Linden, Judith; Nelson, Kerrie P.; Amanti, Cristopher; Carmody, Kristin A.

    2017-01-01

    Introduction Supporting an “ultrasound-first” approach to evaluating renal colic in the emergency department (ED) remains important for improving patient care and decreasing healthcare costs. Our primary objective was to compare emergency physician (EP) ultrasound to computed tomography (CT) detection of hydronephrosis severity in patients with suspected renal colic. We calculated test characteristics of hydronephrosis on EP-performed ultrasound for detecting ureteral stones or ureteral stone size >5mm. We then analyzed the association of hydronephrosis on EP-performed ultrasound, stone size >5mm, and proximal stone location with 30-day events. Methods This was a prospective observational study of ED patients with suspected renal colic undergoing CT. Subjects had an EP-performed ultrasound evaluating for the severity of hydronephrosis. A chart review and follow-up phone call were performed. Results We enrolled 302 subjects who had an EP-performed ultrasound. CT and EP ultrasound results were comparable in detecting severity of hydronephrosis (χ2=51.7, p<0.001). Hydronephrosis on EP-performed ultrasound was predictive of a ureteral stone on CT (PPV 88%; LR+ 2.91), but lack of hydronephrosis did not rule it out (NPV 65%). Lack of hydronephrosis on EP-performed ultrasound makes larger stone size >5mm less likely (NPV 89%; LR− 0.39). Larger stone size >5mm was associated with 30-day events (OR 2.30, p=0.03). Conclusion Using an ultrasound-first approach to detect hydronephrosis may help physicians identify patients with renal colic. The lack of hydronephrosis on ultrasound makes the presence of a larger ureteral stone less likely. Stone size >5mm may be a useful predictor of 30-day events. PMID:28611874

  12. Comparative Assessment of Induced Immune Responses Following Intramuscular Immunization with Fusion and Cocktail of LeIF, LACK and TSA Genes Against Cutaneous Leishmaniasis in BALB/c Mice.

    PubMed

    Maspi, Nahid; Ghaffarifar, Fatemeh; Sharifi, Zohreh; Dalimi, Abdolhossein; Dayer, Mohammad Saaid

    2018-02-01

    In the present study, we evaluated the immune responses induced by a DNA vaccine containing a cocktail or fusion of the LeIF, LACK and TSA genes, or each gene alone. Mice were injected with 100 µg of each plasmid containing the inserted gene, with plasmid DNA alone as the first control group or phosphate-buffered saline as the second control group. Cellular and humoral responses and lesion size were then measured for all groups. All vaccinated mice developed Th1 immune responses against Leishmania, characterized by higher IFN-γ and IgG2a levels compared with control groups (p < 0.05). In addition, IFN-γ levels increased in groups immunized with the fusion and cocktail vaccines in comparison with the LACK (p < 0.001) and LeIF (p < 0.01) groups after challenge. The fusion and cocktail groups also produced higher IgG2a values than groups vaccinated with a single gene (p < 0.05). Lesion progression was delayed in all immunized groups compared with control groups from the 5th week post-infection (p < 0.05). Mean lesion size was smaller in mice immunized with the fusion DNA than in the three groups vaccinated with one gene alone (p < 0.05), and lesion size decreased significantly in the cocktail recipient group compared with the LeIF recipient group (p < 0.05). There was no difference in lesion size between the fusion and cocktail groups. Overall, mice immunized with the cocktail and fusion vaccines showed a stronger Th1 response, producing higher IFN-γ and IgG2a levels, and had smaller mean lesion sizes. Therefore, the use of multiple antigens can improve the immune responses induced by DNA vaccination.

  13. Lack of a close confidant: prevalence and correlates in a medically underserved primary care sample.

    PubMed

    Newton, Tamara; Buckley, Amy; Zurlage, Megan; Mitchell, Charlene; Shaw, Ann; Woodruff-Borden, Janet

    2008-03-01

    The present study examined prevalence of lack of a close confidant in a medically underserved primary care sample, and evaluated demographic, medical, and psychological correlates of patients' deficits in close, personal contact. Adult patients (n = 413) reported on confidant status and symptoms of depression and anxiety. Sociodemographic and medical information were obtained through chart review. One-quarter of patients endorsed lack of a close confidant. Past month anxiety and depression symptoms, but not medical status, were associated with unmet socioemotional needs. Implications for primary healthcare interventions are discussed.

  14. Causality in Statistical Power: Isomorphic Properties of Measurement, Research Design, Effect Size, and Sample Size.

    PubMed

    Heidel, R Eric

    2016-01-01

    Statistical power is the ability to detect a significant effect, given that the effect actually exists in a population. Like most statistical concepts, statistical power tends to induce cognitive dissonance in hepatology researchers. However, planning for statistical power by an a priori sample size calculation is of paramount importance when designing a research study. There are five specific empirical components that make up an a priori sample size calculation: the scale of measurement of the outcome, the research design, the magnitude of the effect size, the variance of the effect size, and the sample size. A framework grounded in the phenomenon of isomorphism, or interdependencies amongst different constructs with similar forms, will be presented to understand the isomorphic effects of decisions made on each of the five aforementioned components of statistical power.
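    The five components listed above map directly onto the classic closed-form calculation. A minimal normal-approximation sketch for a two-group comparison of means (a simplification; exact t-based calculations give slightly larger n):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    # d = standardized effect (Cohen's d), i.e. effect magnitude divided
    # by its variability; alpha and power encode the design's error
    # tolerances. Scale of measurement and design are fixed here as a
    # two-group comparison of continuous means.
    z = NormalDist().inv_cdf
    return ceil(2 * ((z(1 - alpha / 2) + z(power)) / d) ** 2)

print(n_per_group(0.5))  # 63 per group for a medium effect
```

    The isomorphism the abstract describes is visible in the formula: halving the effect size d quadruples the required n, and raising power or lowering alpha both push n upward.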

  15. Exploring the impact of wheelchair design on user function in a rural South African setting.

    PubMed

    Visagie, Surona; Duffield, Svenje; Unger, Mariaan

    2015-01-01

    Wheelchairs provide mobility that can enhance function and community integration. Function in a wheelchair is influenced by wheelchair design. To explore the impact of wheelchair design on user function and the variables that guided wheelchair prescription in the study setting. A mixed-method, descriptive design using convenience sampling was implemented. Quantitative data were collected from 30 wheelchair users using the Functioning Every Day with a Wheelchair scale and a Wheelchair Specification Checklist. Qualitative data were collected through interviews with the ten therapists who prescribed wheelchairs to these users. The Kruskal-Wallis test was used to identify relationships, and content analysis was undertaken to identify emerging themes in the qualitative data. Wheelchairs with urban designs were issued to 25 (83%) participants. Wheelchair size, fit, support and functional features created challenges concerning transport, operating the wheelchair, performing personal tasks, and indoor and outdoor mobility. Users of wheelchairs designed for semi-rural environments achieved significantly better scores regarding the appropriateness of the prescribed wheelchair than those using wheelchairs designed for urban use (p < 0.01). Therapists prescribed the basic, four-wheel folding frame design most often because of a lack of funding, lack of assessment, lack of skills, and user choice. Issuing urban-type wheelchairs to users living in rural settings might have a negative effect on users' functional outcomes. Comprehensive assessment, further training, and research on the long-term cost and quality-of-life implications of providing a suitable wheelchair versus a cheaper, less suitable option are recommended.

  16. Symptom dimensions of affective disorders in migraine patients.

    PubMed

    Louter, M A; Pijpers, J A; Wardenaar, K J; van Zwet, E W; van Hemert, A M; Zitman, F G; Ferrari, M D; Penninx, B W; Terwindt, G M

    2015-11-01

    A strong association has been established between migraine and depression. However, this is the first study to differentiate symptom dimensions of the affective disorder spectrum in a large sample of migraine patients. Migraine patients (n=3174) from the LUMINA (Leiden University Medical Centre Migraine Neuro-analysis Program) study and patients with current psychopathology (n=1129), past psychopathology (n=477), and healthy controls (n=561) from the NESDA (Netherlands Study of Depression and Anxiety) study were compared for three symptom dimensions of depression and anxiety. The dimensions, namely lack of positive affect (depression specific), negative affect (nonspecific), and somatic arousal (anxiety specific), were assessed by a shortened adaptation of the Mood and Anxiety Symptom Questionnaire (MASQ-D30). Within the migraine group, the association with migraine-specific determinants was established. Multivariate regression analyses were conducted. Migraine patients differed significantly (p<0.001) from healthy controls on all three dimensions: Cohen's d effect sizes were 0.37 for lack of positive affect, 0.68 for negative affect, and 0.75 for somatic arousal. For the lack of positive affect and negative affect dimensions, migraine patients were predominantly similar to the past psychopathology group. For the somatic arousal dimension, migraine patients' scores were more comparable with those of the current psychopathology group. Migraine-specific determinants of high scores on all dimensions were a high frequency of attacks and cutaneous allodynia during attacks. This study shows that affective symptoms in migraine patients are especially associated with the somatic arousal component. Copyright © 2015 Elsevier Inc. All rights reserved.

  17. Lack of efficacy of moclobemide or imipramine in the treatment of recurrent brief depression: results from an exploratory randomized, double-blind, placebo-controlled treatment study.

    PubMed

    Baldwin, David S; Green, Mary; Montgomery, Stuart A

    2014-11-01

    'Recurrent brief depression' (RBD) is a common, distressing and impairing depressive disorder for which there is no current proven pharmacological or psychological treatment. This multicentre, randomized, fixed-dose, parallel-group, placebo-controlled study of the reversible inhibitor of monoamine oxidase moclobemide (450 mg/day) and the tricyclic antidepressant imipramine (150 mg/day) evaluated the potential efficacy of active medication, when compared with placebo, in patients with recurrent brief depression, recruited in the mid-1990s. After a 2-4-week single-blind placebo run-in period, a total of 35 patients were randomized to receive double-blind medication for 4 months, but only 16 completed the active treatment period. An intention-to-treat analysis of the 34 evaluable patients found no evidence for the efficacy of moclobemide or imipramine, when compared with placebo, in significantly reducing the severity, duration or frequency of depressive episodes. A total of 28 patients experienced at least one adverse event, and four patients engaged in nonfatal self-harm. Limitations of the study include the small sample size and the high rate of participant withdrawal. The lack of efficacy of these antidepressant drugs and the previous finding of the lack of efficacy of the selective serotonin reuptake inhibitor fluoxetine together indicate that medications other than antidepressant drugs should be investigated as potential treatments for what remains a common, distressing and potentially hazardous condition.

  18. Sample size adjustments for varying cluster sizes in cluster randomized trials with binary outcomes analyzed with second-order PQL mixed logistic regression.

    PubMed

    Candel, Math J J M; Van Breukelen, Gerard J P

    2010-06-30

    Adjustments of sample size formulas are given for varying cluster sizes in cluster randomized trials with a binary outcome when testing the treatment effect with mixed effects logistic regression using second-order penalized quasi-likelihood estimation (PQL). Starting from first-order marginal quasi-likelihood (MQL) estimation of the treatment effect, the asymptotic relative efficiency of unequal versus equal cluster sizes is derived. A Monte Carlo simulation study shows this asymptotic relative efficiency to be rather accurate for realistic sample sizes, when employing second-order PQL. An approximate, simpler formula is presented to estimate the efficiency loss due to varying cluster sizes when planning a trial. In many cases sampling 14 per cent more clusters is sufficient to repair the efficiency loss due to varying cluster sizes. Since current closed-form formulas for sample size calculation are based on first-order MQL, planning a trial also requires a conversion factor to obtain the variance of the second-order PQL estimator. In a second Monte Carlo study, this conversion factor turned out to be 1.25 at most. (c) 2010 John Wiley & Sons, Ltd.
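    The "14 per cent more clusters" figure follows from worst-case relative-efficiency bounds of the kind derived in this literature. The sketch below uses the generic bound RE ≥ 1 − CV²/4, an assumption for illustration rather than the authors' exact second-order PQL result:

```python
def worst_case_re(cv):
    # worst-case relative efficiency of unequal vs. equal cluster sizes,
    # where cv is the coefficient of variation of cluster size
    # (illustrative bound; the paper derives the exact asymptotic RE)
    return 1 - cv ** 2 / 4

def extra_clusters(cv):
    # multiplicative inflation of the number of clusters needed to
    # repair the efficiency loss due to varying cluster sizes
    return 1 / worst_case_re(cv)

print(round(extra_clusters(0.7), 2))  # ≈ 1.14: roughly 14% more clusters
```

    For realistic cluster-size variation (CV around 0.7), the bound reproduces the paper's rule of thumb that about 14 per cent more clusters suffice.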

  19. Emergence times and airway reactions in general laryngeal mask airway anesthesia: study protocol for a randomized controlled trial.

    PubMed

    Stevanovic, Ana; Rossaint, Rolf; Keszei, András P; Fritz, Harald; Fröba, Gebhard; Pühringer, Friedrich; Coburn, Mark

    2015-07-26

    The use of a laryngeal mask airway (LMA) in appropriate patients supports fast-track anesthesia with a lower incidence of postoperative airway-connected adverse events. Data on the most favorable anesthetic in this context, with the lowest rate of upper airway complications and fast emergence times, are controversial and limited. Desflurane seems to match these criteria best, but large randomized controlled trials (RCTs) with a standardized study protocol are lacking. Therefore, we aim to compare desflurane with other commonly used anesthetics, sevoflurane and propofol, in a sufficiently powered RCT. We hypothesize that desflurane is noninferior regarding the frequency of upper airway events and superior regarding the emergence times to sevoflurane and propofol. A total of 351 patients undergoing surgery with an LMA will be included in this prospective, randomized, double-blind controlled, multicenter clinical trial. The patients will be randomly assigned to the three treatment arms: desflurane (n = 117), sevoflurane (n = 117), and propofol (n = 117). The emergence time (time to state the date of birth) will be the primary endpoint of this study. The secondary endpoints include further emergence times, such as time to open eyes, to remove the LMA, to respond to command and to state name. Additionally, we will determine the frequency of cough and laryngospasm, measured intraoperatively and at emergence. We will assess postoperative recovery on the first postoperative day via the Postoperative Quality Recovery Scale. Despite the increasing importance of cost-effective and safe anesthesia, we lack proof of the most advantageous anesthetic agent when an LMA is used. There are only a few RCTs comparing desflurane to other commonly used anesthetics (sevoflurane, propofol and isoflurane) in patients with an LMA. These RCTs were conducted with small sample sizes and large interstudy variability, and some also showed strong biases.
The present multicenter RCT will provide results from a large sample size with a standardized study protocol and minimized bias, which is feasible in the clinical routine. Furthermore, we will expand our knowledge regarding the most favorable recovery on the first postoperative day, which impacts patients' comfort after surgery. Trial registration: EudraCT 2014-003810-96 (5 September 2014); ClinicalTrials.gov NCT02322502 (December 2014).

  20. Effect of Inhalation of Lavender Essential Oil on Vital Signs in Open Heart Surgery ICU.

    PubMed

    Salamati, Armaiti; Mashouf, Soheyla; Mojab, Faraz

    2017-01-01

    This study evaluated the effects of inhalation of Lavender essential oil on vital signs in open heart surgery ICU. The main complaint of patients after open-heart surgery is dysrhythmia, tachycardia, and hypertension due to stress and pain. Due to the side effects of chemical drugs, such as opioids, use of non-invasive methods such as aromatherapy for relieving stress and pain parallel to chemical agents could be an important way to decrease the dose and side effects of analgesics. In a multicenter, single-blind trial, 40 patients who had open-heart surgery were recruited. Inclusion criteria were full consciousness, lack of hemorrhage, heart rate >60 beats/min, systolic blood pressure > 100 mmHg, and diastolic blood pressure > 60 mmHg, not using beta blockers in the operating room or ICU, no history of addiction to opioids or use of analgesics in regular, spontaneous breathing ability and not receiving synthetic opioids within 2 h before extubation. Ten minutes after extubation, the patients› vital signs [including BP, HR, Central Venous Pressure (CVP), SPO2, and RR] were measured. Then, a cotton swab, which was impregnated with 2 drops of Lavender essential oil 2%, was placed in patients' oxygen mask and patients breathed for 10 min. Thirty minutes after aromatherapy, the vital signs were measured again. Main objective of this study was the change in vital sign before and after aromatherapy. Statistical significance was accepted for P < 0.05. There was a significant difference in systolic blood pressure (p > 0.001), diastolic blood pressure (p = 0.001), and heart rate (p = 0.03) before and after the intervention using paired t-test. Although, the results did not show any significant difference in respiratory rate (p = 0.1), SpO2 (p = 0.5) and CVP (p = 0.2) before and after inhaling Lavender essential oil. 
Therefore, aromatherapy could effectively reduce blood pressure and heart rate in patients admitted to the open heart surgery ICU and can be used as an independent nursing intervention to stabilize these vital signs. The limitations of our study were the sample size and the lack of a control group. Randomized clinical trials with larger sample sizes are recommended.
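The before/after comparison described in this record is the classic use case for a paired t-test. A minimal sketch in Python, using illustrative numbers rather than the study's actual measurements:

```python
# Paired t-test on before/after vital-sign measurements.
# The values below are illustrative only, not the study's data.
from scipy import stats

systolic_before = [132, 128, 141, 135, 129, 138, 144, 131]
systolic_after = [121, 119, 133, 126, 122, 130, 137, 124]

# ttest_rel pairs each "before" reading with the same patient's "after"
# reading, so between-patient variability is removed from the comparison.
t_stat, p_value = stats.ttest_rel(systolic_before, systolic_after)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

A positive t statistic here indicates that the "before" readings are systematically higher than the "after" readings.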

  1. 40 CFR 80.127 - Sample size guidelines.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Sample size guidelines. 80.127 Section 80.127 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Attest Engagements § 80.127 Sample size guidelines. In performing the...

  2. Methods for sample size determination in cluster randomized trials

    PubMed Central

    Rutterford, Clare; Copas, Andrew; Eldridge, Sandra

    2015-01-01

    Background: The use of cluster randomized trials (CRTs) is increasing, along with the variety in their design and analysis. The simplest approach for their sample size calculation is to calculate the sample size assuming individual randomization and inflate this by a design effect to account for randomization by cluster. The assumptions of a simple design effect may not always be met; alternative or more complicated approaches are required. Methods: We summarise a wide range of sample size methods available for cluster randomized trials. For those familiar with sample size calculations for individually randomized trials but with less experience in the clustered case, this manuscript provides formulae for a wide range of scenarios with associated explanation and recommendations. For those with more experience, comprehensive summaries are provided that allow quick identification of methods for a given design, outcome and analysis method. Results: We present first those methods applicable to the simplest two-arm, parallel group, completely randomized design followed by methods that incorporate deviations from this design such as: variability in cluster sizes; attrition; non-compliance; or the inclusion of baseline covariates or repeated measures. The paper concludes with methods for alternative designs. Conclusions: There is a large amount of methodology available for sample size calculations in CRTs. This paper gives the most comprehensive description of published methodology for sample size calculation and provides an important resource for those designing these trials. PMID:26174515
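The simplest approach the authors describe, inflating an individually randomised sample size by a design effect, uses the textbook formula DEFF = 1 + (m − 1)ρ for equal cluster sizes m and intracluster correlation coefficient ρ. A sketch with illustrative inputs:

```python
# Simplest cluster-randomised sample size: inflate the individually
# randomised sample size by the design effect DEFF = 1 + (m - 1) * icc,
# where m is the (equal) cluster size and icc is the intracluster
# correlation coefficient. Inputs below are illustrative.
import math

def inflated_sample_size(n_individual: int, cluster_size: int, icc: float) -> int:
    """Total participants needed after accounting for clustering."""
    deff = 1 + (cluster_size - 1) * icc
    return math.ceil(n_individual * deff)

# e.g. 128 participants under individual randomisation,
# clusters of 20, ICC of 0.05: 128 * (1 + 19 * 0.05) = 249.6 -> 250
n = inflated_sample_size(128, 20, 0.05)
print(n)
```

As the abstract notes, this simple design effect assumes equal cluster sizes and a common ICC; the review covers adjustments when those assumptions fail.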

  3. An imbalance in cluster sizes does not lead to notable loss of power in cross-sectional, stepped-wedge cluster randomised trials with a continuous outcome.

    PubMed

    Kristunas, Caroline A; Smith, Karen L; Gray, Laura J

    2017-03-07

The current methodology for sample size calculations for stepped-wedge cluster randomised trials (SW-CRTs) is based on the assumption of equal cluster sizes. However, as is often the case in cluster randomised trials (CRTs), the clusters in SW-CRTs are likely to vary in size, which in other CRT designs leads to a reduction in power. The effect of an imbalance in cluster size on the power of SW-CRTs has not previously been reported, nor has an appropriate adjustment to the sample size calculation to allow for any imbalance. We aimed to assess the impact of an imbalance in cluster size on the power of a cross-sectional SW-CRT and to recommend a method for calculating the sample size of a SW-CRT when there is an imbalance in cluster size. The effect of varying degrees of imbalance in cluster size on the power of SW-CRTs was investigated using simulations. The sample size was calculated using both the standard method and two proposed adjusted design effects (DEs), based on those suggested for CRTs with unequal cluster sizes. The data were analysed using generalised estimating equations with an exchangeable correlation matrix and robust standard errors. An imbalance in cluster size was not found to have a notable effect on the power of SW-CRTs. The two proposed adjusted DEs resulted in trials that were generally considerably over-powered. We recommend that the standard method of sample size calculation for SW-CRTs be used, provided that the assumptions of the method hold. However, it would be beneficial to investigate, through simulation, what effect the maximum likely amount of inequality in cluster sizes would have on the power of the trial and whether any inflation of the sample size would be required.

  4. Useful Effect Size Interpretations for Single Case Research

    ERIC Educational Resources Information Center

    Parker, Richard I.; Hagan-Burke, Shanna

    2007-01-01

An obstacle to broader acceptability of effect sizes in single case research is their lack of intuitive and useful interpretations. Interpreting Cohen's d as "standard deviation units difference" and R² as "percent of variance accounted for" do not resound with most visual analysts. In fact, the only comparative analysis widely…

  5. Sampling for area estimation: A comparison of full-frame sampling with the sample segment approach. [Kansas

    NASA Technical Reports Server (NTRS)

    Hixson, M. M.; Bauer, M. E.; Davis, B. J.

    1979-01-01

The effect of sampling on the accuracy (precision and bias) of crop area estimates made from classifications of LANDSAT MSS data was investigated. Full-frame classifications of wheat and non-wheat for eighty counties in Kansas were repetitively sampled to simulate alternative sampling plans. Four sampling schemes involving different numbers of samples and different size sampling units were evaluated. The precision of the wheat area estimates increased as the segment size decreased and the number of segments was increased. Although the average bias associated with the various sampling schemes was not significantly different, the maximum absolute bias was directly related to sampling unit size.

  6. TNOs as probes of planet building: the Plutino size- & colour-distributions

    NASA Astrophysics Data System (ADS)

    Alexandersen, Mike; Gladman, Brett; Kavelaars, JJ; Petit, Jean-Marc; Gwyn, Stephen; Pike, Rosemary E.; Shankman, Cory

    2015-01-01

Planetesimals are the building blocks of giant planet cores; some are preserved as large transneptunian objects (TNOs). Previous work concluded steep power-law size-distributions for TNOs of diameters > 100 km. Recent results claim a dramatic roll-over or divot (sudden drop in number of objects at a transition size) in the size-distribution of Neptunian Trojans and scattering TNOs, with a significant lack of intermediate-size D<100 km planetesimals. One theoretical explanation is that planetesimals were born big, skipping the intermediate sizes, contrary to the expectation of bottom-up planetesimal formation.Using the Canada-France-Hawaii Telescope, our 32 sq.deg. survey, near RA=2 hr with limiting magnitude m_r=24.6, detected and tracked 77 TNOs and Centaurs for up to 28 months, providing both the high-quality orbits and the quantitative detection efficiency needed for precise modelling. We used the 18 Plutinos (3:2 Neptunian mean motion resonance) from our survey to constrain the size- and orbital-distribution model of this population. We show that the Plutino size-distribution cannot continue as a rising power-law past H_r ˜ 8.3 (D˜ 100 km); a sharp dramatic change must occur near this point. A single power-law is rejectable at >99% confidence; a double power law cannot be rejected outright, but appears to be an uncomfortable match to the available data. A divot, with the parameters found independently for scattering TNOs by Shankman et al. (2013, ApJ vol 764), provides an excellent match; the best match, found from an extensive parameter search, comes with only slightly different parameters; this size-distribution also satisfies the known Neptunian Trojan data. We also present g-r photometric colours for our Plutino sample, obtained with the Gemini North telescope in 2013-2014. Both large TNOs and small nearby Centaurs are known to feature a bimodal colour-distribution; however, recent work (Peixinho et al. 
2012, A&A vol 546) has suggested that intermediate-size TNOs may not show bimodality. Our telescopically-expensive endeavour has provided us with unique insight into the colour-distribution of the physically smallest Plutinos.

  7. TNOs as probes of planet building: the Plutino size- & colour-distributions

    NASA Astrophysics Data System (ADS)

    Alexandersen, Mike; Gladman, Brett; Kavelaars, Jj; Petit, Jean-Marc; Gwyn, Stephen; Shankman, Cory; Pike, Rosemary

    2014-11-01

Planetesimals are the building blocks of giant planet cores; some are preserved as large transneptunian objects (TNOs). Previous work concluded steep power-law size-distributions for TNOs of diameters > 100 km. Recent results claim a dramatic roll-over or divot (sudden drop in number of objects at a transition size) in the size-distribution of Neptunian Trojans and scattering TNOs, with a significant lack of intermediate-size D<100 km planetesimals. One theoretical explanation is that planetesimals were born big, skipping the intermediate sizes, contrary to the expectation of bottom-up planetesimal formation. Using the Canada-France-Hawaii Telescope, our 32 sq.deg. survey, near RA=2 hr with limiting magnitude m_r=24.6, detected and tracked 77 TNOs and Centaurs for up to 28 months, providing both the high-quality orbits and the quantitative detection efficiency needed for precise modelling. We used the 18 Plutinos (3:2 Neptunian resonance) from our survey to constrain the size- and orbital-distribution model of this population. We show that the Plutino size-distribution cannot continue as a rising power-law past H_r ˜ 8.3 (D˜ 100 km); a sharp dramatic change must occur near this point. A single power-law is rejectable at >99% confidence; a double power law cannot be rejected outright, but appears to be an uncomfortable match to the available data. A divot, with the parameters found independently for scattering TNOs by Shankman et al. (2013, ApJ vol 764), provides an excellent match; the best match, found from an extensive parameter search, comes with only slightly different parameters; this size-distribution also satisfies the known Neptunian Trojan data. Both large TNOs and small nearby Centaurs are known to feature a bimodal colour-distribution; however, recent work (Peixinho et al. 2012, A&A vol 546) has suggested that intermediate-size TNOs may not show bimodality. 
We present g-r photometric colours for our Plutino sample, obtained with the Gemini North telescope in 2013-2014. This telescopically-expensive endeavour has provided us with unique insight into the colour-distribution of the physically smallest Plutinos.

  8. Differences in Size Selectivity and Catch Composition Between Two Bottom Trawls Used in High-Arctic Surveys of Bottom Fishes, Crabs and Other Demersal Macrofauna

    NASA Astrophysics Data System (ADS)

    Lauth, R.; Norcross, B.; Kotwicki, S.; Britt, L.

    2016-02-01

Long-term monitoring of the high-Arctic marine biota is needed to understand how the ecosystem is changing in response to climate change, diminishing sea-ice, and increasing anthropogenic activity. Since 1959, bottom trawls (BT) have been a primary research tool for investigating fishes, crabs and other demersal macrofauna in the high-Arctic. However, sampling gears, methodologies, and the overall survey designs used have generally lacked consistency and/or have had limited spatial coverage. This has restricted the ability of scientists and managers to effectively use existing BT survey data for investigating historical trends and zoogeographic changes in high-Arctic marine populations. Two different BTs currently being used for surveying the high-Arctic are: 1) a small-mesh 3-m plumb-staff beam trawl (PSBT), and 2) a large-mesh 83-112 Eastern bottom trawl (EBT). A paired comparison study was conducted in 2012 to compare catch composition and the sampling characteristics of the two different trawl gears, and a size selectivity ratio statistic was used to investigate how the probability of fish and crab retention differs between the EBT and PSBT. Obvious contrasting characteristics of the PSBT and EBT were mesh size, area-swept, tow speed, and vertical opening. The finer mesh and harder bottom-tending characteristics of the PSBT retained juvenile fishes and other smaller macroinvertebrates, and it was also more efficient at catching benthic infauna just below the surface. The EBT had a larger net opening with greater tow duration at a higher speed that covered a potentially wider range of benthic habitats during a single tow, and it was more efficient at capturing larger and more mobile organisms, as well as organisms that were further off bottom. The ratio statistic indicated large differences in size selectivity between the two gears for both fish and crab. 
Results from this investigation will provide a framework for scientists and managers to better understand how to interpret and compare data from existing PSBT and EBT surveys in the high-Arctic, and the results provide information on factors worth considering in choosing which BT gear to use for a standardized long-term BT sampling program to monitor fishes, crabs and other demersal macrofauna in the high-Arctic.

  9. Structural Magnetic Resonance Imaging Correlates of Aggression in Psychosis: A Systematic Review and Effect Size Analysis.

    PubMed

    Widmayer, Sonja; Sowislo, Julia F; Jungfer, Hermann A; Borgwardt, Stefan; Lang, Undine E; Stieglitz, Rolf D; Huber, Christian G

    2018-01-01

Background: Aggression in psychoses is of high clinical importance, and volumetric MRI techniques have been used to explore its structural brain correlates. Methods: We conducted a systematic review searching EMBASE, ScienceDirect, and PsycINFO through September 2017 using thesauri representing aggression, psychosis, and brain imaging. We calculated effect sizes for each study and mean Hedges' g for whole brain (WB) volume. Methodological quality was established using the PRISMA checklist (PROSPERO: CRD42014014461). Results: Our sample consisted of 12 studies with 470 patients and 155 healthy controls (HC). After subtracting subjects due to cohort overlaps, 314 patients and 96 HC remained. Qualitative analyses showed lower volumes of WB, prefrontal regions, temporal lobe, hippocampus, thalamus and cerebellum, and higher volumes of lateral ventricles, amygdala, and putamen in violent vs. non-violent people with schizophrenia. In quantitative analyses, violent persons with schizophrenia exhibited a significantly lower WB volume than HC ( p = 0.004), and also lower than non-violent persons with schizophrenia ( p = 0.007). Conclusions: We reviewed evidence for differences in brain volume correlates of aggression in persons with schizophrenia. Our results point toward a reduced whole brain volume in violent as opposed to non-violent persons with schizophrenia. However, considerable sample overlap in the literature, lack of reporting of potential confounding variables, and missing research on affective psychoses limit our explanatory power. To permit stronger conclusions, further studies evaluating structural correlates of aggression in psychotic disorders are needed.

  10. TOFSIMS-P: a web-based platform for analysis of large-scale TOF-SIMS data.

    PubMed

    Yun, So Jeong; Park, Ji-Won; Choi, Il Ju; Kang, Byeongsoo; Kim, Hark Kyun; Moon, Dae Won; Lee, Tae Geol; Hwang, Daehee

    2011-12-15

Time-of-flight secondary ion mass spectrometry (TOF-SIMS) has been a useful tool to profile secondary ions from the near surface region of specimens with its high molecular specificity and submicrometer spatial resolution. However, the TOF-SIMS analysis of even a moderately large number of samples has been hampered by the lack of tools for automatically analyzing the huge amount of TOF-SIMS data. Here, we present a computational platform to automatically identify and align peaks, find discriminatory ions, build a classifier, and construct networks describing differential metabolic pathways. To demonstrate the utility of the platform, we analyzed 43 data sets generated from seven gastric cancer and eight normal tissues using TOF-SIMS. A total of 87,138 ions were detected from the 43 data sets by TOF-SIMS. We selected and then aligned 1286 ions. Among them, we found 66 ions discriminating gastric cancer tissues from normal ones. Using these 66 ions, we then built a partial least square-discriminant analysis (PLS-DA) model resulting in a misclassification error rate of 0.024. Finally, network analysis of the 66 ions showed dysregulation of amino acid metabolism in the gastric cancer tissues. The results show that the proposed framework was effective in analyzing TOF-SIMS data from a moderately large number of samples, resulting in discrimination of gastric cancer tissues from normal tissues and identification of biomarker candidates associated with amino acid metabolism.

  11. Joint pathology and behavioral performance in autoimmune MRL-lpr Mice.

    PubMed

    Sakić, B; Szechtman, H; Stead, R H; Denburg, J A

    1996-09-01

    Young autoimmune MRL-lpr mice perform more poorly than age-matched controls in tests of exploration, spatial learning, and emotional reactivity. Impaired behavioral performance coincides temporally with hyperproduction of autoantibodies, infiltration of lymphoid cells into the brain, and mild arthritic-like changes in hind paws. Although CNS mechanisms have been suggested to mediate behavioral deficits, it was not clear whether mild joint pathology significantly affected behavioral performance. Previously we observed that 11-week-old MRL-lpr mice showed a trend for disturbed performance when crossing a narrow beam. The first aim of the present study was to test the significance of this trend by increasing the sample size and, second, to examine the possibility that arthritis-like changes interfere with performance in brief locomotor tasks. For the purpose of the second goal, 18-week-old mice that differ widely in severity of joint disease were selectively taken from the population and tested in beam walking and swimming tasks. It was expected that the severity of joint inflammation would be positively correlated with the degree of locomotor impairment. The larger sample size revealed that young MRL-lpr mice perform significantly more poorly than controls on the beam-walking test, as evidenced by more foot slips and longer traversing time. However, significant correlation between joint pathology scores and measures of locomotion could not be detected. The lack of such relationship suggests that mild joint pathology does not significantly contribute to impaired performance in young, autoimmune MRL-lpr mice tested in short behavioral tasks.

  12. Variability of the raindrop size distribution at small spatial scales

    NASA Astrophysics Data System (ADS)

    Berne, A.; Jaffrain, J.

    2010-12-01

    Because of the interactions between atmospheric turbulence and cloud microphysics, the raindrop size distribution (DSD) is strongly variable in space and time. The spatial variability of the DSD at small spatial scales (below a few km) is not well documented and not well understood, mainly because of a lack of adequate measurements at the appropriate resolutions. A network of 16 disdrometers (Parsivels) has been designed and set up over EPFL campus in Lausanne, Switzerland. This network covers a typical operational weather radar pixel of 1x1 km2. The question of the significance of the variability of the DSD at such small scales is relevant for radar remote sensing of rainfall because the DSD is often assumed to be uniform within a radar sample volume and because the Z-R relationships used to convert the measured radar reflectivity Z into rain rate R are usually derived from point measurements. Thanks to the number of disdrometers, it was possible to quantify the spatial variability of the DSD at the radar pixel scale and to show that it can be significant. In this contribution, we show that the variability of the total drop concentration, of the median volume diameter and of the rain rate are significant, taking into account the sampling uncertainty associated with disdrometer measurements. The influence of this variability on the Z-R relationship can be non-negligible. Finally, the spatial structure of the DSD is quantified using a geostatistical tool, the variogram, and indicates high spatial correlation within a radar pixel.
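The variogram mentioned above summarises spatial structure as the semivariance of paired point measurements per distance bin. A minimal empirical-variogram sketch, with made-up station coordinates and rain rates standing in for the disdrometer network:

```python
# Empirical semivariogram of a point-measured field (e.g. rain rate at
# disdrometer locations). Coordinates and values are illustrative.
import numpy as np

def empirical_variogram(coords, values, bin_edges):
    """Semivariance per distance bin: gamma(h) = mean((z_i - z_j)^2) / 2."""
    coords = np.asarray(coords, float)
    values = np.asarray(values, float)
    i, j = np.triu_indices(len(values), k=1)       # all unordered pairs
    dist = np.linalg.norm(coords[i] - coords[j], axis=1)
    sqdiff = (values[i] - values[j]) ** 2
    gamma = []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        mask = (dist >= lo) & (dist < hi)
        gamma.append(0.5 * sqdiff[mask].mean() if mask.any() else np.nan)
    return np.array(gamma)

rng = np.random.default_rng(0)
coords = rng.uniform(0, 1000, size=(16, 2))        # 16 stations, 1x1 km pixel
values = rng.gamma(2.0, 2.0, size=16)              # e.g. rain rates in mm/h
gamma = empirical_variogram(coords, values, bin_edges=[0, 250, 500, 750, 1100])
print(gamma)
```

A rising gamma(h) at short lags indicates spatial correlation within the pixel; a flat variogram would suggest the field is effectively uncorrelated at these scales.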

  13. Morpho-z: improving photometric redshifts with galaxy morphology

    NASA Astrophysics Data System (ADS)

    Soo, John Y. H.; Moraes, Bruno; Joachimi, Benjamin; Hartley, William; Lahav, Ofer; Charbonnier, Aldée; Makler, Martín; Pereira, Maria E. S.; Comparat, Johan; Erben, Thomas; Leauthaud, Alexie; Shan, Huanyuan; Van Waerbeke, Ludovic

    2018-04-01

We conduct a comprehensive study of the effects of incorporating galaxy morphology information in photometric redshift estimation. Using machine learning methods, we assess the changes in the scatter and outlier fraction of photometric redshifts when galaxy size, ellipticity, Sérsic index, and surface brightness are included in training on galaxy samples from the SDSS and the CFHT Stripe-82 Survey (CS82). We show that by adding galaxy morphological parameters to full ugriz photometry, only mild improvements are obtained, while the gains are substantial in cases where fewer passbands are available. For instance, the combination of grz photometry and morphological parameters almost fully recovers the metrics of 5-band photometric redshifts. We demonstrate that with morphology it is possible to determine a useful redshift distribution N(z) of galaxy samples without any colour information. We also find that the inclusion of quasar redshifts and associated object sizes in training improves the quality of photometric redshift catalogues, compensating for the lack of a good star-galaxy separator. We further show that morphological information can mitigate biases and scatter due to bad photometry. As an application, we derive both point estimates and posterior distributions of redshifts for the official CS82 catalogue, training on morphology and SDSS Stripe-82 ugriz bands when available. Our redshifts yield a 68th percentile error of 0.058(1 + z), and an outlier fraction of 5.2 per cent. We further include a deep extension trained on morphology and single i-band CS82 photometry.

  14. Effect of Sucrose Stearate on the Sensory-Related Quality of the Broth and Porridge of Ready-To-Eat Ginseng Chicken Soup Samgyetang

    PubMed Central

    Triyannanto, Endy

    2017-01-01

    The objective of this study was to assess the sensory-related characteristics of the broth and porridge of ready-to-eat (RTE) ginseng chicken soup (Samgyetang) with sucrose stearate added at various concentrations (0.1%, 0.2%, and 0.3%) during storage at 25°C for 12 mon. Scores indicating the lightness and size of fat droplets in the broth increased during storage as the sucrose stearate concentration increased, while the clarity scores decreased until 9 mon and the taste scores decreased throughout the storage period (p<0.05). The porridge lightness increased as the concentration of sucrose stearate increased after 6 mon (p<0.05), while scores indicating the softness and vividness were higher for treated samples with sucrose stearate than for the control group after 3 mon, despite a lack of significant differences among treatment groups (p >0.05). The taste scores were lower for treated porridge samples than for the control group (p<0.05), even though no significant differences were observed among the treatment groups (p >0.05). The addition of sucrose stearate to the RTE Samgyetang broth improved the lightness (CIE L*) value of the broth and various sensory palatability parameters, including the color and fat droplet size of the broth and the softness and vividness of the porridge, despite reductions in broth clarity and taste scores for the broth and porridge during storage. PMID:29725207

  15. Effect of Sucrose Stearate on the Sensory-Related Quality of the Broth and Porridge of Ready-To-Eat Ginseng Chicken Soup Samgyetang.

    PubMed

    Triyannanto, Endy; Lee, Keun Taik

    2017-01-01

The objective of this study was to assess the sensory-related characteristics of the broth and porridge of ready-to-eat (RTE) ginseng chicken soup (Samgyetang) with sucrose stearate added at various concentrations (0.1%, 0.2%, and 0.3%) during storage at 25°C for 12 mon. Scores indicating the lightness and size of fat droplets in the broth increased during storage as the sucrose stearate concentration increased, while the clarity scores decreased until 9 mon and the taste scores decreased throughout the storage period (p<0.05). The porridge lightness increased as the concentration of sucrose stearate increased after 6 mon (p<0.05), while scores indicating the softness and vividness were higher for treated samples with sucrose stearate than for the control group after 3 mon, despite a lack of significant differences among treatment groups (p>0.05). The taste scores were lower for treated porridge samples than for the control group (p<0.05), even though no significant differences were observed among the treatment groups (p>0.05). The addition of sucrose stearate to the RTE Samgyetang broth improved the lightness (CIE L*) value of the broth and various sensory palatability parameters, including the color and fat droplet size of the broth and the softness and vividness of the porridge, despite reductions in broth clarity and taste scores for the broth and porridge during storage.

  16. Prediction of Depression in Cancer Patients With Different Classification Criteria, Linear Discriminant Analysis versus Logistic Regression.

    PubMed

    Shayan, Zahra; Mohammad Gholi Mezerji, Naser; Shayan, Leila; Naseri, Parisa

    2015-11-03

Logistic regression (LR) and linear discriminant analysis (LDA) are two popular statistical models for prediction of group membership. Although they are very similar, LDA makes more assumptions about the data. When categorical and continuous variables are used simultaneously, the optimal choice between the two models is questionable. In most studies, classification error (CE) is used to discriminate between subjects in several groups, but this index is not suitable to predict the accuracy of the outcome. The present study compared LR and LDA models using classification indices. This cross-sectional study selected 243 cancer patients. Sample sets of different sizes (n = 50, 100, 150, 200, 220) were randomly selected and the CE, B, and Q classification indices were calculated by the LR and LDA models. CE revealed a lack of superiority for one model over the other, but the results showed that LR performed better than LDA for the B and Q indices in all situations. No significant effect of sample size on CE was noted for selection of an optimal model. Assessment of the accuracy of prediction of real data indicated that the B and Q indices are appropriate for selection of an optimal model. The results of this study showed that LR performs better in some cases and LDA in others when based on CE. The CE index is not appropriate for classification, although the B and Q indices performed better and offered more efficient criteria for comparison and discrimination between groups.
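The head-to-head comparison of LR and LDA by classification error can be sketched with scikit-learn; the synthetic data below (sized like the study's cohort of 243 patients) is an illustrative assumption, not the cancer dataset:

```python
# Compare logistic regression and LDA by classification error (CE)
# on the same held-out data. Synthetic data, illustrative only.
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=243, n_features=8, n_informative=4,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, model in [("LR", LogisticRegression(max_iter=1000)),
                    ("LDA", LinearDiscriminantAnalysis())]:
    # CE = 1 - accuracy on the held-out set.
    ce = 1 - model.fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{name}: CE = {ce:.3f}")
```

On well-behaved Gaussian-like data the two models often yield near-identical CE, consistent with the study's finding that CE alone does not discriminate between them.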

  17. ReprDB and panDB: minimalist databases with maximal microbial representation.

    PubMed

    Zhou, Wei; Gay, Nicole; Oh, Julia

    2018-01-18

Profiling of shotgun metagenomic samples is hindered by a lack of unified microbial reference genome databases that (i) assemble genomic information from all open access microbial genomes, (ii) have relatively small sizes, and (iii) are compatible with various metagenomic read mapping tools. Moreover, computational tools to rapidly compile and update such databases to accommodate the rapid increase in new reference genomes do not exist. As a result, database-guided analyses often fail to profile a substantial fraction of metagenomic shotgun sequencing reads from complex microbiomes. We report pipelines that efficiently traverse all open access microbial genomes and assemble non-redundant genomic information. The pipelines result in two species-resolution microbial reference databases of relatively small sizes: reprDB, which assembles microbial representative or reference genomes, and panDB, for which we developed a novel iterative alignment algorithm to identify and assemble non-redundant genomic regions in multiple sequenced strains. With the databases, we managed to assign taxonomic labels and genome positions to the majority of metagenomic reads from human skin and gut microbiomes, demonstrating a significant improvement over a previous database-guided analysis on the same datasets. reprDB and panDB leverage the rapid increases in the number of open access microbial genomes to more fully profile metagenomic samples. Additionally, the databases exclude redundant sequence information to avoid inflated storage or memory space and indexing or analyzing time. Finally, the novel iterative alignment algorithm significantly increases efficiency in pan-genome identification and can be useful in comparative genomic analyses.

  18. Literature Review of Research on Chronic Pain and Yoga in Military Populations

    PubMed Central

    Miller, Shari; Gaylord, Susan; Buben, Alex; Brintz, Carrie; Rae Olmsted, Kristine; Asefnia, Nakisa; Bartoszek, Michael

    2017-01-01

Background: Although yoga is increasingly being provided to active duty soldiers and veterans, studies with military populations are limited and effects on chronic pain are largely unknown. We reviewed the existing body of literature and provide recommendations for future research. Methods: We conducted a literature review of electronic databases (PubMed, PsychINFO, Web of Science, Science Citation Index Expanded, Social Sciences Citation Index, Conference Proceedings Citation Index—Science, and Conference Proceedings Citation Index—Social Science & Humanities). The studies were reviewed for characteristics such as mean age of participants, sample size, yoga type, and study design. Only peer-reviewed studies were included in the review. Results: The search yielded only six studies that examined pain as an outcome of yoga for military populations. With one exception, studies were with veteran populations. Only one study was conducted with Operation Enduring Freedom (OEF) or Operation Iraqi Freedom (OIF) veterans. One study was a randomized controlled trial (RCT). Four of the five remaining studies used a pre/post design, while the last study used a post-only design. Conclusions: Studies on the use of yoga to treat chronic pain in military populations are in their infancy. Methodological weaknesses include small sample sizes, a lack of studies with key groups (active duty, OEF/OIF veterans), and use of single group uncontrolled designs (pre/post; post only) for all but one study. Future research is needed to address these methodological limitations and build on this small body of literature. PMID:28930278

  19. Genetic conservation and management of the Californian endemic, Torrey Pine (Pinus torreyana Parry)

    Treesearch

    Jill A. Hamilton; Jessica W. Wright; F. Thomas Ledig

    2017-01-01

    Torrey pine (Pinus torreyana) is one of the rarest pine species in the world. Restricted to one mainland and one island population in California, Torrey pine is a species of conservation concern under threat due to low population sizes, lack of genetic variation, and environmental stochasticity. Previous research points to a lack of within population variation that is...

  20. The effect of machine learning regression algorithms and sample size on individualized behavioral prediction with functional connectivity features.

    PubMed

    Cui, Zaixu; Gong, Gaolang

    2018-06-02

Individualized behavioral/cognitive prediction using machine learning (ML) regression approaches is becoming increasingly applied. The specific ML regression algorithm and sample size are two key factors that non-trivially influence prediction accuracies. However, the effects of the ML regression algorithm and sample size on individualized behavioral/cognitive prediction performance have not been comprehensively assessed. To address this issue, the present study included six commonly used ML regression algorithms: ordinary least squares (OLS) regression, least absolute shrinkage and selection operator (LASSO) regression, ridge regression, elastic-net regression, linear support vector regression (LSVR), and relevance vector regression (RVR), to perform specific behavioral/cognitive predictions based on different sample sizes. Specifically, the publicly available resting-state functional MRI (rs-fMRI) dataset from the Human Connectome Project (HCP) was used, and whole-brain resting-state functional connectivity (rsFC) or rsFC strength (rsFCS) were extracted as prediction features. Twenty-five sample sizes (ranging from 20 to 700) were studied by sub-sampling from the entire HCP cohort. The analyses showed that rsFC-based LASSO regression performed remarkably worse than the other algorithms, and rsFCS-based OLS regression performed markedly worse than the other algorithms. Regardless of the algorithm and feature type, both the prediction accuracy and its stability exponentially increased with increasing sample size. The specific patterns of the observed algorithm and sample size effects were well replicated in the prediction using re-testing fMRI data, data processed by different imaging preprocessing schemes, and different behavioral/cognitive scores, thus indicating excellent robustness/generalization of the effects. 
The current findings provide critical insight into how the selected ML regression algorithm and sample size influence individualized predictions of behavior/cognition and offer important guidance for choosing the ML regression algorithm or sample size in relevant investigations. Copyright © 2018 Elsevier Inc. All rights reserved.
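The rising accuracy-with-sample-size pattern described above can be illustrated with a toy sub-sampling experiment. This is a minimal numpy-only sketch on synthetic data (not HCP rsFC features; all dimensions and noise levels are illustrative), using closed-form ridge regression and the Pearson correlation between predicted and observed scores as the accuracy measure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for connectivity features: p-dimensional features with a
# linear ground truth plus noise (all values here are illustrative).
p = 50
w_true = rng.normal(size=p)
X_test = rng.normal(size=(500, p))
y_test = X_test @ w_true + rng.normal(scale=2.0, size=500)

def ridge_fit_predict(n_train, alpha=10.0):
    """Train closed-form ridge regression on n_train samples and return the
    Pearson correlation between predicted and observed test scores."""
    X = rng.normal(size=(n_train, p))
    y = X @ w_true + rng.normal(scale=2.0, size=n_train)
    w = np.linalg.solve(X.T @ X + alpha * np.eye(p), X.T @ y)
    return np.corrcoef(X_test @ w, y_test)[0, 1]

accuracies = {n: ridge_fit_predict(n) for n in (20, 60, 200, 700)}
for n, r in accuracies.items():
    print(f"n_train={n:4d}  r(predicted, observed)={r:.3f}")
```

With a fixed seed the qualitative result matches the abstract: accuracy is poor and unstable at n = 20 and rises steeply toward the noise ceiling as the training sample grows.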

  1. Neuromuscular dose-response studies: determining sample size.

    PubMed

    Kopman, A F; Lien, C A; Naguib, M

    2011-02-01

Investigators planning dose-response studies of neuromuscular blockers have rarely used a priori power analysis to determine the minimal sample size their protocols require. Institutional Review Boards and peer-reviewed journals now generally ask for this information. This study outlines a proposed method for meeting these requirements. The slopes of the dose-response relationships of eight neuromuscular blocking agents were determined using regression analysis. These values were substituted for γ in the Hill equation. When this is done, the coefficient of variation (COV) around the mean value of the ED₅₀ for each drug is easily calculated. Using these values, we performed an a priori one-sample two-tailed t-test of the means to determine the required sample size when the allowable error in the ED₅₀ was varied from ±10-20%. The COV averaged 22% (range 15-27%). We used a COV value of 25% in determining the sample size. If the allowable error in finding the mean ED₅₀ is ±15%, a sample size of 24 is needed to achieve a power of 80%. Increasing 'accuracy' beyond this point requires increasingly larger sample sizes (e.g. an 'n' of 37 for a ±12% error). On the basis of the results of this retrospective analysis, a total sample size of not less than 24 subjects should be adequate for determining a neuromuscular blocking drug's clinical potency with a reasonable degree of assurance.
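The numbers quoted above can be checked with an exact noncentral-t power calculation. A minimal sketch, assuming a two-tailed one-sample t-test at α = 0.05 as described in the abstract:

```python
from math import sqrt
from scipy import stats

def one_sample_n(effect_size, alpha=0.05, power=0.80, n_max=1000):
    """Smallest n for a two-tailed one-sample t-test to reach the target
    power at the given standardized effect size (exact noncentral-t power)."""
    for n in range(2, n_max):
        df = n - 1
        t_crit = stats.t.ppf(1 - alpha / 2, df)
        nc = effect_size * sqrt(n)          # noncentrality parameter
        pwr = (1 - stats.nct.cdf(t_crit, df, nc)
               + stats.nct.cdf(-t_crit, df, nc))
        if pwr >= power:
            return n
    raise ValueError("power not reached within n_max")

# With COV = 25%, an allowable error of +/-15% of the ED50 is a standardized
# effect of 0.15 / 0.25 = 0.6 (values taken from the abstract above).
print(one_sample_n(0.15 / 0.25))   # -> 24
```

The ±15% case reproduces the reported n = 24, and the ±12% case (effect size 0.48) lands at approximately the reported n of 37.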

  2. Sample size considerations for paired experimental design with incomplete observations of continuous outcomes.

    PubMed

    Zhu, Hong; Xu, Xiaohan; Ahn, Chul

    2017-01-01

Paired experimental design is widely used in clinical and health behavioral studies, where each study unit contributes a pair of observations. Investigators often encounter incomplete observations of paired outcomes in the data collected. Some study units contribute complete pairs of observations, while the others contribute either pre- or post-intervention observations. Statistical inference for paired experimental design with incomplete observations of continuous outcomes has been extensively studied in the literature. However, sample size methods for such study designs are sparsely available. We derive a closed-form sample size formula based on the generalized estimating equation approach by treating the incomplete observations as missing data in a linear model. The proposed method properly accounts for the impact of the mixed structure of observed data: a combination of paired and unpaired outcomes. The sample size formula is flexible to accommodate different missing patterns, magnitude of missingness, and correlation parameter values. We demonstrate that under complete observations, the proposed generalized estimating equation sample size estimate is the same as that based on the paired t-test. In the presence of missing data, the proposed method would lead to a more accurate sample size estimate compared with the crude adjustment. Simulation studies are conducted to evaluate the finite-sample performance of the generalized estimating equation sample size formula. A real application example is presented for illustration.
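As a point of reference for the complete-data special case mentioned above (where the GEE estimate coincides with the paired t-test), a minimal sketch of the paired-design sample size under the usual normal approximation; the numeric inputs are hypothetical, not taken from the study:

```python
from math import ceil
from scipy.stats import norm

def paired_n(delta, sd_diff, alpha=0.05, power=0.80):
    """Number of pairs for a paired comparison of means using the standard
    normal-approximation formula n = ((z_{1-a/2} + z_{power}) * sd / delta)^2;
    with complete pairs this is the case the GEE formula reduces to."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return ceil((z * sd_diff / delta) ** 2)

# Hypothetical numbers: detect a mean pre/post change of 0.5 units when the
# SD of the within-pair differences is 1.0.
print(paired_n(delta=0.5, sd_diff=1.0))   # -> 32
```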

  3. How Large Should a Statistical Sample Be?

    ERIC Educational Resources Information Center

    Menil, Violeta C.; Ye, Ruili

    2012-01-01

This study serves as a teaching aid for teachers of introductory statistics. The aim of this study was limited to determining various sample sizes when estimating a population proportion. Tables of sample sizes were generated using a C++ program, which depends on population size, degree of precision or error level, and confidence…
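For illustration, a minimal sketch of the standard proportion sample-size formula with the finite population correction, which depends on the same three inputs the record lists (population size, error level, and confidence level); the exact tables in the study may differ:

```python
from math import ceil
from scipy.stats import norm

def proportion_n(margin, confidence=0.95, p=0.5, population=None):
    """Sample size for estimating a population proportion within +/- margin:
    n0 = z^2 * p * (1 - p) / margin^2, optionally adjusted by the finite
    population correction n = n0 / (1 + (n0 - 1) / N)."""
    z = norm.ppf(0.5 + confidence / 2)
    n0 = z ** 2 * p * (1 - p) / margin ** 2     # infinite-population size
    if population is None:
        return ceil(n0)
    return ceil(n0 / (1 + (n0 - 1) / population))

print(proportion_n(0.05))                    # -> 385
print(proportion_n(0.05, population=1000))   # -> 278
```

Using p = 0.5 gives the most conservative (largest) sample size, since p(1 − p) is maximized there.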

  4. Size and modal analyses of fines and ultrafines from some Apollo 17 samples

    NASA Technical Reports Server (NTRS)

    Greene, G. M.; King, D. T., Jr.; Banholzer, G. S., Jr.; King, E. A.

    1975-01-01

    Scanning electron and optical microscopy techniques have been used to determine the grain-size frequency distributions and morphology-based modal analyses of fine and ultrafine fractions of some Apollo 17 regolith samples. There are significant and large differences between the grain-size frequency distributions of the less than 10-micron size fraction of Apollo 17 samples, but there are no clear relations to the local geologic setting from which individual samples have been collected. This may be due to effective lateral mixing of regolith particles in this size range by micrometeoroid impacts. None of the properties of the frequency distributions support the idea of selective transport of any fine grain-size fraction, as has been proposed by other workers. All of the particle types found in the coarser size fractions also occur in the less than 10-micron particles. In the size range from 105 to 10 microns there is a strong tendency for the percentage of regularly shaped glass to increase as the graphic mean grain size of the less than 1-mm size fraction decreases, both probably being controlled by exposure age.

  5. Sample size, confidence, and contingency judgement.

    PubMed

    Clément, Mélanie; Mercier, Pierre; Pastò, Luigi

    2002-06-01

    According to statistical models, the acquisition function of contingency judgement is due to confidence increasing with sample size. According to associative models, the function reflects the accumulation of associative strength on which the judgement is based. Which view is right? Thirty university students assessed the relation between a fictitious medication and a symptom of skin discoloration in conditions that varied sample size (4, 6, 8 or 40 trials) and contingency (delta P = .20, .40, .60 or .80). Confidence was also collected. Contingency judgement was lower for smaller samples, while confidence level correlated inversely with sample size. This dissociation between contingency judgement and confidence contradicts the statistical perspective.

  6. The attention-weighted sample-size model of visual short-term memory: Attention capture predicts resource allocation and memory load.

    PubMed

    Smith, Philip L; Lilburn, Simon D; Corbett, Elaine A; Sewell, David K; Kyllingsbæk, Søren

    2016-09-01

We investigated the capacity of visual short-term memory (VSTM) in a phase discrimination task that required judgments about the configural relations between pairs of black and white features. Sewell et al. (2014) previously showed that VSTM capacity in an orientation discrimination task was well described by a sample-size model, which views VSTM as a resource composed of a finite number of noisy stimulus samples. The model predicts the invariance of the sum of squared sensitivities across items for displays of different sizes. For phase discrimination, the set-size effect significantly exceeded that predicted by the sample-size model for both simultaneously and sequentially presented stimuli. Instead, the set-size effect and the serial position curves with sequential presentation were predicted by an attention-weighted version of the sample-size model, which assumes that one of the items in the display captures attention and receives a disproportionate share of resources. The choice probabilities and response time distributions from the task were well described by a diffusion decision model in which the drift rates embodied the assumptions of the attention-weighted sample-size model. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  7. Differentiating gold nanorod samples using particle size and shape distributions from transmission electron microscope images

    NASA Astrophysics Data System (ADS)

    Grulke, Eric A.; Wu, Xiaochun; Ji, Yinglu; Buhr, Egbert; Yamamoto, Kazuhiro; Song, Nam Woong; Stefaniak, Aleksandr B.; Schwegler-Berry, Diane; Burchett, Woodrow W.; Lambert, Joshua; Stromberg, Arnold J.

    2018-04-01

    Size and shape distributions of gold nanorod samples are critical to their physico-chemical properties, especially their longitudinal surface plasmon resonance. This interlaboratory comparison study developed methods for measuring and evaluating size and shape distributions for gold nanorod samples using transmission electron microscopy (TEM) images. The objective was to determine whether two different samples, which had different performance attributes in their application, were different with respect to their size and/or shape descriptor distributions. Touching particles in the captured images were identified using a ruggedness shape descriptor. Nanorods could be distinguished from nanocubes using an elongational shape descriptor. A non-parametric statistical test showed that cumulative distributions of an elongational shape descriptor, that is, the aspect ratio, were statistically different between the two samples for all laboratories. While the scale parameters of size and shape distributions were similar for both samples, the width parameters of size and shape distributions were statistically different. This protocol fulfills an important need for a standardized approach to measure gold nanorod size and shape distributions for applications in which quantitative measurements and comparisons are important. Furthermore, the validated protocol workflow can be automated, thus providing consistent and rapid measurements of nanorod size and shape distributions for researchers, regulatory agencies, and industry.

  8. Conservative Sample Size Determination for Repeated Measures Analysis of Covariance.

    PubMed

    Morgan, Timothy M; Case, L Douglas

    2013-07-05

    In the design of a randomized clinical trial with one pre and multiple post randomized assessments of the outcome variable, one needs to account for the repeated measures in determining the appropriate sample size. Unfortunately, one seldom has a good estimate of the variance of the outcome measure, let alone the correlations among the measurements over time. We show how sample sizes can be calculated by making conservative assumptions regarding the correlations for a variety of covariance structures. The most conservative choice for the correlation depends on the covariance structure and the number of repeated measures. In the absence of good estimates of the correlations, the sample size is often based on a two-sample t-test, making the 'ultra' conservative and unrealistic assumption that there are zero correlations between the baseline and follow-up measures while at the same time assuming there are perfect correlations between the follow-up measures. Compared to the case of taking a single measurement, substantial savings in sample size can be realized by accounting for the repeated measures, even with very conservative assumptions regarding the parameters of the assumed correlation matrix. Assuming compound symmetry, the sample size from the two-sample t-test calculation can be reduced at least 44%, 56%, and 61% for repeated measures analysis of covariance by taking 2, 3, and 4 follow-up measures, respectively. The results offer a rational basis for determining a fairly conservative, yet efficient, sample size for clinical trials with repeated measures and a baseline value.
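The percentage savings quoted above can be reproduced under one plausible reading of the design: the treatment contrast is the baseline-adjusted mean of the k follow-up measures with compound-symmetric correlation ρ, and the conservative choice maximizes the relative variance over ρ. This is a reconstruction consistent with the reported 44%, 56%, and 61%, not necessarily the paper's exact derivation:

```python
def worst_case_reduction(k):
    """Relative variance of the baseline-adjusted mean of k follow-ups under
    compound symmetry is f(rho) = (1 + (k - 1) * rho) / k - rho**2, which is
    maximized at rho = (k - 1) / (2 * k).  The guaranteed sample-size
    reduction versus a two-sample t-test on a single measure is 1 - max f."""
    rho = (k - 1) / (2 * k)
    f = (1 + (k - 1) * rho) / k - rho ** 2
    return 1 - f

for k in (2, 3, 4):
    print(f"{k} follow-ups: at least {worst_case_reduction(k):.0%} smaller n")
```

Evaluating at k = 2, 3, 4 yields 44%, 56%, and 61%, matching the abstract's figures.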

  9. Estimating the size of hidden populations using respondent-driven sampling data: Case examples from Morocco

    PubMed Central

    Johnston, Lisa G; McLaughlin, Katherine R; Rhilani, Houssine El; Latifi, Amina; Toufik, Abdalla; Bennani, Aziza; Alami, Kamal; Elomari, Boutaina; Handcock, Mark S

    2015-01-01

Background Respondent-driven sampling is used worldwide to estimate the population prevalence of characteristics such as HIV/AIDS and associated risk factors in hard-to-reach populations. Estimating the total size of these populations is of great interest to national and international organizations; however, reliable measures of population size often do not exist. Methods Successive Sampling-Population Size Estimation (SS-PSE) along with network size imputation allows population size estimates to be made without relying on separate studies or additional data (as in network scale-up, multiplier, and capture-recapture methods), which may be biased. Results Ten population size estimates were calculated for people who inject drugs, female sex workers, men who have sex with men, and migrants from sub-Saharan Africa in six different cities in Morocco. SS-PSE estimates fell within or very close to the likely values provided by experts and the estimates from previous studies using other methods. Conclusions SS-PSE is an effective method for estimating the size of hard-to-reach populations that leverages important information within respondent-driven sampling studies. The addition of a network size imputation method helps to smooth network sizes, allowing for more accurate results. However, caution should be used, particularly when there is reason to believe that clustered subgroups may exist within the population of interest or when the sample size is small in relation to the population. PMID:26258908

  10. Scale-Dependent Habitat Selection and Size-Based Dominance in Adult Male American Alligators

    PubMed Central

    Strickland, Bradley A.; Vilella, Francisco J.; Belant, Jerrold L.

    2016-01-01

    Habitat selection is an active behavioral process that may vary across spatial and temporal scales. Animals choose an area of primary utilization (i.e., home range) then make decisions focused on resource needs within patches. Dominance may affect the spatial distribution of conspecifics and concomitant habitat selection. Size-dependent social dominance hierarchies have been documented in captive alligators, but evidence is lacking from wild populations. We studied habitat selection for adult male American alligators (Alligator mississippiensis; n = 17) on the Pearl River in central Mississippi, USA, to test whether habitat selection was scale-dependent and individual resource selectivity was a function of conspecific body size. We used K-select analysis to quantify selection at the home range scale and patches within the home range to determine selection congruency and important habitat variables. In addition, we used linear models to determine if body size was related to selection patterns and strengths. Our results indicated habitat selection of adult male alligators was a scale-dependent process. Alligators demonstrated greater overall selection for habitat variables at the patch level and less at the home range level, suggesting resources may not be limited when selecting a home range for animals in our study area. Further, diurnal habitat selection patterns may depend on thermoregulatory needs. There was no relationship between resource selection or home range size and body size, suggesting size-dependent dominance hierarchies may not have influenced alligator resource selection or space use in our sample. Though apparent habitat suitability and low alligator density did not manifest in an observed dominance hierarchy, we hypothesize that a change in either could increase intraspecific interactions, facilitating a dominance hierarchy. 
Due to the broad and diverse ecological roles of alligators, understanding the factors that influence their social dominance and space use can provide great insight into their functional role in the ecosystem. PMID:27588947

  11. Scale-dependent habitat selection and size-based dominance in adult male American alligators

    USGS Publications Warehouse

    Strickland, Bradley A.; Vilella, Francisco; Belant, Jerrold L.

    2016-01-01

    Habitat selection is an active behavioral process that may vary across spatial and temporal scales. Animals choose an area of primary utilization (i.e., home range) then make decisions focused on resource needs within patches. Dominance may affect the spatial distribution of conspecifics and concomitant habitat selection. Size-dependent social dominance hierarchies have been documented in captive alligators, but evidence is lacking from wild populations. We studied habitat selection for adult male American alligators (Alligator mississippiensis; n = 17) on the Pearl River in central Mississippi, USA, to test whether habitat selection was scale-dependent and individual resource selectivity was a function of conspecific body size. We used K-select analysis to quantify selection at the home range scale and patches within the home range to determine selection congruency and important habitat variables. In addition, we used linear models to determine if body size was related to selection patterns and strengths. Our results indicated habitat selection of adult male alligators was a scale-dependent process. Alligators demonstrated greater overall selection for habitat variables at the patch level and less at the home range level, suggesting resources may not be limited when selecting a home range for animals in our study area. Further, diurnal habitat selection patterns may depend on thermoregulatory needs. There was no relationship between resource selection or home range size and body size, suggesting size-dependent dominance hierarchies may not have influenced alligator resource selection or space use in our sample. Though apparent habitat suitability and low alligator density did not manifest in an observed dominance hierarchy, we hypothesize that a change in either could increase intraspecific interactions, facilitating a dominance hierarchy. 
Due to the broad and diverse ecological roles of alligators, understanding the factors that influence their social dominance and space use can provide great insight into their functional role in the ecosystem.

  12. Sample allocation balancing overall representativeness and stratum precision.

    PubMed

    Diaz-Quijano, Fredi Alexander

    2018-05-07

In large-scale surveys, it is often necessary to distribute a preset sample size among a number of strata. Researchers must make a decision between prioritizing overall representativeness or precision of stratum estimates. Hence, I evaluated different sample allocation strategies based on stratum size. The strategies evaluated herein included allocation proportional to stratum population; equal sample for all strata; and allocation proportional to the natural logarithm, cubic root, and square root of the stratum population. This study considered the fact that, for a preset sample size, the dispersion index of stratum sampling fractions is correlated with the population estimator error, while the dispersion index of stratum-specific sampling errors measures the inequality in precision distribution. Identification of a balanced and efficient strategy was based on comparing both dispersion indices. Balance and efficiency of the strategies changed depending on overall sample size. As the sample to be distributed increased, the most efficient allocation strategies were equal sample for each stratum; proportional to the logarithm, the cubic root, and the square root of the stratum population; and proportional to the stratum population itself, respectively. Depending on sample size, each of the strategies evaluated could be considered in optimizing the sample to keep both overall representativeness and stratum-specific precision. Copyright © 2018 Elsevier Inc. All rights reserved.
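The allocation strategies compared above can be sketched as weight functions of the stratum population. A minimal example with hypothetical stratum sizes, using largest-remainder rounding so that each strategy distributes exactly the preset total:

```python
import math

def allocate(total, populations, weight_fn):
    """Largest-remainder apportionment of a preset total sample size across
    strata, with stratum weights given by weight_fn(population)."""
    weights = [weight_fn(p) for p in populations]
    quotas = [total * w / sum(weights) for w in weights]
    alloc = [math.floor(q) for q in quotas]
    # Hand the leftover units to the strata with the largest remainders.
    remainders = sorted(range(len(quotas)),
                        key=lambda i: quotas[i] - alloc[i], reverse=True)
    for i in remainders[: total - sum(alloc)]:
        alloc[i] += 1
    return alloc

strata = [50_000, 5_000, 500]          # hypothetical stratum populations
strategies = {
    "proportional": lambda p: p,
    "equal":        lambda p: 1,
    "log":          math.log,
    "cubic root":   lambda p: p ** (1 / 3),
    "square root":  math.sqrt,
}
for name, fn in strategies.items():
    print(f"{name:12s} {allocate(600, strata, fn)}")
```

Running this makes the trade-off concrete: proportional allocation concentrates the sample in the largest stratum, equal allocation maximizes stratum-level precision for small strata, and the log/root strategies interpolate between the two.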

  13. Effect of roll hot press temperature on crystallite size of PVDF film

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hartono, Ambran, E-mail: ambranhartono@yahoo.com; Sanjaya, Edi; Djamal, Mitra

    2014-03-24

PVDF films were fabricated using a hot roll press. Samples were prepared at nine different temperatures to examine the effect of roll hot press temperature on the crystallite size of the PVDF films. X-ray diffraction was used to obtain a diffraction pattern for each sample, and the crystallite size was then calculated from the diffraction pattern using the Scherrer equation. The calculated crystallite sizes for samples processed at 130 °C through 170 °C increased from 7.2 nm to 20.54 nm. These results show that increasing the temperature also increases the crystallite size of the sample: higher temperatures produce a higher degree of crystallization in the PVDF film, so the crystallite size increases as well. This behavior indicates that the specific volume or size of the crystals depends on the processing temperature, as previously studied by Nakagawa.
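The Scherrer calculation referred to above takes the general form D = Kλ/(β cos θ). A minimal sketch with illustrative inputs (the abstract does not report peak widths), assuming Cu Kα radiation and a shape factor K ≈ 0.9:

```python
from math import cos, radians

def scherrer_size_nm(fwhm_deg, two_theta_deg, wavelength_nm=0.15406, k=0.9):
    """Crystallite size D = K * lambda / (beta * cos(theta)), where beta is
    the peak full width at half maximum converted to radians and theta is
    half the diffraction angle (Cu K-alpha wavelength by default)."""
    beta = radians(fwhm_deg)
    theta = radians(two_theta_deg / 2)
    return k * wavelength_nm / (beta * cos(theta))

# Illustrative numbers (not from the abstract): a 0.6-degree-wide peak at
# 2-theta = 20 degrees corresponds to a crystallite of roughly 13-14 nm.
print(f"{scherrer_size_nm(0.6, 20.0):.1f} nm")
```

Note the inverse relationship: as crystallites grow with processing temperature, the diffraction peaks sharpen (β shrinks), which is how the abstract's size increase would show up in the raw patterns.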

  14. Reproductive strategies and seasonal changes in the somatic indices of seven small-bodied fishes in Atlantic Canada in relation to study design for environmental effects monitoring.

    PubMed

    Barrett, Timothy J; Brasfield, Sandra M; Carroll, Leslie C; Doyle, Meghan A; van den Heuvel, Michael R; Munkittrick, Kelly R

    2015-05-01

    Small-bodied fishes are more commonly being used in environmental effects monitoring (EEM) studies. There is a lack of understanding of the biological characteristics of many small-bodied species, which hinders study designs for monitoring studies. For example, 72% of fish population surveys in Canada's EEM program for pulp and paper mills that used small-bodied fishes were conducted outside of the reproductive period of the species. This resulted in an inadequate assessment of the EEM program's primary effect endpoint (reproduction) for these studies. The present study examined seasonal changes in liver size, gonad size, and condition in seven freshwater and estuarine small-bodied fishes in Atlantic Canada. These data were used to examine differences in reproductive strategies and patterns of energy storage among species. Female gonadal recrudescence in all seven species began primarily in the 2-month period in the spring before spawning. Male gonadal development was concurrent with females in five species; however, gonadal recrudescence began in the fall in male three-spined stickleback (Gasterosteus aculeatus) and slimy sculpin (Cottus cognatus). The spawning period for each species was estimated from the decline in relative ovary size after its seasonal maximum value in spring. The duration of the spawning period reflected the reproductive strategy (single vs multiple spawning) of the species. Optimal sampling periods to assess reproductive impacts in each species were determined based on seasonal changes in ovary size and were identified to be during the prespawning period when gonads are developing and variability in relative gonad size is at a minimum.

  15. Population size and trend of Yellow-billed Loons in northern Alaska

    USGS Publications Warehouse

    Earnst, Susan L.; Stehn, R.A.; Platte, Robert; Larned, W.W.; Mallek, E.J.

    2005-01-01

The Yellow-billed Loon (Gavia adamsii) is of conservation concern due to its restricted range, small population size, specific habitat requirements, and perceived threats to its breeding and wintering habitat. Within the U.S., this species breeds almost entirely within the National Petroleum Reserve-Alaska, nearly all of which is open, or proposed to be opened, for oil development. Rigorous estimates of Yellow-billed Loon population size and trend are lacking but essential for informed conservation. We used two annual aerial waterfowl surveys, conducted 1986–2003 and 1992–2003, to estimate population size and trend on northern Alaskan breeding grounds. In estimating population trend, we used mixed-effects regression models to reduce bias and sampling error associated with improvement in observer skill and annual effects of spring phenology. The estimated population trend on Alaskan breeding grounds since 1986 was near 0, with an estimated annual change of −0.9% (95% CI of −3.6% to +1.8%). The estimated population size, averaged over the past 12 years and adjusted by a correction factor based on an intensive, lake-circling, aerial survey method, was 2221 individuals (95% CI of 1206–3235) in early June and 3369 individuals (95% CI of 1910–4828) in late June. Based on estimates from other studies of the proportion of loons nesting in a given year, it is likely that <1000 nesting pairs inhabit northern Alaska in most years. The highest concentration of Yellow-billed Loons occurred between the Meade and Ikpikpuk Rivers; and across all of northern Alaska, 53% of recorded sightings occurred within 12% of the area.

  16. Assessment of sampling stability in ecological applications of discriminant analysis

    USGS Publications Warehouse

    Williams, B.K.; Titus, K.

    1988-01-01

    A simulation study was undertaken to assess the sampling stability of the variable loadings in linear discriminant function analysis. A factorial design was used for the factors of multivariate dimensionality, dispersion structure, configuration of group means, and sample size. A total of 32,400 discriminant analyses were conducted, based on data from simulated populations with appropriate underlying statistical distributions. A review of 60 published studies and 142 individual analyses indicated that sample sizes in ecological studies often have met that requirement. However, individual group sample sizes frequently were very unequal, and checks of assumptions usually were not reported. The authors recommend that ecologists obtain group sample sizes that are at least three times as large as the number of variables measured.

  17. Update on Controls for Isolation and Quantification Methodology of Extracellular Vesicles Derived from Adipose Tissue Mesenchymal Stem Cells

    PubMed Central

    Franquesa, Marcella; Hoogduijn, Martin J.; Ripoll, Elia; Luk, Franka; Salih, Mahdi; Betjes, Michiel G. H.; Torras, Juan; Baan, Carla C.; Grinyó, Josep M.; Merino, Ana Maria

    2014-01-01

The research field on extracellular vesicles (EV) has rapidly expanded in recent years due to the therapeutic potential of EV. Adipose tissue human mesenchymal stem cells (ASC) may be a suitable source for therapeutic EV. A major limitation in the field is the lack of standardization of the challenging techniques to isolate and characterize EV. The aim of our study was to incorporate new controls for the detection and quantification of EV derived from ASC and to analyze the applicability and limitations of the available techniques. ASC were cultured in medium supplemented with 5% vesicle-free fetal bovine serum. The EV were isolated from conditioned medium by differential centrifugation with size filtration (0.2 μm). As a control, non-conditioned culture medium was used (control medium). To detect EV, electron microscopy, conventional flow cytometry, and western blot were used. The EV were quantified by total protein quantification, ExoELISA immunoassay, and Nanosight. Cytokines and growth factors in the EV samples were measured by multiplex bead array kit. The EV were detected by electron microscope. Total protein measurement was not useful to quantify EV, as the control medium showed similar protein contents as the EV samples. The ExoELISA kits had technical problems, and it was not possible to quantify the concentration of exosomes in the samples. The use of Nanosight enabled quantification and size determination of the EV. It is, however, not possible to distinguish protein aggregates from EV with this method. The technologies for quantification and characterization of the EV need to be improved. In addition, we detected protein contaminants in the EV samples, which make it difficult to determine the real effect of EV in experimental models. It will be crucial in the future to design and optimize novel methods for purification and characterization of EV. PMID:25374572

  18. Assessing mycoplasma contamination of cell cultures by qPCR using a set of universal primer pairs targeting a 1.5 kb fragment of 16S rRNA genes

    PubMed Central

    Jean, Audrey; Tardy, Florence; Allatif, Omran; Grosjean, Isabelle; Blanquier, Bariza

    2017-01-01

Mycoplasmas (a generic name for Mollicutes) are a predominant bacterial contaminant of cell culture and cell derived products including viruses. This prokaryote class is characterized by very small size and lack of a cell wall. Consequently, mycoplasmas escape ultrafiltration and visualization under routine microscopic examination; hence the ease with which cell cultures become contaminated, with routinely more than 10% of cell lines affected. Mycoplasmas are a formidable threat both in fundamental research, by perverting a whole range of cell properties and functions, and in the pharmacological use of cells and cell derived products. Although many methods have been developed, there is still a need for a sensitive, universal assay. Here is reported the development and validation of a quantitative polymerase chain reaction (qPCR) based on the amplification of a 1.5 kb fragment covering the 16S rDNA of the Mollicute class by real-time PCR using universal U1 and U8 degenerate primers. The method includes the addition of a DNA loading probe to each sample to monitor DNA extraction and the absence of PCR inhibitors in the extracted DNA, a positive mycoplasma 16S rDNA traceable reference sample to exclude any accidental contamination of an unknown sample with this reference DNA, an analysis procedure based on the examination of the melting curve and the size of the PCR amplicon, followed by quantification of the number of 16S rDNA copies (with a lower limit of 19 copies) when relevant, and, if useful, the identification of the contaminating prokaryote by sequencing. The method was validated on a collection of mycoplasma strains and by testing over 100 samples of unknown contamination status, including stocks of viruses requiring biosafety level 2, 3 or 4 containments. When compared to four established methods, the m16S_qPCR technique exhibits the highest sensitivity in detecting mycoplasma contamination. PMID:28225826

  19. Network Sampling with Memory: A proposal for more efficient sampling from social networks.

    PubMed

    Mouw, Ted; Verdery, Ashton M

    2012-08-01

    Techniques for sampling from networks have grown into an important area of research across several fields. For sociologists, the possibility of sampling from a network is appealing for two reasons: (1) a network sample can yield substantively interesting data about network structures and social interactions, and (2) it is useful in situations where study populations are difficult or impossible to survey with traditional sampling approaches because of the lack of a sampling frame. Despite its appeal, methodological concerns about the precision and accuracy of network-based sampling methods remain. In particular, recent research has shown that sampling from a network using a random-walk-based approach such as Respondent Driven Sampling (RDS) can result in high design effects (DE), the ratio of the sampling variance to the sampling variance of simple random sampling (SRS). A high design effect means that more cases must be collected to achieve the same level of precision as SRS. In this paper we propose an alternative strategy, Network Sampling with Memory (NSM), which collects network data from respondents in order to reduce design effects and, correspondingly, the number of interviews needed to achieve a given level of statistical power. NSM combines a "List" mode, in which all individuals on the revealed network list are sampled with the same cumulative probability, with a "Search" mode, which gives priority to bridge nodes connecting the current sample to unexplored parts of the network. We test the relative efficiency of NSM compared with RDS and SRS on 162 school and university networks from Add Health and Facebook that range in size from 110 to 16,278 nodes. The results show that the average design effect for NSM on these 162 networks is 1.16, which is very close to the efficiency of a simple random sample (DE = 1) and 98.5% lower than the average DE we observed for RDS.
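    The design-effect arithmetic in the abstract can be made concrete with a short sketch. The definition (DE = sampling variance of the method over the sampling variance of SRS) and the NSM figure of DE = 1.16 come from the abstract; the interview count of 500 below is a hypothetical illustration, not a number from the paper.

    ```python
    def design_effect(var_method, var_srs):
        """Design effect DE: the sampling variance of an estimator under a
        given sampling method divided by its variance under simple random
        sampling (SRS) of the same size."""
        return var_method / var_srs

    def effective_sample_size(n, de):
        """A sample of n cases collected with design effect de carries
        roughly the same information as n / de cases drawn by SRS."""
        return n / de

    # Hypothetical illustration: with the reported NSM design effect of 1.16,
    # 500 interviews are worth roughly 500 / 1.16 ~ 431 SRS interviews; a
    # method with a larger DE would need proportionally more interviews.
    n_eff = effective_sample_size(500, 1.16)
    ```

    This is why a DE close to 1 matters in practice: the required number of interviews to reach a target power scales linearly with the design effect.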

  20. Network Sampling with Memory: A proposal for more efficient sampling from social networks

    PubMed Central

    Mouw, Ted; Verdery, Ashton M.

    2013-01-01

    Techniques for sampling from networks have grown into an important area of research across several fields. For sociologists, the possibility of sampling from a network is appealing for two reasons: (1) a network sample can yield substantively interesting data about network structures and social interactions, and (2) it is useful in situations where study populations are difficult or impossible to survey with traditional sampling approaches because of the lack of a sampling frame. Despite its appeal, methodological concerns about the precision and accuracy of network-based sampling methods remain. In particular, recent research has shown that sampling from a network using a random-walk-based approach such as Respondent Driven Sampling (RDS) can result in high design effects (DE), the ratio of the sampling variance to the sampling variance of simple random sampling (SRS). A high design effect means that more cases must be collected to achieve the same level of precision as SRS. In this paper we propose an alternative strategy, Network Sampling with Memory (NSM), which collects network data from respondents in order to reduce design effects and, correspondingly, the number of interviews needed to achieve a given level of statistical power. NSM combines a "List" mode, in which all individuals on the revealed network list are sampled with the same cumulative probability, with a "Search" mode, which gives priority to bridge nodes connecting the current sample to unexplored parts of the network. We test the relative efficiency of NSM compared with RDS and SRS on 162 school and university networks from Add Health and Facebook that range in size from 110 to 16,278 nodes. The results show that the average design effect for NSM on these 162 networks is 1.16, which is very close to the efficiency of a simple random sample (DE = 1) and 98.5% lower than the average DE we observed for RDS. PMID:24159246

Top