Sample records for risk-based sample size

  1. Sample Size for Tablet Compression and Capsule Filling Events During Process Validation.

    PubMed

    Charoo, Naseem Ahmad; Durivage, Mark; Rahman, Ziyaur; Ayad, Mohamad Haitham

    2017-12-01

    During solid dosage form manufacturing, the uniformity of dosage units (UDU) is ensured by testing samples at 2 stages, that is, the blend stage and the tablet compression or capsule/powder filling stage. The aim of this work is to propose a sample size selection approach based on quality risk management principles for the process performance qualification (PPQ) and continued process verification (CPV) stages by linking UDU to potential formulation and process risk factors. The Bayes success run theorem appeared to be the most appropriate approach among the various methods considered in this work for computing sample size for PPQ. The sample sizes for high-risk (reliability level of 99%), medium-risk (reliability level of 95%), and low-risk factors (reliability level of 90%) were estimated to be 299, 59, and 29, respectively. Risk-based assignment of reliability levels was supported by the fact that at a low defect rate, the confidence to detect out-of-specification units decreases, which must be compensated for by an increase in sample size to enhance the confidence in estimation. Based on the level of knowledge acquired during PPQ and the level of knowledge further required to understand the process, the sample size for CPV was calculated using Bayesian statistics to achieve a reduced sampling design. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
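
The sample sizes quoted in this record follow from the success run theorem. As an illustration (assuming the conventional 95% confidence level, which reproduces the quoted figures of 299, 59, and 29), the rule can be sketched as:

```python
import math

def success_run_sample_size(confidence: float, reliability: float) -> int:
    """Minimum number of consecutive successes (zero failures) needed to
    demonstrate `reliability` at the given `confidence` level:
    n = ln(1 - C) / ln(R), rounded up."""
    return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

# Risk-based reliability levels from the abstract, at 95% confidence
for risk, reliability in [("high", 0.99), ("medium", 0.95), ("low", 0.90)]:
    print(risk, success_run_sample_size(0.95, reliability))
# high 299, medium 59, low 29
```

Note how quickly the required sample size grows as the demanded reliability approaches 1, which is why only the high-risk factors carry the 299-unit burden.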

  2. Sample size allocation for food item radiation monitoring and safety inspection.

    PubMed

    Seto, Mayumi; Uriu, Koichiro

    2015-03-01

    The objective of this study is to identify a procedure for determining sample size allocation for food radiation inspections of more than one food item to minimize the potential risk to consumers of internal radiation exposure. We consider a simplified case of food radiation monitoring and safety inspection in which a risk manager is required to monitor two food items, milk and spinach, in a contaminated area. Three protocols for food radiation monitoring with different sample size allocations were assessed by simulating random sampling and inspections of milk and spinach in a conceptual monitoring site. Distributions of (131)I and radiocesium concentrations were determined in reference to (131)I and radiocesium concentrations detected in Fukushima prefecture, Japan, for March and April 2011. The results of the simulations suggested that a protocol that allocates sample size to milk and spinach based on the estimation of (131)I and radiocesium concentrations using the apparent decay rate constants sequentially calculated from past monitoring data can most effectively minimize the potential risks of internal radiation exposure. © 2014 Society for Risk Analysis.
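
The allocation idea described here — estimating near-future concentrations from apparent decay rate constants fitted to past monitoring data, then weighting sample sizes accordingly — can be sketched as follows. This is an illustrative simplification, not the paper's exact protocol; the function names and the proportional-allocation rule are assumptions:

```python
import math

def apparent_decay_rate(c_prev: float, c_curr: float, dt_days: float) -> float:
    """Apparent decay rate constant from two successive measurements,
    assuming exponential decline: c_curr = c_prev * exp(-lam * dt)."""
    return math.log(c_prev / c_curr) / dt_days

def predicted_concentration(c_curr: float, lam: float, dt_days: float) -> float:
    return c_curr * math.exp(-lam * dt_days)

def allocate_samples(total_n: int, predictions: dict) -> dict:
    """Split a fixed inspection budget across food items in proportion to
    their predicted concentrations (a simple proxy for exposure risk)."""
    total = sum(predictions.values())
    return {item: round(total_n * c / total) for item, c in predictions.items()}

# Example: measurements one day apart declining at roughly I-131's
# physical decay rate (half-life ~8.02 d, lam ~0.0864/d)
lam = apparent_decay_rate(100.0, 100.0 * math.exp(-0.0864), 1.0)
preds = {"milk": predicted_concentration(30.0, lam, 1.0),
         "spinach": predicted_concentration(10.0, lam, 1.0)}
alloc = allocate_samples(60, preds)  # ~3:1 split between milk and spinach
```

In practice the apparent rate constants would be re-estimated sequentially as new monitoring data arrive, as the abstract describes.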

  3. Potential for adult-based epidemiological studies to characterize overall cancer risks associated with a lifetime of CT scans.

    PubMed

    Shuryak, Igor; Lubin, Jay H; Brenner, David J

    2014-06-01

    Recent epidemiological studies have suggested that radiation exposure from pediatric CT scanning is associated with small excess cancer risks. However, the majority of CT scans are performed on adults, and most radiation-induced cancers appear during middle or old age, in the same age range as background cancers. Consequently, a logical next step is to investigate the effects of CT scanning in adulthood on lifetime cancer risks by conducting adult-based, appropriately designed epidemiological studies. Here we estimate the sample size required for such studies to detect CT-associated risks. This was achieved by incorporating different age-, sex-, time- and cancer type-dependent models of radiation carcinogenesis into an in silico simulation of a population-based cohort study. This approach simulated individual histories of chest and abdominal CT exposures, deaths and cancer diagnoses. The resultant sample sizes suggest that epidemiological studies of realistically sized cohorts can detect excess lifetime cancer risks from adult CT exposures. For example, retrospective analysis of CT exposure and cancer incidence data from a population-based cohort of 0.4 to 1.3 million (depending on the carcinogenic model) CT-exposed UK adults, aged 25-65 in 1980 and followed until 2015, provides 80% power for detecting cancer risks from chest and abdominal CT scans.

  4. HYPERSAMP - HYPERGEOMETRIC ATTRIBUTE SAMPLING SYSTEM BASED ON RISK AND FRACTION DEFECTIVE

    NASA Technical Reports Server (NTRS)

    DeSalvo, L. J.

    1994-01-01

    HYPERSAMP is a demonstration of an attribute sampling system developed to determine the minimum sample size required for any preselected values of consumer's risk and fraction nonconforming. This statistical method can be used in place of MIL-STD-105E sampling plans when a minimum sample size is desirable, such as when tests are destructive or expensive. HYPERSAMP utilizes the Hypergeometric Distribution and can be used for any fraction nonconforming. The program employs an iterative technique that circumvents the obstacle presented by the factorial of a non-whole number. HYPERSAMP provides the required Hypergeometric sample size for any equivalent real number of nonconformances in the lot or batch under evaluation. Many currently used sampling systems, such as MIL-STD-105E, utilize the Binomial or the Poisson equations as an estimate of the Hypergeometric when performing inspection by attributes, primarily because of the difficulty in calculating the factorials required by the Hypergeometric. Sampling plans based on the Binomial or Poisson equations result in sample sizes at least as large as those required by the Hypergeometric, and the difference can be significant. For example, a lot size of 400 devices with an error rate of 1.0% and a confidence of 99% would require a sample size of 400 (all units would need to be inspected) under a Binomial sampling plan but only 273 under a Hypergeometric sampling plan. The Hypergeometric results in a savings of 127 units, a significant reduction in the required sample size. HYPERSAMP is a demonstration program and is limited to sampling plans with zero defectives in the sample (acceptance number of zero). Since it is only a demonstration program, the sample size determination is limited to sample sizes of 1500 or less. 
The Hypergeometric Attribute Sampling System demonstration code is a spreadsheet program written for IBM PC compatible computers running DOS and Lotus 1-2-3 or Quattro Pro. This program is distributed on a 5.25 inch 360K MS-DOS format diskette, and the program price includes documentation. This statistical method was developed in 1992.
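
The worked example in this record (lot of 400, 1% nonconforming, 99% confidence, acceptance number zero) can be checked with a short script. This is a sketch of the underlying calculation, not the original Lotus/Quattro code:

```python
import math

def hypergeometric_zero_accept_n(lot_size: int, defectives: int,
                                 confidence: float) -> int:
    """Smallest sample size n such that, sampling without replacement,
    the chance of seeing zero nonconforming units is at most 1 - confidence."""
    for n in range(1, lot_size + 1):
        # P(zero defectives) = C(N-D, n) / C(N, n), computed as a product
        p_zero = 1.0
        for i in range(defectives):
            p_zero *= (lot_size - n - i) / (lot_size - i)
        if p_zero <= 1.0 - confidence:
            return n
    return lot_size

def binomial_zero_accept_n(fraction: float, confidence: float,
                           lot_size: int) -> int:
    """Binomial (with-replacement) counterpart; capped at 100% inspection."""
    n = math.ceil(math.log(1.0 - confidence) / math.log(1.0 - fraction))
    return min(n, lot_size)

# Lot of 400 with 1% nonconforming (4 units) at 99% confidence
print(hypergeometric_zero_accept_n(400, 4, 0.99))  # 273
print(binomial_zero_accept_n(0.01, 0.99, 400))     # 400 (all units)
```

The binomial plan demands 459 samples, more than the lot itself, so it degenerates to 100% inspection, exactly the comparison the abstract makes.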

  5. Differential risk of injury in child occupants by passenger car classification.

    PubMed

    Kallan, Michael J; Durbin, Dennis R; Elliott, Michael R; Menon, Rajiv A; Winston, Flaura K

    2003-01-01

    In the United States, passenger cars are the most common passenger vehicle, yet they vary widely in size and crashworthiness. Using data collected from a population-based sample of crashes in State Farm-insured vehicles, we quantified the risk of injury to child occupants by passenger car size and classification. Injury risk is predicted by vehicle weight; however, there is an increased risk in both Large vs. Luxury and Sports vs. Small cars, despite similar average vehicle weights in both comparisons. Parents who are purchasing passenger cars should strongly consider the size of the vehicle and its crashworthiness.

  6. Differential Risk of Injury in Child Occupants by Passenger Car Classification

    PubMed Central

    Kallan, Michael J.; Durbin, Dennis R.; Elliott, Michael R.; Menon, Rajiv A.; Winston, Flaura K.

    2003-01-01

    In the United States, passenger cars are the most common passenger vehicle, yet they vary widely in size and crashworthiness. Using data collected from a population-based sample of crashes in State Farm-insured vehicles, we quantified the risk of injury to child occupants by passenger car size and classification. Injury risk is predicted by vehicle weight; however, there is an increased risk in both Large vs. Luxury and Sports vs. Small cars, despite similar average vehicle weights in both comparisons. Parents who are purchasing passenger cars should strongly consider the size of the vehicle and its crashworthiness. PMID:12941234

  7. Differential Risk of Injury to Child Occupants by SUV Size

    PubMed Central

    Kallan, Michael J.; Durbin, Dennis R.; Elliott, Michael R.; Arbogast, Kristy B.; Winston, Flaura K.

    2004-01-01

    In the United States, the sport utility vehicle (SUV) is the fastest growing segment of the passenger vehicle fleet, yet SUVs vary widely in size and crashworthiness. Using data collected from a population-based sample of crashes in insured vehicles, we quantified the risk of injury to child occupants in SUVs by vehicle weight. There is an increased risk in both Small and Midsize SUVs when compared to Large SUVs. Parents who are purchasing a SUV should strongly consider the size of the vehicle and its crashworthiness. PMID:15319119

  8. Designing a multiple dependent state sampling plan based on the coefficient of variation.

    PubMed

    Yan, Aijun; Liu, Sanyang; Dong, Xiaojuan

    2016-01-01

    A multiple dependent state (MDS) sampling plan is developed based on the coefficient of variation of the quality characteristic, which follows a normal distribution with unknown mean and variance. The optimal parameters of the proposed plan are found by solving a nonlinear optimization model that satisfies the given producer's risk and consumer's risk simultaneously while minimizing the sample size required for inspection. The advantages of the proposed MDS sampling plan over the existing single sampling plan are discussed. Finally, an example is given to illustrate the proposed plan.
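
For context, the classical single sampling plan that the authors benchmark against is found by searching for the smallest sample size n and acceptance number c that simultaneously satisfy the producer's and consumer's risk points. A minimal sketch (the quality levels and risk values below are illustrative assumptions, not the paper's):

```python
from math import comb

def binom_cdf(c: int, n: int, p: float) -> float:
    """P(X <= c) for X ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1.0 - p)**(n - k) for k in range(c + 1))

def single_sampling_plan(p1, p2, alpha, beta, n_max=1000):
    """Smallest-n plan (n, c): accept the lot if the sample has <= c defects.
    Requires acceptance probability >= 1 - alpha at quality p1 (producer's
    risk) and <= beta at quality p2 (consumer's risk)."""
    for n in range(1, n_max + 1):
        for c in range(n + 1):
            if binom_cdf(c, n, p2) > beta:
                break  # a larger c only raises the acceptance prob at p2
            if binom_cdf(c, n, p1) >= 1.0 - alpha:
                return n, c
    return None

# Illustrative two-point condition: AQL 1%, LQL 5%, alpha 5%, beta 10%
n, c = single_sampling_plan(p1=0.01, p2=0.05, alpha=0.05, beta=0.10)
```

MDS plans reduce this n further by letting the accept/reject decision also depend on the results of preceding lots.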

  9. The Risk of Adverse Impact in Selections Based on a Test with Known Effect Size

    ERIC Educational Resources Information Center

    De Corte, Wilfried; Lievens, Filip

    2005-01-01

    The authors derive the exact sampling distribution function of the adverse impact (AI) ratio for single-stage, top-down selections using tests with known effect sizes. Subsequently, it is shown how this distribution function can be used to determine the risk that a future selection decision on the basis of such tests will result in an outcome that…

  10. Can mindfulness-based interventions influence cognitive functioning in older adults? A review and considerations for future research.

    PubMed

    Berk, Lotte; van Boxtel, Martin; van Os, Jim

    2017-11-01

    An increased need exists to examine factors that protect against age-related cognitive decline. There is preliminary evidence that meditation can improve cognitive function. However, most studies are cross-sectional and examine a wide variety of meditation techniques. This review focuses on the standard eight-week mindfulness-based interventions (MBIs) such as mindfulness-based stress reduction (MBSR) and mindfulness-based cognitive therapy (MBCT). We searched the PsycINFO, CINAHL, Web of Science, COCHRANE, and PubMed databases to identify original studies investigating the effects of MBIs on cognition in older adults. Six reports were included in the review, of which three were randomized controlled trials. Studies reported preliminary positive effects on memory, executive function, and processing speed. However, most reports had a high risk of bias and small sample sizes. The only study with a low risk of bias, a large sample size, and an active control group reported no significant findings. We conclude that eight-week MBIs for older adults are feasible, but results on cognitive improvement are inconclusive due to the limited number of studies, small sample sizes, and high risk of bias. Rather than a narrow focus on cognitive training per se, future research may productively shift toward investigating MBI as a tool to alleviate suffering in older adults, and toward preventing cognitive problems in later life by intervening already in younger populations.

  11. Policy-driven development of cost-effective, risk-based surveillance strategies.

    PubMed

    Reist, M; Jemmi, T; Stärk, K D C

    2012-07-01

    Animal health and residue surveillance verifies the good health status of the animal population, thereby supporting international free trade of animals and animal products. However, active surveillance is costly and time-consuming. The development of cost-effective tools for animal health and food hazard surveillance is therefore a priority for decision-makers in the field of veterinary public health. The assumption of this paper is that outcome-based formulation of standards, legislation leaving room for risk-based approaches and close collaboration and a mutual understanding and exchange between scientists and policy makers are essential for cost-effective surveillance. We illustrate this using the following examples: (i) a risk-based sample size calculation for surveys to substantiate freedom from diseases/infection, (ii) a cost-effective national surveillance system for Bluetongue using scenario tree modelling and (iii) a framework for risk-based residue monitoring. Surveys to substantiate freedom from infectious bovine rhinotracheitis and enzootic bovine leucosis between 2002 and 2009 saved over 6 million € by applying a risk-based sample size calculation approach, and by taking into account prior information from repeated surveys. An open, progressive policy making process stimulates research and science to develop risk-based and cost-efficient survey methodologies. Early involvement of policy makers in scientific developments facilitates implementation of new findings and full exploitation of benefits for producers and consumers. Copyright © 2012 Elsevier B.V. All rights reserved.
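
The risk-based survey calculation mentioned in example (i) typically builds on the standard sample size for substantiating freedom from disease. A sketch under common assumptions (large population, binomial approximation; the test-sensitivity adjustment shown is a standard extension, not necessarily the exact Swiss method):

```python
import math

def freedom_from_disease_n(confidence: float, design_prevalence: float,
                           sensitivity: float = 1.0) -> int:
    """Approximate sample size needed to detect at least one infected animal
    with the given confidence if disease is present at the design prevalence,
    using a test with the given sensitivity."""
    p_detect_one = sensitivity * design_prevalence
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p_detect_one))

# 95% confidence of detecting a 1% design prevalence with a perfect test
n = freedom_from_disease_n(0.95, 0.01)        # 299
# An imperfect test (Se = 0.9) raises the required sample size
n_se = freedom_from_disease_n(0.95, 0.01, 0.9)
```

Risk-based designs then shrink the effective sample size by targeting high-risk strata and by carrying forward evidence from repeated surveys, which is where the savings cited in the abstract come from.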

  12. Risk Factors for Addiction and Their Association with Model-Based Behavioral Control.

    PubMed

    Reiter, Andrea M F; Deserno, Lorenz; Wilbertz, Tilmann; Heinze, Hans-Jochen; Schlagenhauf, Florian

    2016-01-01

    Addiction shows familial aggregation and previous endophenotype research suggests that healthy relatives of addicted individuals share altered behavioral and cognitive characteristics with individuals suffering from addiction. In this study we asked whether impairments in behavioral control proposed for addiction, namely a shift from goal-directed, model-based toward habitual, model-free control, extends toward an unaffected sample (n = 20) of adult children of alcohol-dependent fathers as compared to a sample without any personal or family history of alcohol addiction (n = 17). Using a sequential decision-making task designed to investigate model-free and model-based control combined with a computational modeling analysis, we did not find any evidence for altered behavioral control in individuals with a positive family history of alcohol addiction. Independent of family history of alcohol dependence, we however observed that the interaction of two different risk factors of addiction, namely impulsivity and cognitive capacities, predicts the balance of model-free and model-based behavioral control. Post-hoc tests showed a positive association of model-based behavior with cognitive capacity in the lower, but not in the higher impulsive group of the original sample. In an independent sample of particularly high- vs. low-impulsive individuals, we confirmed the interaction effect of cognitive capacities and high vs. low impulsivity on model-based control. In the confirmation sample, a positive association of omega with cognitive capacity was observed in highly impulsive individuals, but not in low impulsive individuals. Due to the moderate sample size of the study, further investigation of the association of risk factors for addiction with model-based behavior in larger sample sizes is warranted.

  13. Support vector regression to predict porosity and permeability: Effect of sample size

    NASA Astrophysics Data System (ADS)

    Al-Anazi, A. F.; Gates, I. D.

    2012-02-01

    Porosity and permeability are key petrophysical parameters obtained from laboratory core analysis. Cores, obtained from drilled wells, are often few in number for most oil and gas fields. Porosity and permeability correlations based on conventional techniques such as linear regression or neural networks trained with core and geophysical logs suffer poor generalization to wells with only geophysical logs. The generalization problem of correlation models often becomes pronounced when the training sample size is small. This is attributed to the underlying assumption that conventional techniques employing the empirical risk minimization (ERM) inductive principle converge asymptotically to the true risk values as the number of samples increases. In small sample size estimation problems, the available training samples must span the complexity of the parameter space so that the model is able both to match the available training samples reasonably well and to generalize to new data. This is achieved using the structural risk minimization (SRM) inductive principle by matching the capability of the model to the available training data. One method that uses SRM is the support vector regression (SVR) network. In this research, the capability of SVR to predict porosity and permeability in a heterogeneous sandstone reservoir under the effect of small sample size is evaluated. Particularly, the impact of Vapnik's ɛ-insensitivity loss function and least-modulus loss function on generalization performance was empirically investigated. The results are compared to the multilayer perceptron (MLP) neural network, a widely used regression method, which operates under the ERM principle. The mean square error and correlation coefficients were used to measure the quality of predictions. The results demonstrate that SVR yields consistently better predictions of the porosity and permeability with small sample size than the MLP method. 
Also, the performance of SVR depends on both kernel function type and loss functions used.
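
The two SVR loss functions compared in this study can be written compactly. A minimal sketch (the function names are ours):

```python
def eps_insensitive_loss(y: float, f: float, eps: float) -> float:
    """Vapnik's epsilon-insensitive loss: residuals inside the eps tube
    cost nothing; beyond it, the cost grows linearly."""
    return max(0.0, abs(y - f) - eps)

def least_modulus_loss(y: float, f: float) -> float:
    """Least-modulus (L1) loss: the eps = 0 special case."""
    return abs(y - f)

# A residual of 0.05 inside a 0.1 tube incurs no penalty
assert eps_insensitive_loss(1.0, 1.05, 0.1) == 0.0
```

The eps tube is what gives SVR its sparse set of support vectors; with eps = 0 every training point contributes to the fit.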

  14. Predictive accuracy of combined genetic and environmental risk scores.

    PubMed

    Dudbridge, Frank; Pashayan, Nora; Yang, Jian

    2018-02-01

    The substantial heritability of most complex diseases suggests that genetic data could provide useful risk prediction. To date the performance of genetic risk scores has fallen short of the potential implied by heritability, but this can be explained by insufficient sample sizes for estimating highly polygenic models. When risk predictors already exist based on environment or lifestyle, two key questions are to what extent can they be improved by adding genetic information, and what is the ultimate potential of combined genetic and environmental risk scores? Here, we extend previous work on the predictive accuracy of polygenic scores to allow for an environmental score that may be correlated with the polygenic score, for example when the environmental factors mediate the genetic risk. We derive common measures of predictive accuracy and improvement as functions of the training sample size, chip heritabilities of disease and environmental score, and genetic correlation between disease and environmental risk factors. We consider simple addition of the two scores and a weighted sum that accounts for their correlation. Using examples from studies of cardiovascular disease and breast cancer, we show that improvements in discrimination are generally small but reasonable degrees of reclassification could be obtained with current sample sizes. Correlation between genetic and environmental scores has only minor effects on numerical results in realistic scenarios. In the longer term, as the accuracy of polygenic scores improves they will come to dominate the predictive accuracy compared to environmental scores. © 2017 WILEY PERIODICALS, INC.

  15. Predictive accuracy of combined genetic and environmental risk scores

    PubMed Central

    Dudbridge, Frank; Pashayan, Nora; Yang, Jian

    2017-01-01

    The substantial heritability of most complex diseases suggests that genetic data could provide useful risk prediction. To date the performance of genetic risk scores has fallen short of the potential implied by heritability, but this can be explained by insufficient sample sizes for estimating highly polygenic models. When risk predictors already exist based on environment or lifestyle, two key questions are to what extent can they be improved by adding genetic information, and what is the ultimate potential of combined genetic and environmental risk scores? Here, we extend previous work on the predictive accuracy of polygenic scores to allow for an environmental score that may be correlated with the polygenic score, for example when the environmental factors mediate the genetic risk. We derive common measures of predictive accuracy and improvement as functions of the training sample size, chip heritabilities of disease and environmental score, and genetic correlation between disease and environmental risk factors. We consider simple addition of the two scores and a weighted sum that accounts for their correlation. Using examples from studies of cardiovascular disease and breast cancer, we show that improvements in discrimination are generally small but reasonable degrees of reclassification could be obtained with current sample sizes. Correlation between genetic and environmental scores has only minor effects on numerical results in realistic scenarios. In the longer term, as the accuracy of polygenic scores improves they will come to dominate the predictive accuracy compared to environmental scores. PMID:29178508

  16. Concepts for risk-based surveillance in the field of veterinary medicine and veterinary public health: Review of current approaches

    PubMed Central

    Stärk, Katharina DC; Regula, Gertraud; Hernandez, Jorge; Knopf, Lea; Fuchs, Klemens; Morris, Roger S; Davies, Peter

    2006-01-01

    Background Emerging animal and zoonotic diseases and increasing international trade have resulted in an increased demand for veterinary surveillance systems. However, human and financial resources available to support government veterinary services are becoming more and more limited in many countries world-wide. Intuitively, issues that present higher risks merit higher priority for surveillance resources, as investments will yield higher benefit-cost ratios. The rapid rate of acceptance of this core concept of risk-based surveillance has outpaced the development of its theoretical and practical bases. Discussion The principal objectives of risk-based veterinary surveillance are to identify surveillance needs to protect the health of livestock and consumers, to set priorities, and to allocate resources effectively and efficiently. An important goal is to achieve a higher benefit-cost ratio with existing or reduced resources. We propose to define risk-based surveillance systems as those that apply risk assessment methods in different steps of traditional surveillance design for early detection and management of diseases or hazards. In risk-based designs, the public health, economic and trade consequences of diseases play an important role in the selection of diseases or hazards. Furthermore, certain strata of the population of interest have a higher probability of being sampled for detection of diseases or hazards. Evaluation of risk-based surveillance systems should demonstrate that their efficacy is equal to or higher than that of traditional systems, while their efficiency (benefit-cost ratio) should be higher. Summary Risk-based surveillance considerations are useful to support both strategic and operational decision making. This article highlights applications of risk-based surveillance systems in the veterinary field including food safety. 
Examples are provided for risk-based hazard selection, risk-based selection of sampling strata as well as sample size calculation based on risk considerations. PMID:16507106

  17. Evaluation of type 2 diabetes genetic risk variants in Chinese adults: findings from 93,000 individuals from the China Kadoorie Biobank.

    PubMed

    Gan, Wei; Walters, Robin G; Holmes, Michael V; Bragg, Fiona; Millwood, Iona Y; Banasik, Karina; Chen, Yiping; Du, Huaidong; Iona, Andri; Mahajan, Anubha; Yang, Ling; Bian, Zheng; Guo, Yu; Clarke, Robert J; Li, Liming; McCarthy, Mark I; Chen, Zhengming

    2016-07-01

    Genome-wide association studies (GWAS) have discovered many risk variants for type 2 diabetes. However, estimates of the contributions of risk variants to type 2 diabetes predisposition are often based on highly selected case-control samples, and reliable estimates of population-level effect sizes are missing, especially in non-European populations. The individual and cumulative effects of 59 established type 2 diabetes risk loci were measured in a population-based China Kadoorie Biobank (CKB) study of 93,000 Chinese adults, including >7,100 diabetes cases. Association signals were directionally consistent between CKB and the original discovery GWAS: of 56 variants passing quality control, 48 showed the same direction of effect (binomial test, p = 2.3 × 10(-8)). We observed a consistent overall trend towards lower risk variant effect sizes in CKB than in case-control samples of GWAS meta-analyses (mean 19-22% decrease in log odds, p ≤ 0.0048), likely to reflect correction of both 'winner's curse' and spectrum bias effects. The association with risk of diabetes of a genetic risk score, based on lead variants at 25 loci considered to act through beta cell function, demonstrated significant interactions with several measures of adiposity (BMI, waist circumference [WC], WHR and percentage body fat [PBF]; all p interaction < 1 × 10(-4)), with a greater effect being observed in leaner adults. Our study provides further evidence of shared genetic architecture for type 2 diabetes between Europeans and East Asians. It also indicates that even very large GWAS meta-analyses may be vulnerable to substantial inflation of effect size estimates, compared with those observed in large-scale population-based cohort studies. Details of how to access China Kadoorie Biobank data and details of the data release schedule are available from www.ckbiobank.org/site/Data+Access .

  18. Improving the accuracy of livestock distribution estimates through spatial interpolation.

    PubMed

    Bryssinckx, Ward; Ducheyne, Els; Muhwezi, Bernard; Godfrey, Sunday; Mintiens, Koen; Leirs, Herwig; Hendrickx, Guy

    2012-11-01

    Animal distribution maps serve many purposes such as estimating transmission risk of zoonotic pathogens to both animals and humans. The reliability and usability of such maps is highly dependent on the quality of the input data. However, decisions on how to perform livestock surveys are often based on previous work without considering possible consequences. A better understanding of the impact of using different sample designs and processing steps on the accuracy of livestock distribution estimates was acquired through iterative experiments using detailed survey data. The importance of sample size, sample design and aggregation is demonstrated, and spatial interpolation is presented as a potential way to improve cattle number estimates. As expected, results show that an increasing sample size increased the precision of cattle number estimates, but these improvements were mainly seen when the initial sample size was relatively low (e.g. a median relative error decrease of 0.04% per sampled parish for sample sizes below 500 parishes). For higher sample sizes, the added value of further increasing the number of samples declined rapidly (e.g. a median relative error decrease of 0.01% per sampled parish for sample sizes above 500 parishes). When a two-stage stratified sample design was applied to yield more evenly distributed samples, accuracy levels were higher for low sample densities and stabilised at lower sample sizes compared to one-stage stratified sampling. Aggregating the resulting cattle number estimates yielded significantly more accurate results because of averaging under- and over-estimates (e.g. when aggregating cattle number estimates from subcounty to district level, P <0.009 based on a sample of 2,077 parishes using one-stage stratified samples). During aggregation, area-weighted mean values were assigned to higher administrative unit levels. However, when this step is preceded by a spatial interpolation to fill in missing values in non-sampled areas, accuracy improves markedly. This holds especially for low sample sizes and spatially evenly distributed samples (e.g. P <0.001 for a sample of 170 parishes using one-stage stratified sampling and aggregation on district level). Whether the same observations apply on a lower spatial scale should be further investigated.

  19. Benefits of a hospital-based peer intervention program for violently injured youth.

    PubMed

    Shibru, Daniel; Zahnd, Elaine; Becker, Marla; Bekaert, Nic; Calhoun, Deane; Victorino, Gregory P

    2007-11-01

    Exposure to violence predisposes youths to future violent behavior. Breaking the cycle of violence in inner cities is the primary objective of hospital-based violence intervention and prevention programs. An evaluation was undertaken to determine if a hospital-based, peer intervention program, "Caught in the Crossfire," reduces the risk of criminal justice involvement, decreases hospitalizations from traumatic reinjury, diminishes death from intentional violent trauma, and is cost effective. We designed a retrospective cohort study conducted between January 1998 and June 2003 at a university-based urban trauma center. The duration of followup was 18 months. Patients were 12 to 20 years of age and were hospitalized for intentional violent trauma. The "enrolled" group had a minimum of five interactions with an intervention specialist. The control group was selected from the hospital database by matching age, gender, race or ethnicity, type of injury, and year of admission. All patients came from socioeconomically disadvantaged areas. The total sample size was 154 patients. Participation in the hospital-based peer intervention program lowered the risk of criminal justice involvement (relative risk=0.67; 95% CI, 0.45, 0.99; p=0.04). There was no effect on risks of reinjury and death. Subsequent violent criminal behavior was reduced by 7% (p=0.15). Logistic regression analysis showed age had a confounding effect on the association between program participation and criminal justice involvement (relative risk=0.71; p=0.043). When compared with juvenile detention center costs, the total cost reduction derived from the intervention program annually was $750,000 to $1.5 million. This hospital-based peer intervention program reduces the risk of criminal justice system involvement, is more effective with younger patients, and is cost effective. Any effect on reinjury and death will require a larger sample size and longer followup.

  20. Modified Toxicity Probability Interval Design: A Safer and More Reliable Method Than the 3 + 3 Design for Practical Phase I Trials

    PubMed Central

    Ji, Yuan; Wang, Sue-Jane

    2013-01-01

    The 3 + 3 design is the most common choice among clinicians for phase I dose-escalation oncology trials. In recent reviews, more than 95% of phase I trials have been based on the 3 + 3 design. Given that it is intuitive and its implementation does not require a computer program, clinicians can conduct 3 + 3 dose escalations in practice with virtually no logistic cost, and trial protocols based on the 3 + 3 design pass institutional review board and biostatistics reviews quickly. However, the performance of the 3 + 3 design has rarely been compared with model-based designs in simulation studies with matched sample sizes. In the vast majority of statistical literature, the 3 + 3 design has been shown to be inferior in identifying true maximum-tolerated doses (MTDs), although the sample size required by the 3 + 3 design is often orders-of-magnitude smaller than model-based designs. In this article, through comparative simulation studies with matched sample sizes, we demonstrate that the 3 + 3 design has higher risks of exposing patients to toxic doses above the MTD than the modified toxicity probability interval (mTPI) design, a newly developed adaptive method. In addition, compared with the mTPI design, the 3 + 3 design does not yield higher probabilities in identifying the correct MTD, even when the sample size is matched. Given that the mTPI design is equally transparent, costless to implement with free software, and more flexible in practical situations, we highly encourage its adoption in early dose-escalation studies whenever the 3 + 3 design is also considered. We provide free software to allow direct comparisons of the 3 + 3 design with other model-based designs in simulation studies with matched sample sizes. PMID:23569307
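
The 3 + 3 escalation logic the abstract calls "intuitive" can indeed be written in a few lines. This encodes one common variant of the rules (variants exist, so treat the exact thresholds as assumptions):

```python
def three_plus_three_decision(n_treated: int, n_dlt: int) -> str:
    """Dose-escalation decision after observing n_dlt dose-limiting
    toxicities (DLTs) among n_treated patients at the current dose."""
    if n_treated == 3:
        if n_dlt == 0:
            return "escalate to next dose"
        if n_dlt == 1:
            return "treat 3 more at current dose"
        return "stop; MTD is the previous dose"
    if n_treated == 6:
        if n_dlt <= 1:
            return "escalate to next dose"
        return "stop; MTD is the previous dose"
    raise ValueError("3 + 3 cohorts are evaluated at 3 or 6 patients")
```

Its simplicity is the appeal; the article's point is that model-based designs such as mTPI make better use of the same number of patients.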

  1. Four hundred or more participants needed for stable contingency table estimates of clinical prediction rule performance.

    PubMed

    Kent, Peter; Boyle, Eleanor; Keating, Jennifer L; Albert, Hanne B; Hartvigsen, Jan

    2017-02-01

    To quantify variability in the results of statistical analyses based on contingency tables and discuss the implications for the choice of sample size for studies that derive clinical prediction rules. An analysis of three pre-existing sets of large cohort data (n = 4,062-8,674) was performed. In each data set, repeated random sampling of various sample sizes, from n = 100 up to n = 2,000, was performed 100 times at each sample size and the variability in estimates of sensitivity, specificity, positive and negative likelihood ratios, posttest probabilities, odds ratios, and risk/prevalence ratios for each sample size was calculated. There were very wide, and statistically significant, differences in estimates derived from contingency tables from the same data set when calculated in sample sizes below 400 people, and typically, this variability stabilized in samples of 400-600 people. Although estimates of prevalence also varied significantly in samples below 600 people, that relationship only explains a small component of the variability in these statistical parameters. To reduce sample-specific variability, contingency tables should consist of 400 participants or more when used to derive clinical prediction rules or test their performance. Copyright © 2016 Elsevier Inc. All rights reserved.
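The repeated-sampling procedure the authors describe is easy to reproduce in miniature. The sketch below uses arbitrary assumed values for sensitivity and prevalence (not the study's cohort data); it draws repeated samples of size n and reports the spread of the resulting sensitivity estimates:

```python
import random

def sensitivity_spread(n, true_sens=0.8, prevalence=0.3, reps=100, seed=0):
    """Range (max - min) of sensitivity estimates over repeated samples of size n.

    Illustrative only: subjects are simulated with an assumed prevalence of
    the target condition and an assumed true test sensitivity.
    """
    rng = random.Random(seed)
    estimates = []
    for _ in range(reps):
        tp = fn = 0
        for _ in range(n):
            if rng.random() < prevalence:      # a truly positive case
                if rng.random() < true_sens:
                    tp += 1                    # correctly detected
                else:
                    fn += 1                    # missed
        if tp + fn:
            estimates.append(tp / (tp + fn))
    return max(estimates) - min(estimates)
```

Running it with n = 100 versus n = 1000 shows the qualitative pattern the study reports: estimates from small contingency tables scatter widely, and the scatter shrinks as samples approach several hundred participants.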

  2. The effect of clustering on lot quality assurance sampling: a probabilistic model to calculate sample sizes for quality assessments

    PubMed Central

    2013-01-01

    Background Traditional Lot Quality Assurance Sampling (LQAS) designs assume observations are collected using simple random sampling. Alternatively, randomly sampling clusters of observations and then individuals within clusters reduces costs but decreases the precision of the classifications. In this paper, we develop a general framework for designing the cluster(C)-LQAS system and illustrate the method with the design of data quality assessments for the community health worker program in Rwanda. Results To determine sample size and decision rules for C-LQAS, we use the beta-binomial distribution to account for inflated risk of errors introduced by sampling clusters at the first stage. We present general theory and code for sample size calculations. The C-LQAS sample sizes provided in this paper constrain misclassification risks below user-specified limits. Multiple C-LQAS systems meet the specified risk requirements, but numerous considerations, including per-cluster versus per-individual sampling costs, help identify optimal systems for distinct applications. Conclusions We show the utility of C-LQAS for data quality assessments, but the method generalizes to numerous applications. This paper provides the necessary technical detail and supplemental code to support the design of C-LQAS for specific programs. PMID:24160725

  3. The effect of clustering on lot quality assurance sampling: a probabilistic model to calculate sample sizes for quality assessments.

    PubMed

    Hedt-Gauthier, Bethany L; Mitsunaga, Tisha; Hund, Lauren; Olives, Casey; Pagano, Marcello

    2013-10-26

    Traditional Lot Quality Assurance Sampling (LQAS) designs assume observations are collected using simple random sampling. Alternatively, randomly sampling clusters of observations and then individuals within clusters reduces costs but decreases the precision of the classifications. In this paper, we develop a general framework for designing the cluster(C)-LQAS system and illustrate the method with the design of data quality assessments for the community health worker program in Rwanda. To determine sample size and decision rules for C-LQAS, we use the beta-binomial distribution to account for inflated risk of errors introduced by sampling clusters at the first stage. We present general theory and code for sample size calculations. The C-LQAS sample sizes provided in this paper constrain misclassification risks below user-specified limits. Multiple C-LQAS systems meet the specified risk requirements, but numerous considerations, including per-cluster versus per-individual sampling costs, help identify optimal systems for distinct applications. We show the utility of C-LQAS for data quality assessments, but the method generalizes to numerous applications. This paper provides the necessary technical detail and supplemental code to support the design of C-LQAS for specific programs.
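The beta-binomial calculation at the heart of C-LQAS can be sketched directly. In the fragment below (a minimal illustration, not the authors' supplemental code), rho is the intra-cluster correlation that inflates the error risks relative to simple random sampling; as rho approaches zero, the plain binomial LQAS risk is recovered:

```python
import math

def beta_binom_pmf(k, n, a, b):
    """Beta-binomial pmf, computed via log-gamma for numerical stability."""
    def log_beta(x, y):
        return math.lgamma(x) + math.lgamma(y) - math.lgamma(x + y)
    return math.comb(n, k) * math.exp(log_beta(k + a, n - k + b) - log_beta(a, b))

def acceptance_prob(n, d, p, rho):
    """P(at most d 'defective' observations among n clustered observations)
    when the true defect proportion is p and rho is the intra-cluster
    correlation. This is the quantity bounded when choosing (n, d)."""
    a = p * (1 - rho) / rho
    b = (1 - p) * (1 - rho) / rho
    return sum(beta_binom_pmf(k, n, a, b) for k in range(d + 1))
```

For a fixed decision rule, increasing rho fattens the tails of the sampling distribution, which is why clustered designs need larger n to hold the same misclassification risks.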

  4. Monitoring of bioaerosol inhalation risks in different environments using a six-stage Andersen sampler and the PCR-DGGE method.

    PubMed

    Xu, Zhenqiang; Yao, Maosheng

    2013-05-01

    Increasing evidence shows that inhalation of indoor bioaerosols has caused numerous adverse health effects and diseases. However, the bioaerosol size distribution, composition, and concentration level, representing different inhalation risks, could vary with different living environments. The six-stage Andersen sampler is designed to simulate the sampling of different human lung regions. Here, the sampler was used to investigate bioaerosol exposure in six different environments (student dorm, hospital, laboratory, hotel room, dining hall, and outdoor environment) in Beijing. During the sampling, the Andersen sampler was operated for 30 min for each sample, and three independent experiments were performed for each of the environments. The air samples collected onto each of the six stages of the sampler were incubated on agar plates directly at 26 °C, and the colony forming units (CFU) were manually counted and statistically corrected. In addition, the developed CFUs were washed off the agar plates and subjected to polymerase chain reaction (PCR)-denaturing gradient gel electrophoresis (DGGE) for diversity analysis. Results revealed that for most environments investigated, the culturable bacterial aerosol concentrations were higher than those of culturable fungal aerosols. The culturable bacterial and fungal aerosol fractions, concentration, size distribution, and diversity were shown to vary significantly with the sampling environments. PCR-DGGE analysis indicated that different environments had different culturable bacterial aerosol compositions as revealed by distinct gel band patterns. For most environments tested, larger (>3 μm) culturable bacterial aerosols with a skewed size distribution were shown to prevail, accounting for more than 60%, while for culturable fungal aerosols with a normal size distribution, those 2.1-4.7 μm dominated, accounting for 20-40%. 
Alternaria, Cladosporium, Chaetomium, and Aspergillus were found to be abundant in most environments studied here. Viable microbial load per unit of particulate matter was also shown to vary significantly with the sampling environments. The results from this study suggested that different environments, even with similar levels of total culturable microbial aerosol concentrations, could present different inhalation risks due to differences in bioaerosol particle size distribution and composition. This work fills literature gaps regarding bioaerosol size and composition-based exposure risks in different human dwellings, in contrast to the vast body of work on total bioaerosol levels.

  5. The Effect of a School-Based Transitional Support Intervention Program on Alternative School Youth's Attitudes and Behaviors

    ERIC Educational Resources Information Center

    Kelchner, Viki P.; Evans, Kathy; Brendell, Kathrene; Allen, Danielle; Miller, Cassandre; Cooper-Haber, Karen

    2017-01-01

    This investigation examined the potential impact of a school-based youth intervention program on the attitudes and behavioral patterns of at-risk youth. The sample size used in this study was 52; 24 participants received the school-based intervention and 28 participants did not receive the intervention. A two-group pretest-posttest design approach…

  6. Pacific Educational Research Journal, 1998.

    ERIC Educational Resources Information Center

    Berg, Kathleen F.; Lai, Morris K.

    1998-01-01

    Articles in this issue vary widely in method, content, and sample size, but come together to produce a valuable collection of knowledge about education in the Pacific Basin, with emphasis on issues of under-representation. The articles are: (1) "Effects of a Culturally Competent School-Based Intervention for At-Risk Hawaiian Students"…

  7. Derivation of a Provisional, Age-dependent, AIS2+ Thoracic Risk Curve for the THOR50 Test Dummy via Integration of NASS Cases, PMHS Tests, and Simulation Data.

    PubMed

    Laituri, Tony R; Henry, Scott; El-Jawahri, Raed; Muralidharan, Nirmal; Li, Guosong; Nutt, Marvin

    2015-11-01

    A provisional, age-dependent thoracic risk equation (or, "risk curve") was derived to estimate moderate-to-fatal injury potential (AIS2+), pertaining to men with responses gaged by the advanced mid-sized male test dummy (THOR50). The derivation involved two distinct data sources: cases from real-world crashes (e.g., the National Automotive Sampling System, NASS) and cases involving post-mortem human subjects (PMHS). The derivation was therefore more comprehensive, as NASS datasets generally skew towards younger occupants, and PMHS datasets generally skew towards older occupants. However, known deficiencies had to be addressed (e.g., the NASS cases had unknown stimuli, and the PMHS tests required transformation of known stimuli into THOR50 stimuli). For the NASS portion of the analysis, chest-injury outcomes for adult male drivers about the size of the THOR50 were collected from real-world, 11-1 o'clock, full-engagement frontal crashes (NASS, 1995-2012 calendar years, 1985-2012 model-year light passenger vehicles). The screening for THOR50-sized men involved application of a set of newly-derived "correction" equations for self-reported height and weight data in NASS. Finally, THOR50 stimuli were estimated via field simulations involving attendant representative restraint systems, and those stimuli were then assigned to corresponding NASS cases (n=508). For the PMHS portion of the analysis, simulation-based closure equations were developed to convert PMHS stimuli into THOR50 stimuli. Specifically, closure equations were derived for the four measurement locations on the THOR50 chest by cross-correlating the results of matched-loading simulations between the test dummy and the age-dependent, Ford Human Body Model. The resulting closure equations demonstrated acceptable fidelity (n=75 matched simulations, R2≥0.99). These equations were applied to the THOR50-sized men in the PMHS dataset (n=20). 
The NASS and PMHS datasets were combined and subjected to survival analysis with event-frequency weighting and arbitrary censoring. The resulting risk curve, a function of peak THOR50 chest compression and age, demonstrated acceptable fidelity for recovering the AIS2+ chest injury rate of the combined dataset (i.e., IR_dataset=1.97% vs. curve-based IR_dataset=1.98%). Additional sensitivity analyses showed that (a) binary logistic regression yielded a risk curve with nearly-identical fidelity, (b) there was only a slight advantage of combining the small-sample PMHS dataset with the large-sample NASS dataset, (c) use of the PMHS-based risk curve for risk estimation of the combined dataset yielded relatively poor performance (194% difference), and (d) when controlling for the type of contact (lab-consistent or not), the resulting risk curves were similar.
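The sensitivity check with binary logistic regression implies a risk curve of the familiar logistic form. The sketch below is purely illustrative: the coefficients are hypothetical placeholders, not the values derived in the paper, which come from survival analysis of the combined dataset:

```python
import math

def ais2plus_risk(compression_mm, age_years, b0=-8.0, b1=0.12, b2=0.05):
    """Logistic-form AIS2+ chest-injury risk as a function of peak THOR50
    chest compression and occupant age. Coefficients are hypothetical."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * compression_mm + b2 * age_years)))
```

Any coefficients with b1, b2 > 0 reproduce the qualitative behavior described above: predicted risk rises with both chest compression and occupant age.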

  8. A Bayesian approach for incorporating economic factors in sample size design for clinical trials of individual drugs and portfolios of drugs.

    PubMed

    Patel, Nitin R; Ankolekar, Suresh

    2007-11-30

    Classical approaches to clinical trial design ignore economic factors that determine economic viability of a new drug. We address the choice of sample size in Phase III trials as a decision theory problem using a hybrid approach that takes a Bayesian view from the perspective of a drug company and a classical Neyman-Pearson view from the perspective of regulatory authorities. We incorporate relevant economic factors in the analysis to determine the optimal sample size to maximize the expected profit for the company. We extend the analysis to account for risk by using a 'satisficing' objective function that maximizes the chance of meeting a management-specified target level of profit. We extend the models for single drugs to a portfolio of clinical trials and optimize the sample sizes to maximize the expected profit subject to budget constraints. Further, we address the portfolio risk and optimize the sample sizes to maximize the probability of achieving a given target of expected profit.
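The core trade-off, in which larger trials raise the probability of demonstrating efficacy but cost more per patient, can be sketched as a simple expected-profit maximization. All numbers below (prior probability the drug works, effect size, payoff, per-patient cost) are invented for illustration; the paper's models are considerably richer:

```python
import math

def trial_power(n_per_arm, delta=0.3, sigma=1.0, z_alpha=1.96):
    """Power of a two-arm z-test for a standardized effect size delta."""
    se = sigma * math.sqrt(2.0 / n_per_arm)
    return 0.5 * (1.0 + math.erf((delta / se - z_alpha) / math.sqrt(2.0)))

def expected_profit(n_per_arm, p_works=0.5, gain=500e6, cost_per_patient=50e3):
    """Company-side expected profit: prior belief that the drug works,
    times the power to demonstrate it, times the payoff, minus trial cost."""
    return p_works * trial_power(n_per_arm) * gain - 2 * n_per_arm * cost_per_patient

# Grid search for the profit-maximizing per-arm sample size.
best_n = max(range(10, 1001, 10), key=expected_profit)
```

Under these assumed numbers the optimum sits well above the conventional 80%-power sample size, because the payoff dwarfs the marginal per-patient cost; with a smaller payoff the same calculation pulls the optimal trial size down.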

  9. Watch Out for Your Neighbor: Climbing onto Shrubs Is Related to Risk of Cannibalism in the Scorpion Buthus cf. occitanus

    PubMed Central

    Urbano-Tenorio, Fernando

    2016-01-01

    The distribution and behavior of foraging animals usually imply a balance between resource availability and predation risk. In some predators such as scorpions, cannibalism constitutes an important mortality factor determining their ecology and behavior. Climbing on vegetation by scorpions has been related both to prey availability and to predation (cannibalism) risk. We tested different hypotheses proposed to explain climbing on vegetation by scorpions. We analyzed shrub climbing in Buthus cf. occitanus with regard to the following: a) better suitability of prey size for scorpions foraging on shrubs than on the ground, b) selection of shrub species with higher prey load, c) seasonal variations in prey availability on shrubs, and d) whether or not cannibalism risk on the ground increases the frequency of shrub climbing. Prey availability on shrubs was compared by estimating prey abundance in sticky traps placed in shrubs. A prey sample from shrubs was measured to compare prey size. Scorpions were sampled in six plots (50 m x 10 m) to estimate the proportion of individuals climbing on shrubs. Size difference and distance between individuals and their closest scorpion neighbor were measured to assess cannibalism risk. The results showed that mean prey size was two-fold larger on the ground. Selection of particular shrub species was not related to prey availability. Seasonal variations in the number of scorpions on shrubs were related to the number of active scorpions, but not with fluctuations in prey availability. Size differences between a scorpion and its nearest neighbor were positively related with a higher probability for a scorpion to climb onto a shrub when at a disadvantage, but distance was not significantly related. These results do not support hypotheses explaining shrub climbing based on resource availability. By contrast, our results provide evidence that shrub climbing is related to cannibalism risk. PMID:27655347

  10. Watch Out for Your Neighbor: Climbing onto Shrubs Is Related to Risk of Cannibalism in the Scorpion Buthus cf. occitanus.

    PubMed

    Sánchez-Piñero, Francisco; Urbano-Tenorio, Fernando

    The distribution and behavior of foraging animals usually imply a balance between resource availability and predation risk. In some predators such as scorpions, cannibalism constitutes an important mortality factor determining their ecology and behavior. Climbing on vegetation by scorpions has been related both to prey availability and to predation (cannibalism) risk. We tested different hypotheses proposed to explain climbing on vegetation by scorpions. We analyzed shrub climbing in Buthus cf. occitanus with regard to the following: a) better suitability of prey size for scorpions foraging on shrubs than on the ground, b) selection of shrub species with higher prey load, c) seasonal variations in prey availability on shrubs, and d) whether or not cannibalism risk on the ground increases the frequency of shrub climbing. Prey availability on shrubs was compared by estimating prey abundance in sticky traps placed in shrubs. A prey sample from shrubs was measured to compare prey size. Scorpions were sampled in six plots (50 m x 10 m) to estimate the proportion of individuals climbing on shrubs. Size difference and distance between individuals and their closest scorpion neighbor were measured to assess cannibalism risk. The results showed that mean prey size was two-fold larger on the ground. Selection of particular shrub species was not related to prey availability. Seasonal variations in the number of scorpions on shrubs were related to the number of active scorpions, but not with fluctuations in prey availability. Size differences between a scorpion and its nearest neighbor were positively related with a higher probability for a scorpion to climb onto a shrub when at a disadvantage, but distance was not significantly related. These results do not support hypotheses explaining shrub climbing based on resource availability. By contrast, our results provide evidence that shrub climbing is related to cannibalism risk.

  11. Estimating Children’s Soil/Dust Ingestion Rates through Retrospective Analyses of Blood Lead Biomonitoring from the Bunker Hill Superfund Site in Idaho

    EPA Science Inventory

    Background: Soil/dust ingestion rates are important variables in assessing children’s health risks in contaminated environments. Current estimates are based largely on soil tracer methodology, which is limited by analytical uncertainty, small sample size, and short study du...

  12. THE CHALLENGE OF DETECTING CLASSICAL SWINE FEVER VIRUS CIRCULATION IN WILD BOAR (SUS SCROFA): SIMULATION OF SAMPLING OPTIONS.

    PubMed

    Sonnenburg, Jana; Schulz, Katja; Blome, Sandra; Staubach, Christoph

    2016-10-01

    Classical swine fever (CSF) is one of the most important viral diseases of domestic pigs ( Sus scrofa domesticus) and wild boar ( Sus scrofa ). For at least 4 decades, several European Union member states were confronted with outbreaks among wild boar and, as it had been shown that infected wild boar populations can be a major cause of primary outbreaks in domestic pigs, strict control measures for both species were implemented. To guarantee early detection and to demonstrate freedom from disease, intensive surveillance is carried out based on a hunting bag sample. In this context, virologic investigations play a major role in the early detection of new introductions and in regions immunized with a conventional vaccine. The required financial resources and personnel for reliable testing are often large, and sufficient sample sizes to detect low virus prevalences are difficult to obtain. We conducted a simulation to model the possible impact of changes in sample size and sampling intervals on the probability of CSF virus detection based on a study area of 65 German hunting grounds. A 5-yr period with 4,652 virologic investigations was considered. Results suggest that low prevalences could not be detected with a justifiable effort. The simulation of increased sample sizes per sampling interval showed only a slightly better performance but would be unrealistic in practice, especially outside the main hunting season. Further studies on other approaches such as targeted or risk-based sampling for virus detection in connection with (marker) antibody surveillance are needed.
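The difficulty of detecting low prevalences with a justifiable effort follows directly from the detection-probability formula for random sampling with a perfect test. This is a deliberate simplification of the paper's simulation, which also models hunting-ground structure and sampling intervals:

```python
import math

def detection_probability(n, prevalence):
    """P(at least one positive among n sampled animals), assuming
    independent sampling and a perfect diagnostic test."""
    return 1.0 - (1.0 - prevalence) ** n

def required_sample_size(prevalence, confidence=0.95):
    """Smallest n that detects at least one positive with the given confidence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - prevalence))
```

At 1% prevalence, 299 animals per interval are needed for 95% confidence of detection; at 0.1%, nearly 3,000. Hunting-bag samples of that size are unrealistic outside the main hunting season, which is what motivates the targeted and risk-based alternatives mentioned above.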

  13. Is family size related to adolescence mental hospitalization?

    PubMed

    Kylmänen, Paula; Hakko, Helinä; Räsänen, Pirkko; Riala, Kaisa

    2010-05-15

    The aim of this study was to investigate the association between family size and psychiatric disorders of underage adolescent psychiatric inpatients. The study sample consisted of 508 adolescents (age 12-17) admitted to psychiatric inpatient care between April 2001 and March 2006. Diagnostic and Statistical Manual of Mental Disorders, fourth edition-based psychiatric diagnoses and variables measuring family size were obtained from the Schedule for Affective Disorder and Schizophrenia for School-Age Children Present and Lifetime (K-SADS-PL). The family size of the general Finnish population was used as a reference population. There was a significant difference between the family size of the inpatient adolescents and the general population: 17.0% of adolescents came from large families (with 6 or more children) while the percentage in the general population was 3.3. A girl from a large family had an approximately 4-fold risk of psychosis other than schizophrenia. However, large family size was not associated with a risk for schizophrenia. Large family size was overrepresented among underage adolescents admitted for psychiatric hospitalization in Northern Finland. Copyright 2009 Elsevier Ltd. All rights reserved.

  14. Risk of Mycobacterium tuberculosis infection in Somalia: national tuberculin survey 2006.

    PubMed

    Munim, A; Rajab, Y; Barker, A; Daniel, M; Williams, B

    2008-01-01

    To estimate the annual risk of tuberculosis (TB) infection (ARTI) in Somalia, a tuberculin survey was conducted in February/March 2006. Stratified cluster sampling was carried out within the 18 regions and 101 randomly selected primary schools. Tuberculin testing was done in 10 680 grade 1 schoolchildren. Transverse tuberculin reaction size was measured 72 hours later. The number of children with a satisfactory test read was 10 364. The overall BCG coverage was 54%. Based on the frequency distribution of tuberculin reaction sizes, the ARTI in Somalia was estimated at 2.2% (confidence interval: 1.5%-3.2%). This represents an annual decline of 2.6% compared with a previous study in 1956.
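The conversion from a tuberculin survey's reactor prevalence to an ARTI uses the standard assumption of a constant annual risk of infection over the children's lifetime. A minimal sketch (the prevalence and 6.5-year mean age used below are assumed for illustration, not figures reported by the survey):

```python
def annual_risk_of_infection(prevalence, mean_age):
    """ARTI from the prevalence of tuberculin reactors among children
    of a given mean age, assuming a constant annual risk of infection:
    prevalence = 1 - (1 - ARTI)**mean_age, solved for ARTI."""
    return 1.0 - (1.0 - prevalence) ** (1.0 / mean_age)
```

For example, a reactor prevalence of 13.4% among children of mean age 6.5 years corresponds to an ARTI of about 2.2%.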

  15. Impact of particle size on distribution and human exposure of flame retardants in indoor dust.

    PubMed

    He, Rui-Wen; Li, Yun-Zi; Xiang, Ping; Li, Chao; Cui, Xin-Yi; Ma, Lena Q

    2018-04-01

    The effect of dust particle size on the distribution and bioaccessibility of flame retardants (FRs) in indoor dust remains unclear. In this study, we analyzed 20 FRs (including 6 organophosphate flame retardants (OPFRs), 8 polybrominated diphenyl ethers (PBDEs), 4 novel brominated flame retardants (NBFRs), and 2 dechlorane plus (DPs)) in composite dust samples from offices, public microenvironments (PME), and cars in Nanjing, China. Each composite sample (one per microenvironment) was separated into 6 size fractions (F1-F6: 200-2000µm, 150-200µm, 100-150µm, 63-100µm, 43-63µm, and <43µm). FR concentrations were the highest in car dust, being 16 and 6 times higher than those in offices and PME. The distribution of FRs in different size fractions was Kow-dependent and affected by surface area (Log Kow=1-4), total organic carbon (Log Kow=4-9), and FR migration pathways into dust (Log Kow>9). Bioaccessibility of FRs was measured by the physiologically-based extraction test, with OPFR bioaccessibility being 1.8-82% while bioaccessible PBDEs, NBFRs, and DPs were under detection limits due to their high hydrophobicity. The OPFR bioaccessibility in the 200-2000µm fraction was significantly higher than that of the <43µm fraction, but with no difference among the other four fractions. Risk assessment was performed for the most abundant OPFR, tris(2-chloroethyl) phosphate. The average daily dose (ADD) values were the highest for the <43µm fraction for all three types of dust using total concentrations, but no consistent trend was found among the three types of dust if based on bioaccessible concentrations. Our results indicated that dust size impacted human exposure estimation of FRs due to their variability in distribution and bioaccessibility among different fractions. For future risk assessment, size selection for dust sampling should be standardized and bioaccessibility of FRs should not be overlooked. Copyright © 2018 Elsevier Inc. All rights reserved.
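The average daily dose in the risk assessment follows the standard dust-ingestion exposure formula. The sketch below uses invented example values throughout, and shows why substituting the bioaccessible fraction for the total concentration changes the estimate:

```python
def average_daily_dose(conc_ng_per_g, ingestion_g_per_day, body_weight_kg,
                       bioaccessible_fraction=1.0):
    """ADD in ng per kg body weight per day from dust ingestion.
    All argument values used below are hypothetical illustrations."""
    return (conc_ng_per_g * ingestion_g_per_day * bioaccessible_fraction
            / body_weight_kg)
```

With a bioaccessible fraction well below 1, the bioaccessibility-adjusted ADD can be several-fold lower than the total-concentration estimate, which is how the two approaches can rank the dust types differently.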

  16. A combined Settling Tube-Photometer for rapid measurement of effective sediment particle size

    NASA Astrophysics Data System (ADS)

    Kuhn, Nikolaus J.; Kuhn, Brigitte; Rüegg, Hans-Rudolf; Zimmermann, Lukas

    2017-04-01

    Sediment and its movement in water are commonly described based on the size distribution of the mineral particles forming the sediment. While this approach works for coarse sand, pebbles, and gravel, smaller particles often form aggregates, creating material of larger diameter than the mineral grain size distribution indicates, but of lower density than the often-assumed 2.65 g cm-3 of quartz. Measuring the actual size and density of such aggregated sediment is difficult. For the assessment of sediment movement, an effective particle size for use in mathematical models can be derived from the settling velocity of the sediment. Settling velocity is commonly measured in settling tubes, which fractionate the sample into settling velocity classes by sampling material at the base at selected time intervals. This process takes up to several hours, requires a laboratory setting, and carries the risk of either destruction of aggregates during transport or coagulation while they sit in rather still water. Measuring the velocity of settling particles in situ, or at least rapidly after collection, avoids these problems. In this study, a settling tube equipped with four photometers that measure the darkening of a settling particle cloud is presented, and its potential to improve the measurement of settling velocities is discussed.
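One common way to turn a measured settling velocity into an effective particle size is Stokes' law for a sphere at low Reynolds number; the defaults below (quartz density, water at roughly 20 °C) are conventional assumptions, not values from the instrument described:

```python
import math

def effective_diameter(settling_velocity, particle_density=2650.0,
                       fluid_density=1000.0, viscosity=1.0e-3, g=9.81):
    """Stokes-law effective diameter (m) of a sphere that settles at the
    measured velocity (m/s). Valid only at low particle Reynolds number.
    Defaults: quartz in water (densities in kg/m^3, viscosity in Pa s)."""
    return math.sqrt(18.0 * viscosity * settling_velocity /
                     ((particle_density - fluid_density) * g))
```

An aggregate that settles at the same speed as a 50 µm quartz grain but has a lower density comes out with a larger effective diameter, which is exactly the discrepancy between mineral grain size and hydraulic behavior that settling-tube measurement is meant to capture.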

  17. Sample size calculations for cluster randomised crossover trials in Australian and New Zealand intensive care research.

    PubMed

    Arnup, Sarah J; McKenzie, Joanne E; Pilcher, David; Bellomo, Rinaldo; Forbes, Andrew B

    2018-06-01

    The cluster randomised crossover (CRXO) design provides an opportunity to conduct randomised controlled trials to evaluate low risk interventions in the intensive care setting. Our aim is to provide a tutorial on how to perform a sample size calculation for a CRXO trial, focusing on the meaning of the elements required for the calculations, with application to intensive care trials. We use all-cause in-hospital mortality from the Australian and New Zealand Intensive Care Society Adult Patient Database clinical registry to illustrate the sample size calculations. We show sample size calculations for a two-intervention, two 12-month period, cross-sectional CRXO trial. We provide the formulae, and examples of their use, to determine the number of intensive care units required to detect a risk ratio (RR) with a designated level of power between two interventions for trials in which the elements required for sample size calculations remain constant across all ICUs (unstratified design); and in which there are distinct groups (strata) of ICUs that differ importantly in the elements required for sample size calculations (stratified design). The CRXO design markedly reduces the sample size requirement compared with the parallel-group, cluster randomised design for the example cases. The stratified design further reduces the sample size requirement compared with the unstratified design. The CRXO design enables the evaluation of routinely used interventions that can bring about small, but important, improvements in patient care in the intensive care setting.
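The calculation the tutorial walks through can be sketched as an individually randomised sample size inflated by a design effect. The design-effect form below, 1 + (m-1)*rho - m*eta (with rho the within-period ICC and eta the between-period correlation), is a commonly cited expression for the two-period cross-sectional CRXO design; treat it and all the example parameters as assumptions rather than the paper's exact formulae:

```python
import math
from statistics import NormalDist

def n_per_arm_individual(p1, p2, alpha=0.05, power=0.80):
    """Per-arm sample size for comparing two proportions (normal approximation)."""
    z_a = NormalDist().inv_cdf(1.0 - alpha / 2.0)
    z_b = NormalDist().inv_cdf(power)
    return ((z_a + z_b) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2))) / (p1 - p2) ** 2

def crxo_icus(p1, p2, m, icc, bpc, alpha=0.05, power=0.80):
    """Number of ICUs for a two-period cross-sectional CRXO trial.
    m: patients per ICU per period; icc: within-period ICC;
    bpc: between-period (within-ICU) correlation."""
    design_effect = 1.0 + (m - 1.0) * icc - m * bpc
    total_patients = 2.0 * n_per_arm_individual(p1, p2, alpha, power) * design_effect
    # Each ICU contributes m patients in each of the two periods.
    return math.ceil(total_patients / (2.0 * m))
```

The subtracted m*bpc term is what makes the CRXO design markedly cheaper than a parallel cluster design (bpc = 0): the stronger the correlation between a unit's two periods, the more each ICU acts as its own control.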

  18. Obesity and vehicle type as risk factors for injury caused by motor vehicle collision.

    PubMed

    Donnelly, John P; Griffin, Russell Lee; Sathiakumar, Nalini; McGwin, Gerald

    2014-04-01

    This study sought to describe variations in the risk of motor vehicle collision (MVC) injury and death by occupant body mass index (BMI) class and vehicle type. We hypothesized that the relationship between BMI and the risk of MVC injury or mortality would be modified by vehicle type. This is a retrospective cohort study of occupants involved in MVCs using data from the Crash Injury Research and Engineering Network and the National Automotive Sampling System Crashworthiness Data System. Occupants were grouped based on vehicle body style (passenger car, sport utility vehicle, or light truck) and vehicle size (compact or normal, corresponding to below- or above-average curb weight). The relationship between occupant BMI class (underweight, normal weight, overweight, or obese) and risk of injury or mortality was examined for each vehicle type. Odds ratios (ORs) adjusted for various occupant and collision characteristics were estimated. Of an estimated 44 million occupants of MVCs sampled from 2000 to 2009, 37.1% sustained an injury. We limited our analysis to injuries achieving an Abbreviated Injury Scale (AIS) score of 2 or more severe, totaling 17 million injuries. Occupants differed substantially in terms of demographic and collision characteristics. After adjustment for confounding factors, we found that obesity was a risk factor for mortality caused by MVC (OR, 1.6; 95% confidence interval [CI], 1.2-2.0). When stratified by vehicle type, we found that obesity was a risk factor for mortality in larger vehicles, including any-sized light trucks (OR, 2.1; 95% CI, 1.3-3.5), normal-sized passenger cars (OR, 1.6; 95% CI, 1.1-2.3), and normal-sized sports utility vehicles or vans (OR, 2.0; 95% CI, 1.0-3.8). Being overweight was a risk factor in any-sized light trucks (OR, 1.5; 95% CI, 1.1-2.1). We identified a significant interaction between occupant BMI class and vehicle type in terms of MVC-related mortality risk. 
Both factors should be taken into account when considering occupant safety, and additional study is needed to determine underlying causes of the observed relationships. Epidemiologic study, level III.

  19. Analysis of five-year trends in self-reported language preference and issues of item non-response among Hispanic persons in a large cross-sectional health survey: implications for the measurement of an ethnic minority population

    PubMed Central

    2010-01-01

    Background Significant differences in health outcomes have been documented among Hispanic persons, the fastest-growing demographic segment of the United States. The objective of this study was to examine trends in population growth and the collection of health data among Hispanic persons, including issues of language preference and survey completion using a national health survey to highlight issues of measurement of an increasingly important demographic segment of the United States. Design Data from the 2003-2007 United States Census and the Behavioral Risk Factor Surveillance System were used to compare trends in population growth and survey sample size as well as differences in survey response based on language preference among a Hispanic population. Percentages of item non-response on selected survey questions were compared for Hispanic respondents choosing to complete the survey in Spanish and those choosing to complete the survey in English. The mean number of attempts to complete the survey was also compared based on language preference among Hispanic respondents. Results The sample size of Hispanic persons in the Behavioral Risk Factor Surveillance System saw little growth compared to the actual growth of the Hispanic population in the United States. Significant differences in survey item non-response for nine of 15 survey questions were seen based on language preference. Hispanic respondents choosing to complete the survey in Spanish required significantly fewer call attempts for survey completion than their Hispanic counterparts choosing to communicate in English. Conclusions Including additional measures of acculturation and increasing the sample size of Hispanic persons in a national health survey such as the Behavioral Risk Factor Surveillance System may result in more precise findings that could be used to better target prevention and health care needs for an ethnic minority population. PMID:20412575

  20. Risk factors for depression in community-treated epilepsy: systematic review.

    PubMed

    Lacey, Cameron J; Salzberg, Michael R; D'Souza, Wendyl J

    2015-02-01

    Depression is one of the most common psychiatric comorbidities in epilepsy; however, the factors contributing to this association remain unclear. There is a growing consensus that methodological limitations, particularly selection bias, affect many of the original studies. A systematic review focused on community-based studies offers an alternative approach for identifying the risk factors for depression. Searches were performed in MEDLINE (Ovid), 2000 to 31 December 2013, EMBASE, and Google Scholar to identify studies examining risk factors for depression in epilepsy. Community-based studies of adults with epilepsy that reported at least one risk factor for depression were included. The search identified 17 studies that met the selection criteria, representing a combined total of 12,212 people with epilepsy with a mean sample size of 718. The most consistent risk factors for depression were sociodemographic factors, even though most studies focused on epilepsy-related factors. Most studies lacked a systematic conceptual approach to investigating depression, and few risk factors were consistently well studied. Future community-based studies require a detailed systematic approach to improve the ability to detect risk factors for depression in epilepsy. Psychological factors were rarely studied in community-based samples with epilepsy, although their consistent association with depression in the few studies that did examine them suggests that they warrant further investigation. Copyright © 2014 Elsevier Inc. All rights reserved.

  1. Salmonella Enteritidis surveillance by egg immunology: impact of the sampling scheme on the release of contaminated table eggs.

    PubMed

    Klinkenberg, Don; Thomas, Ekelijn; Artavia, Francisco F Calvo; Bouma, Annemarie

    2011-08-01

    Design of surveillance programs to detect infections could benefit from more insight into sampling schemes. We address the effect of sampling schemes for Salmonella Enteritidis surveillance in laying hens. Based on experimental estimates of the transmission rate in flocks, and the characteristics of an egg immunological test, we simulated outbreaks under various sampling schemes and under the current boot swab program with its 15-week sampling interval. Declaring a flock infected based on a single positive egg was not possible because test specificity was too low. Thus, a threshold number of positive eggs was defined to declare a flock infected, and, for small sample sizes, eggs from previous samplings had to be included in a cumulative sample to guarantee a minimum flock-level specificity. Effectiveness of surveillance was measured by the proportion of outbreaks detected and by the number of contaminated table eggs brought on the market. The boot swab program detected 90% of the outbreaks, with 75% fewer contaminated eggs compared to no surveillance, whereas the baseline egg program (30 eggs every 15 weeks) detected 86%, with 73% fewer contaminated eggs. We conclude that a larger sample size results in more detected outbreaks, whereas a smaller sampling interval decreases the number of contaminated eggs. Decreasing sample size and interval simultaneously reduces the number of contaminated eggs, but not indefinitely: the advantage of more frequent sampling is counterbalanced by the cumulative sample including less recently laid eggs. Apparently, optimizing surveillance has its limits when test specificity is taken into account. © 2011 Society for Risk Analysis.
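
    The specificity constraint described above can be illustrated with a short binomial calculation. The sketch below assumes a hypothetical egg-level test specificity of 98% (the abstract does not report the actual value); it shows why a single positive egg cannot trigger a flock declaration and how a threshold restores flock-level specificity:

```python
# Flock-level specificity when a flock is declared infected only if at least
# `threshold` of `n_eggs` sampled eggs test positive. The 98% egg-level
# specificity used in the examples is an illustrative assumption.
from math import comb

def flock_specificity(n_eggs: int, threshold: int, egg_specificity: float) -> float:
    """P(fewer than `threshold` false positives among n_eggs uninfected eggs)."""
    fp_rate = 1.0 - egg_specificity
    return sum(
        comb(n_eggs, k) * fp_rate**k * (1.0 - fp_rate) ** (n_eggs - k)
        for k in range(threshold)
    )

def min_threshold(n_eggs: int, egg_specificity: float, target: float = 0.99) -> int:
    """Smallest positive-egg threshold giving at least `target` flock specificity."""
    for k in range(1, n_eggs + 1):
        if flock_specificity(n_eggs, k, egg_specificity) >= target:
            return k
    return n_eggs + 1
```

    With 30 eggs and 98% egg-level specificity, the single-positive rule yields only about 55% flock-level specificity (0.98^30), so roughly half of all uninfected flocks would be flagged; a threshold of four positive eggs is needed to reach 99% flock-level specificity under these assumed numbers.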

  2. Management of Patients With Pancreatic Cysts: Analysis of Possible False-Negative Cases of Malignancy.

    PubMed

    Kowalski, Thomas; Siddiqui, Ali; Loren, David; Mertz, Howard R; Mallat, Damien; Haddad, Nadim; Malhotra, Nidhi; Sadowski, Brett; Lybik, Mark J; Patel, Sandeep N; Okoh, Emuejevoke; Rosenkranz, Laura; Karasik, Michael; Golioto, Michael; Linder, Jeffrey; Catalano, Marc F; Al-Haddad, Mohammad A

    2016-09-01

    To examine the utility of integrated molecular pathology (IMP) in managing surveillance of pancreatic cysts based on outcomes and analysis of false negatives (FNs) from a previously published cohort (n=492). In cases where endoscopic ultrasound with fine-needle aspiration (EUS-FNA) of cyst fluid lacked malignant cytology, IMP demonstrated better risk stratification for malignancy at approximately 3 years' follow-up than the International Consensus Guideline (Fukuoka) 2012 management recommendations. Patient outcomes and clinical features of Fukuoka and IMP FN cases were reviewed. Practical guidance for appropriate surveillance intervals and surgery decisions using IMP was derived from follow-up data, considering EUS-FNA sampling limitations and the high-risk clinical circumstances observed. Surveillance intervals for patients based on IMP predictive value were compared with those of Fukuoka. Outcomes at follow-up for IMP low-risk diagnoses supported surveillance every 2 to 3 years, independent of cyst size, when EUS-FNA sampling limitations or high-risk clinical circumstances were absent. In 10 of 11 patients with FN IMP diagnoses (2% of cohort), EUS-FNA sampling limitations existed; Fukuoka identified high risk in 9 of 11 cases. In 4 of 6 FN cases by Fukuoka (1% of cohort), IMP identified high risk. Overall, 55% of cases had possible sampling limitations and 37% had high-risk clinical circumstances. Outcomes support more cautious management in such cases when using IMP. Adjunct use of IMP can provide evidence for relaxed surveillance of patients with benign cysts that meet Fukuoka criteria for closer observation or surgery. Although infrequent, FN results with IMP can be associated with EUS-FNA sampling limitations or high-risk clinical circumstances.

  3. Development of a sampling strategy and sample size calculation to estimate the distribution of mammographic breast density in Korean women.

    PubMed

    Jun, Jae Kwan; Kim, Mi Jin; Choi, Kui Son; Suh, Mina; Jung, Kyu-Won

    2012-01-01

    Mammographic breast density is a known risk factor for breast cancer. To design a survey estimating the distribution of mammographic breast density in Korean women, candidate sampling strategies were evaluated through simulation for representativeness and efficiency. Using the target population of 1,340,362 women from the National Cancer Screening Programme (NCSP) for breast cancer in 2009, we verified the distribution estimate by repeating a stratified random sampling simulation 1,000 times. According to the simulation results, a sampling design stratifying the nation into three groups (metropolitan, urban, and rural), with a total sample size of 4,000, estimated the distribution of breast density in Korean women within a tolerance of 0.01%. Based on these results, a nationwide survey estimating the distribution of mammographic breast density among Korean women can be conducted efficiently.
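
    The simulation design described above can be sketched in outline: draw a proportionally allocated stratified random sample and measure how far the sampled distribution of density categories deviates from the population's. The strata sizes, category weights, and BI-RADS-style categories below are invented for demonstration and are not the NCSP data:

```python
# Illustrative sketch of a stratified-sampling simulation; all population
# parameters here are assumptions, not the study's actual data.
import random

random.seed(2012)

CATEGORIES = [1, 2, 3, 4]  # BI-RADS-style density categories (assumed)

def make_stratum(size, weights):
    """Synthesize one stratum as a list of per-woman density categories."""
    return random.choices(CATEGORIES, weights=weights, k=size)

population = {
    "metropolitan": make_stratum(60000, [10, 35, 40, 15]),
    "urban": make_stratum(30000, [15, 40, 35, 10]),
    "rural": make_stratum(10000, [20, 45, 30, 5]),
}

def stratified_sample(pop, total_n):
    """Proportional allocation: stratum h contributes n_h = total_n * N_h / N."""
    grand_total = sum(len(v) for v in pop.values())
    sampled = []
    for women in pop.values():
        n_h = round(total_n * len(women) / grand_total)
        sampled.extend(random.sample(women, n_h))
    return sampled

def category_share(values, cat):
    """Proportion of `values` falling in density category `cat`."""
    return sum(v == cat for v in values) / len(values)

pop_all = [v for stratum in population.values() for v in stratum]
sample = stratified_sample(population, 4000)
max_err = max(
    abs(category_share(sample, c) - category_share(pop_all, c)) for c in CATEGORIES
)
```

    Repeating the draw many times and recording `max_err` each time yields the empirical tolerance of the design, which is the quantity the study verified over 1,000 repetitions.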

  4. One-step estimation of networked population size: Respondent-driven capture-recapture with anonymity.

    PubMed

    Khan, Bilal; Lee, Hsuan-Wei; Fellows, Ian; Dombrowski, Kirk

    2018-01-01

    Size estimation is particularly important for populations whose members experience disproportionate health issues or pose elevated health risks to the ambient social structures in which they are embedded. Efforts to derive size estimates are often frustrated when the population is hidden or hard-to-reach in ways that preclude conventional survey strategies, as is the case when social stigma is associated with group membership or when group members are involved in illegal activities. This paper extends prior research on the problem of network population size estimation, building on established survey/sampling methodologies commonly used with hard-to-reach groups. Three novel one-step, network-based population size estimators are presented, for use in the context of uniform random sampling, respondent-driven sampling, and when networks exhibit significant clustering effects. We give provably sufficient conditions for the consistency of these estimators in large configuration networks. Simulation experiments across a wide range of synthetic network topologies validate the performance of the estimators, which also perform well on a real-world location-based social networking data set with significant clustering. Finally, the proposed schemes are extended to allow them to be used in settings where participant anonymity is required. Systematic experiments show favorable tradeoffs between anonymity guarantees and estimator performance. Taken together, we demonstrate that reasonable population size estimates are derived from anonymous respondent driven samples of 250-750 individuals, within ambient populations of 5,000-40,000. The method thus represents a novel and cost-effective means for health planners and those agencies concerned with health and disease surveillance to estimate the size of hidden populations. We discuss limitations and future work in the concluding section.
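
    As background for the capture-recapture framing used above, the classical two-sample estimator that such network-based approaches generalize can be sketched briefly. This is illustrative background only, not the paper's one-step network estimator, and the counts are invented:

```python
# Chapman's bias-corrected form of the two-sample Lincoln-Petersen
# capture-recapture population size estimate. Inputs are illustrative.
def chapman_estimate(n1: int, n2: int, overlap: int) -> float:
    """Estimate population size from two capture samples.

    n1, n2: sizes of the two samples; overlap: members seen in both.
    """
    return (n1 + 1) * (n2 + 1) / (overlap + 1) - 1
```

    The smaller the overlap between the two samples, the larger the implied hidden population; for example, two samples of 200 and 150 with 6 shared members imply a population of roughly 4,300 under the estimator's independence assumptions.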

  5. Sensitivity to Uncertainty in Asteroid Impact Risk Assessment

    NASA Astrophysics Data System (ADS)

    Mathias, D.; Wheeler, L.; Prabhu, D. K.; Aftosmis, M.; Dotson, J.; Robertson, D. K.

    2015-12-01

    The Engineering Risk Assessment (ERA) team at NASA Ames Research Center is developing a physics-based impact risk model for probabilistically assessing threats from potential asteroid impacts on Earth. The model integrates probabilistic sampling of asteroid parameter ranges with physics-based analyses of entry, breakup, and impact to estimate damage areas and casualties from various impact scenarios. Assessing these threats is a highly coupled, dynamic problem involving significant uncertainties in the range of expected asteroid characteristics, how those characteristics may affect the level of damage, and the fidelity of various modeling approaches and assumptions. The presented model is used to explore the sensitivity of impact risk estimates to these uncertainties in order to gain insight into what additional data or modeling refinements are most important for producing effective, meaningful risk assessments. In the extreme cases of very small or very large impacts, the results are generally insensitive to many of the characterization and modeling assumptions. However, the nature of the sensitivity can change across moderate-sized impacts. Results will focus on the value of additional information in this critical, mid-size range, and how this additional data can support more robust mitigation decisions.
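
    The probabilistic sampling step described above can be caricatured in a few lines. The parameter ranges, uniform distributions, and spherical-asteroid mass model below are illustrative assumptions for demonstration, not the ERA team's actual model:

```python
# Toy Monte Carlo over asteroid parameters; all ranges are assumptions.
import random
from math import pi

random.seed(0)

def sample_impact_energy_mt() -> float:
    """Draw one asteroid and return impact kinetic energy in megatons TNT."""
    diameter_m = random.uniform(20.0, 300.0)   # assumed size range
    density = random.uniform(1500.0, 3500.0)   # kg/m^3, stony-asteroid range
    velocity = random.uniform(11e3, 30e3)      # m/s entry speed
    mass = density * (4.0 / 3.0) * pi * (diameter_m / 2.0) ** 3
    joules = 0.5 * mass * velocity**2
    return joules / 4.184e15                   # 1 Mt TNT = 4.184e15 J

energies = sorted(sample_impact_energy_mt() for _ in range(10000))
median_mt = energies[len(energies) // 2]
```

    A full risk model would propagate each sampled case through entry, breakup, and damage-area models; the spread of the resulting distribution is what makes the mid-size range so sensitive to the input uncertainties.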

  6. Improving the accuracy of effect-directed analysis: the role of bioavailability.

    PubMed

    You, Jing; Li, Huizhen

    2017-12-13

    Aquatic ecosystems have been suffering from contamination by multiple stressors. Traditional chemical-based risk assessment usually fails to explain the toxicity contributions from contaminants that are not regularly monitored or whose identity is unknown. Diagnosing the causes of observed adverse outcomes in the environment is of great importance in ecological risk assessment, and effect-directed analysis (EDA) has been designed to fulfill this purpose. The EDA approach is now increasingly used in aquatic risk assessment owing to its ability to achieve effect-directed nontarget analysis; however, a lack of environmental relevance makes conventional EDA less favorable. In particular, ignoring bioavailability in EDA may cause biased and even erroneous identification of the causative toxicants in a mixture. Taking bioavailability into consideration is therefore of great importance for improving the accuracy of EDA diagnosis. The present article reviews the current status and applications of EDA practices that incorporate bioavailability. The use of biological samples is the most direct way to include bioavailability in EDA applications, but its development is limited by small sample sizes and a lack of evidence for metabolizable compounds. Bioavailability/bioaccessibility-based extraction (bioaccessibility-directed and partitioning-based extraction) and passive-dosing techniques are recommended for integrating bioavailability into EDA diagnosis of abiotic samples. Lastly, future perspectives on expanding and standardizing the use of biological samples and bioavailability-based techniques in EDA are discussed.

  7. Multi-Mission System Analysis for Planetary Entry (M-SAPE) Version 1

    NASA Technical Reports Server (NTRS)

    Samareh, Jamshid; Glaab, Louis; Winski, Richard G.; Maddock, Robert W.; Emmett, Anjie L.; Munk, Michelle M.; Agrawal, Parul; Sepka, Steve; Aliaga, Jose; Zarchi, Kerry; et al.

    2014-01-01

    This report describes an integrated system for Multi-mission System Analysis for Planetary Entry (M-SAPE). The system in its current form is capable of performing system analysis and design for an Earth entry vehicle suitable for sample return missions. The system includes geometry, mass sizing, impact analysis, structural analysis, flight mechanics, TPS, and a web portal for user access. The report includes details of the M-SAPE modules and provides sample results. The current M-SAPE vehicle design concept is based on the Mars sample return (MSR) Earth entry vehicle design, which is driven by minimizing the risk associated with sample containment (no parachute and passive aerodynamic stability). Because M-SAPE exploits a common design concept, any sample return mission, particularly MSR, can benefit from significant reductions in risk and development cost. The design provides a platform by which technologies and design elements can be evaluated rapidly prior to any costly investment commitment.

  8. Evaluation of exposure to airborne heavy metals at gun shooting ranges.

    PubMed

    Lach, Karel; Steer, Brian; Gorbunov, Boris; Mička, Vladimír; Muir, Robert B

    2015-04-01

    Aerosols formed during shooting events were studied with various techniques, including the wide-range size-resolving sampling system Nano-ID(®) Select, followed by inductively coupled plasma mass spectrometry chemical analysis, scanning electron microscopy, and fast mobility particle sizing. The total lead mass aerosol concentration ranged from 2.2 to 72 µg m(-3). It was shown that the mass concentration of lead, the most toxic component, is much lower than the total mass concentration. The deposition fraction in various compartments of the respiratory system was calculated using the ICRP lung deposition model. It was found that the deposition fraction in the alveolar range varies by a factor >3 for the various aerosols collected, depending on the aerosol size distribution and total aerosol concentration, demonstrating the importance of size-resolved sampling in health risk evaluation. The proportion of the total mass of airborne particles deposited in the respiratory tract varies from 34 to 70%, with a median of 55.9%, suggesting that a health risk estimate based upon total mass significantly overestimates the accumulated dose and therefore the risk. A comparison between conventional and so-called 'green' ammunition confirmed significantly lower concentrations of lead and other toxic metals such as antimony in the atmosphere of indoor shooting ranges using 'green' ammunition, although higher concentrations of manganese and boron were measured. These metals are likely to be constituents of new types of primers. They occur predominantly in the size fraction <250 nm of aerosols. © The Author 2014. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.

  9. Efficacy of a strategy for implementing a guideline for the control of cardiovascular risk in a primary healthcare setting: the SIRVA2 study a controlled, blinded community intervention trial randomised by clusters

    PubMed Central

    2011-01-01

    This work describes the methodology used to assess a strategy for implementing clinical practice guidelines (CPG) for cardiovascular risk control in a health area of Madrid. Background The results on clinical practice of introducing CPGs have been little studied in Spain. The strategy used to implement a CPG is known to influence its final use. Strategies based on the involvement of opinion leaders and that are easily executed appear to be among the most successful. Aim The main aim of the present work was to compare the effectiveness of two strategies for implementing a CPG designed to reduce cardiovascular risk in the primary healthcare setting, measured in terms of improvements in the recording of calculated cardiovascular risk or specific risk factors in patients' medical records, the control of cardiovascular risk factors, and the incidence of cardiovascular events. Methods This study involved a controlled, blinded community intervention in which the 21 health centres of the Number 2 Health Area of Madrid were randomly assigned by clusters to be involved in either a proposed CPG implementation strategy to reduce cardiovascular risk, or the normal dissemination strategy. The study subjects were patients ≥ 45 years of age whose health cards showed them to belong to the studied health area. The main variable examined was the proportion of patients whose medical histories included the calculation of their cardiovascular risk or that explicitly mentioned the presence of variables necessary for its calculation. The sample size was calculated for a comparison of proportions with alpha = 0.05 and beta = 0.20, and assuming that the intervention would lead to a 15% increase in the measured variables. Corrections were made for the design effect, assigning a sample size to each cluster proportional to the size of the population served by the corresponding health centre, and assuming losses of 20%. This demanded a final sample size of 620 patients. 
Data were analysed using summary measures for each cluster, both in making estimates and for hypothesis testing. Analysis of the variables was made on an intention-to-treat basis. Trial Registration ClinicalTrials.gov: NCT01270022 PMID:21504570
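
    The sample-size calculation described above (comparison of proportions with alpha = 0.05 and beta = 0.20, corrected for the design effect and 20% losses) can be sketched generically. The example proportions and design effect below are illustrative assumptions, not the study's actual inputs:

```python
# Generic two-proportion sample-size sketch with cluster and loss
# corrections; the inputs in the examples are hypothetical.
from math import ceil
from statistics import NormalDist

def n_per_group(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Sample size per group for comparing two independent proportions."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_a + z_b) ** 2 * variance / (p1 - p2) ** 2)

def adjusted_n(n: int, design_effect: float = 1.0, loss_rate: float = 0.20) -> int:
    """Inflate for the cluster design effect and anticipated losses to follow-up."""
    return ceil(n * design_effect / (1 - loss_rate))
```

    For example, detecting an increase from 30% to 45% requires 160 patients per group before correction; the design-effect and loss corrections then inflate that figure, which is the same sequence of adjustments the study applied to arrive at its final sample size.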

  10. Socioeconomic status, urbanicity and risk behaviors in Mexican youth: an analysis of three cross-sectional surveys

    PubMed Central

    2011-01-01

    Background The relationship between urbanicity and adolescent health is a critical issue for which little empirical evidence has been reported. Although an association has been suggested, a dichotomous rural versus urban comparison may not succeed in identifying differences between adolescent contexts. This study aims to assess the influence of locality size on risk behaviors in a national sample of young Mexicans living in low-income households, while considering the moderating effect of socioeconomic status (SES). Methods This is a secondary analysis of three national surveys of low-income households in Mexico in different settings: rural, semi-urban and urban areas. We analyzed risk behaviors in 15-21-year-olds and their potential relation to urbanicity. The risk behaviors explored were: tobacco and alcohol consumption, sexual initiation and condom use. The adolescents' localities of residence were classified according to the number of inhabitants in each locality. We used a logistical model to identify an association between locality size and risk behaviors, including an interaction term with SES. Results The final sample included 17,974 adolescents from 704 localities in Mexico. Locality size was associated with tobacco and alcohol consumption, showing a similar effect throughout all SES levels: the larger the size of the locality, the lower the risk of consuming tobacco or alcohol compared with rural settings. The effect of locality size on sexual behavior was more complex. The odds of adolescent condom use were higher in larger localities only among adolescents in the lowest SES levels. We found no statically significant association between locality size and sexual initiation. Conclusions The results suggest that in this sample of adolescents from low-income areas in Mexico, risk behaviors are related to locality size (number of inhabitants). Furthermore, for condom use, this relation is moderated by SES. 
Such heterogeneity suggests the need for more detailed analyses of both the effects of urbanicity on behavior, and the responses--which are also heterogeneous--required to address this situation. PMID:22129110

  11. Does Static-99 predict recidivism among older sexual offenders?

    PubMed

    Hanson, R K

    2006-10-01

    Static-99 (Hanson & Thornton, 2000) is the most commonly used actuarial risk tool for estimating sexual offender recidivism risk. Recent research has suggested that its methods of accounting for offenders' ages may be insufficient to capture declines in recidivism risk associated with advanced age. Using data from 8 samples (combined size of 3,425 sexual offenders), the present study found that older offenders had lower Static-99 scores than younger offenders and that Static-99 was moderately accurate in estimating relative recidivism risk in all age groups. Older offenders, however, had lower sexual recidivism rates than would be expected based on their Static-99 risk categories. Consequently, evaluators using Static-99 should consider advanced age in their overall estimate of risk.

  12. Detecting a Weak Association by Testing its Multiple Perturbations: a Data Mining Approach

    NASA Astrophysics Data System (ADS)

    Lo, Min-Tzu; Lee, Wen-Chung

    2014-05-01

    Many risk factors/interventions in epidemiologic/biomedical studies have minuscule effects. To detect such weak associations, one needs a study with a very large sample size (the number of subjects, n). The n of a study can be increased, but unfortunately only to an extent. Here, we propose a novel method that hinges on increasing sample size in a different direction: the total number of variables (p). We construct a p-based 'multiple perturbation test', and conduct power calculations and computer simulations to show that it can achieve very high power to detect weak associations when p can be made very large. As a demonstration, we apply the method to a genome-wide association study on age-related macular degeneration and identify two novel genetic variants significantly associated with the disease. The p-based method may set the stage for a new paradigm of statistical tests.

  13. Evaluation of alternative model selection criteria in the analysis of unimodal response curves using CART

    USGS Publications Warehouse

    Ribic, C.A.; Miller, T.W.

    1998-01-01

    We investigated CART performance with a unimodal response curve for one continuous response and four continuous explanatory variables, where two variables were important (i.e., directly related to the response) and the other two were not. We explored performance under three relationship strengths and two explanatory variable conditions: equal importance, and one variable four times as important as the other. We compared CART variable selection performance using three tree-selection rules ('minimum risk', 'minimum risk complexity', 'one standard error') to stepwise polynomial ordinary least squares (OLS) under four sample size conditions. The one-standard-error and minimum-risk-complexity methods performed about as well as stepwise OLS with large sample sizes when the relationship was strong. With weaker relationships, equally important explanatory variables, and larger sample sizes, the one-standard-error and minimum-risk-complexity rules performed better than stepwise OLS. With weaker relationships and explanatory variables of unequal importance, tree-structured methods did not perform as well as stepwise OLS. Comparing performance within the tree-structured methods, the one-standard-error rule was more likely to choose the correct model than the other tree-selection rules 1) with a strong relationship and equally important explanatory variables; 2) with weaker relationships and equally important explanatory variables; and 3) under all relationship strengths when explanatory variables were of unequal importance and sample sizes were lower.

  14. Estimates of Intraclass Correlation Coefficients from Longitudinal Group-Randomized Trials of Adolescent HIV/STI/Pregnancy Prevention Programs

    ERIC Educational Resources Information Center

    Glassman, Jill R.; Potter, Susan C.; Baumler, Elizabeth R.; Coyle, Karin K.

    2015-01-01

    Introduction: Group-randomized trials (GRTs) are one of the most rigorous methods for evaluating the effectiveness of group-based health risk prevention programs. Efficiently designing GRTs with a sample size that is sufficient for meeting the trial's power and precision goals while not wasting resources exceeding them requires estimates of the…

  15. The special case of the 2 × 2 table: asymptotic unconditional McNemar test can be used to estimate sample size even for analysis based on GEE.

    PubMed

    Borkhoff, Cornelia M; Johnston, Patrick R; Stephens, Derek; Atenafu, Eshetu

    2015-07-01

    Aligning the method used to estimate sample size with the planned analytic method ensures the sample size needed to achieve the planned power. When using generalized estimating equations (GEE) to analyze a paired binary primary outcome with no covariates, many use an exact McNemar test to calculate sample size. We reviewed the approaches to sample size estimation for paired binary data and compared the sample size estimates on the same numerical examples. We used the hypothesized sample proportions for the 2 × 2 table to calculate the correlation between the marginal proportions to estimate sample size based on GEE. We solved the inside proportions based on the correlation and the marginal proportions to estimate sample size based on the exact McNemar, asymptotic unconditional McNemar, and asymptotic conditional McNemar tests. The asymptotic unconditional McNemar test is a good approximation of the GEE method of Pan. The exact McNemar test is too conservative and yields unnecessarily large sample size estimates compared with all other methods. In the special case of a 2 × 2 table, even when a GEE approach to binary logistic regression is the planned analytic method, the asymptotic unconditional McNemar test can be used to estimate sample size. We do not recommend using an exact McNemar test. Copyright © 2015 Elsevier Inc. All rights reserved.
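
    The asymptotic unconditional McNemar sample-size calculation the authors recommend can be sketched with the standard Connor-type approximation; the discordant-pair proportions below are illustrative inputs, not values from the paper:

```python
# Asymptotic unconditional McNemar sample-size approximation for paired
# binary data; example inputs are hypothetical.
from math import ceil, sqrt
from statistics import NormalDist

def mcnemar_sample_size(p10: float, p01: float,
                        alpha: float = 0.05, power: float = 0.80) -> int:
    """Pairs needed to detect a difference between discordant proportions.

    p10, p01: expected off-diagonal cell proportions of the paired 2x2 table.
    """
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    psi = p10 + p01      # total discordant proportion
    delta = p10 - p01    # difference the test must detect
    n = (z_a * sqrt(psi) + z_b * sqrt(psi - delta**2)) ** 2 / delta**2
    return ceil(n)
```

    With p10 = 0.15 and p01 = 0.05, the formula gives 155 pairs for 80% power at a two-sided alpha of 0.05; the exact McNemar calculation would return a larger figure, which is the conservatism the abstract warns against.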

  16. Heavy metals in the gold mine soil of the upstream area of a metropolitan drinking water source.

    PubMed

    Ding, Huaijian; Ji, Hongbing; Tang, Lei; Zhang, Aixing; Guo, Xinyue; Li, Cai; Gao, Yang; Briki, Mergem

    2016-02-01

    Pinggu District is adjacent to the county of Miyun, which contains the largest drinking water source of Beijing (Miyun Reservoir). The Wanzhuang gold field and tailing deposits are located in Pinggu, threatening Beijing's drinking water security. In this study, soil samples were collected from the surface of the mining area and the tailings piles and analyzed for physical and chemical properties, as well as heavy metal contents and particle size fractions, to study the relationship between the degree of pollution and particle size. Most metal concentrations in the gold mine soil samples exceeded the background levels in Beijing. The spatial distribution of As, Cd, Cu, Pb, and Zn was the same, while that of Cr and Ni was relatively similar. Trace element concentrations increased in larger particles, decreased in the 50-74 μm size fraction, and were lowest in the <2 μm size fraction. Multivariate analysis showed that Cu, Cd, Zn, and Pb originated from anthropogenic sources, while Cr, Ni, and Sc were of natural origin. The geo-accumulation index indicated serious Pb, As, and Cd pollution, but moderate to no Ni, Cr, and Hg pollution. A Tucker 3 model was fitted over three modes (particle fractions, metals, and samples); two factors were retained for the particle-fraction mode (A) and three factors each for the metal and sample modes (B and C, respectively). The potential ecological risk index shows that most of the study area has very high potential ecological risk, a small portion has high potential ecological risk, and only a few sampling points on the perimeter have moderate ecological risk, with higher risk closer to the mining area.

  17. A Novel Multi-Approach Protocol for the Characterization of Occupational Exposure to Organic Dust-Swine Production Case Study.

    PubMed

    Viegas, Carla; Faria, Tiago; Monteiro, Ana; Caetano, Liliana Aranha; Carolino, Elisabete; Quintal Gomes, Anita; Viegas, Susana

    2017-12-27

    Swine production has been associated with health risks and workers' symptoms. In Portugal, as in other countries, large-scale swine production involves several activities in the swine environment that require direct intervention, increasing workers' exposure to organic dust. This study describes an updated protocol for the assessment of occupational exposure to organic dust, aiming to provide an accurate picture of the occupational and environmental risks to workers' health. The particle size distribution was characterized by mass concentration in five size ranges (PM0.5, PM1, PM2.5, PM5, PM10). Bioburden was assessed, by both active and passive sampling methods, in air, on surfaces, and in floor covering and feed samples, and analyzed through culture-based methods and qPCR. The smallest size range exhibited the highest counts, with indoor samples showing higher particle counts and mass concentrations than outdoor samples. The suggested limit values were surpassed for total bacterial load in 35.7% (10 out of 28) of samples and for fungi in 65.5% (19 out of 29) of samples. Within the Aspergillus genus, section Circumdati was the most prevalent (55%) on malt extract agar (MEA) and section Versicolores the most identified (50%) on dichloran glycerol (DG18). The results provide a broad characterization of occupational exposure to organic dust on swine farms and are useful for policymakers and stakeholders acting to improve workers' safety. The sampling and analysis methods employed were the most suitable for the purpose of the study and should be adopted as a protocol for future exposure assessments in this occupational environment.

  18. Method of assessing a lipid-related health risk based on ion mobility analysis of lipoproteins

    DOEpatents

    Benner, W. Henry; Krauss, Ronald M.; Blanche, Patricia J.

    2010-12-14

    A medical diagnostic method and instrumentation system for analyzing noncovalently bonded agglomerated biological particles is described. The method and system comprises: a method of preparation for the biological particles; an electrospray generator; an alpha particle radiation source; a differential mobility analyzer; a particle counter; and data acquisition and analysis means. The medical device is useful for the assessment of human diseases, such as cardiac disease risk and hyperlipidemia, by rapid quantitative analysis of lipoprotein fraction densities. Initially, purification procedures are described to reduce an initial blood sample to an analytical input to the instrument. The measured sizes from the analytical sample are correlated with densities, resulting in a spectrum of lipoprotein densities. The lipoprotein density distribution can then be used to characterize cardiac and other lipid-related health risks.

  19. Adaptive web sampling.

    PubMed

    Thompson, Steven K

    2006-12-01

    A flexible class of adaptive sampling designs is introduced for sampling in network and spatial settings. In these designs, selections are made sequentially with a mixture distribution based on an active set that changes as the sampling progresses, using network or spatial relationships as well as sample values. The new designs have certain advantages over previously existing adaptive and link-tracing designs, including control over sample sizes and over the proportion of effort allocated to adaptive selections. Efficient inference involves averaging over sample paths consistent with the minimal sufficient statistic. A Markov chain resampling method makes the inference computationally feasible. The designs are evaluated in network and spatial settings using two empirical populations: a hidden human population at high risk for HIV/AIDS and an unevenly distributed bird population.
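
    The sequential mixture selection described above can be sketched on a toy network. The graph, mixture weight d, and sample size are illustrative; this is a caricature of the selection step only, not the full design or its inference machinery:

```python
# Toy sketch of mixture selection: with probability d, follow a link out of
# the active set; otherwise select a remaining unit uniformly at random.
import random

random.seed(7)

# Hypothetical undirected network as an adjacency dict.
graph = {
    0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1, 4], 4: [3, 5],
    5: [4, 6], 6: [5], 7: [8], 8: [7], 9: [],
}

def adaptive_web_sample(g, n, d=0.8):
    """Draw n units sequentially, mixing link-tracing and uniform selection."""
    units = list(g)
    selected = [random.choice(units)]  # seed selection
    while len(selected) < n:
        active_links = [v for u in selected for v in g[u] if v not in selected]
        if active_links and random.random() < d:
            selected.append(random.choice(active_links))  # adaptive step
        else:
            remaining = [u for u in units if u not in selected]
            selected.append(random.choice(remaining))     # uniform escape step
    return selected

s = adaptive_web_sample(graph, 6)
```

    The weight d is what gives the design explicit control over the proportion of effort allocated to adaptive selections, one of the advantages the abstract highlights.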

  20. RnaSeqSampleSize: real data based sample size estimation for RNA sequencing.

    PubMed

    Zhao, Shilin; Li, Chung-I; Guo, Yan; Sheng, Quanhu; Shyr, Yu

    2018-05-30

One of the most important and often neglected components of a successful RNA sequencing (RNA-Seq) experiment is sample size estimation. A few negative binomial model-based methods have been developed to estimate sample size based on the parameters of a single gene. However, thousands of genes are quantified and tested for differential expression simultaneously in RNA-Seq experiments, so additional issues must be carefully addressed, including the false discovery rate arising from multiple statistical tests and the widely varying read counts and dispersions across genes. To address these issues, we developed a sample size and power estimation method named RnaSeqSampleSize, based on the distributions of gene average read counts and dispersions estimated from real RNA-Seq data. Datasets from previous, similar experiments, such as The Cancer Genome Atlas (TCGA), can be used as a point of reference: read counts and their dispersions are estimated from the reference's distribution, and that information is used to estimate and summarize power and sample size. RnaSeqSampleSize is implemented in R and can be installed from the Bioconductor website. A user-friendly web graphical interface is provided at http://cqs.mc.vanderbilt.edu/shiny/RnaSeqSampleSize/ . RnaSeqSampleSize provides a convenient and powerful way to estimate power and sample size for an RNA-Seq experiment. It is also equipped with several unique features, including estimation for genes or pathways of interest, power curve visualization, and parameter optimization.

  1. Comparison of methods for estimating the attributable risk in the context of survival analysis.

    PubMed

    Gassama, Malamine; Bénichou, Jacques; Dartois, Laureen; Thiébaut, Anne C M

    2017-01-23

    The attributable risk (AR) measures the proportion of disease cases that can be attributed to an exposure in the population. Several definitions and estimation methods have been proposed for survival data. Using simulations, we compared four methods for estimating AR defined in terms of survival functions: two nonparametric methods based on Kaplan-Meier's estimator, one semiparametric based on Cox's model, and one parametric based on the piecewise constant hazards model, as well as one simpler method based on estimated exposure prevalence at baseline and Cox's model hazard ratio. We considered a fixed binary exposure with varying exposure probabilities and strengths of association, and generated event times from a proportional hazards model with constant or monotonic (decreasing or increasing) Weibull baseline hazard, as well as from a nonproportional hazards model. We simulated 1,000 independent samples of size 1,000 or 10,000. The methods were compared in terms of mean bias, mean estimated standard error, empirical standard deviation and 95% confidence interval coverage probability at four equally spaced time points. Under proportional hazards, all five methods yielded unbiased results regardless of sample size. Nonparametric methods displayed greater variability than other approaches. All methods showed satisfactory coverage except for nonparametric methods at the end of follow-up for a sample size of 1,000 especially. With nonproportional hazards, nonparametric methods yielded similar results to those under proportional hazards, whereas semiparametric and parametric approaches that both relied on the proportional hazards assumption performed poorly. These methods were applied to estimate the AR of breast cancer due to menopausal hormone therapy in 38,359 women of the E3N cohort. 
In practice, our study suggests using the semiparametric or parametric approach to estimate AR as a function of time in cohort studies when the proportional hazards assumption appears appropriate.
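The "simpler method" mentioned above, combining the baseline exposure prevalence with the hazard ratio from Cox's model, has the form of Levin's classical attributable risk formula. A minimal sketch, assuming the hazard ratio can stand in for the relative risk, with purely hypothetical input values:

```python
def attributable_risk_levin(prevalence: float, hazard_ratio: float) -> float:
    """Levin-style attributable risk: p*(RR-1) / (1 + p*(RR-1)),
    here substituting a Cox hazard ratio for the relative risk."""
    excess = prevalence * (hazard_ratio - 1.0)
    return excess / (1.0 + excess)

# Hypothetical example: 30% of the cohort exposed, hazard ratio 2.0
ar = attributable_risk_levin(0.30, 2.0)
print(round(ar, 3))  # → 0.231
```

That is, roughly 23% of cases would be attributable to the exposure under these assumed values.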

  2. Using the internet to recruit rural MSM for HIV risk assessment: sampling issues.

    PubMed

    Bowen, Anne; Williams, Mark; Horvath, Keith

    2004-09-01

    The Internet is an emerging research tool that may be useful for contacting and working with rural men who have sex with men (MSM). Little is known about HIV risks for rural men and Internet methodological issues are only beginning to be examined. Internet versus conventionally recruited samples have shown both similarities and differences in their demographic characteristics. In this study, rural MSM from three sizes of town were recruited by two methods: conventional (e.g. face-to-face/snowball) or Internet. After stratifying for size of city, demographic characteristics of the two groups were similar. Both groups had ready access to the Internet. Patterns of sexual risk were similar across the city sizes but varied by recruitment approach, with the Internet group presenting a somewhat higher HIV sexual risk profile. Overall, these findings suggest the Internet provides a useful and low cost approach to recruiting and assessing HIV sexual risks for rural White MSM. Further research is needed on methods for recruiting rural minority MSM.

  3. Cardiovascular risk management in patients with coronary heart disease in primary care: variation across countries and practices. An observational study based on quality indicators.

    PubMed

    van Lieshout, Jan; Grol, Richard; Campbell, Stephen; Falcoff, Hector; Capell, Eva Frigola; Glehr, Mathias; Goldfracht, Margalit; Kumpusalo, Esko; Künzi, Beat; Ludt, Sabine; Petek, Davorina; Vanderstighelen, Veerle; Wensing, Michel

    2012-10-05

Primary care has an important role in cardiovascular risk management (CVRM), and a minimum practice size may be needed for efficient delivery of CVRM. We examined CVRM in patients with coronary heart disease (CHD) in primary care and explored the impact of practice size. In an observational study in 8 countries we sampled CHD patients in primary care practices and collected data from electronic patient records. Practice samples were stratified according to practice size and urbanisation; patients were selected using coded diagnoses when available. CVRM was measured on the basis of internationally validated quality indicators. In the analyses, practice size was defined in terms of the number of patients registered with or visiting the practice. We performed multilevel regression analyses controlling for patient age and sex. We included 181 practices (63% of the number targeted). Two countries included a convenience sample of practices. Data from 2960 CHD patients were available. Some countries used methods supplemental to coded diagnoses or other inclusion methods, introducing potential inclusion bias. We found substantial variation on all CVRM indicators across practices and countries. We computed aggregated practice scores as the percentage of patients with a positive outcome. Rates of risk factor recording varied from 55% for physical activity as the mean practice score across all practices (sd 32%) to 94% (sd 10%) for blood pressure. Rates for reaching treatment targets for systolic blood pressure, diastolic blood pressure and LDL cholesterol were 46% (sd 21%), 86% (sd 12%) and 48% (sd 22%) respectively. Rates for providing recommended cholesterol-lowering and antiplatelet drugs were around 80%, and 70% of patients received influenza vaccination. Practice size was not associated with indicator scores, with one exception: in Slovenia larger practices performed better. Variation was more related to differences between practices than between countries.
CVRM measured by quality indicators showed wide variation within and between countries and possibly leaves room for improvement in all countries involved. Few associations of performance scores with practice size were found.

  4. Ciguatoxic Potential of Brown-Marbled Grouper in Relation to Fish Size and Geographical Origin

    PubMed Central

    Chan, Thomas Y. K.

    2015-01-01

    To determine the ciguatoxic potential of brown-marbled grouper (Epinephelus fuscoguttatus) in relation to fish size and geographical origin, this review systematically analyzed: 1) reports of large ciguatera outbreaks and outbreaks with description of the fish size; 2) Pacific ciguatoxin (P-CTX) profiles and levels and mouse bioassay results in fish samples from ciguatera incidents; 3) P-CTX profiles and levels and risk of toxicity in relation to fish size and origin; 4) regulatory measures restricting fish trade and fish size preference of the consumers. P-CTX levels in flesh and size dependency of toxicity indicate that the risk of ciguatera after eating E. fuscoguttatus varies with its geographical origin. For a large-sized grouper, it is necessary to establish legal size limits and control measures to protect public health and prevent overfishing. More risk assessment studies are required for E. fuscoguttatus to determine the size threshold above which the risk of ciguatera significantly increases. PMID:26324735

  5. Biostatistics Series Module 5: Determining Sample Size

    PubMed Central

    Hazra, Avijit; Gogtay, Nithya

    2016-01-01

Determining the appropriate sample size for a study, whatever its type, is a fundamental aspect of biomedical research. An adequate sample ensures that the study will yield reliable information, regardless of whether the data ultimately suggest a clinically important difference between the interventions or elements being studied. The probability of Type 1 and Type 2 errors, the expected variance in the sample and the effect size are the essential determinants of sample size in interventional studies. Any method for deriving a conclusion from experimental data carries with it some risk of drawing a false conclusion. Two types of false conclusion may occur, called Type 1 and Type 2 errors, whose probabilities are denoted by the symbols α and β. A Type 1 error occurs when one concludes that a difference exists between the groups being compared when, in reality, it does not. This is akin to a false-positive result. A Type 2 error occurs when one concludes that a difference does not exist when, in reality, a difference does exist and is equal to or larger than the effect size defined by the alternative to the null hypothesis. This may be viewed as a false-negative result. When considering the risk of Type 2 error, it is more intuitive to think in terms of the power of the study, or (1 − β). Power denotes the probability of detecting a difference when a difference does exist between the groups being compared. A smaller α or larger power will increase sample size. Conventional acceptable values for power and α are 80% or above and 5% or below, respectively, when calculating sample size. Greater variance in the sample tends to increase the sample size required to achieve a given power level. The effect size is the smallest clinically important difference that is sought to be detected and, rather than statistical convention, is a matter of past experience and clinical judgment. Larger samples are required if smaller differences are to be detected. 
Although the principles are long known, historically, sample size determination has been difficult, because of relatively complex mathematical considerations and numerous different formulas. However, of late, there has been remarkable improvement in the availability, capability, and user-friendliness of power and sample size determination software. Many can execute routines for determination of sample size and power for a wide variety of research designs and statistical tests. With the drudgery of mathematical calculation gone, researchers must now concentrate on determining appropriate sample size and achieving these targets, so that study conclusions can be accepted as meaningful. PMID:27688437
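The interplay of α, power, variance and effect size described above can be illustrated with the standard normal-approximation formula for comparing two means. This is a generic textbook calculation, not code from the cited module:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(alpha: float, power: float, sigma: float, delta: float) -> int:
    """Sample size per group for a two-sided, two-sample comparison of means,
    using the normal approximation: n = 2 * ((z_{1-a/2} + z_{power}) * sigma / delta)^2."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # ≈ 1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)            # ≈ 0.84 for 80% power
    return ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

# Detect a half-standard-deviation difference with 80% power at alpha = 0.05
print(n_per_group(0.05, 0.80, sigma=1.0, delta=0.5))  # → 63
```

Note how shrinking the effect size or raising the power inflates the required n, exactly as the text describes: the same calculation at 90% power gives 85 per group.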

  6. 76 FR 65165 - Importation of Plants for Planting; Risk-Based Sampling and Inspection Approach and Propagative...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-20

    ...] Importation of Plants for Planting; Risk-Based Sampling and Inspection Approach and Propagative Monitoring and... advising the public of our decision to implement a risk-based sampling approach for the inspection of... risk-based sampling and inspection approach will allow us to target high-risk plants for planting for...

  7. Utility of Recent Studies to Assess the National Research Council 2001 Estimates of Cancer Risk from Ingested Arsenic

    PubMed Central

    Gibb, Herman; Haver, Cary; Gaylor, David; Ramasamy, Santhini; Lee, Janice S.; Lobdell, Danelle; Wade, Timothy; Chen, Chao; White, Paul; Sams, Reeder

    2011-01-01

Objective The purpose of this review is to evaluate the impact of recent epidemiologic literature on the National Research Council (NRC) assessment of the lung and bladder cancer risks from ingesting low concentrations (< 100 μg/L) of arsenic-contaminated water. Data sources, extraction, and synthesis PubMed was searched for epidemiologic studies pertinent to the lung and bladder cancer risk estimates from low-dose arsenic exposure. Articles published from 2001, the date of the NRC assessment, through September 2010 were included. Fourteen epidemiologic studies on lung and bladder cancer risk were identified as potentially useful for the analysis. Conclusions Recent epidemiologic studies that have investigated the risk of lung and bladder cancer from low arsenic exposure are limited in their ability to detect the NRC estimates of excess risk because of sample size and less-than-lifetime exposure. Although the ecologic nature of the Taiwanese studies on which the NRC estimates are based presents certain limitations, the data from these studies have particular strengths in that they describe lung and bladder cancer risks resulting from lifetime exposure in a large population, and they remain the best data on which to conduct quantitative risk assessment. Continued follow-up of a population in northeastern Taiwan, however, offers the best opportunity to improve the cancer risk assessment for arsenic in drinking water. Future studies of arsenic < 100 μg/L in drinking water and lung and bladder cancer should consider adequacy of the sample size, the synergistic relationship of arsenic and smoking, duration of arsenic exposure, age when exposure began and ended, and histologic subtype. PMID:21030336

  8. Age at menopause: imputing age at menopause for women with a hysterectomy with application to risk of postmenopausal breast cancer

    PubMed Central

    Rosner, Bernard; Colditz, Graham A.

    2011-01-01

Purpose Age at menopause, a major marker in the reproductive life, may bias results in evaluations of breast cancer risk after menopause. Methods We followed 38,948 premenopausal women in 1980 and identified 2,586 who reported hysterectomy without bilateral oophorectomy and 31,626 who reported natural menopause during 22 years of follow-up. We evaluated risk factors for natural menopause, imputed age at natural menopause for women reporting hysterectomy without bilateral oophorectomy, and estimated the hazard of reaching natural menopause in the next 2 years. We applied this imputed age at menopause both to increase sample size and to evaluate the relation between postmenopausal exposures and risk of breast cancer. Results Age, cigarette smoking, age at menarche, pregnancy history, body mass index, history of benign breast disease, and history of breast cancer were each significantly related to age at natural menopause; duration of oral contraceptive use and family history of breast cancer were not. The imputation increased sample size substantially, and although some risk factors after menopause were weaker in the expanded model (height and alcohol use), the estimate for use of hormone therapy is less biased. Conclusions Imputing age at menopause increases sample size, broadens generalizability by making the results applicable to women with hysterectomy, and reduces bias. PMID:21441037

  9. Simple, Defensible Sample Sizes Based on Cost Efficiency

    PubMed Central

    Bacchetti, Peter; McCulloch, Charles E.; Segal, Mark R.

    2009-01-01

    Summary The conventional approach of choosing sample size to provide 80% or greater power ignores the cost implications of different sample size choices. Costs, however, are often impossible for investigators and funders to ignore in actual practice. Here, we propose and justify a new approach for choosing sample size based on cost efficiency, the ratio of a study’s projected scientific and/or practical value to its total cost. By showing that a study’s projected value exhibits diminishing marginal returns as a function of increasing sample size for a wide variety of definitions of study value, we are able to develop two simple choices that can be defended as more cost efficient than any larger sample size. The first is to choose the sample size that minimizes the average cost per subject. The second is to choose sample size to minimize total cost divided by the square root of sample size. This latter method is theoretically more justifiable for innovative studies, but also performs reasonably well and has some justification in other cases. For example, if projected study value is assumed to be proportional to power at a specific alternative and total cost is a linear function of sample size, then this approach is guaranteed either to produce more than 90% power or to be more cost efficient than any sample size that does. These methods are easy to implement, based on reliable inputs, and well justified, so they should be regarded as acceptable alternatives to current conventional approaches. PMID:18482055
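The second rule above, choosing n to minimize total cost divided by the square root of sample size, can be sketched under an assumed linear cost model c(n) = f + v·n (the figures below are hypothetical, not from the paper). For that model the continuous minimum falls at n = f/v:

```python
def cost_efficiency_n(fixed_cost: float, cost_per_subject: float,
                      n_max: int = 100_000) -> int:
    """Grid search for the sample size minimizing total cost / sqrt(n),
    assuming a linear cost model c(n) = fixed_cost + cost_per_subject * n.
    For this model the minimum falls at n = fixed_cost / cost_per_subject."""
    def objective(n: int) -> float:
        return (fixed_cost + cost_per_subject * n) / n ** 0.5
    return min(range(1, n_max + 1), key=objective)

# Hypothetical budget: $10,000 in fixed costs, $100 per additional subject
print(cost_efficiency_n(10_000, 100))  # → 100
```

The closed form n = f/v makes the intuition concrete: studies with large fixed costs relative to per-subject costs justify larger samples under this criterion.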

  10. A Novel Multi-Approach Protocol for the Characterization of Occupational Exposure to Organic Dust—Swine Production Case Study

    PubMed Central

    Faria, Tiago; Monteiro, Ana; Carolino, Elisabete; Quintal Gomes, Anita

    2017-01-01

    Swine production has been associated with health risks and workers’ symptoms. In Portugal, as in other countries, large-scale swine production involves several activities in the swine environment that require direct intervention, increasing workers’ exposure to organic dust. This study describes an updated protocol for the assessment of occupational exposure to organic dust, to unveil an accurate scenario regarding occupational and environmental risks for workers’ health. The particle size distribution was characterized regarding mass concentration in five different size ranges (PM0.5, PM1, PM2.5, PM5, PM10). Bioburden was assessed, by both active and passive sampling methods, in air, on surfaces, floor covering and feed samples, and analyzed through culture based-methods and qPCR. Smaller size range particles exhibited the highest counts, with indoor particles showing higher particle counts and mass concentration than outdoor particles. The limit values suggested for total bacteria load were surpassed in 35.7% (10 out of 28) of samples and for fungi in 65.5% (19 out of 29) of samples. Among Aspergillus genera, section Circumdati was the most prevalent (55%) on malt extract agar (MEA) and Versicolores the most identified (50%) on dichloran glycerol (DG18). The results document a wide characterization of occupational exposure to organic dust on swine farms, being useful for policies and stakeholders to act to improve workers’ safety. The methods of sampling and analysis employed were the most suitable considering the purpose of the study and should be adopted as a protocol to be followed in future exposure assessments in this occupational environment. PMID:29280976

  11. Securing quality of camera-based biomedical optics

    NASA Astrophysics Data System (ADS)

    Guse, Frank; Kasper, Axel; Zinter, Bob

    2009-02-01

As sophisticated optical imaging technologies move into clinical applications, manufacturers need to guarantee that their products meet required performance criteria over long lifetimes and in very different environmental conditions. Consistent quality management tracks critical component features derived from end-user requirements in a top-down approach. Careful risk analysis in the design phase defines the sample sizes for production tests, whereas first-article inspection assures the reliability of the production processes. We demonstrate the application of these basic quality principles to camera-based biomedical optics for a variety of examples including molecular diagnostics, dental imaging, ophthalmology and digital radiography, covering a wide range of CCD/CMOS chip sizes and resolutions. Novel concepts in fluorescence detection and structured illumination are also highlighted.

  12. Improving risk classification of critical illness with biomarkers: a simulation study

    PubMed Central

    Seymour, Christopher W.; Cooke, Colin R.; Wang, Zheyu; Kerr, Kathleen F.; Yealy, Donald M.; Angus, Derek C.; Rea, Thomas D.; Kahn, Jeremy M.; Pepe, Margaret S.

    2012-01-01

Purpose Optimal triage of patients at risk of critical illness requires accurate risk prediction, yet few data exist on the performance criteria a potential biomarker must meet to be clinically useful. Materials and Methods We studied an adult cohort of non-arrest, non-trauma emergency medical services encounters transported to a hospital from 2002–2006. We simulated hypothetical biomarkers increasingly associated with critical illness during hospitalization, and determined the biomarker strength and sample size necessary to improve risk classification beyond a best clinical model. Results Of 57,647 encounters, 3,121 (5.4%) were hospitalized with critical illness and 54,526 (94.6%) without. The addition of a moderate-strength biomarker (odds ratio = 3.0 for critical illness) to a clinical model improved discrimination (c-statistic 0.85 vs. 0.80, p < 0.01) and reclassification (net reclassification improvement = 0.15, 95% CI: 0.13, 0.18), and increased the proportion of cases in the highest risk category by +8.6% (95% CI: 7.5, 10.8%). Introducing correlation between the biomarker and physiological variables in the clinical risk score did not modify the results. Statistically significant changes in net reclassification required a sample size of at least 1000 subjects. Conclusions Clinical models for triage of critical illness could be significantly improved by incorporating biomarkers, yet substantial sample sizes and biomarker strength may be required. PMID:23566734
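The categorical net reclassification improvement reported above is computed from paired risk-category assignments under an old and a new model. A minimal sketch with toy (hypothetical) data, not the study's cohort:

```python
def net_reclassification_improvement(old_cat, new_cat, is_case):
    """Categorical NRI:
    (P(up|case) - P(down|case)) + (P(down|control) - P(up|control)),
    where 'up'/'down' mean moving to a higher/lower risk category."""
    up_case = down_case = n_case = 0
    up_ctrl = down_ctrl = n_ctrl = 0
    for old, new, case in zip(old_cat, new_cat, is_case):
        if case:
            n_case += 1
            up_case += new > old
            down_case += new < old
        else:
            n_ctrl += 1
            up_ctrl += new > old
            down_ctrl += new < old
    return (up_case - down_case) / n_case + (down_ctrl - up_ctrl) / n_ctrl

# Toy data: 4 cases and 4 controls, risk categories coded 0-2.
# Every reclassification here is in the "right" direction, so NRI = 1.0.
old = [0, 1, 1, 2, 0, 1, 2, 2]
new = [1, 2, 1, 2, 0, 0, 1, 2]
case = [True, True, True, True, False, False, False, False]
print(net_reclassification_improvement(old, new, case))  # → 1.0
```

NRI ranges from −2 to +2; positive values indicate that cases tend to move up and controls down when the new marker is added.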

  13. A Direct Comparison of Two Densely Sampled HIV Epidemics: The UK and Switzerland

    NASA Astrophysics Data System (ADS)

    Ragonnet-Cronin, Manon L.; Shilaih, Mohaned; Günthard, Huldrych F.; Hodcroft, Emma B.; Böni, Jürg; Fearnhill, Esther; Dunn, David; Yerly, Sabine; Klimkait, Thomas; Aubert, Vincent; Yang, Wan-Lin; Brown, Alison E.; Lycett, Samantha J.; Kouyos, Roger; Brown, Andrew J. Leigh

    2016-09-01

    Phylogenetic clustering approaches can elucidate HIV transmission dynamics. Comparisons across countries are essential for evaluating public health policies. Here, we used a standardised approach to compare the UK HIV Drug Resistance Database and the Swiss HIV Cohort Study while maintaining data-protection requirements. Clusters were identified in subtype A1, B and C pol phylogenies. We generated degree distributions for each risk group and compared distributions between countries using Kolmogorov-Smirnov (KS) tests, Degree Distribution Quantification and Comparison (DDQC) and bootstrapping. We used logistic regression to predict cluster membership based on country, sampling date, risk group, ethnicity and sex. We analysed >8,000 Swiss and >30,000 UK subtype B sequences. At 4.5% genetic distance, the UK was more clustered and MSM and heterosexual degree distributions differed significantly by the KS test. The KS test is sensitive to variation in network scale, and jackknifing the UK MSM dataset to the size of the Swiss dataset removed the difference. Only heterosexuals varied based on the DDQC, due to UK male heterosexuals who clustered exclusively with MSM. Their removal eliminated this difference. In conclusion, the UK and Swiss HIV epidemics have similar underlying dynamics and observed differences in clustering are mainly due to different population sizes.

  14. Prevalence of diseases and statistical power of the Japan Nurses' Health Study.

    PubMed

    Fujita, Toshiharu; Hayashi, Kunihiko; Katanoda, Kota; Matsumura, Yasuhiro; Lee, Jung Su; Takagi, Hirofumi; Suzuki, Shosuke; Mizunuma, Hideki; Aso, Takeshi

    2007-10-01

The Japan Nurses' Health Study (JNHS) is a long-term, large-scale cohort study investigating the effects of various lifestyle factors and healthcare habits on the health of Japanese women. Based on currently limited statistical data regarding the incidence of disease among Japanese women, our initial sample size was tentatively set at 50,000 during the design phase. The actual number of women who agreed to participate in follow-up surveys was approximately 18,000. Taking into account the actual sample size and new information on disease frequency obtained during the baseline component, we established the prevalence of past diagnoses of target diseases, predicted their incidence, and calculated the statistical power for JNHS follow-up surveys. For all diseases except ovarian cancer, the prevalence of a past diagnosis increased markedly with age, and incidence rates could be predicted based on the degree of increase in prevalence between two adjacent 5-yr age groups. The predicted incidence rate for uterine myoma, hypercholesterolemia, and hypertension was ≥3.0 (per 1,000 women per year), while the rate for thyroid disease, hepatitis, gallstone disease, and benign breast tumor was predicted to be ≥1.0. For these diseases, the statistical power to detect risk factors with a relative risk of 1.5 or more within ten years was 70% or higher.

  15. Fabrication and Characterization of Surrogate Glasses Aimed to Validate Nuclear Forensic Techniques

    DTIC Science & Technology

    2017-12-01

    sample is processed while submerged and produces fine sized particles the exposure levels and risk of contamination from the samples is also greatly...induced the partial collapses of the xerogel network strengthened the network while the sample sizes were reduced [22], [26]. As a result the wt...inhomogeneous, making it difficult to clearly determine which features were present in the sample before LDHP and which were caused by it. In this study

  16. The use of group sequential, information-based sample size re-estimation in the design of the PRIMO study of chronic kidney disease.

    PubMed

    Pritchett, Yili; Jemiai, Yannis; Chang, Yuchiao; Bhan, Ishir; Agarwal, Rajiv; Zoccali, Carmine; Wanner, Christoph; Lloyd-Jones, Donald; Cannata-Andía, Jorge B; Thompson, Taylor; Appelbaum, Evan; Audhya, Paul; Andress, Dennis; Zhang, Wuyan; Solomon, Scott; Manning, Warren J; Thadhani, Ravi

    2011-04-01

Chronic kidney disease is associated with a marked increase in risk for left ventricular hypertrophy and cardiovascular mortality compared with the general population. Therapy with vitamin D receptor activators has been linked with reduced mortality in chronic kidney disease and an improvement in left ventricular hypertrophy in animal studies. PRIMO (Paricalcitol capsules benefits in Renal failure Induced cardiac MOrbidity) is a multinational, multicenter randomized controlled trial to assess the effects of paricalcitol (a selective vitamin D receptor activator) on mild to moderate left ventricular hypertrophy in patients with chronic kidney disease. Subjects with mild-moderate chronic kidney disease are randomized to paricalcitol or placebo after left ventricular hypertrophy is confirmed by cardiac echocardiogram. Cardiac magnetic resonance imaging is then used to assess left ventricular mass index at baseline, 24 and 48 weeks, which is the primary efficacy endpoint of the study. Because of limited prior data for estimating sample size, a maximum-information group sequential design with sample size re-estimation is implemented, allowing sample size adjustment based on the nuisance parameter estimated from the interim data. An interim efficacy analysis is planned at a pre-specified time point conditioned on the status of enrollment. The decision to increase sample size depends on the observed treatment effect. A repeated measures analysis model using available data at Weeks 24 and 48, with a backup ANCOVA model analyzing change from baseline to the final nonmissing observation, is pre-specified to evaluate the treatment effect. A gamma-family spending function is employed to control the family-wise Type I error rate, as stopping for success is planned at the interim efficacy analysis. 
If enrollment is slower than anticipated, the smaller sample size used in the interim efficacy analysis and the greater percent of missing week 48 data might decrease the parameter estimation accuracy, either for the nuisance parameter or for the treatment effect, which might in turn affect the interim decision-making. The application of combining a group sequential design with a sample-size re-estimation in clinical trial design has the potential to improve efficiency and to increase the probability of trial success while ensuring integrity of the study.

  17. Determinants of occupational exposure to metals by gas metal arc welding and risk management measures: a biomonitoring study.

    PubMed

    Persoons, Renaud; Arnoux, Damien; Monssu, Théodora; Culié, Olivier; Roche, Gaëlle; Duffaud, Béatrice; Chalaye, Denis; Maitre, Anne

    2014-12-01

Welding fumes contain various toxic metals, including chromium (Cr), nickel (Ni) and manganese (Mn). Assessing the health risks of local and systemic exposure to welding fumes requires the assessment of both external and internal doses. The aims of this study were to test the relevance in small and medium-sized enterprises of a biomonitoring strategy based on urine spot samples, to characterize the factors influencing the internal doses of metals in gas metal arc welders, and to recommend effective risk management measures. 137 welders were recruited and urinary levels of metals were measured by ICP-MS on post-shift samples collected at the end of the working week. Cr, Ni and Mn mean concentrations (respectively 0.43, 1.69 and 0.27 μg/g creatinine) were well below occupational health guidance values, but still higher than background levels observed in the general population, confirming the absorption of metals generated in welding fumes. Both welding parameters (nature of base metal, welding technique) and working conditions (confinement, welding and grinding durations, mechanical ventilation and welding experience) were predictive of occupational exposure. Our results confirm the value of biomonitoring for assessing health risks and recommending risk management measures for welders. Copyright © 2014. Published by Elsevier Ireland Ltd.

  18. Risk of bias reporting in the recent animal focal cerebral ischaemia literature.

    PubMed

    Bahor, Zsanett; Liao, Jing; Macleod, Malcolm R; Bannach-Brown, Alexandra; McCann, Sarah K; Wever, Kimberley E; Thomas, James; Ottavi, Thomas; Howells, David W; Rice, Andrew; Ananiadou, Sophia; Sena, Emily

    2017-10-15

    Findings from in vivo research may be less reliable where studies do not report measures to reduce risks of bias. The experimental stroke community has been at the forefront of implementing changes to improve reporting, but it is not known whether these efforts are associated with continuous improvements. Our aims here were firstly to validate an automated tool to assess risks of bias in published works, and secondly to assess the reporting of measures taken to reduce the risk of bias within recent literature for two experimental models of stroke. We developed and used text analytic approaches to automatically ascertain reporting of measures to reduce risk of bias from full-text articles describing animal experiments inducing middle cerebral artery occlusion (MCAO) or modelling lacunar stroke. Compared with previous assessments, there were improvements in the reporting of measures taken to reduce risks of bias in the MCAO literature but not in the lacunar stroke literature. Accuracy of automated annotation of risk of bias in the MCAO literature was 86% (randomization), 94% (blinding) and 100% (sample size calculation); and in the lacunar stroke literature accuracy was 67% (randomization), 91% (blinding) and 96% (sample size calculation). There remains substantial opportunity for improvement in the reporting of animal research modelling stroke, particularly in the lacunar stroke literature. Further, automated tools perform sufficiently well to identify whether studies report blinded assessment of outcome, but improvements are required in the tools to ascertain whether randomization and a sample size calculation were reported. © 2017 The Author(s).

  19. Perinatal factors and the risk of bipolar disorder in Finland.

    PubMed

    Chudal, Roshan; Sourander, Andre; Polo-Kantola, Päivi; Hinkka-Yli-Salomäki, Susanna; Lehti, Venla; Sucksdorff, Dan; Gissler, Mika; Brown, Alan S

    2014-02-01

    Complications during the perinatal period have been associated with neurodevelopmental disorders like schizophrenia and autism. However, similar studies on bipolar disorder (BPD) have been limited and the findings are inconsistent. The aim of this study was to examine the association between perinatal risk factors and BPD. This nested case-control study, based on the Finnish Prenatal Study of Bipolar Disorders (FIPS-B), identified 724 cases and 1419 matched controls from population based registers. Conditional logistic regression was used to examine the associations between perinatal factors and BPD adjusting for potential confounding due to maternal age, psychiatric history and educational level, place of birth, number of previous births and maternal smoking during pregnancy. Children delivered by planned cesarean section had a 2.5-fold increased risk of BPD (95% CI: 1.32-4.78, P<0.01). No association was seen between other examined perinatal risk factors and BPD. The limitations of this study include: the restriction in the sample to treated cases of BPD in the population, and usage of hospital based clinical diagnosis for case ascertainment. In addition, in spite of the large sample size, there was low power to detect associations for certain exposures including the lowest birth weight category and pre-term birth. Birth by planned cesarean section was associated with risk of BPD, but most other perinatal risk factors examined in this study were not associated with BPD. Larger studies with greater statistical power to detect less common exposures and studies utilizing prospective biomarker-based exposures are necessary in the future. © 2013 Published by Elsevier B.V.

  20. Simulating recurrent event data with hazard functions defined on a total time scale.

    PubMed

    Jahn-Eimermacher, Antje; Ingel, Katharina; Ozga, Ann-Kathrin; Preussler, Stella; Binder, Harald

    2015-03-08

    In medical studies with recurrent event data a total time scale perspective is often needed to adequately reflect disease mechanisms. This means that the hazard process is defined on the time since some starting point, e.g. the beginning of some disease, in contrast to a gap time scale where the hazard process restarts after each event. While techniques such as the Andersen-Gill model have been developed for analyzing data from a total time perspective, techniques for the simulation of such data, e.g. for sample size planning, have not been investigated so far. We have derived a simulation algorithm covering the Andersen-Gill model that can be used for sample size planning in clinical trials as well as the investigation of modeling techniques. Specifically, we allow for fixed and/or random covariates and an arbitrary hazard function defined on a total time scale. Furthermore, we take into account that individuals may be temporarily insusceptible to a recurrent incidence of the event. The methods are based on conditional distributions of the inter-event times conditional on the total time of the preceding event or study start. Closed form solutions are provided for common distributions. The derived methods have been implemented in a readily accessible R script. The proposed techniques are illustrated by planning the sample size for a clinical trial with complex recurrent event data. The required sample size is shown to be affected not only by censoring and intra-patient correlation, but also by the presence of risk-free intervals. This demonstrates the need for a simulation algorithm that particularly allows for complex study designs where no analytical sample size formulas might exist. The derived simulation algorithm is seen to be useful for the simulation of recurrent event data that follow an Andersen-Gill model. Next to the use of a total time scale, it allows for intra-patient correlation and risk-free intervals as are often observed in clinical trial data. Its application therefore allows the simulation of data that closely resemble real settings and thus can improve the use of simulation studies for designing and analysing studies.
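    The conditional-distribution idea in this abstract can be sketched by inverting the cumulative hazard: given the previous event at total time s (plus any risk-free interval), the next event time T solves Lambda(T) = Lambda(start) + E with E ~ Exp(1). Below is a minimal sketch assuming a Weibull-shaped total-time hazard and a fixed risk-free window; the parameter names and defaults are ours, and the paper's R script may differ.

```python
import random


def simulate_total_time(a=1.5, b=2.0, tau=10.0, risk_free=0.5, seed=None):
    """Simulate one subject's recurrent event times on a total time scale.

    The hazard is Weibull-shaped on total time, Lambda(t) = (t / b) ** a,
    and after each event the subject is insusceptible for `risk_free` time
    units. Illustrative sketch; not the paper's exact parameterization.
    """
    rng = random.Random(seed)
    Lam = lambda t: (t / b) ** a            # cumulative hazard
    Lam_inv = lambda u: b * u ** (1.0 / a)  # its inverse
    events, s = [], 0.0
    while True:
        # no risk-free gap before the first event
        start = s + (risk_free if events else 0.0)
        if start >= tau:
            break
        e = rng.expovariate(1.0)            # unit-exponential increment
        t = Lam_inv(Lam(start) + e)         # invert the cumulative hazard
        if t >= tau:                        # administrative censoring
            break
        events.append(t)
        s = t
    return events
```

    Because each new event time is generated conditional on the total time of the preceding event, the simulated gap times automatically reflect the total-time hazard rather than restarting after each event.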

  1. IGESS: a statistical approach to integrating individual-level genotype data and summary statistics in genome-wide association studies.

    PubMed

    Dai, Mingwei; Ming, Jingsi; Cai, Mingxuan; Liu, Jin; Yang, Can; Wan, Xiang; Xu, Zongben

    2017-09-15

    Results from genome-wide association studies (GWAS) suggest that a complex phenotype is often affected by many variants with small effects, known as 'polygenicity'. Tens of thousands of samples are often required to ensure statistical power of identifying these variants with small effects. However, it is often the case that a research group can only get approval for access to individual-level genotype data with a limited sample size (e.g. a few hundreds or thousands). Meanwhile, summary statistics generated using single-variant-based analysis are becoming publicly available. The sample sizes associated with the summary statistics datasets are usually quite large. How to make the most efficient use of existing abundant data resources largely remains an open question. In this study, we propose a statistical approach, IGESS, to increasing statistical power of identifying risk variants and improving accuracy of risk prediction by integrating individual-level genotype data and summary statistics. An efficient algorithm based on variational inference is developed to handle the genome-wide analysis. Through comprehensive simulation studies, we demonstrated the advantages of IGESS over the methods which take either individual-level data or summary statistics data as input. We applied IGESS to perform integrative analysis of Crohn's disease from WTCCC and summary statistics from other studies. IGESS was able to significantly increase the statistical power of identifying risk variants and improve the risk prediction accuracy from 63.2% (±0.4%) to 69.4% (±0.1%) using about 240,000 variants. The IGESS software is available at https://github.com/daviddaigithub/IGESS. Contact: zbxu@xjtu.edu.cn or xwan@comp.hkbu.edu.hk or eeyang@hkbu.edu.hk. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  2. The impact of sample size on the reproducibility of voxel-based lesion-deficit mappings.

    PubMed

    Lorca-Puls, Diego L; Gajardo-Vidal, Andrea; White, Jitrachote; Seghier, Mohamed L; Leff, Alexander P; Green, David W; Crinion, Jenny T; Ludersdorfer, Philipp; Hope, Thomas M H; Bowman, Howard; Price, Cathy J

    2018-07-01

    This study investigated how sample size affects the reproducibility of findings from univariate voxel-based lesion-deficit analyses (e.g., voxel-based lesion-symptom mapping and voxel-based morphometry). Our effect of interest was the strength of the mapping between brain damage and speech articulation difficulties, as measured in terms of the proportion of variance explained. First, we identified a region of interest by searching on a voxel-by-voxel basis for brain areas where greater lesion load was associated with poorer speech articulation using a large sample of 360 right-handed English-speaking stroke survivors. We then randomly drew thousands of bootstrap samples from this data set that included either 30, 60, 90, 120, 180, or 360 patients. For each resample, we recorded effect size estimates and p values after conducting exactly the same lesion-deficit analysis within the previously identified region of interest and holding all procedures constant. The results show (1) how often small effect sizes in a heterogeneous population fail to be detected; (2) how effect size and its statistical significance varies with sample size; (3) how low-powered studies (due to small sample sizes) can greatly over-estimate as well as under-estimate effect sizes; and (4) how large sample sizes (N ≥ 90) can yield highly significant p values even when effect sizes are so small that they become trivial in practical terms. The implications of these findings for interpreting the results from univariate voxel-based lesion-deficit analyses are discussed. Copyright © 2018 The Author(s). Published by Elsevier Ltd.. All rights reserved.
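    The resampling experiment described above is easy to reproduce in miniature. The sketch below uses a hypothetical generative model (deficit = r × lesion load + noise; none of these names or parameter values come from the study) to show how the spread of estimated effect sizes, measured as the proportion of variance explained, shrinks as sample size grows:

```python
import math
import random


def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)


def effect_size_spread(n, true_r=0.3, reps=2000, seed=0):
    """(min, max) of repeated-sample estimates of R^2 at sample size n.

    Hypothetical model: deficit = true_r * lesion_load + noise, so the
    population proportion of variance explained is true_r ** 2.
    """
    rng = random.Random(seed)
    est = []
    for _ in range(reps):
        x = [rng.gauss(0, 1) for _ in range(n)]
        y = [true_r * xi + math.sqrt(1 - true_r ** 2) * rng.gauss(0, 1)
             for xi in x]
        est.append(pearson(x, y) ** 2)  # proportion of variance explained
    return min(est), max(est)
```

    Comparing the spread at n = 30 against n = 360 illustrates the paper's point that low-powered studies can both greatly over-estimate and under-estimate the same underlying effect.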

  3. Do impression management and self-deception distort self-report measures with content of dynamic risk factors in offender samples? A meta-analytic review.

    PubMed

    Hildebrand, Martin; Wibbelink, Carlijn J M; Verschuere, Bruno

    Self-report measures provide an important source of information in correctional/forensic settings, yet at the same time the validity of that information is often questioned because self-reports are thought to be highly vulnerable to self-presentation biases. Primary studies in offender samples have provided mixed results with regard to the impact of socially desirable responding on self-reports. The main aim of the current study was therefore to investigate, via a meta-analytic review of published studies, the association between the two dimensions of socially desirable responding, impression management and self-deceptive enhancement, and self-report measures with content of dynamic risk factors using the Balanced Inventory of Desirable Responding (BIDR) in offender samples. These self-report measures were significantly and negatively related to self-deception (r = -0.120, p < 0.001; k = 170 effect sizes) and impression management (r = -0.158, p < 0.001; k = 157 effect sizes), yet there was evidence of publication bias for the impression management effect, with the trim and fill method indicating that the relation is probably even smaller (r = -0.07). The magnitude of the effect sizes was small. Moderation analyses suggested that type of dynamic risk factor (e.g., antisocial cognition versus antisocial personality), incentives, and publication year affected the relationship between impression management and self-report measures with content of dynamic risk factors, whereas sample size, setting (e.g., incarcerated, community), and publication year influenced the relation between self-deception and these self-report measures. The results indicate that the use of self-report measures to assess dynamic risk factors in correctional/forensic settings is not inevitably compromised by socially desirable responding, yet caution is warranted for some risk factors (antisocial personality traits), particularly when incentives are at play. Copyright © 2018 Elsevier Ltd. All rights reserved.

  4. Thoracic and respirable particle definitions for human health risk assessment.

    PubMed

    Brown, James S; Gordon, Terry; Price, Owen; Asgharian, Bahman

    2013-04-10

    Particle size-selective sampling refers to the collection of particles of varying sizes that potentially reach and adversely affect specific regions of the respiratory tract. Thoracic and respirable fractions are defined as the fraction of inhaled particles capable of passing beyond the larynx and ciliated airways, respectively, during inhalation. In an attempt to afford greater protection to exposed individuals, current size-selective sampling criteria overestimate the population means of particle penetration into regions of the lower respiratory tract. The purpose of our analyses was to provide estimates of the thoracic and respirable fractions for adults and children during typical activities with both nasal and oral inhalation that may be used in the design of experimental studies and interpretation of health effects evidence. We estimated the fraction of inhaled particles (0.5-20 μm aerodynamic diameter) penetrating beyond the larynx (based on experimental data) and ciliated airways (based on a mathematical model) for an adult male, adult female, and a 10 yr old child during typical daily activities and breathing patterns. Our estimates show less penetration of coarse particulate matter into the thoracic and gas exchange regions of the respiratory tract than current size-selective criteria. Of the parameters we evaluated, particle penetration into the lower respiratory tract was most dependent on route of breathing. For typical activity levels and breathing habits, we estimated a 50% cut-size for the thoracic fraction at an aerodynamic diameter of around 3 μm in adults and 5 μm in children, whereas current ambient and occupational criteria suggest a 50% cut-size of 10 μm. By design, current size-selective sample criteria overestimate the mass of particles generally expected to penetrate into the lower respiratory tract to provide protection for individuals who may breathe orally. We provide estimates of thoracic and respirable fractions for a variety of breathing habits and activities that may benefit the design of experimental studies and interpretation of particle size-specific health effects.

  5. Thoracic and respirable particle definitions for human health risk assessment

    PubMed Central

    2013-01-01

    Background Particle size-selective sampling refers to the collection of particles of varying sizes that potentially reach and adversely affect specific regions of the respiratory tract. Thoracic and respirable fractions are defined as the fraction of inhaled particles capable of passing beyond the larynx and ciliated airways, respectively, during inhalation. In an attempt to afford greater protection to exposed individuals, current size-selective sampling criteria overestimate the population means of particle penetration into regions of the lower respiratory tract. The purpose of our analyses was to provide estimates of the thoracic and respirable fractions for adults and children during typical activities with both nasal and oral inhalation that may be used in the design of experimental studies and interpretation of health effects evidence. Methods We estimated the fraction of inhaled particles (0.5-20 μm aerodynamic diameter) penetrating beyond the larynx (based on experimental data) and ciliated airways (based on a mathematical model) for an adult male, adult female, and a 10 yr old child during typical daily activities and breathing patterns. Results Our estimates show less penetration of coarse particulate matter into the thoracic and gas exchange regions of the respiratory tract than current size-selective criteria. Of the parameters we evaluated, particle penetration into the lower respiratory tract was most dependent on route of breathing. For typical activity levels and breathing habits, we estimated a 50% cut-size for the thoracic fraction at an aerodynamic diameter of around 3 μm in adults and 5 μm in children, whereas current ambient and occupational criteria suggest a 50% cut-size of 10 μm. Conclusions By design, current size-selective sample criteria overestimate the mass of particles generally expected to penetrate into the lower respiratory tract to provide protection for individuals who may breathe orally. We provide estimates of thoracic and respirable fractions for a variety of breathing habits and activities that may benefit the design of experimental studies and interpretation of particle size-specific health effects. PMID:23575443

  6. No Evidence for a Relationship Between Hair Testosterone Concentrations and 2D:4D Ratio or Risk Taking

    PubMed Central

    Ronay, Richard; van der Meij, Leander; Oostrom, Janneke K.; Pollet, Thomas V.

    2018-01-01

    Using a recently developed alternative assay procedure to measure hormone levels from hair samples, we examined the relationships between testosterone, cortisol, 2D:4D ratio, overconfidence and risk taking. A total of 162 (53 male) participants provided a 3 cm sample of hair, a scanned image of their right and left hands from which we determined 2D:4D ratios, and completed measures of overconfidence and behavioral risk taking. While our sample size for males was less than ideal, our results revealed no evidence for a relationship between hair testosterone concentrations, 2D:4D ratios and risk taking. No relationships with overconfidence emerged. Partially consistent with the Dual Hormone Hypothesis, we did find evidence for the interacting effect of testosterone and cortisol on risk taking but only in men. Hair testosterone concentrations were positively related to risk taking when levels of hair cortisol concentrations were low, in men. Our results lend support to the suggestion that endogenous testosterone and 2D:4D ratio are unrelated and might then exert diverging activating vs. organizing effects on behavior. Comparing our results to those reported in the existing literature we speculate that behavioral correlates of testosterone such as direct effects on risk taking may be more sensitive to state-based fluctuations than baseline levels of testosterone. PMID:29556180

  7. Optimal flexible sample size design with robust power.

    PubMed

    Zhang, Lanju; Cui, Lu; Yang, Bo

    2016-08-30

    It is well recognized that sample size determination is challenging because of the uncertainty on the treatment effect size. Several remedies are available in the literature. Group sequential designs start with a sample size based on a conservative (smaller) effect size and allow early stop at interim looks. Sample size re-estimation designs start with a sample size based on an optimistic (larger) effect size and allow sample size increase if the observed effect size is smaller than planned. Different opinions favoring one type over the other exist. We propose an optimal approach using an appropriate optimality criterion to select the best design among all the candidate designs. Our results show that (1) for the same type of designs, for example, group sequential designs, there is room for significant improvement through our optimization approach; (2) optimal promising zone designs appear to have no advantages over optimal group sequential designs; and (3) optimal designs with sample size re-estimation deliver the best adaptive performance. We conclude that to deal with the challenge of sample size determination due to effect size uncertainty, an optimal approach can help to select the best design that provides most robust power across the effect size range of interest. Copyright © 2016 John Wiley & Sons, Ltd.

  8. Estimation of sample size and testing power (Part 4).

    PubMed

    Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo

    2012-01-01

    Sample size estimation is necessary for any experimental or survey research, and an appropriate estimate based on known information and statistical knowledge is of great significance. This article introduces methods of sample size estimation for difference tests under a design of one factor with two levels, including sample size estimation formulas and their realization both via the formulas and via the POWER procedure of SAS software, for quantitative and qualitative data. In addition, this article presents worked examples to guide researchers in implementing the repetition principle during the research design phase.
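    For the quantitative-data case under a one-factor, two-level design, the standard normal-approximation formula for a two-sided, two-sample comparison of means is n per group = 2((z_{1-α/2} + z_{1-β})σ/δ)². A minimal sketch with the conventional α = 0.05, power = 0.80 defaults; this is the textbook formula, and the article's SAS-based realization may differ in detail:

```python
import math


def n_per_group(delta, sigma, z_alpha=1.959964, z_beta=0.841621):
    """Per-group sample size for a two-sided two-sample comparison of means.

    Normal approximation; defaults correspond to alpha = 0.05 (two-sided,
    z_{0.975} = 1.959964) and power = 0.80 (z_{0.80} = 0.841621).
    delta is the difference in means to detect, sigma the common SD.
    """
    n = 2.0 * ((z_alpha + z_beta) * sigma / delta) ** 2
    return math.ceil(n)  # round up to a whole subject per group
```

    For example, detecting a mean difference of 5 with a common SD of 10 gives the familiar 63 subjects per group.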

  9. Species richness in soil bacterial communities: a proposed approach to overcome sample size bias.

    PubMed

    Youssef, Noha H; Elshahed, Mostafa S

    2008-09-01

    Estimates of species richness based on 16S rRNA gene clone libraries are increasingly utilized to gauge the level of bacterial diversity within various ecosystems. However, previous studies have indicated that regardless of the utilized approach, species richness estimates obtained are dependent on the size of the analyzed clone libraries. Here we propose an approach to overcome sample size bias in species richness estimates in complex microbial communities. Parametric (maximum likelihood-based and rarefaction curve-based) and non-parametric approaches were used to estimate species richness in a library of 13,001 near full-length 16S rRNA clones derived from soil, as well as in multiple subsets of the original library. Species richness estimates obtained increased with the increase in library size. To obtain a sample size-unbiased estimate of species richness, we calculated the theoretical clone library sizes required to encounter the estimated species richness at various clone library sizes, used curve fitting to determine the theoretical clone library size required to encounter the "true" species richness, and subsequently determined the corresponding sample size-unbiased species richness value. Using this approach, sample size-unbiased estimates of 17,230, 15,571, and 33,912 were obtained for the ML-based, rarefaction curve-based, and ACE-1 estimators, respectively, compared to bias-uncorrected values of 15,009, 11,913, and 20,909.
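    The extrapolation step can be illustrated with a simple stand-in for the paper's curve fitting: fit a saturating curve S(n) = S_max·n/(K + n) to richness estimates obtained at several library sizes, and read off the asymptote S_max as the sample size-unbiased richness. The linearized fit below (regressing 1/S on 1/n) is our illustrative choice, not necessarily the authors' model:

```python
def fit_asymptotic_richness(sizes, richness):
    """Extrapolate species richness to infinite sampling effort.

    Fits S(n) = S_max * n / (K + n), linearized as
    1/S = 1/S_max + (K/S_max) * (1/n), by ordinary least squares
    on the reciprocals. Returns the estimated asymptote S_max.
    Illustrative stand-in for the paper's curve-fitting step.
    """
    xs = [1.0 / n for n in sizes]
    ys = [1.0 / s for s in richness]
    m = len(xs)
    xbar, ybar = sum(xs) / m, sum(ys) / m
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    intercept = ybar - slope * xbar
    return 1.0 / intercept  # S_max = 1 / intercept
```

    With richness estimates that follow the saturating curve exactly, the fit recovers the asymptote; with real estimates it gives an extrapolated richness that no longer grows with library size.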

  10. The Effect of a Decision Aid on the Quality of Colorectal Cancer Screening Decisions

    DTIC Science & Technology

    2012-03-29

    between the pretest and the posttest mean scores in the educational intervention group [12]. A sample size of 100 in each group was estimated to...usual care (n = 140) or a video about colon cancer and screening options (n = 140). Participants in the intervention group received information on risks...CSPY .............................. 117 Table 12. Values Concordance Scores Associated with Choosing CSPY in Two Study Groups Based on Univariate

  11. Comparison of Sample Size by Bootstrap and by Formulas Based on Normal Distribution Assumption.

    PubMed

    Wang, Zuozhen

    2018-01-01

    The bootstrapping technique is distribution-independent, providing an indirect way to estimate the sample size for a clinical trial from a relatively small sample. In this paper, bootstrap sample size estimates for comparing two parallel-design arms on continuous data are presented for various test types (inequality, non-inferiority, superiority, and equivalence). Meanwhile, sample size calculations by mathematical formulas (under the normal distribution assumption) are also carried out for the same data. The resulting power difference between the two calculation methods is acceptably small for all the test types, showing that the bootstrap procedure is a credible technique for sample size estimation. After that, we compared the powers determined using the two methods on data that violate the normal distribution assumption. To accommodate the feature of these data, the nonparametric Wilcoxon test was applied to compare the two groups during bootstrap power estimation. As a result, the power estimated by the normal distribution-based formula is far larger than that estimated by bootstrap for each specific sample size per group. Hence, for this type of data, it is preferable that the bootstrap method be applied for sample size calculation from the beginning, and that the same statistical method as used in the subsequent statistical analysis be employed for each bootstrap sample during bootstrap sample size estimation, provided there are historical data available that are well representative of the population to which the proposed trial plans to extrapolate.
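    The core of such a bootstrap sample-size procedure can be sketched as: resample the pilot data with replacement at a candidate per-group size n, apply the planned test to each resample, and take the rejection rate as the power estimate at n. A minimal sketch using a normal-approximation two-sample test; the paper's exact procedure, test statistics, and stopping rule may differ:

```python
import math
import random


def bootstrap_power(pilot_a, pilot_b, n, reps=1000, z_crit=1.959964, seed=0):
    """Estimate power at per-group size n by bootstrap resampling.

    Each replicate draws n observations with replacement from each pilot
    arm and applies a normal-approximation (Welch-style) two-sample test
    at two-sided alpha = 0.05. The rejection rate estimates the power.
    Sketch of the idea; not the paper's exact algorithm.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        a = [rng.choice(pilot_a) for _ in range(n)]
        b = [rng.choice(pilot_b) for _ in range(n)]
        ma, mb = sum(a) / n, sum(b) / n
        va = sum((x - ma) ** 2 for x in a) / (n - 1)
        vb = sum((x - mb) ** 2 for x in b) / (n - 1)
        z = (ma - mb) / math.sqrt(va / n + vb / n)
        hits += abs(z) > z_crit
    return hits / reps
```

    To choose a sample size, one would increase n until the estimated power reaches the target (e.g., 0.80), swapping in a Wilcoxon test here when the planned analysis is nonparametric.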

  12. A risk assessment method for multi-site damage

    NASA Astrophysics Data System (ADS)

    Millwater, Harry Russell, Jr.

    This research focused on developing probabilistic methods suitable for computing small probabilities of failure, e.g., 10^-6, of structures subject to multi-site damage (MSD). MSD is defined as the simultaneous development of fatigue cracks at multiple sites in the same structural element such that the fatigue cracks may coalesce to form one large crack. MSD is modeled as an array of collinear cracks with random initial crack lengths with the centers of the initial cracks spaced uniformly apart. The data used was chosen to be representative of aluminum structures. The structure is considered failed whenever any two adjacent cracks link up. A fatigue computer model is developed that can accurately and efficiently grow a collinear array of arbitrary length cracks from initial size until failure. An algorithm is developed to compute the stress intensity factors of all cracks considering all interaction effects. The probability of failure of two to 100 cracks is studied. Lower bounds on the probability of failure are developed based upon the probability of the largest crack exceeding a critical crack size. The critical crack size is based on the initial crack size that will grow across the ligament when the neighboring crack has zero length. The probability is evaluated using extreme value theory. An upper bound is based on the probability of the maximum sum of initial cracks being greater than a critical crack size. A weakest link sampling approach is developed that can accurately and efficiently compute small probabilities of failure. This methodology is based on predicting the weakest link, i.e., the two cracks to link up first, for a realization of initial crack sizes, and computing the cycles-to-failure using these two cracks. Criteria to determine the weakest link are discussed. Probability results using the weakest link sampling method are compared to Monte Carlo-based benchmark results. The results indicate that very small probabilities can be computed accurately in a few minutes using a Hewlett-Packard workstation.
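    The extreme-value lower bound described above has a simple closed form: for k i.i.d. initial cracks with CDF F, the probability that the largest crack already exceeds the critical size a_crit is 1 - F(a_crit)^k. A sketch assuming lognormal initial crack sizes with illustrative parameters, not the dissertation's data:

```python
import math


def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))


def lower_bound_pof(k, a_crit, mu=-4.0, sigma=0.5):
    """Extreme-value lower bound on the MSD probability of failure.

    Bound = P(largest of k i.i.d. lognormal initial cracks > a_crit)
          = 1 - F(a_crit) ** k.
    mu and sigma (log-scale mean and SD of initial crack size) are
    illustrative placeholders, not values from the dissertation.
    """
    F = phi((math.log(a_crit) - mu) / sigma)  # P(one crack <= a_crit)
    return 1.0 - F ** k                       # P(max of k cracks > a_crit)
```

    The bound grows with the number of crack sites k, which matches the intuition that adding collinear crack sites can only increase the chance that some initial crack is already near-critical.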

  13. Female Sexual Victimization Among College Students: Assault Severity, Health Risk Behaviors, and Sexual Functioning.

    PubMed

    Turchik, Jessica A; Hassija, Christina M

    2014-09-01

    The purpose of the present study was to examine the relationship between college women's sexual victimization experiences, health risk behaviors, and sexual functioning. A sample of 309 college women at a mid-sized Midwestern university completed measures assessing sexual victimization, sexual risk taking, substance use behaviors, sexual desire, sexual functioning, prior sexual experiences, and social desirability. Severity of sexual victimization was measured using a multi-item, behaviorally specific, gender-neutral measure, which was divided into four categories based on severity (none, sexual contact, sexual coercion, rape). Within the sample, 72.8% (n = 225) of women reported at least one experience of sexual victimization since age 16. Results from MANCOVAs and a multinomial logistic regression, controlling for social desirability and prior sexual experience, revealed that sexual victimization among female students was related to increased drug use, problematic drinking behaviors, sexual risk taking, sexual dysfunction, and dyadic sexual desire. In addition, findings indicated that women exposed to more severe forms of sexual victimization (i.e., rape) were most likely to report these risk-taking behaviors and sexual functioning issues. Implications for sexual assault risk reduction programming and treatment are discussed. © The Author(s) 2014.

  14. Identification of missing variants by combining multiple analytic pipelines.

    PubMed

    Ren, Yingxue; Reddy, Joseph S; Pottier, Cyril; Sarangi, Vivekananda; Tian, Shulan; Sinnwell, Jason P; McDonnell, Shannon K; Biernacka, Joanna M; Carrasquillo, Minerva M; Ross, Owen A; Ertekin-Taner, Nilüfer; Rademakers, Rosa; Hudson, Matthew; Mainzer, Liudmila Sergeevna; Asmann, Yan W

    2018-04-16

    After decades of identifying risk factors using array-based genome-wide association studies (GWAS), genetic research of complex diseases has shifted to sequencing-based rare variants discovery. This requires large sample sizes for statistical power and has brought up questions about whether the current variant calling practices are adequate for large cohorts. It is well-known that there are discrepancies between variants called by different pipelines, and that using a single pipeline always misses true variants exclusively identifiable by other pipelines. Nonetheless, it is common practice today to call variants by one pipeline due to computational cost and assume that false negative calls are a small percent of total. We analyzed 10,000 exomes from the Alzheimer's Disease Sequencing Project (ADSP) using multiple analytic pipelines consisting of different read aligners and variant calling strategies. We compared variants identified by using two aligners in 50, 100, 200, 500, 1000, and 1952 samples; and compared variants identified by adding single-sample genotyping to the default multi-sample joint genotyping in 50, 100, 500, 2000, 5000, and 10,000 samples. We found that using a single pipeline missed increasing numbers of high-quality variants correlated with sample sizes. By combining two read aligners and two variant calling strategies, we rescued 30% of pass-QC variants at sample size of 2000, and 56% at 10,000 samples. The rescued variants had higher proportions of low frequency (minor allele frequency [MAF] 1-5%) and rare (MAF < 1%) variants, which are the very type of variants of interest. In 660 Alzheimer's disease cases with earlier onset ages of ≤65, 4 out of 13 (31%) previously-published rare pathogenic and protective mutations in APP, PSEN1, and PSEN2 genes were undetected by the default one-pipeline approach but recovered by the multi-pipeline approach. Identification of the complete variant set from sequencing data is the prerequisite of genetic association analyses. The current analytic practice of calling genetic variants from sequencing data using a single bioinformatics pipeline is no longer adequate with the increasingly large projects. The number and percentage of quality variants that passed quality filters but are missed by the one-pipeline approach rapidly increased with sample size.
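    The rescue effect of combining pipelines is, at its core, a set-union calculation over variant keys. A toy sketch in which variants are keyed by (chrom, pos, ref, alt); the keys and numbers are illustrative, not ADSP calls:

```python
def rescued_fraction(calls_a, calls_b):
    """Fraction of the combined call set that pipeline A alone would miss.

    calls_a and calls_b are sets of variant keys, e.g. (chrom, pos, ref,
    alt) tuples, from two analytic pipelines. Toy illustration of the
    multi-pipeline rescue idea, not the paper's QC-aware comparison.
    """
    union = calls_a | calls_b     # combined (multi-pipeline) call set
    missed = union - calls_a      # variants only pipeline B found
    return len(missed) / len(union)
```

    In the paper this fraction of rescued pass-QC variants grew with cohort size, reaching 56% at 10,000 samples.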

  15. Reexamining Sample Size Requirements for Multivariate, Abundance-Based Community Research: When Resources are Limited, the Research Does Not Have to Be.

    PubMed

    Forcino, Frank L; Leighton, Lindsey R; Twerdy, Pamela; Cahill, James F

    2015-01-01

    Community ecologists commonly perform multivariate techniques (e.g., ordination, cluster analysis) to assess patterns and gradients of taxonomic variation. A critical requirement for a meaningful statistical analysis is accurate information on the taxa found within an ecological sample. However, oversampling (too many individuals counted per sample) also comes at a cost, particularly for ecological systems in which identification and quantification is substantially more resource consuming than the field expedition itself. In such systems, an increasingly larger sample size will eventually result in diminishing returns in improving any pattern or gradient revealed by the data, but will also lead to continually increasing costs. Here, we examine 396 datasets: 44 previously published and 352 created datasets. Using meta-analytic and simulation-based approaches, the research within the present paper seeks (1) to determine minimal sample sizes required to produce robust multivariate statistical results when conducting abundance-based, community ecology research. Furthermore, we seek (2) to determine the dataset parameters (i.e., evenness, number of taxa, number of samples) that require larger sample sizes, regardless of resource availability. We found that in the 44 previously published and the 220 created datasets with randomly chosen abundances, a conservative estimate of a sample size of 58 produced the same multivariate results as all larger sample sizes. However, this minimal number varies as a function of evenness, where increased evenness resulted in increased minimal sample sizes. Sample sizes as small as 58 individuals are sufficient for a broad range of multivariate abundance-based research. In cases when resource availability is the limiting factor for conducting a project (e.g., small university, time to conduct the research project), statistically viable results can still be obtained with less of an investment.

  16. Prevalence and Associated Risk Factors of Dyslexic Children in a Middle-Sized City of China: A Cross-Sectional Study

    PubMed Central

    Zhang, Jiajia; Mo, Shengnan; Shao, Shanshan; Zhong, Rong; Ke, Juntao; Lu, Xuzai; Miao, Xiaoping; Song, Ranran

    2013-01-01

    Background There are many discussions about dyslexia based on studies conducted in western countries, and some risk factors for dyslexia, such as gender and home literacy environment, have been widely accepted based on these studies. However, to our knowledge, few studies have focused on the risk factors for dyslexia in China. Therefore, the aim of our study was to investigate the prevalence of dyslexia and its potential risk factors. Methods A cross-sectional study was conducted in Qianjiang, a city in Hubei province, China. A two-stage sampling strategy was applied to randomly select 5 districts and 9 primary schools in Qianjiang. In total, 6,350 students participated in this study, and 5,063 valid student questionnaires were obtained for the final analyses. Additional questionnaires (such as the Dyslexia Checklist for Chinese Children and the Pupil Rating Scale) were used to identify dyslexic children. The chi-square test and multivariate logistic regression were employed to reveal potential risk factors for dyslexia. Results Our study revealed that the prevalence of dyslexia was 3.9% in Qianjiang, a middle-sized city in China. Among dyslexic children, the ratio of boys to girls was nearly 3:1. According to the P-values in the multivariate logistic regression, gender (P<0.01), mother's education level (P<0.01), and learning habits (P<0.01) (active learning, scheduled reading time) were associated with dyslexia. Conclusion The prevalence of dyslexia among children in this middle-sized city is 3.9%. The potential risk factors revealed in this study should aid the early detection and treatment of dyslexic children in China, although more studies are needed to further investigate these risk factors. PMID:23457604

  17. What controls the maximum magnitude of injection-induced earthquakes?

    NASA Astrophysics Data System (ADS)

    Eaton, D. W. S.

    2017-12-01

    Three different approaches for estimation of maximum magnitude are considered here, along with their implications for managing risk. The first approach is based on a deterministic limit for seismic moment proposed by McGarr (1976), which was originally designed for application to mining-induced seismicity. This approach has since been reformulated for earthquakes induced by fluid injection (McGarr, 2014). In essence, this method assumes that the upper limit for seismic moment release is constrained by the pressure-induced stress change. A deterministic limit is given by the product of the shear modulus and the net injected fluid volume. This method is based on the assumptions that the medium is fully saturated and in a state of incipient failure. An alternative geometrical approach was proposed by Shapiro et al. (2011), who postulated that the rupture area for an induced earthquake falls entirely within the stimulated volume. This assumption reduces the maximum-magnitude problem to one of estimating the largest potential slip surface area within a given stimulated volume. Finally, van der Elst et al. (2016) proposed that the maximum observed magnitude is, statistically speaking, the expected maximum value for a finite sample drawn from an unbounded Gutenberg-Richter distribution. These three models imply different approaches to risk management. The deterministic method of McGarr (2014) implies that a ceiling on the maximum magnitude can be imposed by limiting the net injected volume, whereas the approach of Shapiro et al. (2011) implies that the time-dependent maximum magnitude is governed by the spatial size of the microseismic event cloud. Finally, the sample-size hypothesis of van der Elst et al. (2016) implies that the best available estimate of the maximum magnitude is based upon the observed seismicity rate. The latter two approaches suggest that real-time monitoring is essential for effective risk management. A reliable estimate of the maximum plausible magnitude would clearly be beneficial for quantitative risk assessment of injection-induced seismicity.
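
    The deterministic bound attributed to McGarr (2014) above can be written as M0,max = G·ΔV (shear modulus times net injected volume) and converted to moment magnitude with the standard Hanks-Kanamori relation Mw = (2/3)(log10 M0 - 9.1). A minimal sketch with illustrative, non-site-specific values:

```python
import math

def mcgarr_max_moment(shear_modulus_pa, injected_volume_m3):
    """Deterministic upper bound on seismic moment (McGarr, 2014): M0 = G * dV."""
    return shear_modulus_pa * injected_volume_m3

def moment_magnitude(m0_newton_metres):
    """Hanks-Kanamori relation: Mw = (2/3) * (log10(M0) - 9.1), M0 in N*m."""
    return (2.0 / 3.0) * (math.log10(m0_newton_metres) - 9.1)

# Illustrative values: G = 30 GPa (typical crustal rock), 10,000 m^3 net injection.
m0 = mcgarr_max_moment(30e9, 1.0e4)   # 3e14 N*m
print(round(moment_magnitude(m0), 2))
```

    For these values the ceiling is Mw ≈ 3.6; because the scale is logarithmic, halving the injected volume lowers the ceiling by only about 0.2 magnitude units.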

  18. Characterizing the genetic influences on risk aversion.

    PubMed

    Harrati, Amal

    2014-01-01

    Risk aversion has long been cited as an important factor in retirement decisions, investment behavior, and health. Some of the heterogeneity in individual risk tolerance is well understood, reflecting age gradients, wealth gradients, and similar effects, but much remains unexplained. This study explores genetic contributions to heterogeneity in risk aversion among older Americans. Using over 2 million genetic markers per individual from the U.S. Health and Retirement Study, I report results from a genome-wide association study (GWAS) on risk preferences using a sample of 10,455 adults. None of the single-nucleotide polymorphisms (SNPs) are found to be statistically significant determinants of risk preferences at levels stricter than 5 × 10^-8. These results suggest that risk aversion is a complex trait that is highly polygenic. The analysis leads to upper bounds on the number of genetic effects that could exceed certain thresholds of significance and still remain undetected at the current sample size. The findings suggest that the known heritability in risk aversion is likely to be driven by large numbers of genetic variants, each with a small effect size.
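
    The 5 × 10^-8 genome-wide significance threshold used above is conventionally motivated as a Bonferroni correction of a 0.05 family-wise error rate across roughly one million effectively independent common variants (a simplified sketch of the rationale, not the study's exact multiple-testing procedure):

```python
def bonferroni_threshold(alpha, n_tests):
    """Per-test p-value cutoff that keeps the family-wise error rate at alpha."""
    return alpha / n_tests

# Roughly one million effectively independent tests across the genome
# motivates the conventional genome-wide threshold of 5e-8.
print(bonferroni_threshold(0.05, 1_000_000))
```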

  19. Distress Tolerance and Social Support in Adolescence: Predicting Risk for Internalizing and Externalizing Symptoms Following a Natural Disaster

    PubMed Central

    Cohen, Joseph R.; Danielson, Carla Kmett; Adams, Zachary W.; Ruggiero, Kenneth J.

    2016-01-01

    The purpose of this multi-measure, multi-wave, longitudinal study was to examine the interactive relation between behavioral distress tolerance (DT) and perceived social support (PSS) in 352 tornado-exposed adolescents aged 12–17 years (M=14.44; SD=1.74). At baseline, adolescents completed a computer-based task for DT and self-report measures of PSS, depressed mood, posttraumatic stress disorder (PTSD), substance use, and interpersonal conflict. Symptoms also were assessed 4 and 12 months after baseline. Findings showed that lower levels of DT together with lower levels of PSS conferred risk for elevated prospective symptoms of depression (t(262)= −2.04, p=.04; r effect size=0.13) and PTSD (t(195)= −2.08, p=.04; r effect size=0.15) following a tornado. However, only PSS was a significant predictor of substance use (t(139)=2.20, p=.03; r effect size=0.18) and conflict (t(138)=−4.05, p<.0001; r effect size=0.33) in our sample. Implications regarding adolescent DT, the transdiagnostic nature of PSS, and the clinical applications of our findings in the aftermath of a natural disaster are discussed. PMID:28163364
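
    The r effect sizes quoted above can be recovered from each reported t statistic and its degrees of freedom via the standard conversion r = sqrt(t² / (t² + df)); a quick check against the depression and PTSD results (taking df from the t(df) notation):

```python
import math

def r_from_t(t, df):
    """Convert a t statistic with df degrees of freedom to an r effect size."""
    return math.sqrt(t * t / (t * t + df))

# Depression result: t(262) = -2.04 -> r effect size approx 0.13
print(round(r_from_t(-2.04, 262), 2))
# PTSD result: t(195) = -2.08 -> r effect size approx 0.15
print(round(r_from_t(-2.08, 195), 2))
```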

  20. Sleep disturbances as an evidence-based suicide risk factor.

    PubMed

    Bernert, Rebecca A; Kim, Joanne S; Iwata, Naomi G; Perlis, Michael L

    2015-03-01

    Increasing research indicates that sleep disturbances may confer increased risk for suicidal behaviors, including suicidal ideation, suicide attempts, and death by suicide. Despite increased investigation, a number of methodological problems present important limitations to the validity and generalizability of findings in this area, which warrant additional focus. To evaluate and delineate sleep disturbances as an evidence-based suicide risk factor, a systematic review of the extant literature was conducted with methodological considerations as a central focus. The following methodologic criteria were required for inclusion: the report (1) evaluated an index of sleep disturbance; (2) examined an outcome measure for suicidal behavior; (3) adjusted for presence of a depression diagnosis or depression severity, as a covariate; and (4) represented an original investigation as opposed to a chart review. Reports meeting inclusion criteria were further classified and reviewed according to: study design and timeframe; sample type and size; sleep disturbance, suicide risk, and depression covariate assessment measure(s); and presence of positive versus negative findings. Based on keyword search, the following search engines were used: PubMed and PsycINFO. Search criteria generated N = 82 articles representing original investigations focused on sleep disturbances and suicide outcomes. Of these, N = 18 met inclusion criteria for review based on systematic analysis. Of the reports identified, N = 18 evaluated insomnia or poor sleep quality symptoms, whereas N = 8 assessed nightmares in association with suicide risk. Despite considerable differences in study designs, samples, and assessment techniques, the comparison of such reports indicates preliminary, converging evidence for sleep disturbances as an empirical risk factor for suicidal behaviors, while highlighting important, future directions for increased investigation.

  1. The prevalence of metabolic syndrome in postmenopausal women: A systematic review and meta-analysis in Iran.

    PubMed

    Ebtekar, Fariba; Dalvand, Sahar; Gheshlagh, Reza Ghanei

    2018-06-06

    Metabolic syndrome is a set of cardiovascular risk factors that increase the risk of cardiovascular disease, diabetes and mortality. Women are at risk of developing metabolic syndrome as they enter the postmenopausal period. The present systematic review and meta-analysis was conducted to estimate the prevalence of metabolic syndrome in Iranian postmenopausal women. In this systematic review and meta-analysis, 16 national articles published in Persian and English were gathered without time limit. National databases such as SID, IranMedex and MagIran, and international databases such as Web of Science, Google Scholar, PubMed and Scopus were used to search the relevant studies. We searched for articles using the keywords "menopause", "postmenopausal", "metabolic syndrome", "MetSyn", and their combinations. Data were analyzed using the meta-analysis method and the random effects model. Analysis of 16 selected articles with a sample size of 5893 people showed that the prevalence of metabolic syndrome in Iranian postmenopausal women was 51.6% (95% CI: 43-60). The prevalence of metabolic syndrome based on ATP III and IDF criteria was 54% (95% CI: 59-63) and 50% (95% CI: 45-56), respectively. Based on the results of univariate meta-regression analysis, the prevalence of metabolic syndrome increased significantly with increasing mean age of postmenopausal women (p = 0.001) and sample size (p = 0.029). More than half of postmenopausal women in Iran suffer from metabolic syndrome. Providing training programs for postmenopausal women to prevent and control cardiovascular disease and its complications seems to be necessary. Copyright © 2018. Published by Elsevier Ltd.

  2. Effect of Sampling Plans on the Risk of Escherichia coli O157 Illness.

    PubMed

    Kiermeier, Andreas; Sumner, John; Jenson, Ian

    2015-07-01

    Australia exports about 150,000 to 200,000 tons of manufacturing beef to the United States annually. Each lot is tested for Escherichia coli O157 using the N-60 sampling protocol, where 60 small pieces of surface meat from each lot of production are tested. A risk assessment of E. coli O157 illness from the consumption of hamburgers made from Australian manufacturing meat formed the basis to evaluate the effect of sample size and amount on the number of illnesses predicted. The sampling plans evaluated included no sampling (resulting in an estimated 55.2 illnesses per annum), the current N-60 plan (50.2 illnesses), N-90 (49.6 illnesses), N-120 (48.4 illnesses), and a more stringent N-60 sampling plan taking five 25-g samples from each of 12 cartons (47.4 illnesses per annum). While sampling may detect some highly contaminated lots, it does not guarantee that all such lots are removed from commerce. It is concluded that increasing the sample size or sample amount from the current N-60 plan would have a very small public health effect.
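
    The modest gains from larger sampling plans can be seen in a simple per-lot detection model: with n independent samples and a fixed per-sample probability p of detecting contamination, the lot-level detection probability is 1 - (1 - p)^n. This is an illustrative sketch, not the dose-response risk assessment used in the study:

```python
def detection_probability(per_sample_prob, n_samples):
    """P(at least one positive) among n independent samples of a lot."""
    return 1.0 - (1.0 - per_sample_prob) ** n_samples

# Hypothetical 1% chance that any single sample detects the contamination.
for n in (60, 90, 120):
    print(n, round(detection_probability(0.01, n), 3))
```

    At a hypothetical p = 1%, moving from 60 to 120 samples raises lot-level detection from roughly 45% to roughly 70%: better, but far from a guarantee that all contaminated lots are removed from commerce, consistent with the abstract's conclusion.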

  3. Grain Size Distribution and Health Risk Assessment of Metals in Outdoor Dust in Chengdu, Southwestern China.

    PubMed

    Chen, Mengqin; Pi, Lu; Luo, Yan; Geng, Meng; Hu, Wenli; Li, Zhi; Su, Shijun; Gan, Zhiwei; Ding, Sanglan

    2016-04-01

    A total of 27 outdoor dust samples from roads, parks, and high spots were collected and analyzed to investigate the contamination of 11 metals (Cr, Mn, Co, Ni, Cu, Zn, As, Sr, Cd, Sb, and Pb) in Chengdu, China. The results showed that the samples from the high spots exhibited the highest heavy metal levels compared with those from the roads and the parks, except for Ni, Cu, and Pb. The dust was classified into five grain size fractions. The mean loads of each grain size fraction of the 11 determined metals displayed similar distributions, and the contribution of the median size (63-125, 125-250, 250-500 μm) fractions accounted for more than 70% of overall heavy metal loads. The health risk posed by the determined metals to humans via dust ingestion, dermal contact, and inhalation was investigated. Oral and respiratory bioaccessible fractions of the metals in dust were extracted using simulated stomach solution and composite lung serum. The mean bioaccessibilities of the 11 investigated metals in the gastric solution were much higher than those in the composite lung serum, especially for Zn, Cd, and Pb. Ingestion was the most important exposure pathway, accounting for more than 70% of exposure for both children and adults. Risk evaluation results illustrated that children in Chengdu might suffer noncarcinogenic risk when exposed to outdoor dust. Given that the cancer risk values of Pb and Cr were larger than 1 × 10^-4, potential carcinogenic risk might occur for Chengdu residents through outdoor dust intake.

  4. Stalkers and harassers of British royalty: an exploration of proxy behaviours for violence.

    PubMed

    James, David V; Mullen, Paul E; Meloy, J Reid; Pathé, Michele T; Preston, Lulu; Darnley, Brian; Farnham, Frank R; Scalora, Mario J

    2011-01-01

    Study of risk factors for violence to prominent people is difficult because of low base rates. This study of harassers of the royal family examined factors suggested in the literature as proxies for violence--breaching security barriers, achieving proximity, approach with a weapon, and approach with homicidal ideation. A stratified sample of different types of approach behaviour was randomly extracted from 2,332 Royalty Protection Police files, which had been divided into behavioural types. The final sample size was 275. Significant differences in illness symptomatology and motivation were found for each proxy group. Querulants were significantly over-represented in three of the four groups. There was generally little overlap between the proxy groups. There is no evidence of the proxy items examined being part of a "pathway to violence". Different motivations may be associated with different patterns of risk. Risk assessment must incorporate knowledge of the interactions between motivation, mental state, and behaviour. Copyright © 2010 John Wiley & Sons, Ltd.

  5. Designing trials for pressure ulcer risk assessment research: methodological challenges.

    PubMed

    Balzer, K; Köpke, S; Lühmann, D; Haastert, B; Kottner, J; Meyer, G

    2013-08-01

    For decades various pressure ulcer risk assessment scales (PURAS) have been developed and implemented into nursing practice despite uncertainty whether use of these tools helps to prevent pressure ulcers. According to current methodological standards, randomised controlled trials (RCTs) are required to conclusively determine the clinical efficacy and safety of this risk assessment strategy. In these trials, PURAS-aided risk assessment has to be compared to nurses' clinical judgment alone in terms of its impact on pressure ulcer incidence and adverse outcomes. However, RCTs evaluating diagnostic procedures are prone to specific risks of bias and threats to statistical power which may challenge their validity and feasibility. This discussion paper critically reflects on the rigour and feasibility of experimental research needed to substantiate the clinical efficacy of PURAS-aided risk assessment. Based on reflections from the methodological literature, a critical appraisal of available trials on this subject, and an analysis of a protocol developed for a methodologically robust cluster-RCT, this paper arrives at the following conclusions: First, available trials do not provide reliable estimates of the impact of PURAS-aided risk assessment on pressure ulcer incidence compared to nurses' clinical judgment alone, due to serious risks of bias and insufficient sample size. Second, it seems infeasible to assess this impact by means of rigorous experimental studies, since the sample size would become extremely high if likely threats to validity and power are properly taken into account. Third, evidence linkage currently seems the most promising approach for evaluating the clinical efficacy and safety of PURAS-aided risk assessment. With this kind of secondary research, the downstream effect of use of PURAS on pressure ulcer incidence could be modelled by combining the best available evidence for the single parts of this pathway. However, to yield reliable modelling results, more robust experimental research evaluating specific parts of the pressure ulcer risk assessment-prevention pathway is needed. Copyright © 2013 Elsevier Ltd. All rights reserved.

  6. Analytical approaches for the characterization and quantification of nanoparticles in food and beverages.

    PubMed

    Mattarozzi, Monica; Suman, Michele; Cascio, Claudia; Calestani, Davide; Weigel, Stefan; Undas, Anna; Peters, Ruud

    2017-01-01

    Estimating consumer exposure to nanomaterials (NMs) in food products and predicting their toxicological properties are necessary steps in the assessment of the risks of this technology. To this end, analytical methods have to be available to detect, characterize and quantify NMs in food and materials related to food, e.g. food packaging and biological samples following metabolization of food. The challenge for the analytical sciences is that the characterization of NMs requires chemical as well as physical information. This article offers a comprehensive analysis of methods available for the detection and characterization of NMs in food and related products. Special attention was paid to the crucial role of sample preparation methods since these have been partially neglected in the scientific literature so far. The currently available instrumental methods are grouped as fractionation, counting and ensemble methods, and their advantages and limitations are discussed. We conclude that much progress has been made over the last 5 years but that many challenges still exist. Future perspectives and priority research needs are pointed out. Graphical abstract: two possible analytical strategies for the sizing and quantification of nanoparticles: (1) asymmetric flow field-flow fractionation with multiple detectors, which allows determination of true size and a mass-based particle size distribution; and (2) single-particle inductively coupled plasma mass spectrometry, which allows determination of a spherical-equivalent particle diameter and a number-based particle size distribution.

  7. PSP toxin levels and plankton community composition and abundance in size-fractionated vertical profiles during spring/summer blooms of the toxic dinoflagellate Alexandrium fundyense in the Gulf of Maine and on Georges Bank, 2007, 2008, and 2010: 2. Plankton community composition and abundance.

    PubMed

    Petitpas, Christian M; Turner, Jefferson T; Deeds, Jonathan R; Keafer, Bruce A; McGillicuddy, Dennis J; Milligan, Peter J; Shue, Vangie; White, Kevin D; Anderson, Donald M

    2014-05-01

    As part of the Gulf of Maine Toxicity (GOMTOX) project, we determined Alexandrium fundyense abundance, paralytic shellfish poisoning (PSP) toxin levels in various plankton size fractions, and the community composition of potential grazers of A. fundyense in plankton size fractions during blooms of this toxic dinoflagellate in the coastal Gulf of Maine and on Georges Bank in spring and summer of 2007, 2008, and 2010. PSP toxins and A. fundyense cells were found throughout the sampled water column (down to 50 m) in the 20-64 μm size fractions. While PSP toxins were widespread throughout all size classes of the zooplankton grazing community, the majority of the toxin was measured in the 20-64 μm size fraction. A. fundyense cellular toxin content estimated from field samples was significantly higher in the coastal Gulf of Maine than on Georges Bank. Most samples containing PSP toxins in the present study had diverse assemblages of grazers. However, some samples clearly suggested PSP toxin accumulation in several different grazer taxa including tintinnids, heterotrophic dinoflagellates of the genus Protoperidinium, barnacle nauplii, the harpacticoid copepod Microsetella norvegica, the calanoid copepods Calanus finmarchicus and Pseudocalanus spp., the marine cladoceran Evadne nordmanni, and hydroids of the genus Clytia. Thus, a diverse assemblage of zooplankton grazers accumulated PSP toxins through food-web interactions. This raises the question of whether PSP toxins pose a potential human health risk not only from nearshore bivalve shellfish, but also potentially from fish and other upper-level consumers in zooplankton-based pelagic food webs.

  8. Application of laboratory reflectance spectroscopy to target and map expansive soils: example of the western Loiret, France

    NASA Astrophysics Data System (ADS)

    Hohmann, Audrey; Dufréchou, Grégory; Grandjean, Gilles; Bourguignon, Anne

    2014-05-01

    Swelling soils contain clay minerals that change volume with water content, causing extensive and expensive damage to infrastructure. Based on the spatial distribution of infrastructure damage and existing geological maps, the Bureau de Recherches Géologiques et Minières (BRGM, the French Geological Survey) published in 2010 a 1:50 000 swelling hazard map of France, indexing the territory as low, moderate, or high swelling risk. This study aims to use SWIR (1100-2500 nm) reflectance spectra of soils, acquired under controlled laboratory conditions, to estimate the swelling potential of soils and improve the swelling risk map of France. 332 samples were collected west of Orléans (France) in various geological formations and swelling risk areas. Comparison of the swelling potential of the soil samples with the swelling risk areas of the map showed several inconsistent associations, confirming the need to redraw the current swelling risk map of France. New swelling risk maps of the sampling area were produced from the soil samples using three interpolation methods. Maps produced using kriging and natural-neighbour interpolation did not resolve discrete lithological units, introduced unsupported swelling risk zones, and did not appear useful for refining the swelling risk map of France. Voronoi polygons were also used to produce a map in which the swelling potential estimated from each sample was extrapolated to a polygon, so that every polygon was supported by field information. Of the methods tested here, the Voronoi-polygon approach thus appears the most suitable for producing expansive-soil maps. However, polygon size is highly dependent on sample spacing, and a sample may not be representative of its entire polygon. More samples are thus needed to provide a reliable map at the scale of the sampling area. Soils were also sampled along two sections, with sampling intervals of ca. 260 m and ca. 50 m. The 50 m interval appears better suited to mapping the smallest lithological units, and several nearby samples indicating the same swelling potential are a good indication of a zone of constant swelling potential. The combination of the Voronoi method and a sampling interval of ca. 50 m appears appropriate for producing local swelling-potential maps in areas where doubt remains or where infrastructure damage attributed to expansive soils is known.

  9. Developing a novel risk prediction model for severe malarial anemia.

    PubMed

    Brickley, E B; Kabyemela, E; Kurtis, J D; Fried, M; Wood, A M; Duffy, P E

    2017-01-01

    As a pilot study to investigate whether personalized medicine approaches could have value for the reduction of malaria-related mortality in young children, we evaluated questionnaire and biomarker data collected from the Mother Offspring Malaria Study Project birth cohort (Muheza, Tanzania, 2002-2006) at the time of delivery as potential prognostic markers for pediatric severe malarial anemia. Severe malarial anemia, defined here as a Plasmodium falciparum infection accompanied by hemoglobin levels below 50 g/L, is a key manifestation of life-threatening malaria in high transmission regions. For this study sample, a prediction model incorporating cord blood levels of interleukin-1β provided the strongest discrimination of severe malarial anemia risk with a C-index of 0.77 (95% CI 0.70-0.84), whereas a pragmatic model based on sex, gravidity, transmission season at delivery, and bed net possession yielded a more modest C-index of 0.63 (95% CI 0.54-0.71). Although additional studies, ideally incorporating larger sample sizes and higher event per predictor ratios, are needed to externally validate these prediction models, the findings provide proof of concept that risk score-based screening programs could be developed to avert severe malaria cases in early childhood.
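
    The C-index used to compare the two prediction models above is the probability that a randomly chosen case receives a higher risk score than a randomly chosen non-case (0.5 = no discrimination, 1.0 = perfect). A minimal all-pairs sketch on made-up data, ignoring the censoring that survival-analysis C-indices must handle:

```python
def c_index(scores, outcomes):
    """Concordance: fraction of (case, non-case) pairs ranked correctly.

    Ties in score count as half-concordant.
    """
    concordant = 0.0
    pairs = 0
    for si, yi in zip(scores, outcomes):
        for sj, yj in zip(scores, outcomes):
            if yi == 1 and yj == 0:          # one case paired with one non-case
                pairs += 1
                if si > sj:
                    concordant += 1.0
                elif si == sj:
                    concordant += 0.5
    return concordant / pairs

# Hypothetical risk scores and binary severe-anemia outcomes (illustrative only).
scores = [0.9, 0.7, 0.6, 0.4, 0.2]
outcomes = [1, 0, 1, 0, 0]
print(c_index(scores, outcomes))
```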

  10. Exposure to fluoride in drinking water and hip fracture risk: a meta-analysis of observational studies.

    PubMed

    Yin, Xin-Hai; Huang, Guang-Lei; Lin, Du-Ren; Wan, Cheng-Cheng; Wang, Ya-Dong; Song, Ju-Kun; Xu, Ping

    2015-01-01

    Many observational studies have shown that exposure to fluoride in drinking water is associated with hip fracture risk. However, the findings are varied or even contradictory. In this work, we performed a meta-analysis to assess the relationship between fluoride exposure and hip fracture risk. PubMed and EMBASE databases were searched to identify relevant observational studies from the time of inception until March 2014 without restrictions. Data from the included studies were extracted and analyzed by two authors. Summary relative risks (RRs) with corresponding 95% confidence intervals (CIs) were pooled using random- or fixed-effects models as appropriate. Sensitivity analyses and meta-regression were conducted to explore possible explanations for heterogeneity. Finally, publication bias was assessed. Fourteen observational studies involving thirteen cohort studies and one case-control study were included in the meta-analysis. Exposure to fluoride in drinking water does not significantly increase the incidence of hip fracture (RRs, 1.05; 95% CIs, 0.96-1.15). Sensitivity analyses based on adjustment for covariates, effect measure, country, sex, sample size, quality of Newcastle-Ottawa Scale scores, and follow-up period validated the strength of the results. Meta-regression showed that country, gender, quality of Newcastle-Ottawa Scale scores, adjustment for covariates and sample size were not sources of heterogeneity. Little evidence of publication bias was observed. The present meta-analysis suggests that chronic fluoride exposure from drinking water does not significantly increase the risk of hip fracture. Given the potential confounding factors and exposure misclassification, further large-scale, high-quality studies are needed to evaluate the association between exposure to fluoride in drinking water and hip fracture risk.
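
    The pooling step described above, in its simpler fixed-effect form, combines study-level log relative risks weighted by inverse variance, with each study's variance recovered from its reported 95% CI. A minimal sketch using made-up study inputs, not the meta-analysis's actual data:

```python
import math

def pooled_rr(rrs, cis):
    """Fixed-effect inverse-variance pooling of relative risks.

    rrs: study point estimates; cis: matching (lower, upper) 95% CIs.
    Pooling is done on the log scale; SE = CI log-width / (2 * 1.96).
    """
    log_rrs = [math.log(rr) for rr in rrs]
    variances = [((math.log(u) - math.log(l)) / (2 * 1.96)) ** 2 for l, u in cis]
    weights = [1.0 / v for v in variances]
    pooled_log = sum(w * x for w, x in zip(weights, log_rrs)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return (math.exp(pooled_log),
            math.exp(pooled_log - 1.96 * se),
            math.exp(pooled_log + 1.96 * se))

# Hypothetical studies (illustrative only).
rr, lo, hi = pooled_rr([1.10, 0.95, 1.02],
                       [(0.90, 1.34), (0.80, 1.13), (0.85, 1.22)])
print(round(rr, 2), round(lo, 2), round(hi, 2))
```

    A random-effects model, as used in the review, additionally adds a between-study variance component to each study's weight.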

  11. Exposure to Fluoride in Drinking Water and Hip Fracture Risk: A Meta-Analysis of Observational Studies

    PubMed Central

    Yin, Xin-Hai; Huang, Guang-Lei; Lin, Du-Ren; Wan, Cheng-Cheng; Wang, Ya-Dong; Song, Ju-Kun; Xu, Ping

    2015-01-01

    Background Many observational studies have shown that exposure to fluoride in drinking water is associated with hip fracture risk. However, the findings are varied or even contradictory. In this work, we performed a meta-analysis to assess the relationship between fluoride exposure and hip fracture risk. Methods PubMed and EMBASE databases were searched to identify relevant observational studies from the time of inception until March 2014 without restrictions. Data from the included studies were extracted and analyzed by two authors. Summary relative risks (RRs) with corresponding 95% confidence intervals (CIs) were pooled using random- or fixed-effects models as appropriate. Sensitivity analyses and meta-regression were conducted to explore possible explanations for heterogeneity. Finally, publication bias was assessed. Results Fourteen observational studies involving thirteen cohort studies and one case-control study were included in the meta-analysis. Exposure to fluoride in drinking water does not significantly increase the incidence of hip fracture (RRs, 1.05; 95% CIs, 0.96–1.15). Sensitivity analyses based on adjustment for covariates, effect measure, country, sex, sample size, quality of Newcastle–Ottawa Scale scores, and follow-up period validated the strength of the results. Meta-regression showed that country, gender, quality of Newcastle–Ottawa Scale scores, adjustment for covariates and sample size were not sources of heterogeneity. Little evidence of publication bias was observed. Conclusion The present meta-analysis suggests that chronic fluoride exposure from drinking water does not significantly increase the risk of hip fracture. Given the potential confounding factors and exposure misclassification, further large-scale, high-quality studies are needed to evaluate the association between exposure to fluoride in drinking water and hip fracture risk. PMID:26020536

  12. Proposal for a new risk stratification classification for meningioma based on patient age, WHO tumor grade, size, localization, and karyotype

    PubMed Central

    Domingues, Patrícia Henriques; Sousa, Pablo; Otero, Álvaro; Gonçalves, Jesus Maria; Ruiz, Laura; de Oliveira, Catarina; Lopes, Maria Celeste; Orfao, Alberto; Tabernero, Maria Dolores

    2014-01-01

    Background Tumor recurrence remains the major clinical complication of meningiomas, the majority of recurrences occurring among WHO grade I/benign tumors. In the present study, we propose a new scoring system for the prognostic stratification of meningioma patients based on analysis of a large series of meningiomas followed for a median of >5 years. Methods Tumor cytogenetics were systematically investigated by interphase fluorescence in situ hybridization in 302 meningioma samples, and the proposed classification was further validated in an independent series of cases (n = 132) analyzed by high-density (500K) single-nucleotide polymorphism (SNP) arrays. Results Overall, we found an adverse impact on patient relapse-free survival (RFS) for males, presence of brain edema, younger patients (<55 years), tumor size >50 mm, tumor localization at intraventricular and anterior cranial base areas, WHO grade II/III meningiomas, and complex karyotypes; the latter 5 variables showed an independent predictive value in multivariate analysis. Based on these parameters, a prognostic score was established for each individual case, and patients were stratified into 4 risk categories with significantly different (P < .001) outcomes. These included a good prognosis group, consisting of approximately 20% of cases, that showed a RFS of 100% ± 0% at 10 years and a very poor-prognosis group with a RFS rate of 0% ± 0% at 10 years. The prognostic impact of the scoring system proposed here was also retained when WHO grade I cases were considered separately (P < .001). Conclusions Based on this risk-stratification classification, different strategies may be adopted for follow-up, and eventually also for treatment, of meningioma patients at different risks for relapse. PMID:24536048

  13. HIV Risk Perception, HIV Knowledge, and Sexual Risk Behaviors among Transgender Women in South Florida.

    PubMed

    De Santis, Joseph P; Hauglum, Shayne D; Deleon, Diego A; Provencio-Vasquez, Elias; Rodriguez, Allan E

    2017-05-01

    Transgender women experience a variety of factors that may contribute to HIV risk. The purpose of this study was to explore links among HIV risk perception, knowledge, and sexual risk behaviors of transgender women. A descriptive, correlational study design was used. Fifty transgender women from the South Florida area were enrolled in the study. Transgender women completed a demographic questionnaire and standardized instruments measuring HIV risk perception, knowledge, and sexual risk behaviors. Transgender women reported low levels of HIV risk perception, and had knowledge deficits regarding HIV risk/transmission. Some participants engaged in high-risk sexual behaviors. Predictors of sexual risk behaviors among transgender women were identified. More research is needed with a larger sample size to continue studying factors that contribute to sexual risk behaviors in the understudied population of transgender women. Evidence-based guidelines are available to assist public health nurses in providing care for transgender women. Nurses must assess HIV perception risk and HIV knowledge and provide relevant education to transgender women on ways to minimize sexual risk. © 2016 Wiley Periodicals, Inc.

  14. Mercury in fishes from Wrangell-St. Elias National Park and Preserve, Alaska

    USGS Publications Warehouse

    Kowalski, Brandon M.; Willacker, James J.; Zimmerman, Christian E.; Eagles-Smith, Collin A.

    2014-01-01

    In this study, mercury (Hg) concentrations were examined in fishes from Wrangell-St. Elias National Park and Preserve, Alaska, the largest and one of the most remote units in the national park system. The goals of the study were to (1) examine the distribution of Hg in select lakes of Wrangell-St. Elias National Park and Preserve; (2) evaluate the differences in Hg concentrations among fish species and with fish age and size; and (3) assess the potential ecological risks of Hg to park fishes, wildlife, and human consumers by comparing Hg concentrations to a series of risk benchmarks. Total Hg concentrations ranged from 17.9 to 616.4 nanograms per gram wet weight (ng/g ww), with a mean (± standard error) of 180.0 ± 17.9 across the 83 individuals sampled. Without accounting for the effects of size, Hg concentrations varied by a factor of 10.9 across sites and species. After accounting for the effects of size, Hg concentrations were even more variable, differing by a factor of as much as 13.2 within a single species sampled from two lakes. Such inter-site variation suggests that site characteristics play an important role in determining fish Hg concentrations and that more intensive sampling may be necessary to adequately characterize Hg contamination in the park. Size-normalized Hg concentrations also differed among three species sampled from Tanada Lake, and Hg concentrations were strongly correlated with age. Furthermore, potential risks to park fish, wildlife, and human users were variable across lakes and species. Although no fish from two of the lakes studied (Grizzly Lake and Summit Lake) had Hg concentrations exceeding any of the benchmarks used, concentrations in Copper Lake and Tanada Lake exceeded conservative benchmarks for bird (90 ng/g ww in whole-body) and human (150 ng/g ww in muscle) consumption.
In Tanada Lake, concentrations in most fishes also exceeded benchmarks for risk to moderate- and low-sensitivity avian consumers (180 and 270 ng/g ww in whole-body, respectively), as well as the concentration at which Alaska State guidelines suggest at-risk groups limit fish consumption to 3 meals per week (320 ng/g). However, the relationship between Hg concentrations and fish size in Tanada Lake suggests that consumption of smaller-sized fishes could reduce Hg exposure in human consumers.

  15. Authorization of Animal Experiments Is Based on Confidence Rather than Evidence of Scientific Rigor

    PubMed Central

    Vogt, Lucile; Reichlin, Thomas S; Nathues, Christina; Würbel, Hanno

    2016-01-01

    Accumulating evidence indicates high risk of bias in preclinical animal research, questioning the scientific validity and reproducibility of published research findings. Systematic reviews found low rates of reporting of measures against risks of bias in the published literature (e.g., randomization, blinding, sample size calculation) and a correlation between low reporting rates and inflated treatment effects. That most animal research undergoes peer review or ethical review would offer the possibility to detect risks of bias at an earlier stage, before the research has been conducted. For example, in Switzerland, animal experiments are licensed based on a detailed description of the study protocol and a harm–benefit analysis. We therefore screened applications for animal experiments submitted to Swiss authorities (n = 1,277) for the rates at which the use of seven basic measures against bias (allocation concealment, blinding, randomization, sample size calculation, inclusion/exclusion criteria, primary outcome variable, and statistical analysis plan) were described and compared them with the reporting rates of the same measures in a representative sub-sample of publications (n = 50) resulting from studies described in these applications. Measures against bias were described at very low rates, ranging on average from 2.4% for statistical analysis plan to 19% for primary outcome variable in applications for animal experiments, and from 0.0% for sample size calculation to 34% for statistical analysis plan in publications from these experiments. Calculating an internal validity score (IVS) based on the proportion of the seven measures against bias, we found a weak positive correlation between the IVS of applications and that of publications (Spearman’s rho = 0.34, p = 0.014), indicating that the rates of description of these measures in applications partly predict their rates of reporting in publications. 
These results indicate that the authorities licensing animal experiments are lacking important information about experimental conduct that determines the scientific validity of the findings, which may be critical for the weight attributed to the benefit of the research in the harm–benefit analysis. Similar to manuscripts getting accepted for publication despite poor reporting of measures against bias, applications for animal experiments may often be approved based on implicit confidence rather than explicit evidence of scientific rigor. Our findings shed serious doubt on the current authorization procedure for animal experiments, as well as the peer-review process for scientific publications, which in the long run may undermine the credibility of research. Developing existing authorization procedures that are already in place in many countries towards a preregistration system for animal research is one promising way to reform the system. This would not only benefit the scientific validity of findings from animal experiments but also help to avoid unnecessary harm to animals for inconclusive research. PMID:27911892
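    The internal validity score and the reported rank correlation can be illustrated with a minimal sketch; the flag vectors are hypothetical, and the tie-aware Spearman implementation below is a plain textbook version, not the authors' analysis code:

```python
def ivs(flags):
    """Internal validity score: proportion of the 7 anti-bias measures
    described (each flag is 1 if the measure was described, else 0)."""
    return sum(flags) / len(flags)

def spearman_rho(x, y):
    """Spearman rank correlation, assigning average ranks to ties."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1  # average 1-based rank of the tied run
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

    In the study's design, one IVS per application and one per matching publication would be computed, then correlated across the 50 pairs.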

  16. Authorization of Animal Experiments Is Based on Confidence Rather than Evidence of Scientific Rigor.

    PubMed

    Vogt, Lucile; Reichlin, Thomas S; Nathues, Christina; Würbel, Hanno

    2016-12-01

    Accumulating evidence indicates high risk of bias in preclinical animal research, questioning the scientific validity and reproducibility of published research findings. Systematic reviews found low rates of reporting of measures against risks of bias in the published literature (e.g., randomization, blinding, sample size calculation) and a correlation between low reporting rates and inflated treatment effects. That most animal research undergoes peer review or ethical review would offer the possibility to detect risks of bias at an earlier stage, before the research has been conducted. For example, in Switzerland, animal experiments are licensed based on a detailed description of the study protocol and a harm-benefit analysis. We therefore screened applications for animal experiments submitted to Swiss authorities (n = 1,277) for the rates at which the use of seven basic measures against bias (allocation concealment, blinding, randomization, sample size calculation, inclusion/exclusion criteria, primary outcome variable, and statistical analysis plan) were described and compared them with the reporting rates of the same measures in a representative sub-sample of publications (n = 50) resulting from studies described in these applications. Measures against bias were described at very low rates, ranging on average from 2.4% for statistical analysis plan to 19% for primary outcome variable in applications for animal experiments, and from 0.0% for sample size calculation to 34% for statistical analysis plan in publications from these experiments. Calculating an internal validity score (IVS) based on the proportion of the seven measures against bias, we found a weak positive correlation between the IVS of applications and that of publications (Spearman's rho = 0.34, p = 0.014), indicating that the rates of description of these measures in applications partly predict their rates of reporting in publications. 
These results indicate that the authorities licensing animal experiments are lacking important information about experimental conduct that determines the scientific validity of the findings, which may be critical for the weight attributed to the benefit of the research in the harm-benefit analysis. Similar to manuscripts getting accepted for publication despite poor reporting of measures against bias, applications for animal experiments may often be approved based on implicit confidence rather than explicit evidence of scientific rigor. Our findings shed serious doubt on the current authorization procedure for animal experiments, as well as the peer-review process for scientific publications, which in the long run may undermine the credibility of research. Developing existing authorization procedures that are already in place in many countries towards a preregistration system for animal research is one promising way to reform the system. This would not only benefit the scientific validity of findings from animal experiments but also help to avoid unnecessary harm to animals for inconclusive research.

  17. Validation of a multimarker model for assessing risk of type 2 diabetes from a five-year prospective study of 6784 Danish people (Inter99).

    PubMed

    Urdea, Mickey; Kolberg, Janice; Wilber, Judith; Gerwien, Robert; Moler, Edward; Rowe, Michael; Jorgensen, Paul; Hansen, Torben; Pedersen, Oluf; Jørgensen, Torben; Borch-Johnsen, Knut

    2009-07-01

    Improved identification of subjects at high risk for development of type 2 diabetes would allow preventive interventions to be targeted toward individuals most likely to benefit. In previous research, predictive biomarkers were identified and used to develop multivariate models to assess an individual's risk of developing diabetes. Here we describe the training and validation of the PreDx Diabetes Risk Score (DRS) model in a clinical laboratory setting using baseline serum samples from subjects in the Inter99 cohort, a population-based primary prevention study of cardiovascular disease. Among 6784 subjects free of diabetes at baseline, 215 subjects progressed to diabetes (converters) during five years of follow-up. A nested case-control study was performed using serum samples from 202 converters and 597 randomly selected nonconverters. Samples were randomly assigned to equally sized training and validation sets. Seven biomarkers were measured using assays developed for use in a clinical reference laboratory. The PreDx DRS model performed better on the training set (area under the curve [AUC] = 0.837) than fasting plasma glucose alone (AUC = 0.779). When applied to the sequestered validation set, the PreDx DRS showed the same performance (AUC = 0.838), thus validating the model. This model had a better AUC than any other single measure from a fasting sample. Moreover, the model provided further risk stratification among high-risk subpopulations with impaired fasting glucose or metabolic syndrome. The PreDx DRS provides the absolute risk of diabetes conversion in five years for subjects identified to be "at risk" using the clinical factors. Copyright 2009 Diabetes Technology Society.
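    The AUC comparisons above are rank-based; a minimal sketch computes AUC as the Mann-Whitney probability that a randomly chosen converter scores above a randomly chosen nonconverter (illustrative data only, not the PreDx assay or model):

```python
def auc(scores, labels):
    """AUC as the Mann-Whitney probability that a random positive
    (label 1, e.g., converter) scores above a random negative (label 0),
    counting ties as half a win."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

    Comparing such AUCs for the multimarker score versus fasting glucose alone is the kind of contrast reported here (0.838 vs. 0.779).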

  18. Measures of precision for dissimilarity-based multivariate analysis of ecological communities

    PubMed Central

    Anderson, Marti J; Santana-Garcon, Julia

    2015-01-01

    Ecological studies require key decisions regarding the appropriate size and number of sampling units. No methods currently exist to measure precision for multivariate assemblage data when dissimilarity-based analyses are intended to follow. Here, we propose a pseudo multivariate dissimilarity-based standard error (MultSE) as a useful quantity for assessing sample-size adequacy in studies of ecological communities. Based on sums of squared dissimilarities, MultSE measures variability in the position of the centroid in the space of a chosen dissimilarity measure under repeated sampling for a given sample size. We describe a novel double resampling method to quantify uncertainty in MultSE values with increasing sample size. For more complex designs, values of MultSE can be calculated from the pseudo residual mean square of a PERMANOVA model, with the double resampling done within appropriate cells in the design. R code functions for implementing these techniques, along with ecological examples, are provided. PMID:25438826
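    A minimal sketch of the single-sample quantity, assuming Bray-Curtis dissimilarities and the pseudo variance V = SS/(n(n-1)) implied by "sums of squared dissimilarities", with MultSE = sqrt(V/n); the published double-resampling uncertainty step is not reproduced here:

```python
from math import sqrt

def bray_curtis(a, b):
    """Bray-Curtis dissimilarity between two abundance vectors."""
    num = sum(abs(x - y) for x, y in zip(a, b))
    den = sum(x + y for x, y in zip(a, b))
    return num / den if den else 0.0

def mult_se(X):
    """Pseudo multivariate dissimilarity-based standard error (MultSE):
    V = (sum of squared pairwise dissimilarities) / (n*(n-1)) estimates
    variability about the centroid; MultSE = sqrt(V / n)."""
    n = len(X)
    ss = sum(bray_curtis(X[i], X[j]) ** 2
             for i in range(n) for j in range(i + 1, n))
    V = ss / (n * (n - 1))
    return sqrt(V / n)
```

    Plotting this quantity against increasing n is how sample-size adequacy would be judged: the curve flattens once additional sampling units buy little extra precision.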

  19. A post hoc evaluation of a sample size re-estimation in the Secondary Prevention of Small Subcortical Strokes study.

    PubMed

    McClure, Leslie A; Szychowski, Jeff M; Benavente, Oscar; Hart, Robert G; Coffey, Christopher S

    2016-10-01

    The use of adaptive designs has been increasing in randomized clinical trials. Sample size re-estimation is a type of adaptation in which nuisance parameters are estimated at an interim point in the trial and the sample size re-computed based on these estimates. The Secondary Prevention of Small Subcortical Strokes study was a randomized clinical trial assessing the impact of single- versus dual-antiplatelet therapy and control of systolic blood pressure to a higher (130-149 mmHg) versus lower (<130 mmHg) target on recurrent stroke risk in a two-by-two factorial design. A sample size re-estimation was performed during the Secondary Prevention of Small Subcortical Strokes study, resulting in an increase from the planned sample size of 2500 to 3020, and we sought to determine the impact of the sample size re-estimation on the study results. We assessed the results of the primary efficacy and safety analyses with the full 3020 patients and compared them to the results that would have been observed had randomization ended with 2500 patients. The primary efficacy outcome considered was recurrent stroke, and the primary safety outcomes were major bleeds and death. We computed incidence rates for the efficacy and safety outcomes and used Cox proportional hazards models to examine the hazard ratios for each of the two treatment interventions (i.e. the antiplatelet and blood pressure interventions). In the antiplatelet intervention, the hazard ratio was not materially modified by increasing the sample size, nor did the conclusions regarding the efficacy of mono- versus dual-therapy change: there was no difference in the effect of dual- versus monotherapy on the risk of recurrent stroke (n = 3020 HR (95% confidence interval): 0.92 (0.72, 1.2), p = 0.48; n = 2500 HR (95% confidence interval): 1.0 (0.78, 1.3), p = 0.85).
    With respect to the blood pressure intervention, increasing the sample size resulted in less certainty in the results, as the hazard ratio for the higher versus lower systolic blood pressure target approached, but did not achieve, statistical significance with the larger sample (n = 3020 HR (95% confidence interval): 0.81 (0.63, 1.0), p = 0.089; n = 2500 HR (95% confidence interval): 0.89 (0.68, 1.17), p = 0.40). The results from the safety analyses were similar with 3020 and 2500 patients for both study interventions. Other trial-related factors, such as contracts, finances, and study management, were impacted as well. Adaptive designs can have benefits in randomized clinical trials, but do not always result in significant findings. The impact of adaptive designs should be measured in terms of both trial results and practical issues related to trial management. More post hoc analyses of study adaptations will lead to better understanding of the balance between the benefits and the costs. © The Author(s) 2016.
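    A sample size re-estimation of this general kind can be sketched with Schoenfeld's event-count formula for a two-arm log-rank comparison, converting required events into total enrollment using the blinded overall event rate (the nuisance parameter re-estimated at the interim). The numbers are illustrative, not the SPS3 calculations:

```python
from math import ceil, log
from statistics import NormalDist

def reestimate_total_n(hr_alt, event_rate, alpha=0.05, power=0.8):
    """Schoenfeld event count for a 1:1 two-arm log-rank test against
    alternative hazard ratio hr_alt, divided by the interim estimate of
    the overall event rate to give re-estimated total enrollment."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    events = 4 * (z_a + z_b) ** 2 / log(hr_alt) ** 2
    return ceil(events / event_rate)
```

    If the interim event rate comes in lower than planned, the required enrollment rises, which is the mechanism by which such a trial grows beyond its original target.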

  20. Relationships between media use, body fatness and physical activity in children and youth: a meta-analysis.

    PubMed

    Marshall, S J; Biddle, S J H; Gorely, T; Cameron, N; Murdey, I

    2004-10-01

    To review the empirical evidence of associations between television (TV) viewing, video/computer game use and (a) body fatness, and (b) physical activity. Meta-analysis. Published English-language studies were located from computerized literature searches, bibliographies of primary studies and narrative reviews, and manual searches of personal archives. Included studies presented at least one empirical association between TV viewing, video/computer game use and body fatness or physical activity among samples of children and youth aged 3-18 y. The outcome measure was the mean sample-weighted corrected effect size (Pearson r). Based on data from 52 independent samples, the mean sample-weighted effect size between TV viewing and body fatness was 0.066 (95% CI=0.056-0.078; total N=44,707). The sample-weighted fully corrected effect size was 0.084. Based on data from six independent samples, the mean sample-weighted effect size between video/computer game use and body fatness was 0.070 (95% CI=-0.048 to 0.188; total N=1,722). The sample-weighted fully corrected effect size was 0.128. Based on data from 39 independent samples, the mean sample-weighted effect size between TV viewing and physical activity was -0.096 (95% CI=-0.080 to -0.112; total N=141,505). The sample-weighted fully corrected effect size was -0.129. Based on data from 10 independent samples, the mean sample-weighted effect size between video/computer game use and physical activity was -0.104 (95% CI=-0.080 to -0.128; total N=119,942). The sample-weighted fully corrected effect size was -0.141. A statistically significant relationship exists between TV viewing and body fatness among children and youth, although it is likely to be too small to be of substantial clinical relevance. The relationship between TV viewing and physical activity is small but negative. The strength of these relationships remains virtually unchanged even after correcting for common sources of bias known to impact study outcomes.
While the total amount of time per day engaged in sedentary behavior is inevitably prohibitive of physical activity, media-based inactivity may be unfairly implicated in recent epidemiologic trends of overweight and obesity among children and youth. Relationships between sedentary behavior and health are unlikely to be explained using single markers of inactivity, such as TV viewing or video/computer game use.
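    The pooled estimates above are sample-size-weighted means of Pearson r; a minimal sketch of that weighting follows (Hunter-Schmidt-style; the confidence interval here is a rough normal approximation, not the authors' correction procedure):

```python
from math import sqrt

def weighted_mean_r(rs, ns):
    """Sample-size-weighted mean Pearson r across k studies, with a
    rough 95% CI from the weighted between-study variance of r."""
    total = sum(ns)
    r_bar = sum(n * r for r, n in zip(rs, ns)) / total
    var = sum(n * (r - r_bar) ** 2 for r, n in zip(rs, ns)) / total
    se = sqrt(var / len(rs)) if len(rs) > 1 else 0.0
    return r_bar, (r_bar - 1.96 * se, r_bar + 1.96 * se)
```

    Larger samples thus dominate the pooled r, which is why a total N above 100,000 can make even an effect near 0.1 statistically significant yet clinically trivial.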

  1. Genome-wide meta-analyses of stratified depression in Generation Scotland and UK Biobank.

    PubMed

    Hall, Lynsey S; Adams, Mark J; Arnau-Soler, Aleix; Clarke, Toni-Kim; Howard, David M; Zeng, Yanni; Davies, Gail; Hagenaars, Saskia P; Maria Fernandez-Pujals, Ana; Gibson, Jude; Wigmore, Eleanor M; Boutin, Thibaud S; Hayward, Caroline; Generation Scotland; Porteous, David J; Deary, Ian J; Thomson, Pippa A; Haley, Chris S; McIntosh, Andrew M

    2018-01-10

    Few replicable genetic associations for Major Depressive Disorder (MDD) have been identified. Recent studies of MDD have identified common risk variants by using a broader phenotype definition in very large samples, or by reducing phenotypic and ancestral heterogeneity. We sought to ascertain whether it is more informative to maximize the sample size using data from all available cases and controls, or to use a sex- or recurrence-stratified subset of affected individuals. To test this, we compared heritability estimates, genetic correlation with other traits, variance explained by MDD polygenic score, and variants identified by genome-wide meta-analysis for broad and narrow MDD classifications in two large British cohorts - Generation Scotland and UK Biobank. Genome-wide meta-analysis of MDD in males yielded one genome-wide significant locus on 3p22.3, with three genes in this region (CRTAP, GLB1, and TMPPE) demonstrating a significant association in gene-based tests. Meta-analyzed MDD, recurrent MDD and female MDD yielded equivalent heritability estimates, showed no detectable difference in association with polygenic scores, and were each genetically correlated with six health-correlated traits (neuroticism, depressive symptoms, subjective well-being, MDD, a cross-disorder phenotype and Bipolar Disorder). Whilst stratified GWAS analysis revealed a genome-wide significant locus for male MDD, the lack of independent replication and the consistent pattern of results in other MDD classifications suggest that phenotypic stratification using recurrence or sex is weakly justified at currently available sample sizes. Based upon existing studies and our findings, the strategy of maximizing sample sizes is likely to provide the greater gain.

  2. ORACLE INEQUALITIES FOR THE LASSO IN THE COX MODEL

    PubMed Central

    Huang, Jian; Sun, Tingni; Ying, Zhiliang; Yu, Yi; Zhang, Cun-Hui

    2013-01-01

    We study the absolute penalized maximum partial likelihood estimator in sparse, high-dimensional Cox proportional hazards regression models where the number of time-dependent covariates can be larger than the sample size. We establish oracle inequalities based on natural extensions of the compatibility and cone invertibility factors of the Hessian matrix at the true regression coefficients. Similar results based on an extension of the restricted eigenvalue can be also proved by our method. However, the presented oracle inequalities are sharper since the compatibility and cone invertibility factors are always greater than the corresponding restricted eigenvalue. In the Cox regression model, the Hessian matrix is based on time-dependent covariates in censored risk sets, so that the compatibility and cone invertibility factors, and the restricted eigenvalue as well, are random variables even when they are evaluated for the Hessian at the true regression coefficients. Under mild conditions, we prove that these quantities are bounded from below by positive constants for time-dependent covariates, including cases where the number of covariates is of greater order than the sample size. Consequently, the compatibility and cone invertibility factors can be treated as positive constants in our oracle inequalities. PMID:24086091
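    For reference, the estimator analyzed here is the standard lasso-penalized Cox estimator, written in the usual notation (risk set R(t_i), censoring indicator delta_i; this is the conventional formulation, assumed to match the paper's setup):

```latex
\hat{\beta} \;=\; \operatorname*{arg\,min}_{\beta \in \mathbb{R}^{p}}
\left\{ -\tfrac{1}{n}\,\ell_n(\beta) \;+\; \lambda \lVert \beta \rVert_1 \right\},
\qquad
\ell_n(\beta) \;=\; \sum_{i:\,\delta_i = 1}
\left[ \beta^{\top} Z_i(t_i)
\;-\; \log \!\sum_{j \in R(t_i)} \exp\!\bigl(\beta^{\top} Z_j(t_i)\bigr) \right].
```

    The Hessian of the negative log partial likelihood at the true beta is the random matrix whose compatibility and cone invertibility factors the oracle inequalities are stated in terms of.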

  3. ORACLE INEQUALITIES FOR THE LASSO IN THE COX MODEL.

    PubMed

    Huang, Jian; Sun, Tingni; Ying, Zhiliang; Yu, Yi; Zhang, Cun-Hui

    2013-06-01

    We study the absolute penalized maximum partial likelihood estimator in sparse, high-dimensional Cox proportional hazards regression models where the number of time-dependent covariates can be larger than the sample size. We establish oracle inequalities based on natural extensions of the compatibility and cone invertibility factors of the Hessian matrix at the true regression coefficients. Similar results based on an extension of the restricted eigenvalue can be also proved by our method. However, the presented oracle inequalities are sharper since the compatibility and cone invertibility factors are always greater than the corresponding restricted eigenvalue. In the Cox regression model, the Hessian matrix is based on time-dependent covariates in censored risk sets, so that the compatibility and cone invertibility factors, and the restricted eigenvalue as well, are random variables even when they are evaluated for the Hessian at the true regression coefficients. Under mild conditions, we prove that these quantities are bounded from below by positive constants for time-dependent covariates, including cases where the number of covariates is of greater order than the sample size. Consequently, the compatibility and cone invertibility factors can be treated as positive constants in our oracle inequalities.

  4. Prediction of skull fracture risk for children 0-9 months old through validated parametric finite element model and cadaver test reconstruction.

    PubMed

    Li, Zhigang; Liu, Weiguo; Zhang, Jinhuan; Hu, Jingwen

    2015-09-01

    Skull fracture is one of the most common pediatric traumas. However, injury assessment tools for predicting pediatric skull fracture risk are not well established, mainly due to the lack of cadaver tests. Weber conducted 50 pediatric cadaver drop tests for forensic research on child abuse in the mid-1980s (Experimental studies of skull fractures in infants, Z Rechtsmed. 92: 87-94, 1984; Biomechanical fragility of the infant skull, Z Rechtsmed. 94: 93-101, 1985). To our knowledge, these studies contained the largest sample size among pediatric cadaver tests in the literature. However, the lack of injury measurements limited their direct application in investigating pediatric skull fracture risks. In this study, 50 pediatric cadaver tests from Weber's studies were reconstructed using a parametric pediatric head finite element (FE) model that was morphed into subjects with the ages, head sizes/shapes, and skull thickness values reported in the tests. Skull fracture risk curves for infants from 0 to 9 months old were developed from the model-predicted head injury measures through logistic regression analysis. It was found that the model-predicted stress responses in the skull (maximal von Mises stress, maximal shear stress, and maximal first principal stress) were better predictors than global kinematic-based injury measures (peak head acceleration and head injury criterion (HIC)) in predicting pediatric skull fracture. This study demonstrated the feasibility of using age- and size/shape-appropriate head FE models to predict pediatric head injuries. Such models can account for the morphological variations among subjects, which cannot be considered by a single FE human model.
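    A logistic risk curve of the kind fitted here maps a model-predicted injury metric (e.g., peak von Mises stress) to a fracture probability; the coefficients below are illustrative placeholders, not the fitted values:

```python
from math import exp

def fracture_risk(metric, b0=-5.0, b1=0.5):
    """Logistic risk curve P(fracture | injury metric) = 1/(1+e^-(b0+b1*m)).
    b0 and b1 are hypothetical placeholders, not the study's coefficients."""
    return 1.0 / (1.0 + exp(-(b0 + b1 * metric)))
```

    With a positive slope b1, predicted risk rises monotonically with the stress metric, which is what makes the curve usable as an injury assessment tool.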

  5. Estimating the size of hidden populations using respondent-driven sampling data: Case examples from Morocco

    PubMed Central

    Johnston, Lisa G; McLaughlin, Katherine R; Rhilani, Houssine El; Latifi, Amina; Toufik, Abdalla; Bennani, Aziza; Alami, Kamal; Elomari, Boutaina; Handcock, Mark S

    2015-01-01

    Background: Respondent-driven sampling is used worldwide to estimate the population prevalence of characteristics such as HIV/AIDS and associated risk factors in hard-to-reach populations. Estimating the total size of these populations is of great interest to national and international organizations; however, reliable measures of population size often do not exist. Methods: Successive Sampling-Population Size Estimation (SS-PSE) along with network size imputation allows population size estimates to be made without relying on separate studies or additional data (as in network scale-up, multiplier and capture-recapture methods), which may be biased. Results: Ten population size estimates were calculated for people who inject drugs, female sex workers, men who have sex with other men, and migrants from sub-Saharan Africa in six different cities in Morocco. SS-PSE estimates fell within or very close to the likely values provided by experts and the estimates from previous studies using other methods. Conclusions: SS-PSE is an effective method for estimating the size of hard-to-reach populations that leverages important information within respondent-driven sampling studies. The addition of a network size imputation method helps to smooth network sizes, allowing for more accurate results. However, caution should be used particularly when there is reason to believe that clustered subgroups may exist within the population of interest or when the sample size is small in relation to the population. PMID:26258908

  6. Anomalies in the detection of change: When changes in sample size are mistaken for changes in proportions.

    PubMed

    Fiedler, Klaus; Kareev, Yaakov; Avrahami, Judith; Beier, Susanne; Kutzner, Florian; Hütter, Mandy

    2016-01-01

    Detecting changes in performance, sales, markets, risks, social relations, or public opinion constitutes an important adaptive function. In a sequential paradigm devised to investigate detection of change, every trial provides a sample of binary outcomes (e.g., correct vs. incorrect student responses). Participants have to decide whether the proportion of a focal feature (e.g., correct responses) in the population from which the sample is drawn has decreased, remained constant, or increased. Strong and persistent anomalies in change detection arise when changes in proportional quantities vary orthogonally to changes in absolute sample size. Proportional increases are readily detected, and nonchanges are erroneously perceived as increases, when absolute sample size increases. Conversely, decreasing sample size facilitates the correct detection of proportional decreases and the erroneous perception of nonchanges as decreases. These anomalies are, however, confined to experienced samples of elementary raw events from which proportions have to be inferred inductively. They disappear when sample proportions are described as percentages in a normalized probability format. To explain these challenging findings, it is essential to understand the inductive-learning constraints imposed on decisions from experience.
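    The confound at the heart of the paradigm can be sketched in a few lines: hold the proportion constant while the sample size grows, and the absolute count of focal outcomes still rises, which is the pattern participants reportedly misread as a proportional increase (illustrative simulation, not the authors' materials):

```python
import random

def focal_counts(p, sizes, seed=1):
    """Draw one binary sample per trial with a constant focal proportion p
    but varying sample size n; return the absolute focal counts, which
    grow with n even though p never changes."""
    rng = random.Random(seed)
    return [sum(rng.random() < p for _ in range(n)) for n in sizes]
```

    Presenting the same data as percentages (a normalized probability format) removes the confound, which is the manipulation under which the abstract reports the anomalies disappear.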

  7. Evaluating common de-identification heuristics for personal health information.

    PubMed

    El Emam, Khaled; Jabbouri, Sam; Sams, Scott; Drouet, Youenn; Power, Michael

    2006-11-21

    With the growing adoption of electronic medical records, there are increasing demands for the use of this electronic clinical data in observational research. A frequent ethics board requirement for such secondary use of personal health information in observational research is that the data be de-identified. De-identification heuristics are provided in the Health Insurance Portability and Accountability Act Privacy Rule, funding agency and professional association privacy guidelines, and common practice. The aim of the study was to evaluate whether the re-identification risks due to record linkage are sufficiently low when following common de-identification heuristics and whether the risk is stable across sample sizes and data sets. Two methods were followed to construct identification data sets. Re-identification attacks were simulated on these. For each data set we varied the sample size down to 30 individuals, and for each sample size evaluated the risk of re-identification for all combinations of quasi-identifiers. The combinations of quasi-identifiers that were low risk more than 50% of the time were considered stable. The identification data sets we were able to construct were the list of all physicians and the list of all lawyers registered in Ontario, using 1% sampling fractions. The quasi-identifiers of region, gender, and year of birth were found to be low risk more than 50% of the time across both data sets. The combination of gender and region was also found to be low risk more than 50% of the time. We were not able to create an identification data set for the whole population. Existing Canadian federal and provincial privacy laws help explain why it is difficult to create an identification data set for the whole population. That such examples of high re-identification risk exist for mainstream professions makes a strong case for not disclosing the high-risk variables and their combinations identified here. 
For professional subpopulations with published membership lists, many variables often needed by researchers would have to be excluded or generalized to ensure consistently low re-identification risk. Data custodians and researchers need to consider other statistical disclosure techniques for protecting privacy.

  8. Evaluating Common De-Identification Heuristics for Personal Health Information

    PubMed Central

    Jabbouri, Sam; Sams, Scott; Drouet, Youenn; Power, Michael

    2006-01-01

    Background With the growing adoption of electronic medical records, there are increasing demands for the use of this electronic clinical data in observational research. A frequent ethics board requirement for such secondary use of personal health information in observational research is that the data be de-identified. De-identification heuristics are provided in the Health Insurance Portability and Accountability Act Privacy Rule, funding agency and professional association privacy guidelines, and common practice. Objective The aim of the study was to evaluate whether the re-identification risks due to record linkage are sufficiently low when following common de-identification heuristics and whether the risk is stable across sample sizes and data sets. Methods Two methods were followed to construct identification data sets. Re-identification attacks were simulated on these. For each data set we varied the sample size down to 30 individuals, and for each sample size evaluated the risk of re-identification for all combinations of quasi-identifiers. The combinations of quasi-identifiers that were low risk more than 50% of the time were considered stable. Results The identification data sets we were able to construct were the list of all physicians and the list of all lawyers registered in Ontario, using 1% sampling fractions. The quasi-identifiers of region, gender, and year of birth were found to be low risk more than 50% of the time across both data sets. The combination of gender and region was also found to be low risk more than 50% of the time. We were not able to create an identification data set for the whole population. Conclusions Existing Canadian federal and provincial privacy laws help explain why it is difficult to create an identification data set for the whole population. 
That such examples of high re-identification risk exist for mainstream professions makes a strong case for not disclosing the high-risk variables and their combinations identified here. For professional subpopulations with published membership lists, many variables often needed by researchers would have to be excluded or generalized to ensure consistently low re-identification risk. Data custodians and researchers need to consider other statistical disclosure techniques for protecting privacy. PMID:17213047
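    The core quantity in this kind of record-linkage attack can be sketched as a per-record "prosecutor risk" of 1 over the size of the record's equivalence class on the chosen quasi-identifiers. A toy illustration (the data, field names, and risk measure shown here are invented for illustration and are not the authors' actual attack simulation):

```python
from collections import Counter

def reidentification_risk(records, quasi_identifiers):
    """Per-record re-identification (prosecutor) risk, estimated as
    1 / equivalence-class size for the chosen quasi-identifiers."""
    keys = [tuple(r[q] for q in quasi_identifiers) for r in records]
    class_sizes = Counter(keys)
    return [1.0 / class_sizes[k] for k in keys]

# Toy sample: region, gender, and year of birth as quasi-identifiers
sample = [
    {"region": "East", "gender": "F", "yob": 1970},
    {"region": "East", "gender": "F", "yob": 1970},
    {"region": "West", "gender": "M", "yob": 1965},
]
risks = reidentification_risk(sample, ["region", "gender", "yob"])
max_risk = max(risks)  # 1.0: the third record is unique, hence high risk
```

    A unique combination of quasi-identifier values (class size 1) yields risk 1.0, which is why the study treats rarely shared variable combinations as high risk.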

  9. Seroprevalence and risk factors associated with bovine brucellosis in the Potohar Plateau, Pakistan.

    PubMed

    Ali, Shahzad; Akhter, Shamim; Neubauer, Heinrich; Melzer, Falk; Khan, Iahtasham; Abatih, Emmanuel Nji; El-Adawy, Hosny; Irfan, Muhammad; Muhammad, Ali; Akbar, Muhammad Waqas; Umar, Sajid; Ali, Qurban; Iqbal, Muhammad Naeem; Mahmood, Abid; Ahmed, Haroon

    2017-01-28

    The seroprevalence and risk factors of bovine brucellosis were studied at animal and herd level using a combination of culture, serological and molecular methods. The study was conducted in 253 randomly selected cattle herds of the Potohar plateau, Pakistan, from which a total of 2709 serum (1462 cattle and 1247 buffaloes) and 2330 milk (1168 cattle and 1162 buffaloes) samples were collected. Data on risk factors associated with seroprevalence of brucellosis were collected through interviews using questionnaires. Univariable and multivariable random effects logistic regression models were used for identifying important risk factors at animal and herd levels. One hundred and seventy (6.3%) samples and 47 (18.6%) herds were seropositive for brucellosis by the Rose Bengal Plate test. Variations in seroprevalence were observed across the different sampling sites. At animal level, sex, species and stock replacement were found to be potential risk factors for brucellosis. At herd level, herd size (≥9 animals) and insemination method used were important risk factors. The presence of Brucella DNA was confirmed with a real-time polymerase chain reaction assay (qRT-PCR) in 52.4% of the 170 serologically positive samples. In total, 156 (6.7%) milk samples were positive by the milk ring test. B. abortus biovar 1 was cultured from 5 positive milk samples. This study shows that the seroprevalence of bovine brucellosis is high in some regions in Pakistan. Prevalence was associated with herd size, abortion history, insemination methods used, age, sex and stock replacement methods. Infected animals may act as a source of infection for other animals and for humans. The development of control strategies for bovine brucellosis through implementation of continuous surveillance and education programs in Pakistan is warranted.

  10. Big assumptions for small samples in crop insurance

    Treesearch

    Ashley Elaine Hungerford; Barry Goodwin

    2014-01-01

    The purpose of this paper is to investigate the effects of crop insurance premiums being determined by small samples of yields that are spatially correlated. If spatial autocorrelation and small sample size are not properly accounted for in premium ratings, the premium rates may inaccurately reflect the risk of a loss.

  11. Accounting for treatment by center interaction in sample size determinations and the use of surrogate outcomes in the pessary for the prevention of preterm birth trial: a simulation study.

    PubMed

    Willan, Andrew R

    2016-07-05

    The Pessary for the Prevention of Preterm Birth Study (PS3) is an international, multicenter, randomized clinical trial designed to examine the effectiveness of the Arabin pessary in preventing preterm birth in pregnant women with a short cervix. During the design of the study, two methodological issues regarding power and sample size were raised. Since treatment in the Standard Arm will vary between centers, it is anticipated that so too will the probability of preterm birth in that arm. This will likely result in a treatment by center interaction, and the issue of how this will affect the sample size requirements was raised. The sample size requirements to examine the effect of the pessary on the baby's clinical outcome were prohibitively high, so the second issue is how best to examine the effect on clinical outcome. The approaches taken to address these issues are presented. Simulation and sensitivity analysis were used to address the sample size issue. The probability of preterm birth in the Standard Arm was assumed to vary between centers following a Beta distribution with a mean of 0.3 and a coefficient of variation of 0.3. To address the second issue, a Bayesian decision model is proposed that combines the information regarding the between-treatment difference in the probability of preterm birth from PS3 with the data from the Multiple Courses of Antenatal Corticosteroids for Preterm Birth Study that relate preterm birth and perinatal mortality/morbidity. The approach provides a between-treatment comparison with respect to the probability of a bad clinical outcome. The performance of the approach was assessed using simulation and sensitivity analysis. Accounting for a possible treatment by center interaction increased the sample size from 540 to 700 patients per arm for the base case. The sample size requirements increase with the coefficient of variation and decrease with the number of centers. 
    Under the same assumptions used for determining the sample size requirements, the simulated mean probability that the pessary reduces the risk of perinatal mortality/morbidity is 0.98. The simulated mean decreased with the coefficient of variation and increased with the number of clinical sites. Employing simulation and sensitivity analysis is a useful approach for determining sample size requirements while accounting for the additional uncertainty due to a treatment by center interaction. Using a surrogate outcome in conjunction with a Bayesian decision model is an efficient way to compare important clinical outcomes in a randomized clinical trial in situations where the direct approach requires a prohibitively high sample size.
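    The center-level variation described above can be reproduced with a method-of-moments calculation: given a mean and coefficient of variation, solve for the Beta parameters and draw one preterm-birth probability per center. A minimal sketch, not the trial's actual simulation code (the number of centers below is illustrative):

```python
import random

def beta_params(mean, cv):
    """Method-of-moments Beta(a, b) parameters for a given mean and
    coefficient of variation (cv = sd / mean)."""
    var = (cv * mean) ** 2
    total = mean * (1.0 - mean) / var - 1.0  # a + b
    return mean * total, (1.0 - mean) * total

# Mean 0.3, CV 0.3, as assumed for the Standard Arm
a, b = beta_params(0.3, 0.3)

random.seed(0)
n_centers = 20  # illustrative; the trial's center count is not assumed here
center_probs = [random.betavariate(a, b) for _ in range(n_centers)]
```

    Each simulated trial would then generate outcomes per center from its own probability, which is what induces the treatment-by-center interaction that inflates the required sample size.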

  12. Chi-Squared Test of Fit and Sample Size-A Comparison between a Random Sample Approach and a Chi-Square Value Adjustment Method.

    PubMed

    Bergh, Daniel

    2015-01-01

    Chi-square statistics are commonly used for tests of fit of measurement models. Chi-square is also sensitive to sample size, which is why several approaches to handle large samples in test of fit analysis have been developed. One strategy to handle the sample size problem may be to adjust the sample size in the analysis of fit. An alternative is to adopt a random sample approach. The purpose of this study was to analyze and to compare these two strategies using simulated data. Given an original sample size of 21,000, for reductions of sample sizes down to the order of 5,000, the adjusted sample size function works as well as the random sample approach. In contrast, when applying adjustments to sample sizes of a lower order, the adjustment function is less effective at approximating the chi-square value for an actual random sample of the relevant size. Hence, the fit is exaggerated and misfit underestimated using the adjusted sample size function. Although there are big differences in chi-square values between the two approaches at lower sample sizes, the inferences based on the p-values may be the same.
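    The adjustment strategy can be illustrated with the common rescaling of a fit chi-square in proportion to (n - 1); whether this matches the exact adjustment function evaluated in the study is an assumption, and the numbers below are purely illustrative:

```python
def adjusted_chi_square(chi2, n_original, n_target):
    """Rescale a model-fit chi-square from its original sample size to a
    smaller nominal one, assuming chi-square grows linearly in (n - 1)
    for a fixed amount of model misfit."""
    return chi2 * (n_target - 1) / (n_original - 1)

# Illustrative: a chi-square of 840 at N = 21,000 rescaled to N = 5,000
chi2_small = adjusted_chi_square(840.0, 21_000, 5_000)
```

    The random sample alternative instead draws an actual subsample of the target size and recomputes the chi-square, which is the benchmark the adjustment is compared against.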

  13. Validation of a Multimarker Model for Assessing Risk of Type 2 Diabetes from a Five-Year Prospective Study of 6784 Danish People (Inter99)

    PubMed Central

    Urdea, Mickey; Kolberg, Janice; Wilber, Judith; Gerwien, Robert; Moler, Edward; Rowe, Michael; Jorgensen, Paul; Hansen, Torben; Pedersen, Oluf; Jørgensen, Torben; Borch-Johnsen, Knut

    2009-01-01

    Background Improved identification of subjects at high risk for development of type 2 diabetes would allow preventive interventions to be targeted toward individuals most likely to benefit. In previous research, predictive biomarkers were identified and used to develop multivariate models to assess an individual's risk of developing diabetes. Here we describe the training and validation of the PreDx™ Diabetes Risk Score (DRS) model in a clinical laboratory setting using baseline serum samples from subjects in the Inter99 cohort, a population-based primary prevention study of cardiovascular disease. Methods Among 6784 subjects free of diabetes at baseline, 215 subjects progressed to diabetes (converters) during five years of follow-up. A nested case-control study was performed using serum samples from 202 converters and 597 randomly selected nonconverters. Samples were randomly assigned to equally sized training and validation sets. Seven biomarkers were measured using assays developed for use in a clinical reference laboratory. Results The PreDx DRS model performed better on the training set (area under the curve [AUC] = 0.837) than fasting plasma glucose alone (AUC = 0.779). When applied to the sequestered validation set, the PreDx DRS showed the same performance (AUC = 0.838), thus validating the model. This model had a better AUC than any other single measure from a fasting sample. Moreover, the model provided further risk stratification among high-risk subpopulations with impaired fasting glucose or metabolic syndrome. Conclusions The PreDx DRS provides the absolute risk of diabetes conversion in five years for subjects identified to be “at risk” using the clinical factors. PMID:20144324

  14. Rethinking the Clinically Based Thresholds of TransCelerate BioPharma for Risk-Based Monitoring.

    PubMed

    Zink, Richard C; Dmitrienko, Anastasia; Dmitrienko, Alex

    2018-01-01

    The quality of data from clinical trials has received a great deal of attention in recent years. Of central importance is the need to protect the well-being of study participants and maintain the integrity of final analysis results. However, traditional approaches to assess data quality have come under increased scrutiny as providing little benefit for the substantial cost. Numerous regulatory guidance documents and industry position papers have described risk-based approaches to identify quality and safety issues. In particular, the position paper of TransCelerate BioPharma recommends defining risk thresholds to assess safety and quality risks based on past clinical experience. This exercise can be extremely time-consuming, and the resulting thresholds may only be relevant to a particular therapeutic area, patient or clinical site population. In addition, predefined thresholds cannot account for safety or quality issues where the underlying rate of observing a particular problem may change over the course of a clinical trial, and often do not consider varying patient exposure. In this manuscript, we appropriate rules commonly utilized for funnel plots to define a traffic-light system for risk indicators based on statistical criteria that consider the duration of patient follow-up. Further, we describe how these methods can be adapted to assess changing risk over time. Finally, we illustrate numerous graphical approaches to summarize and communicate risk, and discuss hybrid clinical-statistical approaches to allow for the assessment of risk at sites with low patient enrollment. We illustrate the aforementioned methodologies for a clinical trial in patients with schizophrenia. Funnel plots are a flexible graphical technique that can form the basis for a risk-based strategy to assess data integrity, while considering site sample size, patient exposure, and changing risk across time.
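    The funnel-plot idea above can be sketched with the usual normal-approximation control limits for a proportion, which widen as site sample size shrinks. A hedged sketch (the target rate and z-value are illustrative; exact binomial limits, which the funnel-plot literature often prefers for small sites, are not shown):

```python
import math

def funnel_limits(p0, n, z=1.96):
    """Approximate (normal) funnel-plot control limits for a target
    proportion p0 at site sample size n, clipped to [0, 1]."""
    half_width = z * math.sqrt(p0 * (1.0 - p0) / n)
    return max(0.0, p0 - half_width), min(1.0, p0 + half_width)

# e.g. a 10% expected event rate at a 50-patient site
lo, hi = funnel_limits(0.10, 50)
```

    A site whose observed rate falls outside its (lo, hi) band would be flagged in the traffic-light system; at very small n the band covers nearly everything, which motivates the hybrid clinical-statistical handling of low-enrollment sites.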

  15. A new surveillance and response tool: risk map of infected Oncomelania hupensis detected by Loop-mediated isothermal amplification (LAMP) from pooled samples.

    PubMed

    Tong, Qun-Bo; Chen, Rui; Zhang, Yi; Yang, Guo-Jing; Kumagai, Takashi; Furushima-Shimogawara, Rieko; Lou, Di; Yang, Kun; Wen, Li-Yong; Lu, Shao-Hong; Ohta, Nobuo; Zhou, Xiao-Nong

    2015-01-01

    Although schistosomiasis remains a serious health problem worldwide, significant achievements in schistosomiasis control have been made in the People's Republic of China. The disease has been eliminated in five out of 12 endemic provinces, and the prevalence in the remaining endemic areas is very low and is heading toward elimination. A rapid and sensitive method for monitoring the distribution of infected Oncomelania hupensis is urgently required. We applied a loop-mediated isothermal amplification (LAMP) assay targeting 28S rDNA for the rapid and effective detection of Schistosoma japonicum DNA in infected and prepatently infected O. hupensis snails. The detection limit of the LAMP method was 100 fg of S. japonicum genomic DNA. To promote the application of the approach in the field, the LAMP assay was used to detect infection in pooled samples of field-collected snails. In the pooled sample detection, snails were collected from 28 endemic areas, and 50 snails from each area were pooled based on the maximum pool size estimation, crushed together and DNA was extracted from each pooled sample as template for the LAMP assay. Based on the formula for detection from pooled samples, the proportion of positive pooled samples and the positive proportion of O. hupensis detected by LAMP in Xima village reached 66.67% and 1.33%, while those of Heini, Hongjia, Yangjiang and Huangshan villages were 33.33% and 0.67%, and those of Tuanzhou and Suliao villages were 16.67% and 0.33%, respectively. The remaining 21 monitoring field sites gave negative results. A risk map for the transmission of schistosomiasis was constructed using ArcMap, based on the positive proportion of O. hupensis infected with S. japonicum, as detected by the LAMP assay, which will form a guide for surveillance and response strategies in high-risk areas. Copyright © 2014 Elsevier B.V. All rights reserved.
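    The reported village-level proportions are consistent with a minimum-infection-rate calculation over pools of 50 snails (the assumption of six pools per village below is inferred from the percentages, not stated in the abstract). A sketch of that calculation, alongside the standard maximum-likelihood pooled-prevalence estimator for comparison:

```python
def minimum_infection_rate(positive_pools, total_pools, pool_size):
    """Lower bound on individual prevalence: each positive pool is
    assumed to contain exactly one infected individual."""
    return positive_pools / (total_pools * pool_size)

def pooled_prevalence_mle(positive_pools, total_pools, pool_size):
    """Standard maximum-likelihood prevalence estimate from pooled
    testing, assuming a perfectly sensitive and specific assay."""
    return 1.0 - (1.0 - positive_pools / total_pools) ** (1.0 / pool_size)

# 4 of 6 pools positive, 50 snails per pool: 4/300, i.e. about 1.33%
mir = minimum_infection_rate(4, 6, 50)
mle = pooled_prevalence_mle(4, 6, 50)
```

    The MLE is always at least as large as the minimum infection rate, since a positive pool may contain more than one infected snail.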

  16. Peer Crowd Identification and Adolescent Health Behaviors: Results From a Statewide Representative Study.

    PubMed

    Jordan, Jeffrey W; Stalgaitis, Carolyn A; Charles, John; Madden, Patrick A; Radhakrishnan, Anjana G; Saggese, Daniel

    2018-02-01

    Peer crowds are macro-level subcultures that share similarities across geographic areas. Over the past decade, dozens of studies have explored the association between adolescent peer crowds and risk behaviors, and how they can inform public health efforts. However, despite the interest, researchers have not yet reported on crowd size and risk levels from a representative sample, making it difficult for practitioners to apply peer crowd science to interventions. The current study reports findings from the first statewide representative sample of adolescent peer crowd identification and health behaviors. Weighted data were analyzed from the 2015 Virginia Youth Survey of Health Behaviors ( n = 4,367). Peer crowds were measured via the I-Base Survey™, a photo-based peer crowd survey instrument. Frequencies and confidence intervals of select behaviors including tobacco use, substance use, nutrition, physical activity, and violence were examined to identify high- and low-risk crowds. Logistic regression was used to calculate adjusted odds ratios for each crowd and behavior. Risky behaviors clustered in two peer crowds. Hip Hop crowd identification was associated with substance use, violence, and some depression and suicidal behaviors. Alternative crowd identification was associated with increased risk for some substance use behaviors, depression and suicide, bullying, physical inactivity, and obesity. Mainstream and, to a lesser extent, Popular, identities were associated with decreased risk for most behaviors. Findings from the first representative study of peer crowds and adolescent behavior identify two high-risk groups, providing critical insights for practitioners seeking to maximize public health interventions by targeting high-risk crowds.

  17. Predicting Risk of Type 2 Diabetes Mellitus with Genetic Risk Models on the Basis of Established Genome-wide Association Markers: A Systematic Review

    PubMed Central

    Bao, Wei; Hu, Frank B.; Rong, Shuang; Rong, Ying; Bowers, Katherine; Schisterman, Enrique F.; Liu, Liegang; Zhang, Cuilin

    2013-01-01

    This study aimed to evaluate the predictive performance of genetic risk models based on risk loci identified and/or confirmed in genome-wide association studies for type 2 diabetes mellitus. A systematic literature search was conducted in the PubMed/MEDLINE and EMBASE databases through April 13, 2012, and published data relevant to the prediction of type 2 diabetes based on genome-wide association marker–based risk models (GRMs) were included. Of the 1,234 potentially relevant articles, 21 articles representing 23 studies were eligible for inclusion. The median area under the receiver operating characteristic curve (AUC) among eligible studies was 0.60 (range, 0.55–0.68), which did not differ appreciably by study design, sample size, participants’ race/ethnicity, or the number of genetic markers included in the GRMs. In addition, the AUCs for type 2 diabetes did not improve appreciably with the addition of genetic markers into conventional risk factor–based models (median AUC, 0.79 (range, 0.63–0.91) vs. median AUC, 0.78 (range, 0.63–0.90), respectively). A limited number of included studies used reclassification measures and yielded inconsistent results. In conclusion, GRMs showed a low predictive performance for risk of type 2 diabetes, irrespective of study design, participants’ race/ethnicity, and the number of genetic markers included. Moreover, the addition of genome-wide association markers into conventional risk models produced little improvement in predictive performance. PMID:24008910

  18. Mapping risk of Nipah virus transmission across Asia and across Bangladesh.

    PubMed

    Peterson, A Townsend

    2015-03-01

    Nipah virus is a highly pathogenic but poorly known paramyxovirus from South and Southeast Asia. In spite of the risks that it poses to human health, the geography and ecology of its occurrence remain little understood: the virus is basically known from Bangladesh and peninsular Malaysia, and little in between. In this contribution, I use documented occurrences of the virus to develop ecological niche-based maps summarizing its likely broader occurrence. Although rangewide maps with significant predictive ability could not be developed, reflecting the minimal sample sizes available, maps within Bangladesh were quite successful in identifying areas in which the virus is predictably present and likely transmitted. © 2013 APJPH.

  19. The Cardiovascular Intervention Improvement Telemedicine Study (CITIES): Rationale for a Tailored Behavioral and Educational Pharmacist-Administered Intervention for Achieving Cardiovascular Disease Risk Reduction

    PubMed Central

    Zullig, Leah L.; Melnyk, S. Dee; Stechuchak, Karen M.; McCant, Felicia; Danus, Susanne; Oddone, Eugene; Bastian, Lori; Olsen, Maren; Edelman, David; Rakley, Susan; Morey, Miriam

    2014-01-01

    Background: Hypertension, hyperlipidemia, and diabetes are significant, but often preventable, contributors to cardiovascular disease (CVD) risk. Medication and behavioral nonadherence are significant barriers to successful hypertension, hyperlipidemia, and diabetes management. Our objective was to describe the theoretical framework underlying a tailored behavioral and educational pharmacist-administered intervention for achieving CVD risk reduction. Materials and Methods: Adults with poorly controlled hypertension and/or hyperlipidemia were enrolled from three outpatient primary care clinics associated with the Durham Veterans Affairs Medical Center (Durham, NC). Participants were randomly assigned to receive a pharmacist-administered, tailored, 1-year telephone-based intervention or usual care. The goal of the study was to reduce the risk for CVD through a theory-driven intervention to increase medication adherence and improve health behaviors. Results: Enrollment began in November 2011 and is ongoing. The target sample size is 500 patients. Conclusions: The Cardiovascular Intervention Improvement Telemedicine Study (CITIES) intervention has been designed with a strong theoretical underpinning. The theoretical foundation and intervention are designed to encourage patients with multiple comorbidities and poorly controlled CVD risk factors to engage in home-based monitoring and tailored telephone-based interventions. Evidence suggests that clinical pharmacist-administered telephone-based interventions may be efficiently integrated into primary care for patients with poorly controlled CVD risk factors. PMID:24303930

  20. Trade-off between fertility and predation risk drives a geometric sequence in the pattern of group sizes in baboons.

    PubMed

    Dunbar, R I M; MacCarron, Padraig; Robertson, Cole

    2018-03-01

    Group-living offers both benefits (protection against predators, access to resources) and costs (increased ecological competition, the impact of group size on fertility). Here, we use cluster analysis to detect natural patternings in a comprehensive sample of baboon groups, and identify a geometric sequence with peaks at approximately 20, 40, 80 and 160. We suggest (i) that these form a set of demographic oscillators that set habitat-specific limits to group size and (ii) that the oscillator arises from a trade-off between female fertility and predation risk. © 2018 The Authors.

  1. Optimal spatial sampling techniques for ground truth data in microwave remote sensing of soil moisture

    NASA Technical Reports Server (NTRS)

    Rao, R. G. S.; Ulaby, F. T.

    1977-01-01

    The paper examines optimal sampling techniques for obtaining accurate spatial averages of soil moisture, at various depths and for cell sizes in the range 2.5-40 acres, with a minimum number of samples. Both simple random sampling and stratified sampling procedures are used to reach a set of recommended sample sizes for each depth and for each cell size. Major conclusions from statistical sampling test results are that (1) the number of samples required decreases with increasing depth; (2) when the total number of samples cannot be prespecified or the moisture in only a single layer is of interest, then a simple random sample procedure should be used which is based on the observed mean and SD for data from a single field; (3) when the total number of samples can be prespecified and the objective is to measure the soil moisture profile with depth, then stratified random sampling based on optimal allocation should be used; and (4) decreasing the sensor resolution cell size leads to fairly large decreases in sample sizes with stratified sampling procedures, whereas only a moderate decrease is obtained in simple random sampling procedures.
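    The "optimal allocation" in conclusion (3) is classically Neyman allocation: sample each stratum in proportion to its size times its standard deviation. A minimal sketch (the stratum sizes and SDs below are invented for illustration, not values from the paper):

```python
def neyman_allocation(n_total, stratum_sizes, stratum_sds):
    """Neyman (optimal) allocation: allocate the total sample to stratum
    h in proportion to N_h * sigma_h. Rounding here is naive; a real
    design needs integer allocation that preserves the total."""
    weights = [N * s for N, s in zip(stratum_sizes, stratum_sds)]
    total = sum(weights)
    return [round(n_total * w / total) for w in weights]

# Hypothetical soil-moisture strata (sizes and SDs are illustrative only)
alloc = neyman_allocation(30, [100, 200, 100], [4.0, 2.0, 1.0])
```

    Strata with high variability receive disproportionately many samples, which is why stratified designs benefit most when moisture variance differs strongly between strata.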

  2. Determinants of neonatal mortality in rural and urban Nigeria: Evidence from a population-based national survey.

    PubMed

    Adewuyi, Emmanuel O; Zhao, Yun

    2017-02-01

    Significant reduction in the global burden of neonatal mortality was achieved through the millennium development goals. In Nigeria, however, only a marginal reduction was realized. This study assesses the rural-urban differences in neonatal mortality rate (NMR) and the associated risk factors in Nigeria. The dataset from the 2013 Nigeria demographic and health survey (NDHS), disaggregated by rural-urban residence (n = 20 449 and 9935, respectively), was explored using univariate, bivariate, and multivariable analysis. Complex samples analysis was applied to adjust for the unequal selection probabilities due to the multi-stage cluster sampling method used in the 2013 NDHS. The adjusted relationship between the outcome and predictor variables was assessed on multi-level logistic regression analysis. NMR for rural and urban populations was 36 and 28 deaths per 1000 live births, respectively. Risk factors in urban residence were lack of electricity access (adjusted OR [AOR], 1.555; 95%CI: 1.089-2.220), small birth size (as a proxy for low birthweight; AOR, 3.048; 95%CI: 2.047-4.537), and male gender (AOR, 1.666; 95%CI: 1.215-2.284). Risk factors in rural residence were small birth size (a proxy for low birthweight; AOR, 2.118; 95%CI: 1.600-2.804), and birth interval <2 years (AOR, 2.149; 95%CI: 1.760-2.624). Cesarean delivery was a risk factor both in rural (AOR, 5.038; 95%CI: 2.617-9.700) and urban Nigeria (AOR, 2.632; 95%CI: 1.543-4.489). Determinants of neonatal mortality were different in rural and urban Nigeria, and rural neonates had greater risk of mortality than their urban counterparts. © 2016 Japan Pediatric Society.

  3. Recruitment and enrollment of African Americans into health promoting programs: the effects of health promoting programs on cardiovascular disease risk study.

    PubMed

    Okhomina, Victoria I; Seals, Samantha R; Marshall, Gailen D

    2018-04-03

    Randomized controlled trials (RCTs) often employ multiple recruitment methods to attract participants; however, special care must be taken to be inclusive of under-represented populations. We examine how recruiting from an existing observational study affected the recruitment of African Americans into an RCT that included yoga-based interventions. In particular, we report the recruitment success of The Effects of Health Promoting Programs (HPP) on Cardiovascular Disease Risk (NCT02019953), the first yoga-based clinical trial to focus only on African Americans. To recruit participants, a multifaceted recruitment strategy was implemented exclusively in the Jackson Heart Study (JHS) cohort. The HPP recruited from the JHS cohort using direct mailings, signs and flyers placed around JHS study facilities, and through JHS annual follow-up interviews. Enrollment into HPP was open to all active JHS participants who were eligible to return for the third clinic exam (n = 4644). The target sample size was 375 JHS participants over a 24-month recruitment and enrollment period. From the active members of the JHS cohort, 503 were pre-screened for eligibility in HPP. More than 90% of those pre-screened were provisionally eligible for the study. The enrollment goal of 375 was completed after a 16-month enrollment period with over 25% (n = 97) of the required sample size enrolling during the second month of recruitment. The findings show that participants in observational studies can be successfully recruited into RCTs. Observational studies provide researchers with a well-defined population that may be of interest when designing clinical trials. This is particularly useful in the recruitment of high-risk, traditionally underrepresented populations for non-pharmacological clinical trials where traditional recruitment methods may prolong enrollment periods and extend study budgets.

  4. Loci influencing lipid levels and coronary heart disease risk in 16 European population cohorts

    PubMed Central

    Aulchenko, Yurii S; Ripatti, Samuli; Lindqvist, Ida; Boomsma, Dorret; Heid, Iris M; Pramstaller, Peter P; Penninx, Brenda W J H; Janssens, A Cecile J W; Wilson, James F; Spector, Tim; Martin, Nicholas G; Pedersen, Nancy L; Kyvik, Kirsten Ohm; Kaprio, Jaakko; Hofman, Albert; Freimer, Nelson B; Jarvelin, Marjo-Riitta; Gyllensten, Ulf; Campbell, Harry; Rudan, Igor; Johansson, Åsa; Marroni, Fabio; Hayward, Caroline; Vitart, Veronique; Jonasson, Inger; Pattaro, Cristian; Wright, Alan; Hastie, Nick; Pichler, Irene; Hicks, Andrew A; Falchi, Mario; Willemsen, Gonneke; Hottenga, Jouke-Jan; de Geus, Eco J C; Montgomery, Grant W; Whitfield, John; Magnusson, Patrik; Saharinen, Juha; Perola, Markus; Silander, Kaisa; Isaacs, Aaron; Sijbrands, Eric J G; Uitterlinden, Andre G; Witteman, Jacqueline C M; Oostra, Ben A; Elliott, Paul; Ruokonen, Aimo; Sabatti, Chiara; Gieger, Christian; Meitinger, Thomas; Kronenberg, Florian; Döring, Angela; Wichmann, H-Erich; Smit, Johannes H; McCarthy, Mark I; van Duijn, Cornelia M; Peltonen, Leena

    2009-01-01

    Recent genome-wide association (GWA) studies of lipids have been conducted in samples ascertained for other phenotypes, particularly diabetes. Here we report the first GWA analysis of loci affecting total cholesterol (TC), low-density lipoprotein (LDL) cholesterol, high-density lipoprotein (HDL) cholesterol and triglycerides sampled randomly from 16 population-based cohorts and genotyped using mainly the Illumina HumanHap300-Duo platform. Our study included a total of 17,797-22,562 persons, aged 18-104 years and from geographic regions spanning from the Nordic countries to Southern Europe. We established 22 loci associated with serum lipid levels at a genome-wide significance level (P < 5 × 10^-8), including 16 loci that were identified by previous GWA studies. The six newly identified loci in our cohort samples are ABCG5 (TC, P = 1.5 × 10^-11; LDL, P = 2.6 × 10^-10), TMEM57 (TC, P = 5.4 × 10^-10), CTCF-PRMT8 region (HDL, P = 8.3 × 10^-16), DNAH11 (LDL, P = 6.1 × 10^-9), FADS3-FADS2 (TC, P = 1.5 × 10^-10; LDL, P = 4.4 × 10^-13) and MADD-FOLH1 region (HDL, P = 6 × 10^-11). For three loci, effect sizes differed significantly by sex. Genetic risk scores based on lipid loci explain up to 4.8% of variation in lipids and were also associated with increased intima media thickness (P = 0.001) and coronary heart disease incidence (P = 0.04). The genetic risk score improves the screening of high-risk groups of dyslipidemia over classical risk factors. PMID:19060911
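    Genetic risk scores of the kind mentioned above are typically a weighted sum of risk-allele counts. A minimal sketch (the dosages and per-allele effect sizes below are invented for illustration and are not the study's estimates):

```python
def genetic_risk_score(allele_counts, betas):
    """Weighted genetic risk score: each variant contributes its
    risk-allele count (0, 1, or 2) times a per-allele effect size."""
    return sum(g * b for g, b in zip(allele_counts, betas))

# Hypothetical individual genotyped at three lipid loci
counts = [2, 0, 1]           # risk-allele dosages
betas = [0.08, 0.05, 0.12]   # illustrative per-allele effects
score = genetic_risk_score(counts, betas)
```

    Ranking individuals by such a score is what allows the screening of high-risk dyslipidemia groups described in the abstract.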

  5. Sample size for post-marketing safety studies based on historical controls.

    PubMed

    Wu, Yu-te; Makuch, Robert W

    2010-08-01

    As part of a drug's entire life cycle, post-marketing studies play an important part in the identification of rare, serious adverse events. Recently, the US Food and Drug Administration (FDA) has begun to implement new post-marketing safety mandates as a consequence of increased emphasis on safety. The purpose of this research is to provide an exact sample size formula for the proposed hybrid design, based on a two-group cohort study that incorporates historical external data. An exact sample size formula based on the Poisson distribution is developed, because the detection of rare events is the outcome of interest. The performance of the exact method is compared with its approximate, large-sample-theory counterpart. The proposed hybrid design requires a smaller sample size compared to the standard, two-group prospective study design. In addition, the exact method reduces the number of subjects required in the treatment group by up to 30% compared with the approximate method for the study scenarios examined. The proposed hybrid design retains the advantages and rationale of the two-group design with smaller sample sizes generally required. 2010 John Wiley & Sons, Ltd.
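
    The Poisson-based logic can be illustrated with a generic calculation (a minimal sketch, not the paper's hybrid-design formula, which incorporates historical controls): for a rare event with per-subject rate λ, the smallest cohort in which at least one event is seen with a given confidence follows from P(0 events) = exp(−nλ).

```python
import math

def min_n_to_observe_event(rate_per_subject: float, confidence: float = 0.95) -> int:
    """Smallest n such that P(at least one event) >= confidence when the
    number of events among n subjects is Poisson with mean n * rate_per_subject.
    Derived from P(0 events) = exp(-n * rate) <= 1 - confidence."""
    return math.ceil(-math.log(1.0 - confidence) / rate_per_subject)

# A rare adverse event occurring in roughly 1 per 1,000 subjects needs
# about 3,000 subjects to be seen at least once with 95% confidence.
print(min_n_to_observe_event(0.001))
```

    This is why rare-event safety studies are large by default, and why borrowing historical external data, as the hybrid design does, is attractive.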

  6. Reducing Individual Variation for fMRI Studies in Children by Minimizing Template Related Errors

    PubMed Central

    Weng, Jian; Dong, Shanshan; He, Hongjian; Chen, Feiyan; Peng, Xiaogang

    2015-01-01

    Spatial normalization is an essential process for group comparisons in functional MRI studies. In practice, there is a risk of normalization errors, particularly in studies involving children, seniors or diseased populations and in regions with high individual variation. One way to minimize normalization errors is to create a study-specific template based on a large sample size. However, studies with a large sample size are not always feasible, particularly for studies in children. The performance of templates with a small sample size has not been evaluated in fMRI studies in children. In the current study, this issue was encountered in a working memory task with 29 children in two groups. We compared the performance of different templates: a study-specific template created from the experimental population, a Chinese children template and the widely used adult MNI template. We observed distinct differences in the right orbitofrontal region among the three templates in between-group comparisons. The study-specific template and the Chinese children template were more sensitive for the detection of between-group differences in the orbitofrontal cortex than the MNI template. Proper templates could effectively reduce individual variation. Further analysis revealed a correlation between the BOLD contrast size and the norm index of the affine transformation matrix, i.e., the SFN, which characterizes the difference between a template and a native image and differs significantly across subjects. We therefore proposed and tested another method to reduce individual variation, which includes the SFN as a covariate in group-wise statistics. This correction exhibits outstanding performance in enhancing detection power in group-level tests. A training effect of abacus-based mental calculation was also demonstrated, with significantly elevated activation in the right orbitofrontal region that correlated with behavioral response time across subjects in the trained group. PMID:26207985

  7. Measuring Endocrine-active Chemicals at ng/L Concentrations in Water

    EPA Science Inventory

    Analytical chemistry challenges for supporting aquatic toxicity research and risk assessment are many: need for low detection limits, complex sample matrices, small sample size, and equipment limitations to name a few. Certain types of potent endocrine disrupting chemicals (EDCs)...

  8. Measures of precision for dissimilarity-based multivariate analysis of ecological communities.

    PubMed

    Anderson, Marti J; Santana-Garcon, Julia

    2015-01-01

    Ecological studies require key decisions regarding the appropriate size and number of sampling units. No methods currently exist to measure precision for multivariate assemblage data when dissimilarity-based analyses are intended to follow. Here, we propose a pseudo multivariate dissimilarity-based standard error (MultSE) as a useful quantity for assessing sample-size adequacy in studies of ecological communities. Based on sums of squared dissimilarities, MultSE measures variability in the position of the centroid in the space of a chosen dissimilarity measure under repeated sampling for a given sample size. We describe a novel double resampling method to quantify uncertainty in MultSE values with increasing sample size. For more complex designs, values of MultSE can be calculated from the pseudo residual mean square of a permanova model, with the double resampling done within appropriate cells in the design. R code functions for implementing these techniques, along with ecological examples, are provided. © 2014 The Authors. Ecology Letters published by John Wiley & Sons Ltd and CNRS.
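
    A minimal numeric sketch of MultSE, assuming the construction described above (sums of squared pairwise dissimilarities giving a pseudo variance V, with MultSE = sqrt(V/n)) and using Euclidean distance for simplicity; the function name and the one-group simplification are illustrative, not the paper's R functions.

```python
import math

def mult_se(points):
    """Pseudo multivariate standard error for one group of sample units.
    points: list of equal-length coordinate tuples.
    SS = (1/n) * sum of squared pairwise Euclidean dissimilarities (i < j),
    V = SS / (n - 1), MultSE = sqrt(V / n)."""
    n = len(points)
    ss = sum(
        sum((a - b) ** 2 for a, b in zip(points[i], points[j]))
        for i in range(n) for j in range(i + 1, n)
    ) / n
    v = ss / (n - 1)
    return math.sqrt(v / n)

# Two 1-D sample units a distance of 2 apart give MultSE = 1.0;
# replicating the same spread over more units shrinks the MultSE.
print(mult_se([(0.0,), (2.0,)]))
```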

  9. A Note on Sample Size and Solution Propriety for Confirmatory Factor Analytic Models

    ERIC Educational Resources Information Center

    Jackson, Dennis L.; Voth, Jennifer; Frey, Marc P.

    2013-01-01

    Determining an appropriate sample size for use in latent variable modeling techniques has presented ongoing challenges to researchers. In particular, small sample sizes are known to present concerns over sampling error for the variances and covariances on which model estimation is based, as well as for fit indexes and convergence failures. The…

  10. Requirements for Minimum Sample Size for Sensitivity and Specificity Analysis

    PubMed Central

    Adnan, Tassha Hilda

    2016-01-01

    Sensitivity and specificity analysis is commonly used for screening and diagnostic tests. The main issue researchers face is determining the sample sizes sufficient for screening and diagnostic studies. Although formulas for sample size calculation are available, the majority of researchers are not mathematicians or statisticians, so sample size calculation may not be easy for them. This review paper provides sample size tables for sensitivity and specificity analysis. These tables were derived from the formulation of the sensitivity and specificity test using Power Analysis and Sample Size (PASS) software, based on the desired type I error, power and effect size. Approaches for using the tables are also discussed. PMID:27891446
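
    The kind of formula behind such tables can be sketched. Buderer's approach, one common choice (and not necessarily the exact formulation PASS implements), sizes the study so that sensitivity is estimated within a margin d, then inflates for disease prevalence.

```python
from math import ceil
from statistics import NormalDist

def n_for_sensitivity(sens: float, d: float, prevalence: float, alpha: float = 0.05) -> int:
    """Buderer-style sample size for estimating sensitivity within +/- d.
    The binomial CI half-width formula gives the number of diseased subjects
    needed; dividing by prevalence gives the total screening sample size."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    n_diseased = z ** 2 * sens * (1 - sens) / d ** 2
    return ceil(n_diseased / prevalence)

# Expected sensitivity 0.90, margin 0.05, disease prevalence 10%
print(n_for_sensitivity(0.90, 0.05, 0.10))
```

    Note how strongly prevalence drives the total: halving it doubles the required screening sample.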

  11. The Number of Patients and Events Required to Limit the Risk of Overestimation of Intervention Effects in Meta-Analysis—A Simulation Study

    PubMed Central

    Thorlund, Kristian; Imberger, Georgina; Walsh, Michael; Chu, Rong; Gluud, Christian; Wetterslev, Jørn; Guyatt, Gordon; Devereaux, Philip J.; Thabane, Lehana

    2011-01-01

    Background Meta-analyses including a limited number of patients and events are prone to yield overestimated intervention effect estimates. While many assume bias is the cause of overestimation, theoretical considerations suggest that random error may be an equal or more frequent cause. The independent impact of random error on meta-analyzed intervention effects has not previously been explored. It has been suggested that surpassing the optimal information size (i.e., the required meta-analysis sample size) provides sufficient protection against overestimation due to random error, but this claim has not yet been validated. Methods We simulated a comprehensive array of meta-analysis scenarios where no intervention effect existed (i.e., relative risk reduction (RRR) = 0%) or where a small but possibly unimportant effect existed (RRR = 10%). We constructed different scenarios by varying the control group risk, the degree of heterogeneity, and the distribution of trial sample sizes. For each scenario, we calculated the probability of observing overestimates of RRR>20% and RRR>30% for each cumulative 500 patients and 50 events. We calculated the cumulative number of patients and events required to reduce the probability of overestimation of intervention effect to 10%, 5%, and 1%. We calculated the optimal information size for each of the simulated scenarios and explored whether meta-analyses that surpassed their optimal information size had sufficient protection against overestimation of intervention effects due to random error. Results The risk of overestimation of intervention effects was usually high when the number of patients and events was small and this risk decreased exponentially over time as the number of patients and events increased. The number of patients and events required to limit the risk of overestimation depended considerably on the underlying simulation settings. 
Surpassing the optimal information size generally provided sufficient protection against overestimation. Conclusions Random errors are a frequent cause of overestimation of intervention effects in meta-analyses. Surpassing the optimal information size will provide sufficient protection against overestimation. PMID:22028777
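
    The simulation idea can be sketched for a single scenario: repeatedly simulate a small two-arm study with binomial outcomes and no true effect, and count how often the observed relative risk reduction exceeds 30%. All parameters and the single-trial simplification below are illustrative, not the paper's settings.

```python
import random

def prob_overestimate(n_per_arm, control_risk=0.1, true_rrr=0.0,
                      threshold=0.3, reps=2000, seed=1):
    """Monte Carlo probability that the observed RRR exceeds `threshold`
    when the true RRR is `true_rrr` (binomial outcomes, one two-arm study)."""
    rng = random.Random(seed)
    treat_risk = control_risk * (1 - true_rrr)
    hits = 0
    for _ in range(reps):
        c = sum(rng.random() < control_risk for _ in range(n_per_arm))
        t = sum(rng.random() < treat_risk for _ in range(n_per_arm))
        if c == 0:
            continue  # relative risk undefined without control events
        rr = (t / n_per_arm) / (c / n_per_arm)
        if 1 - rr > threshold:
            hits += 1
    return hits / reps

# Overestimation by random error alone is common with few patients and
# events, and shrinks as the accumulated sample size grows.
print(prob_overestimate(50), prob_overestimate(1000))
```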

  12. Sample size considerations using mathematical models: an example with Chlamydia trachomatis infection and its sequelae pelvic inflammatory disease.

    PubMed

    Herzog, Sereina A; Low, Nicola; Berghold, Andrea

    2015-06-19

    The success of an intervention to prevent the complications of an infection is influenced by the natural history of the infection. Assumptions about the temporal relationship between infection and the development of sequelae can affect the predicted effect size of an intervention and the sample size calculation. This study investigates how a mathematical model can be used to inform sample size calculations for a randomised controlled trial (RCT) using the example of Chlamydia trachomatis infection and pelvic inflammatory disease (PID). We used a compartmental model to imitate the structure of a published RCT. We considered three different processes for the timing of PID development, in relation to the initial C. trachomatis infection: immediate, constant throughout, or at the end of the infectious period. For each process we assumed that, of all women infected, the same fraction would develop PID in the absence of an intervention. We examined two sets of assumptions used to calculate the sample size in a published RCT that investigated the effect of chlamydia screening on PID incidence. We also investigated the influence of the natural history parameters of chlamydia on the required sample size. The assumed event rates and effect sizes used for the sample size calculation implicitly determined the temporal relationship between chlamydia infection and PID in the model. Even small changes in the assumed PID incidence and relative risk (RR) led to considerable differences in the hypothesised mechanism of PID development. The RR and the sample size needed per group also depend on the natural history parameters of chlamydia. Mathematical modelling helps to understand the temporal relationship between an infection and its sequelae and can show how uncertainties about natural history parameters affect sample size calculations when planning a RCT.
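
    The sensitivity of the required sample size to the assumed event rates can be illustrated with the standard two-proportion formula (a generic calculation, separate from the compartmental model itself); the incidence values below are hypothetical.

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(p1: float, p2: float, alpha: float = 0.05, power: float = 0.8) -> int:
    """Sample size per arm for detecting a difference between two incidence
    proportions p1 and p2, using the normal-approximation formula
    n = (z_{a/2} + z_b)^2 * (p1*q1 + p2*q2) / (p1 - p2)^2."""
    nd = NormalDist()
    z_a = nd.inv_cdf(1 - alpha / 2)
    z_b = nd.inv_cdf(power)
    num = (z_a + z_b) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2))
    return ceil(num / (p1 - p2) ** 2)

# Hypothetical PID incidences: control 3%, intervention 1.5% (RR 0.5)
# versus a weaker effect, intervention 2% (RR 0.67).
print(n_per_arm(0.03, 0.015), n_per_arm(0.03, 0.02))
```

    Small shifts in the assumed incidence or relative risk more than double the required n, which is the abstract's central point about implicit natural-history assumptions.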

  13. The value of customised centiles in assessing perinatal mortality risk associated with parity and maternal size.

    PubMed

    Gardosi, J; Clausson, B; Francis, A

    2009-09-01

    We wanted to compare customised and population standards for defining smallness for gestational age (SGA) in the assessment of perinatal mortality risk associated with parity and maternal size. Population-based cohort study. Sweden. Swedish Birth Registry database 1992-1995 with 354 205 complete records. Coefficients were derived and applied to determine SGA by the fully customised method, or by adjustment for fetal sex only, and using the same fetal weight standard. Perinatal deaths and rates of small for gestational age (SGA) babies within subgroups stratified by parity, body mass index (BMI) and maternal size within the BMI range of 20.0-24.9. Perinatal mortality rates (PMR) had a U-shaped distribution in parity groups, increased proportionately with maternal BMI, and had no association with maternal size within the normal BMI range. For each of these subgroups, SGA rates determined by the customised method showed strong association with the PMR. In contrast, SGA based on uncustomised, population-based centiles had poor correlation with perinatal mortality. The increased perinatal mortality risk in pregnancies of obese mothers was associated with an increased risk of SGA using customised centiles, and a decreased risk of SGA using population-based centiles. The use of customised centiles to determine SGA improves the identification of pregnancies which are at increased risk of perinatal death.

  14. Supervised Risk Predictor of Breast Cancer Based on Intrinsic Subtypes

    PubMed Central

    Parker, Joel S.; Mullins, Michael; Cheang, Maggie C.U.; Leung, Samuel; Voduc, David; Vickery, Tammi; Davies, Sherri; Fauron, Christiane; He, Xiaping; Hu, Zhiyuan; Quackenbush, John F.; Stijleman, Inge J.; Palazzo, Juan; Marron, J.S.; Nobel, Andrew B.; Mardis, Elaine; Nielsen, Torsten O.; Ellis, Matthew J.; Perou, Charles M.; Bernard, Philip S.

    2009-01-01

    Purpose To improve on current standards for breast cancer prognosis and prediction of chemotherapy benefit by developing a risk model that incorporates the gene expression–based “intrinsic” subtypes luminal A, luminal B, HER2-enriched, and basal-like. Methods A 50-gene subtype predictor was developed using microarray and quantitative reverse transcriptase polymerase chain reaction data from 189 prototype samples. Test sets from 761 patients (no systemic therapy) were evaluated for prognosis, and 133 patients were evaluated for prediction of pathologic complete response (pCR) to a taxane and anthracycline regimen. Results The intrinsic subtypes as discrete entities showed prognostic significance (P = 2.26E-12) and remained significant in multivariable analyses that incorporated standard parameters (estrogen receptor status, histologic grade, tumor size, and node status). A prognostic model for node-negative breast cancer was built using intrinsic subtype and clinical information. The C-index estimate for the combined model (subtype and tumor size) was a significant improvement on either the clinicopathologic model or subtype model alone. The intrinsic subtype model predicted neoadjuvant chemotherapy efficacy with a negative predictive value for pCR of 97%. Conclusion Diagnosis by intrinsic subtype adds significant prognostic and predictive information to standard parameters for patients with breast cancer. The prognostic properties of the continuous risk score will be of value for the management of node-negative breast cancers. The subtypes and risk score can also be used to assess the likelihood of efficacy from neoadjuvant chemotherapy. PMID:19204204

  15. Sensitivity and specificity of normality tests and consequences on reference interval accuracy at small sample size: a computer-simulation study.

    PubMed

    Le Boedec, Kevin

    2016-12-01

    According to international guidelines, parametric methods must be chosen for RI construction when the sample size is small and the distribution is Gaussian. However, normality tests may not be accurate at small sample size. The purpose of the study was to evaluate normality test performance to properly identify samples extracted from a Gaussian population at small sample sizes, and assess the consequences on RI accuracy of applying parametric methods to samples that falsely identified the parent population as Gaussian. Samples of n = 60 and n = 30 values were randomly selected 100 times from simulated Gaussian, lognormal, and asymmetric populations of 10,000 values. The sensitivity and specificity of 4 normality tests were compared. Reference intervals were calculated using 6 different statistical methods from samples that falsely identified the parent population as Gaussian, and their accuracy was compared. Shapiro-Wilk and D'Agostino-Pearson tests were the best performing normality tests. However, their specificity was poor at sample size n = 30 (specificity for P < .05: .51 and .50, respectively). The best significance levels identified when n = 30 were 0.19 for the Shapiro-Wilk test and 0.18 for the D'Agostino-Pearson test. Using parametric methods on samples extracted from a lognormal population but falsely identified as Gaussian led to clinically relevant inaccuracies. At small sample size, normality tests may lead to erroneous use of parametric methods to build RI. Using nonparametric methods (or alternatively Box-Cox transformation) on all samples regardless of their distribution, or adjusting the significance level of normality tests depending on sample size, would limit the risk of constructing inaccurate RI. © 2016 American Society for Veterinary Clinical Pathology.
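
    The small-sample behavior of a normality test is easy to reproduce. A sketch using scipy's Shapiro-Wilk test (the repetition count and the lognormal alternative are illustrative, not the study's exact simulated populations):

```python
import numpy as np
from scipy import stats

def rejection_rate(sampler, n, reps=500, alpha=0.05, seed=0):
    """Fraction of simulated samples of size n that the Shapiro-Wilk test
    rejects at level alpha. `sampler(rng, n)` draws one sample."""
    rng = np.random.default_rng(seed)
    rejections = sum(
        stats.shapiro(sampler(rng, n))[1] < alpha for _ in range(reps)
    )
    return rejections / reps

# The rejection rate for truly Gaussian parents stays near alpha; the
# rate for a lognormal parent shows the test's power at n = 30, which
# drops further for milder departures from normality.
gaussian_rate = rejection_rate(lambda rng, n: rng.normal(size=n), 30)
lognormal_rate = rejection_rate(lambda rng, n: rng.lognormal(size=n), 30)
print(gaussian_rate, lognormal_rate)
```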

  16. Blinded sample size re-estimation in three-arm trials with 'gold standard' design.

    PubMed

    Mütze, Tobias; Friede, Tim

    2017-10-15

    In this article, we study blinded sample size re-estimation in the 'gold standard' design with internal pilot study for normally distributed outcomes. The 'gold standard' design is a three-arm clinical trial design that includes an active and a placebo control in addition to an experimental treatment. We focus on the absolute margin approach to hypothesis testing in three-arm trials, in which the non-inferiority of the experimental treatment and the assay sensitivity are assessed by pairwise comparisons. We compare several blinded sample size re-estimation procedures in a simulation study assessing operating characteristics including power and type I error. We find that sample size re-estimation based on the popular one-sample variance estimator results in overpowered trials. Moreover, sample size re-estimation based on unbiased variance estimators such as the Xing-Ganju variance estimator results in underpowered trials, as expected, because an overestimation of the variance and thus the sample size is in general required for the re-estimation procedure to eventually meet the target power. To overcome this problem, we propose an inflation factor for the sample size re-estimation with the Xing-Ganju variance estimator and show that this approach results in adequately powered trials. Because of favorable features of the Xing-Ganju variance estimator such as unbiasedness and a distribution independent of the group means, the inflation factor does not depend on the nuisance parameter and, therefore, can be calculated prior to a trial. Moreover, we prove that the sample size re-estimation based on the Xing-Ganju variance estimator does not bias the effect estimate. Copyright © 2017 John Wiley & Sons, Ltd.

  17. Confidence intervals for population allele frequencies: the general case of sampling from a finite diploid population of any size.

    PubMed

    Fung, Tak; Keenan, Kevin

    2014-01-01

    The estimation of population allele frequencies using sample data forms a central component of studies in population genetics. These estimates can be used to test hypotheses on the evolutionary processes governing changes in genetic variation among populations. However, existing studies frequently do not account for sampling uncertainty in these estimates, thus compromising their utility. Incorporation of this uncertainty has been hindered by the lack of a method for constructing confidence intervals containing the population allele frequencies, for the general case of sampling from a finite diploid population of any size. In this study, we address this important knowledge gap by presenting a rigorous mathematical method to construct such confidence intervals. For a range of scenarios, the method is used to demonstrate that for a particular allele, in order to obtain accurate estimates within 0.05 of the population allele frequency with high probability (≥95%), a sample size of >30 is often required. This analysis is augmented by an application of the method to empirical sample allele frequency data for two populations of the checkerspot butterfly (Melitaea cinxia L.), occupying meadows in Finland. For each population, the method is used to derive ≥98.3% confidence intervals for the population frequencies of three alleles. These intervals are then used to construct two joint ≥95% confidence regions, one for the set of three frequencies for each population. These regions are then used to derive a ≥95% confidence interval for Jost's D, a measure of genetic differentiation between the two populations. Overall, the results demonstrate the practical utility of the method with respect to informing sampling design and accounting for sampling uncertainty in studies of population genetics, important for scientific hypothesis-testing and also for risk-based natural resource management.
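
    A simplified version of such an interval can be sketched with the Clopper-Pearson construction, treating sampled gene copies as independent binomial draws (an infinite-population simplification; the paper's method additionally handles finite diploid populations of any size).

```python
from scipy import stats

def allele_freq_ci(x: int, n_alleles: int, conf: float = 0.95):
    """Clopper-Pearson exact CI for an allele frequency, with x copies of
    the allele observed among n_alleles sampled gene copies (2 per diploid).
    Uses the standard beta-distribution form of the exact binomial CI."""
    alpha = 1 - conf
    lo = 0.0 if x == 0 else stats.beta.ppf(alpha / 2, x, n_alleles - x + 1)
    hi = 1.0 if x == n_alleles else stats.beta.ppf(1 - alpha / 2, x + 1, n_alleles - x)
    return lo, hi

# 12 copies of an allele among 2 * 30 = 60 gene copies (30 diploids)
print(allele_freq_ci(12, 60))
```

    Even at n = 30 diploids the interval is wide, consistent with the paper's finding that >30 individuals are often needed for 0.05 accuracy.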

  18. A note on sample size calculation for mean comparisons based on noncentral t-statistics.

    PubMed

    Chow, Shein-Chung; Shao, Jun; Wang, Hansheng

    2002-11-01

    One-sample and two-sample t-tests are commonly used in analyzing data from clinical trials in comparing mean responses from two drug products. During the planning stage of a clinical study, a crucial step is the sample size calculation, i.e., the determination of the number of subjects (patients) needed to achieve a desired power (e.g., 80%) for detecting a clinically meaningful difference in the mean drug responses. Based on noncentral t-distributions, we derive some sample size calculation formulas for testing equality, testing therapeutic noninferiority/superiority, and testing therapeutic equivalence, under the popular one-sample design, two-sample parallel design, and two-sample crossover design. Useful tables are constructed and some examples are given for illustration.
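
    The two-sample equality case can be sketched numerically: power comes from a noncentral t distribution with noncentrality δ√(n/2), and the required n is the smallest integer meeting the target power (an iterative search standing in for the paper's closed-form formulas).

```python
import math
from scipy import stats

def power_two_sample(n: int, delta: float, alpha: float = 0.05) -> float:
    """Power of a two-sided two-sample t-test with n subjects per group,
    where delta is the standardized mean difference (difference / sigma)."""
    df = 2 * n - 2
    nc = delta * math.sqrt(n / 2)          # noncentrality parameter
    tcrit = stats.t.ppf(1 - alpha / 2, df)
    return (1 - stats.nct.cdf(tcrit, df, nc)) + stats.nct.cdf(-tcrit, df, nc)

def n_per_group(delta: float, power: float = 0.8, alpha: float = 0.05) -> int:
    """Smallest n per group achieving the target power."""
    n = 2
    while power_two_sample(n, delta, alpha) < power:
        n += 1
    return n

# Standardized difference of 0.5, 80% power, two-sided alpha = 0.05
print(n_per_group(0.5))
```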

  19. [Comparative quality measurements part 3: funnel plots].

    PubMed

    Kottner, Jan; Lahmann, Nils

    2014-02-01

    Comparative quality measurements between organisations or institutions are common. Quality measures need to be standardised and risk adjusted. Random error must also be taken adequately into account. Rankings without consideration of the precision lead to flawed interpretations and encourage "gaming". Application of confidence intervals is one possibility to take chance variation into account. Funnel plots are modified control charts based on Statistical Process Control (SPC) theory. The quality measures are plotted against their sample size, and warning and control limits 2 or 3 standard deviations from the center line are added. With increasing group size the precision increases, so the control limits form a funnel. Data points within the control limits are considered to show common cause variation; data points outside them, special cause variation, without the distraction of spurious rankings. Funnel plots offer data-based information about how to evaluate institutional performance within quality management contexts.
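
    The funnel itself is straightforward to compute under a normal approximation for a proportion-type measure (exact binomial limits are also used in practice; this sketch uses the simpler form).

```python
from math import sqrt
from statistics import NormalDist

def funnel_limits(p_bar: float, n: int, z: float):
    """Lower/upper funnel plot limits for a proportion-type quality measure
    with overall mean p_bar and institution sample size n, via
    p_bar +/- z * sqrt(p_bar * (1 - p_bar) / n), clamped to [0, 1]."""
    half = z * sqrt(p_bar * (1 - p_bar) / n)
    return max(0.0, p_bar - half), min(1.0, p_bar + half)

z_warn = NormalDist().inv_cdf(0.975)   # ~2 SD warning limits
# Quadrupling an institution's sample size halves the limit width,
# which is what makes the plotted limits funnel-shaped.
print(funnel_limits(0.10, 100, z_warn), funnel_limits(0.10, 400, z_warn))
```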

  20. Can an Internet-based health risk assessment highlight problems of heart disease risk factor awareness? A cross-sectional analysis.

    PubMed

    Dickerson, Justin B; McNeal, Catherine J; Tsai, Ginger; Rivera, Cathleen M; Smith, Matthew Lee; Ohsfeldt, Robert L; Ory, Marcia G

    2014-04-18

    Health risk assessments are becoming more popular as a tool to conveniently and effectively reach community-dwelling adults who may be at risk for serious chronic conditions such as coronary heart disease (CHD). The use of such instruments to improve adults' risk factor awareness and concordance with clinically measured risk factor values could be an opportunity to advance public health knowledge and build effective interventions. The objective of this study was to determine if an Internet-based health risk assessment can highlight important aspects of agreement between respondents' self-reported and clinically measured CHD risk factors for community-dwelling adults who may be at risk for CHD. Data from an Internet-based cardiovascular health risk assessment (Heart Aware) administered to community-dwelling adults at 127 clinical sites were analyzed. Respondents were recruited through individual hospital marketing campaigns, such as media advertising and print media, found throughout inpatient and outpatient facilities. CHD risk factors from the Framingham Heart Study were examined. Weighted kappa statistics were calculated to measure interrater agreement between respondents' self-reported and clinically measured CHD risk factors. Weighted kappa statistics were then calculated for each sample by strata of overall 10-year CHD risk. Three samples were drawn based on strategies for treating missing data: a listwise deleted sample, a pairwise deleted sample, and a multiple imputation (MI) sample. The MI sample (n=16,879) was most appropriate for addressing missing data. No CHD risk factor had better than marginal interrater agreement (κ>.60). High-density lipoprotein cholesterol (HDL-C) exhibited suboptimal interrater agreement that deteriorated (eg, κ<.30) as overall CHD risk increased. Conversely, low-density lipoprotein cholesterol (LDL-C) interrater agreement improved (eg, up to κ=.25) as overall CHD risk increased. 
Overall CHD risk of the sample was lower than comparative population-based CHD risk (ie, no more than 15% risk of CHD for the sample vs up to a 30% chance of CHD for the population). Interventions are needed to improve knowledge of CHD risk factors. Specific interventions should address perceptions of HDL-C and LDL-C. Internet-based health risk assessments such as Heart Aware may contribute to public health surveillance, but they must address selection bias of Internet-based recruitment methods.
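
    Weighted kappa, the agreement measure used here, can be sketched with linear weights (quadratic weights are another common choice; the category labels and ratings below are made up for illustration, not study data).

```python
from collections import Counter

def weighted_kappa(ratings_a, ratings_b, categories):
    """Linearly weighted kappa for two ordinal ratings of the same subjects.
    Weights w_ij = |i - j| / (k - 1) penalize disagreement by distance;
    kappa = 1 - (observed weighted disagreement / expected under independence)."""
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    n = len(ratings_a)
    obs = Counter(zip(ratings_a, ratings_b))
    m1, m2 = Counter(ratings_a), Counter(ratings_b)

    def w(a, b):
        return abs(idx[a] - idx[b]) / (k - 1)

    d_obs = sum(w(a, b) * c for (a, b), c in obs.items()) / n
    d_exp = sum(w(a, b) * m1[a] * m2[b] for a in categories for b in categories) / n ** 2
    return 1 - d_obs / d_exp

cats = ["low", "borderline", "high"]
self_reported = ["low", "high", "borderline", "low", "high", "low"]
clinical = ["low", "high", "high", "borderline", "high", "low"]
print(weighted_kappa(self_reported, clinical, cats))
```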

  1. Hypotheses and fundamental study design characteristics for evaluating potential reduced-risk tobacco products. Part I: Heuristic.

    PubMed

    Murrelle, Lenn; Coggins, Christopher R E; Gennings, Chris; Carchman, Richard A; Carter, Walter H; Davies, Bruce D; Krauss, Marc R; Lee, Peter N; Schleef, Raymond R; Zedler, Barbara K; Heidbreder, Christian

    2010-06-01

    The risk-reducing effect of a potential reduced-risk tobacco product (PRRP) can be investigated conceptually in a long-term, prospective study of disease risks among cigarette smokers who switch to a PRRP and in appropriate comparison groups. Our objective was to provide guidance for establishing the fundamental design characteristics of a study intended to (1) determine if switching to a PRRP reduces the risk of lung cancer (LC) compared with continued cigarette smoking, and (2) compare, using a non-inferiority approach, the reduction in LC risk among smokers who switched to a PRRP to the reduction in risk among smokers who quit smoking entirely. Using standard statistical methods applied to published data on LC incidence after smoking cessation, we show that the sample size and duration required for a study designed to evaluate the potential for LC risk reduction for an already marketed PRRP, compared with continued smoking, varies depending on the LC risk-reducing effectiveness of the PRRP, from a 5-year study with 8000-30,000 subjects to a 15-year study with <5000 to 10,000 subjects. To assess non-inferiority to quitting, the required sample size tends to be about 10 times greater, again depending on the effectiveness of the PRRP. (c) 2009 Elsevier Inc. All rights reserved.

  2. An Analytic Solution to the Computation of Power and Sample Size for Genetic Association Studies under a Pleiotropic Mode of Inheritance.

    PubMed

    Gordon, Derek; Londono, Douglas; Patel, Payal; Kim, Wonkuk; Finch, Stephen J; Heiman, Gary A

    2016-01-01

    Our motivation here is to calculate the power of 3 statistical tests used when there are genetic traits that operate under a pleiotropic mode of inheritance and when qualitative phenotypes are defined by use of thresholds for the multiple quantitative phenotypes. Specifically, we formulate a multivariate function that provides the probability that an individual has a vector of specific quantitative trait values conditional on having a risk locus genotype, and we apply thresholds to define qualitative phenotypes (affected, unaffected) and compute penetrances and conditional genotype frequencies based on the multivariate function. We extend the analytic power and minimum-sample-size-necessary (MSSN) formulas for 2 categorical data-based tests (genotype, linear trend test [LTT]) of genetic association to the pleiotropic model. We further compare the MSSN of the genotype test and the LTT with that of a multivariate ANOVA (Pillai). We approximate the MSSN for statistics by linear models using a factorial design and ANOVA. With ANOVA decomposition, we determine which factors most significantly change the power/MSSN for all statistics. Finally, we determine which test statistics have the smallest MSSN. In this work, MSSN calculations are for 2 traits (bivariate distributions) only (for illustrative purposes). We note that the calculations may be extended to address any number of traits. Our key findings are that the genotype test usually has lower MSSN requirements than the LTT. More inclusive thresholds (top/bottom 25% vs. top/bottom 10%) have higher sample size requirements. The Pillai test has a much larger MSSN than both the genotype test and the LTT, as a result of sample selection. With these formulas, researchers can specify how many subjects they must collect to localize genes for pleiotropic phenotypes. © 2017 S. Karger AG, Basel.

  3. Publication Bias in Psychology: A Diagnosis Based on the Correlation between Effect Size and Sample Size

    PubMed Central

    Kühberger, Anton; Fritz, Astrid; Scherndl, Thomas

    2014-01-01

    Background The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent of sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. Methods We investigate whether effect size is independent of sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, and calculated the correlation between effect size and sample size, and investigated the distribution of p values. Results We found a negative correlation of r = −.45 [95% CI: −.53; −.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. Conclusion The negative correlation between effect size and sample size, and the biased distribution of p values indicate pervasive publication bias in the entire field of psychology. PMID:25192357
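
    The reported correlation can be given a standard confidence interval via the Fisher z-transform (a textbook calculation, not necessarily the authors' exact method; the r and n below are the record's reported values).

```python
import math
from statistics import NormalDist

def pearson_r_ci(r: float, n: int, conf: float = 0.95):
    """Confidence interval for a Pearson correlation via the Fisher
    z-transform: z = atanh(r) is approximately normal with SE 1/sqrt(n - 3)."""
    z = math.atanh(r)
    se = 1 / math.sqrt(n - 3)
    zcrit = NormalDist().inv_cdf(0.5 + conf / 2)
    return math.tanh(z - zcrit * se), math.tanh(z + zcrit * se)

# r = -.45 estimated from 1,000 sampled articles
print(pearson_r_ci(-0.45, 1000))
```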

  5. Nonantibiotic prophylaxis for recurrent urinary tract infections: a systematic review and meta-analysis of randomized controlled trials.

    PubMed

    Beerepoot, M A J; Geerlings, S E; van Haarst, E P; van Charante, N Mensing; ter Riet, G

    2013-12-01

    Increasing antimicrobial resistance has stimulated interest in nonantibiotic prophylaxis of recurrent urinary tract infections. We assessed the effectiveness, tolerability and safety of nonantibiotic prophylaxis in adults with recurrent urinary tract infections. MEDLINE®, EMBASE™, the Cochrane Library and reference lists of relevant reviews were searched to April 2013 for relevant English language citations. Two reviewers selected randomized controlled trials that met the predefined criteria for population, interventions and outcomes. The difference in the proportions of patients with at least 1 urinary tract infection was calculated for individual studies, and pooled risk ratios were calculated using random and fixed effects models. Adverse event rates were also extracted. The Jadad score was used to assess risk of bias (0 to 2 = high risk; 3 to 5 = low risk). We identified 5,413 records and included 17 studies with data for 2,165 patients. The oral immunostimulant OM-89 decreased the rate of urinary tract infection recurrence (4 trials, sample size 891, median Jadad score 3, RR 0.61, 95% CI 0.48-0.78) and had a good safety profile. The vaginal vaccine Urovac® slightly reduced urinary tract infection recurrence (3 trials, sample size 220, Jadad score 3, RR 0.81, 95% CI 0.68-0.96) and primary immunization followed by booster immunization increased the time to reinfection. Vaginal estrogens showed a trend toward preventing urinary tract infection recurrence (2 trials, sample size 201, Jadad score 2.5, RR 0.42, 95% CI 0.16-1.10) but vaginal irritation occurred in 6% to 20% of women. Cranberries decreased urinary tract infection recurrence (2 trials, sample size 250, Jadad score 4, RR 0.53, 95% CI 0.33-0.83) as did acupuncture (2 open label trials, sample size 165, Jadad score 2, RR 0.48, 95% CI 0.29-0.79). Oral estrogens and lactobacilli prophylaxis did not decrease the rate of urinary tract infection recurrence.
The evidence of the effectiveness of the oral immunostimulant OM-89 is promising. Although sometimes statistically significant, pooled findings for the other interventions should be considered tentative until corroborated by more research. Large head-to-head trials should be performed to optimally inform clinical decision making. Copyright © 2013 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
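The pooled risk ratios reported above come from standard inverse-variance meta-analysis. A minimal fixed-effect sketch (the trial values below are hypothetical, not the review's data):

```python
import math

# (log risk ratio, variance of log risk ratio) for three hypothetical trials
studies = [(math.log(0.55), 0.04), (math.log(0.70), 0.02), (math.log(0.62), 0.03)]

# Fixed-effect pooling: weight each study by the inverse of its variance.
weights = [1.0 / v for _, v in studies]
pooled_log_rr = sum(w * lr for (lr, _), w in zip(studies, weights)) / sum(weights)
se = math.sqrt(1.0 / sum(weights))

rr = math.exp(pooled_log_rr)
lo, hi = math.exp(pooled_log_rr - 1.96 * se), math.exp(pooled_log_rr + 1.96 * se)
print(f"pooled RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

A random-effects pool works the same way after widening each study's variance by a between-study component (e.g., the DerSimonian-Laird τ²).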

  6. Confidence intervals for the population mean tailored to small sample sizes, with applications to survey sampling.

    PubMed

    Rosenblum, Michael A; van der Laan, Mark J

    2009-01-07

    The validity of standard confidence intervals constructed in survey sampling is based on the central limit theorem. For small sample sizes, the central limit theorem may give a poor approximation, resulting in confidence intervals that are misleading. We discuss this issue and propose methods for constructing confidence intervals for the population mean tailored to small sample sizes. We present a simple approach for constructing confidence intervals for the population mean based on tail bounds for the sample mean that are correct for all sample sizes. Bernstein's inequality provides one such tail bound. The resulting confidence intervals have guaranteed coverage probability under much weaker assumptions than are required for standard methods. A drawback of this approach, as we show, is that these confidence intervals are often quite wide. In response to this, we present a method for constructing much narrower confidence intervals, which are better suited for practical applications, and that are still more robust than confidence intervals based on standard methods, when dealing with small sample sizes. We show how to extend our approaches to much more general estimation problems than estimating the sample mean. We describe how these methods can be used to obtain more reliable confidence intervals in survey sampling. As a concrete example, we construct confidence intervals using our methods for the number of violent deaths between March 2003 and July 2006 in Iraq, based on data from the study "Mortality after the 2003 invasion of Iraq: A cross sectional cluster sample survey," by Burnham et al. (2006).
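A sketch of the tail-bound approach described above, assuming observations bounded in [0, b] and using the empirical variance as a stand-in for the true variance (the paper's own construction differs in detail):

```python
import math

def bernstein_ci(xs, b, alpha=0.05):
    """CI for the mean of observations assumed to lie in [0, b].

    The half-width t solves 2*exp(-n*t^2 / (2*var + 2*b*t/3)) = alpha,
    i.e. the quadratic n*t^2 - (2b/3)*L*t - 2*var*L = 0 with L = log(2/alpha).
    """
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    L = math.log(2.0 / alpha)
    t = ((2 * b / 3) * L
         + math.sqrt(((2 * b / 3) * L) ** 2 + 8 * n * var * L)) / (2 * n)
    return mean - t, mean + t

data = [0.2, 0.4, 0.1, 0.5, 0.3, 0.25, 0.35, 0.45] * 10   # 80 bounded values
lo, hi = bernstein_ci(data, b=1.0)
print(f"95% CI for the mean: [{lo:.3f}, {hi:.3f}]")
```

The width shrinks roughly like 1/√n once the variance term dominates, but for very small n the b-dependent term keeps the interval honest, which is exactly the wide-but-valid behaviour the authors describe.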

  7. The effect of machine learning regression algorithms and sample size on individualized behavioral prediction with functional connectivity features.

    PubMed

    Cui, Zaixu; Gong, Gaolang

    2018-06-02

    Individualized behavioral/cognitive prediction using machine learning (ML) regression approaches is being applied increasingly widely. The specific ML regression algorithm and sample size are two key factors that non-trivially influence prediction accuracies. However, the effects of the ML regression algorithm and sample size on individualized behavioral/cognitive prediction performance have not been comprehensively assessed. To address this issue, the present study included six commonly used ML regression algorithms: ordinary least squares (OLS) regression, least absolute shrinkage and selection operator (LASSO) regression, ridge regression, elastic-net regression, linear support vector regression (LSVR), and relevance vector regression (RVR), to perform specific behavioral/cognitive predictions based on different sample sizes. Specifically, the publicly available resting-state functional MRI (rs-fMRI) dataset from the Human Connectome Project (HCP) was used, and whole-brain resting-state functional connectivity (rsFC) or rsFC strength (rsFCS) were extracted as prediction features. Twenty-five sample sizes (ranging from 20 to 700) were studied by sub-sampling from the entire HCP cohort. The analyses showed that rsFC-based LASSO regression performed remarkably worse than the other algorithms, and rsFCS-based OLS regression performed markedly worse than the other algorithms. Regardless of the algorithm and feature type, both the prediction accuracy and its stability exponentially increased with increasing sample size. The specific patterns of the observed algorithm and sample size effects were well replicated in the prediction using re-testing fMRI data, data processed by different imaging preprocessing schemes, and different behavioral/cognitive scores, thus indicating excellent robustness/generalization of the effects.
The current findings provide critical insight into how the selected ML regression algorithm and sample size influence individualized predictions of behavior/cognition and offer important guidance for choosing the ML regression algorithm or sample size in relevant investigations. Copyright © 2018 Elsevier Inc. All rights reserved.
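A toy version of this experiment, using closed-form OLS and ridge (two of the six algorithms) fitted on synthetic Gaussian features in place of the rsFC features, with each training subsample evaluated against a held-out set via the Pearson r between predicted and observed scores:

```python
import numpy as np

rng = np.random.default_rng(42)
p = 30
X_full = rng.normal(size=(700, p))            # 700 "subjects", 30 features
beta = rng.normal(size=p)                     # true (unknown) weights
y_full = X_full @ beta + rng.normal(scale=4.0, size=700)

X_test, y_test = X_full[600:], y_full[600:]   # held-out evaluation set

def fit(X, y, lam=0.0):
    """Closed-form (ridge-regularized) least squares: (X'X + lam*I)^-1 X'y."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

results = {}
for n in (40, 150, 600):                      # a few of the 25 sample sizes
    idx = rng.choice(600, size=n, replace=False)
    X, y = X_full[idx], y_full[idx]
    for name, lam in (("OLS", 0.0), ("ridge", 10.0)):
        pred = X_test @ fit(X, y, lam)
        results[(n, name)] = np.corrcoef(pred, y_test)[0, 1]
        print(f"n={n:3d}  {name:5s}  r={results[(n, name)]:.2f}")
```

Even in this toy setting, accuracy generally climbs and stabilizes with n, and unregularized OLS suffers most when the sample size is close to the feature count, echoing the study's pattern.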

  8. Mineralogical, chemical and toxicological characterization of urban air particles.

    PubMed

    Čupr, Pavel; Flegrová, Zuzana; Franců, Juraj; Landlová, Linda; Klánová, Jana

    2013-04-01

    Systematic characterization of morphological, mineralogical, chemical and toxicological properties of various size fractions of the atmospheric particulate matter was the main focus of this study, together with an assessment of the human health risks they pose. Even though near-ground atmospheric aerosols have been a subject of intensive research in recent years, data integrating chemical composition of particles and health risks are still scarce and the particle size aspect has not been properly addressed yet. Filling this gap, however, is necessary for reliable risk assessment. A high volume ambient air sampler equipped with a multi-stage cascade impactor was used for size specific particle collection, and all six fractions were subjected to detailed characterization of the chemical (PAHs) and mineralogical composition of the particles, their mass size distribution and the genotoxic potential of organic extracts. Finally, the risk level for inhalation exposure associated with the carcinogenic character of the studied PAHs has been assessed. The finest fraction (<0.45 μm) exhibited the highest mass, highest active surface, highest amount of associated PAHs and also highest direct and indirect genotoxic potentials in our model air sample. Risk assessment of the inhalation scenario indicates significant cancer risk values in the PM1.5 size fraction. This new approach proved to be a useful tool for human health risk assessment in areas with significant levels of airborne dust. Copyright © 2013 Elsevier Ltd. All rights reserved.

  9. Evidence based management of polyps of the gall bladder: A systematic review of the risk factors of malignancy.

    PubMed

    Bhatt, Nikita R; Gillis, Amy; Smoothey, Craig O; Awan, Faisal N; Ridgway, Paul F

    2016-10-01

    There are no evidence-based guidelines to dictate when Gallbladder Polyps (GBPs) of varying sizes should be resected. The aims were to identify factors that accurately predict malignant disease in GBPs and to provide an evidence-based algorithm for management. A systematic review following PRISMA guidelines was performed using terms "gallbladder polyps" AND "polypoid lesion of gallbladder", between January 1993 and September 2013. Inclusion criteria required a histopathological report or follow-up of 2 years. The RTI-IB tool was used for quality analysis. Correlation between GBP size and malignant potential was analysed using Euclidean distance; a logistics mixed effects model was used for assessing independent risk factors for malignancy. Fifty-three articles were included in the review. Data from 21 studies were pooled for analysis. The optimum size cut-off for resection of GBPs was 10 mm. The probability of malignancy is approximately zero below 4.15 mm. Patient age >50 years, sessile morphology and solitary polyps were independent risk factors for malignancy. For polyps sized 4 mm-10 mm, a risk assessment model was formulated. This review and analysis has provided an evidence-based algorithm for the management of GBPs. Longitudinal studies are needed to better understand the behaviour of polyps <10 mm, which are not at a high risk of malignancy but may change over time. Copyright © 2016 Royal College of Surgeons of Edinburgh (Scottish charity number SC005317) and Royal College of Surgeons in Ireland. Published by Elsevier Ltd. All rights reserved.

  10. Factors associated with the likelihood of Giardia spp. and Cryptosporidium spp. in soil from dairy farms.

    PubMed

    Barwick, R S; Mohammed, H O; White, M E; Bryant, R B

    2003-03-01

    A study was conducted to identify factors associated with the likelihood of detecting Giardia spp. and Cryptosporidium spp. in the soil of dairy farms in a watershed area. A total of 37 farms were visited, and 782 soil samples were collected from targeted areas on these farms. The samples were analyzed for the presence of Cryptosporidium spp. oocysts, Giardia spp. cysts, percent moisture content, and pH. Logistic regression analysis was used to identify risk factors associated with the likelihood of the presence of these organisms. The use of the land at the sampling site was associated with the likelihood of environmental contamination with Cryptosporidium spp. Barn cleaner equipment areas and agricultural fields were associated with increased likelihood of environmental contamination with Cryptosporidium spp. The risk of environmental contamination decreased with the pH of the soil and with the score of the potential likelihood of Cryptosporidium spp. The size of the sampling site, as determined by the sampling design, in square feet, was associated nonlinearly with the risk of detecting Cryptosporidium spp. The likelihood of detecting Giardia cysts in the soil increased with the prevalence of Giardia spp. in animals (i.e., 18 to 39%). As the size of the farm increased, there was decreased risk of Giardia spp. in the soil, and sampling sites which were covered with brush or bare soil showed a decrease in likelihood of detecting Giardia spp. when compared to land which had managed grass. The number of cattle on the farm less than 6 mo of age was negatively associated with the risk of detecting Giardia spp. in the soil, and the percent moisture content was positively associated with the risk of detecting Giardia spp. Our study showed that these two protozoa occur in dairy farm soil at different rates, and that this risk could be modified by manipulating the pH of the soil.

  11. Mindfulness Meditation for Substance Use Disorders: A Systematic Review

    PubMed Central

    Zgierska, Aleksandra; Rabago, David; Chawla, Neharika; Kushner, Kenneth; Koehler, Robert; Marlatt, Allan

    2009-01-01

    Relapse is common in substance use disorders (SUDs), even among treated individuals. The goal of this article was to systematically review the existing evidence on mindfulness meditation-based interventions (MM) for SUDs. The comprehensive search for and review of literature found over 2,000 abstracts and resulted in 25 eligible manuscripts (22 published, 3 unpublished: 8 RCTs, 7 controlled non-randomized, 6 non-controlled prospective, 2 qualitative studies, 1 case report). When appropriate, methodological quality, absolute risk reduction, number needed to treat, and effect size (ES) were assessed. Overall, although preliminary evidence suggests MM efficacy and safety, conclusive data for MM as a treatment of SUDs are lacking. Significant methodological limitations exist in most studies. Further, it is unclear which persons with SUDs might benefit most from MM. Future trials must be of sufficient sample size to answer a specific clinical question and should target both assessment of effect size and mechanisms of action. PMID:19904664

  12. A survey sampling approach for pesticide monitoring of community water systems using groundwater as a drinking water source.

    PubMed

    Whitmore, Roy W; Chen, Wenlin

    2013-12-04

    The ability to infer human exposure to substances from drinking water using monitoring data helps determine and/or refine potential risks associated with drinking water consumption. We describe a survey sampling approach and its application to an atrazine groundwater monitoring study to adequately characterize upper exposure centiles and associated confidence intervals with predetermined precision. Study design and data analysis included sampling frame definition, sample stratification, sample size determination, allocation to strata, analysis weights, and weighted population estimates. The sampling frame encompassed 15,840 groundwater community water systems (CWS) in 21 states throughout the U.S. Median and 95th percentile atrazine concentrations were 0.0022 and 0.024 ppb, respectively, for all CWS. Statistical estimates agreed with historical monitoring results, suggesting that the study design was adequate and robust. This methodology makes no assumptions regarding the occurrence distribution (e.g., lognormality); thus analyses based on the design-induced distribution provide the most robust basis for making inferences from the sample to target population.
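Design-based estimation of this kind reduces, after weighting, to percentiles of the weighted empirical distribution, with no lognormality assumption. A minimal sketch with hypothetical concentrations and analysis weights (chosen here to sum to the 15,840 systems in the frame; the study's actual weights and data differ):

```python
def weighted_percentile(values, weights, q):
    """q-th percentile (0-100) of the weight-induced empirical distribution."""
    pairs = sorted(zip(values, weights))
    total = sum(weights)
    cum = 0.0
    for v, w in pairs:
        cum += w
        if cum / total >= q / 100.0:
            return v
    return pairs[-1][0]

conc = [0.001, 0.002, 0.0022, 0.004, 0.010, 0.024, 0.060]  # ppb, hypothetical
wts  = [4000, 3500, 3000, 2500, 1500, 800, 540]            # systems represented

print(weighted_percentile(conc, wts, 50))   # 0.0022 (design-weighted median)
print(weighted_percentile(conc, wts, 95))   # 0.024  (upper exposure centile)
```

Each weight is the number of population systems a sampled system represents (the inverse of its selection probability), so the percentiles refer to the whole frame rather than just the sample.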

  13. Impact of minimum catch size on the population viability of Strombus gigas (Mesogastropoda: Strombidae) in Quintana Roo, Mexico.

    PubMed

    Peel, Joanne R; Mandujano, María del Carmen

    2014-12-01

    The queen conch Strombus gigas represents one of the most important fishery resources of the Caribbean but heavy fishing pressure has led to the depletion of stocks throughout the region, causing the inclusion of this species into CITES Appendix II and IUCN's Red-List. In Mexico, the queen conch is managed through a minimum fishing size of 200 mm shell length and a fishing quota which usually represents 50% of the adult biomass. The objectives of this study were to determine the intrinsic population growth rate of the queen conch population of Xel-Ha, Quintana Roo, Mexico, and to assess the effects of a regulated fishing impact, simulating the extraction of 50% adult biomass on the population density. We used three different minimum size criteria to demonstrate the effects of minimum catch size on the population density and discuss biological implications. Demographic data was obtained through capture-mark-recapture sampling, collecting all animals encountered during three hours, by three divers, at four different sampling sites of the Xel-Ha inlet. The conch population was sampled each month between 2005 and 2006, and bimonthly between 2006 and 2011, tagging a total of 8,292 animals. Shell length and lip thickness were determined for each individual. The average shell length for conch with formed lip in Xel-Ha was 209.39 ± 14.18 mm and the median 210 mm. Half of the sampled conch with lip ranged between 200 mm and 219 mm shell length. Assuming that the presence of the lip is an indicator for sexual maturity, it can be concluded that many animals may form their lip at greater shell lengths than 200 mm and ought to be considered immature. Estimation of relative adult abundance and densities varied greatly depending on the criteria employed for adult classification. 
When using a minimum fishing size of 200 mm shell length, between 26.2% and 54.8% of the population qualified as adults, which represented a simulated fishing impact of almost one third of the population. When conch extraction was simulated using a classification criterion based on lip thickness, the impact on population density was much smaller. We concluded that the best management strategy for S. gigas is a minimum fishing size based on lip thickness, since it has a lower impact on population density and since selective fishing pressure based on size may lead to the appearance of small adult individuals with reduced fecundity. Furthermore, based on the reproductive biology and the results of the simulated fishing, we suggest a minimum lip thickness of ≥ 15 mm, which protects reproductive stages and reduces the risk of overfishing and of reducing density to non-viable levels.

  14. Heavy metal concentrations in particle size fractions from street dust of Murcia (Spain) as the basis for risk assessment.

    PubMed

    Acosta, Jose A; Faz, Ángel; Kalbitz, Karsten; Jansen, Boris; Martínez-Martínez, Silvia

    2011-11-01

    Street dust has been sampled from six different types of land use of the city of Murcia (Spain). The samples were fractionated into eleven particle size fractions (<2, 2-10, 10-20, 20-50, 50-75, 75-106, 106-150, 150-180, 180-425, 425-850 μm and 850-2000 μm) and analyzed for Pb, Cu, Zn and Cd. The concentrations of these four potentially toxic metals were assessed, as well as the effect of particle size on their distribution. A severe enrichment of all metals was observed for all land-uses (industrial, suburban, urban and highways), with the concentration of all metals affected by the type of land-use. Coarse and fine particles in all cases showed concentrations of metals higher than those found in undisturbed areas. However, the results indicated a preferential partitioning of metals in fine particle size fractions in all cases, following a logarithmic distribution. The accumulation in the fine fractions was higher when the metals had an anthropogenic origin. The strong overrepresentation of metals in particles <10 μm indicates that if the finest fractions are removed by a vacuum-assisted dry sweeper or a regenerative-air sweeper the risk of metal dispersion and its consequent risk for humans will be highly reduced. Therefore, we recommend that risk assessment programs include monitoring of metal concentrations in dust where each land-use is separately evaluated. The finest particle fractions should be examined explicitly in order to apply the most efficient measures for reducing the risk of inhalation and ingestion of dust for humans and risk for the environment.

  15. Sample size and power calculations for detecting changes in malaria transmission using antibody seroconversion rate.

    PubMed

    Sepúlveda, Nuno; Paulino, Carlos Daniel; Drakeley, Chris

    2015-12-30

    Several studies have highlighted the use of serological data in detecting a reduction in malaria transmission intensity. These studies have typically used serology as an adjunct measure and no formal examination of sample size calculations for this approach has been conducted. A sample size calculator is proposed for cross-sectional surveys using data simulation from a reverse catalytic model assuming a reduction in seroconversion rate (SCR) at a given change point before sampling. This calculator is based on logistic approximations for the underlying power curves to detect a reduction in SCR in relation to the hypothesis of a stable SCR for the same data. Sample sizes are illustrated for a hypothetical cross-sectional survey from an African population assuming a known or unknown change point. Overall, data simulation demonstrates that power is strongly affected by assuming a known or unknown change point. Small sample sizes are sufficient to detect strong reductions in SCR, but invariantly lead to poor precision of estimates for current SCR. In this situation, sample size is better determined by controlling the precision of SCR estimates. Conversely, larger sample sizes are required for detecting more subtle reductions in malaria transmission, but those invariantly increase precision whilst reducing putative estimation bias. The proposed sample size calculator, although based on data simulation, shows promise of being easily applicable to a range of populations and survey types. Since the change point is a major source of uncertainty, obtaining or assuming prior information about this parameter might reduce both the sample size and the chance of generating biased SCR estimates.
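A simplified version of a simulation-based power calculation in this spirit can be sketched as follows. It assumes a reverse-catalytic-style model without seroreversion and a known change point τ: seroprevalence at age a is 1 − exp(−λ_new·min(a, τ) − λ_old·max(a − τ, 0)). Power is the fraction of simulated surveys in which a likelihood-ratio test rejects a stable SCR; all parameter values and the grid-search fit are illustrative, not the paper's implementation.

```python
import math
import random

def prev(a, l_old, l_new, tau):
    """Seroprevalence at age a if the SCR dropped from l_old to l_new, tau years ago."""
    return 1.0 - math.exp(-l_new * min(a, tau) - l_old * max(a - tau, 0.0))

def loglik(data, p_fun):
    ll = 0.0
    for age, pos, n in data:                  # (age, seropositives, tested)
        p = min(max(p_fun(age), 1e-9), 1.0 - 1e-9)
        ll += pos * math.log(p) + (n - pos) * math.log(1.0 - p)
    return ll

GRID = [i / 100.0 for i in range(1, 31)]      # candidate SCRs: 0.01 .. 0.30

def power(n_per_age, l_old=0.10, l_new=0.02, tau=10,
          ages=(5, 10, 15, 20, 25, 30, 35, 40), sims=100):
    random.seed(1)
    rejections = 0
    for _ in range(sims):
        data = [(a, sum(random.random() < prev(a, l_old, l_new, tau)
                        for _ in range(n_per_age)), n_per_age) for a in ages]
        # H0: one stable SCR; H1: SCR changed to a new value at the change point
        ll0 = max(loglik(data, lambda a, l=l: prev(a, l, l, tau)) for l in GRID)
        ll1 = max(loglik(data, lambda a, lo=lo, ln=ln: prev(a, lo, ln, tau))
                  for lo in GRID for ln in GRID)
        if 2.0 * (ll1 - ll0) > 3.84:          # chi-square(1) cutoff, alpha = 0.05
            rejections += 1
    return rejections / sims

pw = power(25)
print("estimated power, n=25 per age group:", pw)
```

Repeating this over a range of total sample sizes traces out the power curve; the paper fits logistic approximations to such curves to produce a reusable calculator.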

  16. Small sample sizes in the study of ontogenetic allometry; implications for palaeobiology

    PubMed Central

    Vavrek, Matthew J.

    2015-01-01

    Quantitative morphometric analyses, particularly ontogenetic allometry, are common methods used in quantifying shape, and changes therein, in both extinct and extant organisms. Due to incompleteness and the potential for restricted sample sizes in the fossil record, palaeobiological analyses of allometry may encounter higher rates of error. Differences in sample size between fossil and extant studies and any resulting effects on allometric analyses have not been thoroughly investigated, and a logical lower threshold to sample size is not clear. Here we show that studies based on fossil datasets have smaller sample sizes than those based on extant taxa. A similar pattern between vertebrates and invertebrates indicates this is not a problem unique to either group, but common to both. We investigate the relationship between sample size, ontogenetic allometric relationship and statistical power using an empirical dataset of skull measurements of modern Alligator mississippiensis. Across a variety of subsampling techniques, used to simulate different taphonomic and/or sampling effects, smaller sample sizes gave less reliable and more variable results, often with the result that allometric relationships will go undetected due to Type II error (failure to reject the null hypothesis). This may result in a false impression of fewer instances of positive/negative allometric growth in fossils compared to living organisms. These limitations are not restricted to fossil data and are equally applicable to allometric analyses of rare extant taxa. No mathematically derived minimum sample size for ontogenetic allometric studies is found; rather, results of isometry (but not necessarily allometry) should not be viewed with confidence at small sample sizes. PMID:25780770
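The subsampling logic is easy to reproduce: draw subsamples of decreasing size from a synthetic population with a known allometric slope and count how often a log-log regression fails to exclude isometry (slope = 1), i.e. commits a Type II error. The values below are illustrative, not the Alligator measurements.

```python
import math
import random

random.seed(7)
TRUE_SLOPE = 1.15                     # positive allometry in the "population"

def simulate(n):
    xs = [random.uniform(1.0, 3.0) for _ in range(n)]            # log trait 1
    ys = [TRUE_SLOPE * x + random.gauss(0.0, 0.15) for x in xs]  # log trait 2
    return xs, ys

def slope_ci(xs, ys):
    """OLS slope of y on x and an approximate 95% CI half-width."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    resid = [y - my - b * (x - mx) for x, y in zip(xs, ys)]
    se = math.sqrt(sum(r * r for r in resid) / (n - 2) / sxx)
    return b, 1.96 * se

def detects_allometry(n):
    b, hw = slope_ci(*simulate(n))
    return abs(b - 1.0) > hw          # CI excludes slope 1 (isometry)

rates = {}
for n in (5, 10, 20, 50):
    rates[n] = sum(detects_allometry(n) for _ in range(500)) / 500
    print(f"n={n:2d}  detection rate = {rates[n]:.2f}")
```

The detection rate (statistical power) falls sharply at small n, so a real allometric signal is routinely missed in fossil-sized samples, which is the Type II error pattern the authors report.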

  17. Genome-wide association study meta-analysis of European and Asian-ancestry samples identifies three novel loci associated with bipolar disorder.

    PubMed

    Chen, D T; Jiang, X; Akula, N; Shugart, Y Y; Wendland, J R; Steele, C J M; Kassem, L; Park, J-H; Chatterjee, N; Jamain, S; Cheng, A; Leboyer, M; Muglia, P; Schulze, T G; Cichon, S; Nöthen, M M; Rietschel, M; McMahon, F J; Farmer, A; McGuffin, P; Craig, I; Lewis, C; Hosang, G; Cohen-Woods, S; Vincent, J B; Kennedy, J L; Strauss, J

    2013-02-01

    Meta-analyses of bipolar disorder (BD) genome-wide association studies (GWAS) have identified several genome-wide significant signals in European-ancestry samples, but so far account for little of the inherited risk. We performed a meta-analysis of ∼750,000 high-quality genetic markers on a combined sample of ∼14,000 subjects of European and Asian-ancestry (phase I). The most significant findings were further tested in an extended sample of ∼17,700 cases and controls (phase II). The results suggest novel association findings near the genes TRANK1 (LBA1), LMAN2L and PTGFR. In phase I, the most significant single nucleotide polymorphism (SNP), rs9834970 near TRANK1, was significant at the P = 2.4 × 10⁻¹¹ level, with no heterogeneity. Supportive evidence for prior association findings near ANK3 and a locus on chromosome 3p21.1 was also observed. The phase II results were similar, although the heterogeneity test became significant for several SNPs. On the basis of these results and other established risk loci, we used the method developed by Park et al. to estimate the number, and the effect size distribution, of BD risk loci that could still be found by GWAS methods. We estimate that >63,000 case-control samples would be needed to identify the ∼105 BD risk loci discoverable by GWAS, and that these will together explain <6% of the inherited risk. These results support previous GWAS findings and identify three new candidate genes for BD. Further studies are needed to replicate these findings and may potentially lead to identification of functional variants. Sample size will remain a limiting factor in the discovery of common alleles associated with BD.

  18. Can Genetic Analysis of Putative Blood Alzheimer’s Disease Biomarkers Lead to Identification of Susceptibility Loci?

    PubMed Central

    Huebinger, Ryan M.; Shewale, Shantanu J.; Koenig, Jessica L.; Mitchel, Jeffrey S.; O’Bryant, Sid E.; Waring, Stephen C.; Diaz-Arrastia, Ramon; Chasse, Scott

    2015-01-01

    Although 24 Alzheimer’s disease (AD) risk loci have been reliably identified, a large portion of the predicted heritability for AD remains unexplained. It is expected that additional loci of small effect will be identified with an increased sample size. However, the cost of a significant increase in Case-Control sample size is prohibitive. The current study tests whether exploring the genetic basis of endophenotypes, in this case based on putative blood biomarkers for AD, can accelerate the identification of susceptibility loci using modest sample sizes. Each endophenotype was used as the outcome variable in an independent GWAS. Endophenotypes were based on circulating concentrations of proteins that contributed significantly to a published blood-based predictive algorithm for AD. Endophenotypes included Monocyte Chemoattractant Protein 1 (MCP1), Vascular Cell Adhesion Molecule 1 (VCAM1), Pancreatic Polypeptide (PP), Beta2 Microglobulin (B2M), Factor VII (F7), Adiponectin (ADN) and Tenascin C (TN-C). Across the seven endophenotypes, 47 SNPs were associated with outcome with a p-value ≤ 1 × 10⁻⁷. Each signal was further characterized with respect to known genetic loci associated with AD. Signals for several endophenotypes were observed in the vicinity of CR1, MS4A6A/MS4A4E, PICALM, CLU, and PTK2B. The strongest signal was observed in association with Factor VII levels and was located within the F7 gene. Additional signals were observed in MAP3K13, ZNF320, ATP9B and TREM1. Conditional regression analyses suggested that the SNPs contributed to variation in protein concentration independent of AD status. The identification of two putatively novel AD loci (in the Factor VII and ATP9B genes), which have not been located in previous studies despite massive sample sizes, highlights the benefits of an endophenotypic approach for resolving the genetic basis for complex diseases.
The coincidence of several of the endophenotypic signals with known AD loci may point to novel genetic interactions and should be further investigated. PMID:26625115

  19. Can Genetic Analysis of Putative Blood Alzheimer's Disease Biomarkers Lead to Identification of Susceptibility Loci?

    PubMed

    Barber, Robert C; Phillips, Nicole R; Tilson, Jeffrey L; Huebinger, Ryan M; Shewale, Shantanu J; Koenig, Jessica L; Mitchel, Jeffrey S; O'Bryant, Sid E; Waring, Stephen C; Diaz-Arrastia, Ramon; Chasse, Scott; Wilhelmsen, Kirk C

    2015-01-01

    Although 24 Alzheimer's disease (AD) risk loci have been reliably identified, a large portion of the predicted heritability for AD remains unexplained. It is expected that additional loci of small effect will be identified with an increased sample size. However, the cost of a significant increase in Case-Control sample size is prohibitive. The current study tests whether exploring the genetic basis of endophenotypes, in this case based on putative blood biomarkers for AD, can accelerate the identification of susceptibility loci using modest sample sizes. Each endophenotype was used as the outcome variable in an independent GWAS. Endophenotypes were based on circulating concentrations of proteins that contributed significantly to a published blood-based predictive algorithm for AD. Endophenotypes included Monocyte Chemoattractant Protein 1 (MCP1), Vascular Cell Adhesion Molecule 1 (VCAM1), Pancreatic Polypeptide (PP), Beta2 Microglobulin (B2M), Factor VII (F7), Adiponectin (ADN) and Tenascin C (TN-C). Across the seven endophenotypes, 47 SNPs were associated with outcome with a p-value ≤ 1 × 10⁻⁷. Each signal was further characterized with respect to known genetic loci associated with AD. Signals for several endophenotypes were observed in the vicinity of CR1, MS4A6A/MS4A4E, PICALM, CLU, and PTK2B. The strongest signal was observed in association with Factor VII levels and was located within the F7 gene. Additional signals were observed in MAP3K13, ZNF320, ATP9B and TREM1. Conditional regression analyses suggested that the SNPs contributed to variation in protein concentration independent of AD status. The identification of two putatively novel AD loci (in the Factor VII and ATP9B genes), which have not been located in previous studies despite massive sample sizes, highlights the benefits of an endophenotypic approach for resolving the genetic basis for complex diseases.
The coincidence of several of the endophenotypic signals with known AD loci may point to novel genetic interactions and should be further investigated.

  20. Testing a level of response to alcohol-based model of heavy drinking and alcohol problems in 1,905 17-year-olds.

    PubMed

    Schuckit, Marc A; Smith, Tom L; Heron, Jon; Hickman, Matthew; Macleod, John; Lewis, Glyn; Davis, John M; Hibbeln, Joseph R; Brown, Sandra; Zuccolo, Luisa; Miller, Laura L; Davey-Smith, George

    2011-10-01

    The low level of response (LR) to alcohol is one of several genetically influenced characteristics that increase the risk for heavy drinking and alcohol problems. Efforts to understand how LR operates through additional life influences have been carried out primarily in modest-sized U.S.-based samples with limited statistical power, raising questions about generalizability and about the importance of components with smaller effects. This study evaluates a full LR-based model of risk in a large sample of adolescents from the United Kingdom. Cross-sectional structural equation models were used for approximately the first half of the age-17 subjects assessed by the Avon Longitudinal Study of Parents and Children, generating data on 1,905 adolescents (mean age 17.8 years, 44.2% boys). LR was measured with the Self-Rating of the Effects of Alcohol Questionnaire, outcomes were based on drinking quantities and problems, and standardized questionnaires were used to evaluate peer substance use, alcohol expectancies, and using alcohol to cope with stress. In this young and large U.K. sample, a low LR related to more adverse alcohol outcomes both directly and through partial mediation by all 3 additional key variables (peer substance use, expectancies, and coping). The models were similar in boys and girls. These results confirm key elements of the hypothesized LR-based model in a large U.K. sample, supporting some generalizability beyond U.S. groups. They also indicate that with enough statistical power, multiple elements contribute to how LR relates to alcohol outcomes and reinforce the applicability of the model to both genders. Copyright © 2011 by the Research Society on Alcoholism.

  1. The Consideration of Future Consequences and Health Behaviour: A Meta-Analysis.

    PubMed

    Murphy, Lisa; Dockray, Samantha

    2018-06-14

    The aim of this meta-analysis was to quantify the direction and strength of associations between the Consideration of Future Consequences (CFC) scale and intended and actual engagement in three categories of health-related behaviour: health risk, health promotive, and illness preventative/detective behaviour. A systematic literature search was conducted to identify studies that measured CFC and health behaviour. In total, sixty-four effect sizes were extracted from 53 independent samples. Effect sizes were synthesised using a random-effects model. Aggregate effect sizes for all behaviour categories were significant, albeit small in magnitude. There were no significant moderating effects of the length of CFC scale (long vs. short), population type (college students vs. non-college students), mean age, or sex proportion of study samples. CFC reliability and study quality score significantly moderated the overall association between CFC and health risk behaviour only. The magnitude of effect sizes is comparable to associations between health behaviour and other individual difference variables, such as the Big Five personality traits. The findings indicate that CFC is an important construct to consider in research on engagement in health risk behaviour in particular. Future research is needed to examine the optimal approach by which to apply the findings to behavioural interventions.

  2. Predictors of treatment dropout in self-guided web-based interventions for depression: an 'individual patient data' meta-analysis.

    PubMed

    Karyotaki, E; Kleiboer, A; Smit, F; Turner, D T; Pastor, A M; Andersson, G; Berger, T; Botella, C; Breton, J M; Carlbring, P; Christensen, H; de Graaf, E; Griffiths, K; Donker, T; Farrer, L; Huibers, M J H; Lenndin, J; Mackinnon, A; Meyer, B; Moritz, S; Riper, H; Spek, V; Vernmark, K; Cuijpers, P

    2015-10-01

    It is well known that web-based interventions can be effective treatments for depression. However, dropout rates in web-based interventions are typically high, especially in self-guided web-based interventions. Rigorous empirical evidence regarding factors influencing dropout in self-guided web-based interventions is lacking due to small study sample sizes. In this paper we examined predictors of dropout in an individual patient data meta-analysis to gain a better understanding of who may benefit from these interventions. A comprehensive literature search for all randomized controlled trials (RCTs) of psychotherapy for adults with depression from 2006 to January 2013 was conducted. Next, we approached authors to collect the primary data of the selected studies. Predictors of dropout, such as socio-demographic, clinical, and intervention characteristics were examined. Data from 2705 participants across ten RCTs of self-guided web-based interventions for depression were analysed. The multivariate analysis indicated that male gender [relative risk (RR) 1.08], lower educational level (primary education, RR 1.26) and co-morbid anxiety symptoms (RR 1.18) significantly increased the risk of dropping out, while for every additional 4 years of age, the risk of dropping out significantly decreased (RR 0.94). Dropout can be predicted by several variables and is not randomly distributed. This knowledge may inform tailoring of online self-help interventions to prevent dropout in identified groups at risk.

  3. Abdominal Obesity and Risk of Hip Fracture: A Systematic Review and Meta-Analysis of Prospective Studies.

    PubMed

    Sadeghi, Omid; Saneei, Parvaneh; Nasiri, Morteza; Larijani, Bagher; Esmaillzadeh, Ahmad

    2017-09-01

Data on the association between general obesity and hip fracture were summarized in a 2013 meta-analysis; however, to our knowledge, no study has examined the association between abdominal obesity and the risk of hip fracture. The present systematic review and meta-analysis of prospective studies was undertaken to summarize the association between abdominal obesity and the risk of hip fracture. We searched online databases for relevant publications up to February 2017, using relevant keywords. In total, 14 studies were included in the systematic review and 9 studies, with a total sample size of 295,674 individuals (129,964 men and 165,703 women), were included in the meta-analysis. Participants were apparently healthy and aged ≥40 y. We found that abdominal obesity (defined by various waist-hip ratios) was positively associated with the risk of hip fracture (combined RR: 1.24, 95% CI: 1.05, 1.46, P = 0.01). Combining 8 effect sizes from 6 studies, we noted a marginally significant positive association between abdominal obesity (defined by various waist circumferences) and the risk of hip fracture (combined RR: 1.36, 95% CI: 0.97, 1.89, P = 0.07). This association became significant in a fixed-effects model (combined effect size: 1.40, 95% CI: 1.25, 1.58, P < 0.001). Based on 5 effect sizes, we found that a 0.1-unit increase in the waist-hip ratio was associated with a 16% increase in the risk of hip fracture (combined RR: 1.16, 95% CI: 1.04, 1.29, P = 0.007), whereas a 10-cm increase in waist circumference was not significantly associated with a higher risk of hip fracture (combined RR: 1.13, 95% CI: 0.94, 1.36, P = 0.19). This association became significant, however, when we applied a fixed-effects model (combined effect size: 1.21, 95% CI: 1.15, 1.27, P < 0.001). We found that abdominal obesity was associated with a higher risk of hip fracture in 295,674 individuals. Further studies are needed to test whether there are associations between abdominal obesity and fractures at other bone sites. © 2017 American Society for Nutrition.
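The fixed-effects pooling used repeatedly in this abstract is inverse-variance weighting of log relative risks, with each study's standard error recovered from its confidence interval. A minimal sketch, using hypothetical study results rather than the meta-analysis's actual data:

```python
import math

def fixed_effects_pool(rrs, cis):
    """Inverse-variance fixed-effects pooling of relative risks.

    rrs: per-study relative risks
    cis: per-study (lower, upper) 95% CIs, used to back out standard errors
    Returns the pooled RR with its 95% CI.
    """
    z = 1.96  # normal quantile for a 95% CI
    log_rrs = [math.log(rr) for rr in rrs]
    # SE of log RR recovered from the CI width on the log scale
    ses = [(math.log(hi) - math.log(lo)) / (2 * z) for lo, hi in cis]
    weights = [1 / se**2 for se in ses]
    pooled = sum(w * lr for w, lr in zip(weights, log_rrs)) / sum(weights)
    se_pooled = math.sqrt(1 / sum(weights))
    return (math.exp(pooled),
            math.exp(pooled - z * se_pooled),
            math.exp(pooled + z * se_pooled))

# Hypothetical study results (RR and 95% CI), not the paper's data
rr, lo, hi = fixed_effects_pool(
    [1.10, 1.35, 1.22],
    [(0.90, 1.34), (1.05, 1.74), (0.98, 1.52)],
)
```

Because fixed-effects weights ignore between-study heterogeneity, the pooled CI is narrower than a random-effects CI, which is why the abstract's marginal random-effects associations become significant under the fixed-effects model.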

  4. Fall Risk Assessment Through Automatic Combination of Clinical Fall Risk Factors and Body-Worn Sensor Data.

    PubMed

    Greene, Barry R; Redmond, Stephen J; Caulfield, Brian

    2017-05-01

Falls are the leading global cause of accidental death and disability in older adults and are the most common cause of injury and hospitalization. Accurate, early identification of patients at risk of falling could lead to timely intervention and a reduction in the incidence of fall-related injury and associated costs. We report a statistical method for fall risk assessment using standard clinical fall risk factors (N = 748). We also report a means of improving this method by automatically combining it with a fall risk assessment algorithm based on inertial sensor data and the timed-up-and-go test. Furthermore, we provide validation data on the sensor-based fall risk assessment method using a statistically independent dataset. Results obtained using cross-validation on a sample of 292 community-dwelling older adults suggest that a combined clinical and sensor-based approach yields a classification accuracy of 76.0%, compared to 73.6% for sensor-based assessment alone and 68.8% for clinical risk factors alone. Increasing the cohort size by adding 130 subjects from a separate recruitment wave (N = 422), and applying the same model building and validation method, resulted in a decrease in classification performance (68.5% for the combined classifier, 66.8% for sensor data alone, and 58.5% for clinical data alone). This suggests that heterogeneity between cohorts may be a major challenge when attempting to develop fall risk assessment algorithms that generalize well. Validation of the sensor-based fall risk assessment algorithm on an independent cohort of 22 community-dwelling older adults yielded a classification accuracy of 72.7%. Results suggest that the present method compares well with previously reported sensor-based fall risk assessment methods. Implementation of objective fall risk assessment methods on a large scale has the potential to improve quality of care and reduce associated hospital costs through fewer admissions and fewer fall-related injuries.

  5. Quantification of short and long asbestos fibers to assess asbestos exposure: a review of fiber size toxicity

    PubMed Central

    2014-01-01

The fibrogenicity and carcinogenicity of asbestos fibers depend on several fiber parameters, including fiber dimensions. Based on the WHO (World Health Organization) definition, the current regulations focus on long asbestos fibers (LAF) (Length: L ≥ 5 μm, Diameter: D < 3 μm and L/D ratio > 3). However, air samples also contain short asbestos fibers (SAF) (L < 5 μm). In a recent study we found that several air samples collected in buildings with asbestos-containing materials (ACM) were composed only of SAF, sometimes in a concentration of ≥10 fibers·L⁻¹. This exhaustive review focuses on available information from peer-reviewed publications on the size-dependent pathogenic effects of asbestos fibers reported in experimental in vivo and in vitro studies. In the literature, the findings that SAF are less pathogenic than LAF are based on experiments where a cut-off of 5 μm was generally made to differentiate short from long asbestos fibers. Nevertheless, the value of 5 μm as the limit for length is not based on scientific evidence, but is a convention adopted for comparative analyses. From this review, it is clear that the pathogenicity of SAF cannot be completely ruled out, especially in high-exposure situations. Therefore, the presence of SAF in air samples appears to be an indicator of the degradation of ACM, and their systematic measurement should be considered for inclusion in regulations. Measuring these fibers in air samples would then make it possible to identify pollution and anticipate health risks. PMID:25043725

  6. Lack of Association Between Maternal or Neonatal Vitamin D Status and Risk of Childhood Type 1 Diabetes: A Scandinavian Case-Cohort Study.

    PubMed

    Thorsen, Steffen U; Mårild, Karl; Olsen, Sjurdur F; Holst, Klaus K; Tapia, German; Granström, Charlotta; Halldorsson, Thorhallur I; Cohen, Arieh S; Haugen, Margaretha; Lundqvist, Marika; Skrivarhaug, Torild; Njølstad, Pål R; Joner, Geir; Magnus, Per; Størdal, Ketil; Svensson, Jannet; Stene, Lars C

    2018-06-01

    Studies on vitamin D status during pregnancy and risk of type 1 diabetes mellitus (T1D) lack consistency and are limited by small sample sizes or single measures of 25-hydroxyvitamin D (25(OH)D). We investigated whether average maternal 25(OH)D plasma concentrations during pregnancy are associated with risk of childhood T1D. In a case-cohort design, we identified 459 children with T1D and a random sample (n = 1,561) from the Danish National Birth Cohort (n = 97,127) and Norwegian Mother and Child Cohort Study (n = 113,053). Participants were born between 1996 and 2009. The primary exposure was the estimated average 25(OH)D concentration, based on serial samples from the first trimester until delivery and on umbilical cord plasma. We estimated hazard ratios using weighted Cox regression adjusting for multiple confounders. The adjusted hazard ratio for T1D per 10-nmol/L increase in the estimated average 25(OH)D concentration was 1.00 (95% confidence interval: 0.90, 1.10). Results were consistent in both cohorts, in multiple sensitivity analyses, and when we analyzed mid-pregnancy or cord blood separately. In conclusion, our large study demonstrated that normal variation in maternal or neonatal 25(OH)D is unlikely to have a clinically important effect on risk of childhood T1D.

  7. The Risk of Bias in Randomized Trials in General Dentistry Journals.

    PubMed

    Hinton, Stephanie; Beyari, Mohammed M; Madden, Kim; Lamfon, Hanadi A

    2015-01-01

The use of a randomized controlled trial (RCT) research design is considered the gold standard for conducting evidence-based clinical research. In the present study, we aimed to assess the quality of RCTs in dentistry and create a general foundation for evidence-based dentistry on which to perform subsequent RCTs. We conducted a systematic assessment of bias of RCTs in seven general dentistry journals published between January 2011 and March 2012. We extracted study characteristics in duplicate and assessed each trial's quality using the Cochrane Risk of Bias tool. We compared risk of bias across studies graphically. Among 1,755 studies across seven journals, we identified 67 RCTs. Many of the included studies were conducted in Europe (39%), and the average sample size was 358 participants. On average, the studies included 52% female participants, and the maximum follow-up period was 13 years. Overall, we found a high percentage of unclear risk of bias among the included RCTs, indicating poor quality of reporting within the included studies. This overall high proportion of trials with an "unclear risk of bias" suggests the need for better quality of reporting in dentistry. As such, future trials should focus on high-quality reporting of key concepts in dental research.

  8. Do icon arrays help reduce denominator neglect?

    PubMed

    Garcia-Retamero, Rocio; Galesic, Mirta; Gigerenzer, Gerd

    2010-01-01

Denominator neglect is the focus on the number of times a target event has happened (e.g., the number of treated and nontreated patients who die) without considering the overall number of opportunities for it to happen (e.g., the overall number of treated and nontreated patients). In 2 studies, we addressed the effect of denominator neglect in problems involving treatment risk reduction where samples of treated and nontreated patients and the relative risk reduction were of different sizes. We also tested whether using icon arrays helps people take these different sample sizes into account. We especially focused on older adults, who are often more disadvantaged when making decisions about their health. Study 1 was conducted on a laboratory sample using a within-subjects design; study 2 was conducted on a nonstudent sample interviewed through the Web using a between-subjects design. The outcome measure was accuracy of understanding of risk reduction. Participants often paid too much attention to numerators and insufficient attention to denominators when numerical information about treatment risk reduction was provided. Adding icon arrays to the numerical information, however, drew participants' attention to the denominators and helped them make more accurate assessments of treatment risk reduction. Icon arrays were equally helpful to younger and older adults. Building on previous research showing that problems with understanding numerical information often do not reside in the mind but in the representation of the problem, the results show that icon arrays are an effective method of eliminating denominator neglect.
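The arithmetic behind denominator neglect can be made concrete. In this hypothetical example (the numbers are illustrative, not taken from the studies), the treated group has more deaths in absolute terms, yet dividing by the denominators shows that treatment halves the death rate:

```python
# Hypothetical counts, chosen for illustration only.
treated_deaths, treated_n = 40, 1000   # 40 of 1000 treated patients die
control_deaths, control_n = 20, 250    # 20 of 250 nontreated patients die

# Attending only to the numerators (40 vs. 20) suggests treatment is worse.
risk_treated = treated_deaths / treated_n   # death rate among treated
risk_control = control_deaths / control_n   # death rate among nontreated
relative_risk_reduction = 1 - risk_treated / risk_control
```

An icon array would display all 1,000 treated and all 250 nontreated patients, making the unequal denominators visually unavoidable.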

  9. Age-Specific Injury Risk Curves for Distributed, Anterior Thoracic Loading of Various Sizes of Adults Based on Sternal Deflections.

    PubMed

    Mertz, Harold J; Prasad, Priya; Dalmotas, Dainius J; Irwin, Annette L

    2016-11-01

Injury risk curves are developed from cadaver data for sternal deflections produced by anterior, distributed chest loads for 25-, 45-, 55-, 65-, and 75-year-old Small Female, Mid-Size Male, and Large Male occupants, based on the variation of bone strength with age. These curves show that the risk of AIS ≥ 3 thoracic injury increases with age. This observation is consistent with NASS data on frontal accidents, which show that older unbelted drivers have a higher risk of AIS ≥ 3 chest injury than younger drivers.

  10. Designing a two-rank acceptance sampling plan for quality inspection of geospatial data products

    NASA Astrophysics Data System (ADS)

    Tong, Xiaohua; Wang, Zhenhua; Xie, Huan; Liang, Dan; Jiang, Zuoqin; Li, Jinchao; Li, Jun

    2011-10-01

To address the disadvantages of classical sampling plans designed for traditional industrial products, we propose a novel two-rank acceptance sampling plan (TRASP) for the inspection of geospatial data outputs based on the acceptance quality level (AQL). The first-rank sampling plan inspects the lot consisting of map sheets, and the second inspects the lot consisting of features in an individual map sheet. The TRASP design is formulated as an optimization problem with respect to sample size and acceptance number, which covers two lot size cases. The first case is for a small lot size with nonconformities being modeled by a hypergeometric distribution function, and the second is for a larger lot size with nonconformities being modeled by a Poisson distribution function. The proposed TRASP is illustrated through two empirical case studies. Our analysis demonstrates that: (1) the proposed TRASP provides a general approach for quality inspection of geospatial data outputs consisting of non-uniform items and (2) the proposed acceptance sampling plan based on TRASP performs better than other classical sampling plans. It overcomes the drawbacks of percent sampling, i.e., "strictness for large lot size, toleration for small lot size," and those of a national standard used specifically for industrial outputs, i.e., "lots with different sizes corresponding to the same sampling plan."
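The two lot-size cases map onto standard acceptance-sampling calculations: the probability of accepting a lot is the probability of observing at most c nonconforming items in a sample of n, computed with the hypergeometric distribution for small lots and the Poisson approximation for large ones. A sketch with hypothetical plan parameters (not the paper's optimized values):

```python
import math

def accept_prob_hypergeom(N, D, n, c):
    """P(accept): at most c nonconforming items in a sample of n drawn
    without replacement from a lot of N containing D nonconforming items."""
    return sum(math.comb(D, k) * math.comb(N - D, n - k) / math.comb(N, n)
               for k in range(0, min(c, D, n) + 1))

def accept_prob_poisson(n, p, c):
    """Poisson approximation for large lots: expected nonconformities n*p."""
    lam = n * p
    return sum(math.exp(-lam) * lam**k / math.factorial(k)
               for k in range(0, c + 1))

# Illustrative plan (hypothetical numbers): sample n = 20, acceptance number c = 1
small_lot = accept_prob_hypergeom(N=100, D=5, n=20, c=1)
large_lot = accept_prob_poisson(n=20, p=0.05, c=1)
```

For the same nonconformity fraction, the two models give similar acceptance probabilities here; the hypergeometric form matters most when the sample is a large fraction of the lot.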

  11. Critical appraisal of arguments for the delayed-start design proposed as alternative to the parallel-group randomized clinical trial design in the field of rare disease.

    PubMed

    Spineli, Loukia M; Jenz, Eva; Großhennig, Anika; Koch, Armin

    2017-08-17

A number of papers have proposed or evaluated the delayed-start design as an alternative to the standard two-arm parallel-group randomized clinical trial (RCT) design in the field of rare disease. However, the discussion has devoted insufficient consideration to the true virtues of the delayed-start design and to its implications for required sample size, overall information, and interpretation of the estimate in the context of small populations. Our aim was to evaluate whether the delayed-start design offers real advantages, particularly in terms of overall efficacy and sample size requirements, as a proposed alternative to the standard parallel-group RCT in the field of rare disease. We used a real-life example to compare the delayed-start design with the standard RCT in terms of sample size requirements. Then, based on three scenarios regarding the development of the treatment effect over time, the advantages, limitations, and potential costs of the delayed-start design are discussed. We clarify that the delayed-start design is not suitable for drugs that establish an immediate treatment effect, but rather for drugs whose effects develop over time. In addition, reduced time on placebo diminishes the observed treatment effect, which in turn always increases the required sample size. A number of papers have repeated well-known arguments to justify the delayed-start design as an appropriate alternative to the standard parallel-group RCT in the field of rare disease and do not discuss the specific needs of research methodology in this field. The main point is that a limited time on placebo will result in an underestimated treatment effect and, in consequence, in larger sample size requirements compared with those expected under a standard parallel-group design. This also affects benefit-risk assessment.

  12. Negligible impact of rare autoimmune-locus coding-region variants on missing heritability.

    PubMed

    Hunt, Karen A; Mistry, Vanisha; Bockett, Nicholas A; Ahmad, Tariq; Ban, Maria; Barker, Jonathan N; Barrett, Jeffrey C; Blackburn, Hannah; Brand, Oliver; Burren, Oliver; Capon, Francesca; Compston, Alastair; Gough, Stephen C L; Jostins, Luke; Kong, Yong; Lee, James C; Lek, Monkol; MacArthur, Daniel G; Mansfield, John C; Mathew, Christopher G; Mein, Charles A; Mirza, Muddassar; Nutland, Sarah; Onengut-Gumuscu, Suna; Papouli, Efterpi; Parkes, Miles; Rich, Stephen S; Sawcer, Steven; Satsangi, Jack; Simmonds, Matthew J; Trembath, Richard C; Walker, Neil M; Wozniak, Eva; Todd, John A; Simpson, Michael A; Plagnol, Vincent; van Heel, David A

    2013-06-13

    Genome-wide association studies (GWAS) have identified common variants of modest-effect size at hundreds of loci for common autoimmune diseases; however, a substantial fraction of heritability remains unexplained, to which rare variants may contribute. To discover rare variants and test them for association with a phenotype, most studies re-sequence a small initial sample size and then genotype the discovered variants in a larger sample set. This approach fails to analyse a large fraction of the rare variants present in the entire sample set. Here we perform simultaneous amplicon-sequencing-based variant discovery and genotyping for coding exons of 25 GWAS risk genes in 41,911 UK residents of white European origin, comprising 24,892 subjects with six autoimmune disease phenotypes and 17,019 controls, and show that rare coding-region variants at known loci have a negligible role in common autoimmune disease susceptibility. These results do not support the rare-variant synthetic genome-wide-association hypothesis (in which unobserved rare causal variants lead to association detected at common tag variants). Many known autoimmune disease risk loci contain multiple, independently associated, common and low-frequency variants, and so genes at these loci are a priori stronger candidates for harbouring rare coding-region variants than other genes. Our data indicate that the missing heritability for common autoimmune diseases may not be attributable to the rare coding-region variant portion of the allelic spectrum, but perhaps, as others have proposed, may be a result of many common-variant loci of weak effect.

  13. Regulatory requirements for clinical trial and marketing authorisation application for cell-based medicinal products.

    PubMed

    Salmikangas, P; Flory, E; Reinhardt, J; Hinz, T; Maciulaitis, R

    2010-01-01

The new era of regenerative medicine has led to rapid development of innovative therapies, especially for diseases and tissue/organ defects for which traditional therapies and medicinal products have not provided satisfactory outcomes. Although cell-based medicinal products (CBMPs) have been in clinical use and development for a decade, robust scientific and regulatory provisions for these products have only recently been enacted. The new Regulation on Advanced Therapies (EC) 1394/2007, together with the revised Annex I, Part IV of Directive 2001/83/EC, provides the new legal framework for CBMPs. The wide variety of cell-based products and their foreseen limitations (small sample sizes, short shelf life) versus the particular risks (microbiological purity, variability, immunogenicity, tumourigenicity) associated with CBMPs have called for a flexible, case-by-case regulatory approach for these products. Consequently, a risk-based approach has been developed to allow definition of the amount of scientific data needed for a Marketing Authorisation Application (MAA) of each CBMP. The article provides further insight into the initial risk evaluation, as well as into the quality, non-clinical, and clinical requirements of CBMPs. Special somatic cell therapies designed for active immunotherapy are also addressed.

  14. Decisions from Experience: Why Small Samples?

    ERIC Educational Resources Information Center

    Hertwig, Ralph; Pleskac, Timothy J.

    2010-01-01

    In many decisions we cannot consult explicit statistics telling us about the risks involved in our actions. In lieu of such data, we can arrive at an understanding of our dicey options by sampling from them. The size of the samples that we take determines, ceteris paribus, how good our choices will be. Studies of decisions from experience have…

  15. Assessing Disfluencies in School-Age Children Who Stutter: How Much Speech Is Enough?

    ERIC Educational Resources Information Center

    Gregg, Brent A.; Sawyer, Jean

    2015-01-01

    The question of what size speech sample is sufficient to accurately identify stuttering and its myriad characteristics is a valid one. Short samples have a risk of over- or underrepresenting disfluency types or characteristics. In recent years, there has been a trend toward using shorter samples because they are less time-consuming for…

  16. Canine fecal contamination in a metropolitan area (Milan, north-western Italy): prevalence of intestinal parasites and evaluation of health risks.

    PubMed

    Zanzani, Sergio Aurelio; Di Cerbo, Anna Rita; Gazzonis, Alessia Libera; Genchi, Marco; Rinaldi, Laura; Musella, Vincenzo; Cringoli, Giuseppe; Manfredi, Maria Teresa

    2014-01-01

Intestinal parasites of dogs represent a serious threat to human health due to their zoonotic potential. Thus, metropolitan areas with high concentrations of pets and urban fecal contamination of public areas are at sanitary risk. The major aim of this survey was to determine the prevalence of zoonotic parasites in dog fecal samples collected from public soil in Milan (north-western Italy). Differences in the distribution of parasite prevalence were explored with a geographical information system- (GIS-) based approach, and risk factors (human density, size of green parks, and dog areas) were considered. The metropolitan area was divided into 157 rectangular subareas, and sampling was performed following a 1-kilometer straight transect. A total of 463 fecal samples were analyzed using the centrifugation-flotation technique and ELISA to detect Giardia and Cryptosporidium coproantigens. A widespread fecal contamination of soil was highlighted, with fecal samples found in 86.8% of the subareas considered. The overall prevalence of intestinal parasites was 16.63%. Zoonotic parasites were found, such as Trichuris vulpis (3.67%), Toxocara canis (1.72%), Strongyloides stercoralis (0.86%), Ancylostomatidae (0.43%), and Dipylidium caninum (0.43%). Giardia duodenalis was the most prevalent zoonotic protozoan (11.06%), followed by Cryptosporidium (1.10%). Fecal samples from subareas characterized by broad green areas were particularly likely to be infected.

  18. [Ecological Correlates of Cardiovascular Disease Risk in Korean Blue-collar Workers: A Multi-level Study].

    PubMed

    Hwang, Won Ju; Park, Yunhee

    2015-12-01

The purpose of this study was to investigate individual- and organizational-level cardiovascular disease (CVD) risk factors in Korean blue-collar workers employed in small-sized companies. Self-report questionnaires and blood samples for lipid and glucose were collected from 492 workers in 31 small-sized companies in Korea. Multilevel modeling was conducted to estimate the effects of related factors at the individual and organizational levels. Multilevel regression analysis showed that workers in workplaces with a cafeteria had 1.81 times higher CVD risk after adjusting for factors at the individual level (p=.022). Organizational-level variables explained 17.1% of the variance in CVD risk. The results indicate that differences in CVD risk were related to organizational factors. It is necessary to consider not only individual factors but also organizational factors when planning a CVD risk reduction program. The risk associated with workplace cafeterias can be reduced by improving the CVD-related food environment; an organizational-level intervention approach should therefore be used to reduce the CVD risk of workers in small-sized companies in Korea.

  19. A computer program for sample size computations for banding studies

    USGS Publications Warehouse

    Wilson, K.R.; Nichols, J.D.; Hines, J.E.

    1989-01-01

    Sample sizes necessary for estimating survival rates of banded birds, adults and young, are derived based on specified levels of precision. The banding study can be new or ongoing. The desired coefficient of variation (CV) for annual survival estimates, the CV for mean annual survival estimates, and the length of the study must be specified to compute sample sizes. A computer program is available for computation of the sample sizes, and a description of the input and output is provided.
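The program's actual computations are based on band-recovery survival models, but the core relationship between a target coefficient of variation and the required sample size can be sketched with a simplified binomial model of annual survival, where CV(ŝ) = sqrt((1 − s)/(n·s)). The inputs below are hypothetical:

```python
import math

def sample_size_for_cv(survival, target_cv):
    """Smallest n such that a binomial estimate of annual survival s
    has CV(s_hat) = sqrt((1 - s) / (n * s)) <= target_cv.

    Illustrative simplification only; the banding program uses
    band-recovery model variances rather than this binomial form.
    """
    n = (1 - survival) / (survival * target_cv**2)
    return math.ceil(n)

# Hypothetical inputs: true survival 0.6, desired CV of 10%
n = sample_size_for_cv(survival=0.6, target_cv=0.10)
```

Halving the target CV quadruples the required sample size, which is why tight precision requirements for mean annual survival drive large banding efforts.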

  20. Risk-Based Sampling: I Don't Want to Weight in Vain.

    PubMed

    Powell, Mark R

    2015-12-01

    Recently, there has been considerable interest in developing risk-based sampling for food safety and animal and plant health for efficient allocation of inspection and surveillance resources. The problem of risk-based sampling allocation presents a challenge similar to financial portfolio analysis. Markowitz (1952) laid the foundation for modern portfolio theory based on mean-variance optimization. However, a persistent challenge in implementing portfolio optimization is the problem of estimation error, leading to false "optimal" portfolios and unstable asset weights. In some cases, portfolio diversification based on simple heuristics (e.g., equal allocation) has better out-of-sample performance than complex portfolio optimization methods due to estimation uncertainty. Even for portfolios with a modest number of assets, the estimation window required for true optimization may imply an implausibly long stationary period. The implications for risk-based sampling are illustrated by a simple simulation model of lot inspection for a small, heterogeneous group of producers. © 2015 Society for Risk Analysis.
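The portfolio analogy can be illustrated with a toy lot-inspection model: a fixed sampling budget is allocated across producers either equally or in proportion to violation rates estimated from a short, noisy history. All numbers below are hypothetical, and this is a sketch of the estimation-error problem rather than the paper's simulation model:

```python
import random

random.seed(7)

# Hypothetical true violation rates for a small, heterogeneous producer group
true_rates = [0.01, 0.02, 0.05, 0.10]
budget = 400  # total lots inspected per period

def expected_detections(allocation):
    """Expected number of violative lots found under a given allocation."""
    return sum(n * p for n, p in zip(allocation, true_rates))

# Simple heuristic: equal allocation, immune to estimation error
equal = [budget // len(true_rates)] * len(true_rates)

# "Optimized" allocation built from a short estimation window
# (50 lots per producer); the weights inherit the sampling noise.
window = 50
estimates = [sum(random.random() < p for _ in range(window)) / window
             for p in true_rates]
total = sum(estimates) or 1.0  # guard against an all-zero history
risk_based = [round(budget * e / total) for e in estimates]
```

With longer estimation windows the risk-based weights stabilize; with short ones they can starve a producer whose violations simply failed to appear in the history, which is the unstable-weights problem the abstract draws from portfolio theory.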

  1. Capturing heterogeneity: The role of a study area's extent for estimating mean throughfall

    NASA Astrophysics Data System (ADS)

    Zimmermann, Alexander; Voss, Sebastian; Metzger, Johanna Clara; Hildebrandt, Anke; Zimmermann, Beate

    2016-11-01

    The selection of an appropriate spatial extent of a sampling plot is one among several important decisions involved in planning a throughfall sampling scheme. In fact, the choice of the extent may determine whether or not a study can adequately characterize the hydrological fluxes of the studied ecosystem. Previous attempts to optimize throughfall sampling schemes focused on the selection of an appropriate sample size, support, and sampling design, while comparatively little attention has been given to the role of the extent. In this contribution, we investigated the influence of the extent on the representativeness of mean throughfall estimates for three forest ecosystems of varying stand structure. Our study is based on virtual sampling of simulated throughfall fields. We derived these fields from throughfall data sampled in a simply structured forest (young tropical forest) and two heterogeneous forests (old tropical forest, unmanaged mixed European beech forest). We then sampled the simulated throughfall fields with three common extents and various sample sizes for a range of events and for accumulated data. Our findings suggest that the size of the study area should be carefully adapted to the complexity of the system under study and to the required temporal resolution of the throughfall data (i.e. event-based versus accumulated). Generally, event-based sampling in complex structured forests (conditions that favor comparatively long autocorrelations in throughfall) requires the largest extents. For event-based sampling, the choice of an appropriate extent can be as important as using an adequate sample size.
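The virtual-sampling approach can be sketched by resampling a synthetic, right-skewed "field" and observing how the error of the sample mean shrinks with sample size. The real study additionally varied the extent and preserved spatial autocorrelation, which this simplified, non-spatial sketch ignores; the field values below are hypothetical:

```python
import random
import statistics

random.seed(42)

# Synthetic stand-in for a throughfall field: right-skewed values (mm),
# since drip points produce occasional large totals. Purely illustrative.
field = [random.lognormvariate(2.0, 0.6) for _ in range(10_000)]
true_mean = statistics.fmean(field)

def mean_error(sample_size, replicates=500):
    """Median absolute relative error of the sample mean under
    repeated virtual sampling of the synthetic field."""
    errors = []
    for _ in range(replicates):
        s = random.sample(field, sample_size)
        errors.append(abs(statistics.fmean(s) - true_mean) / true_mean)
    return statistics.median(errors)

small, large = mean_error(25), mean_error(200)
```

In a spatially autocorrelated field, collectors placed within one correlation length are partially redundant, which is why a too-small extent can undermine even a generous sample size.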

  2. Particle size distributions of metal and non-metal elements in an urban near-highway environment

    EPA Science Inventory

    Determination of the size-resolved elemental composition of near-highway particulate matter (PM) is important due to the health and environmental risks it poses. In the current study, twelve 24 h PM samples were collected (in July-August 2006) using a low-pressure impactor positi...

  3. Design of the value of imaging in enhancing the wellness of your heart (VIEW) trial and the impact of uncertainty on power.

    PubMed

    Ambrosius, Walter T; Polonsky, Tamar S; Greenland, Philip; Goff, David C; Perdue, Letitia H; Fortmann, Stephen P; Margolis, Karen L; Pajewski, Nicholas M

    2012-04-01

    Although observational evidence has suggested that the measurement of coronary artery calcium (CAC) may improve risk stratification for cardiovascular events and thus help guide the use of lipid-lowering therapy, this contention has not been evaluated within the context of a randomized trial. The Value of Imaging in Enhancing the Wellness of Your Heart (VIEW) trial is proposed as a randomized study in participants at low intermediate risk of future coronary heart disease (CHD) events to evaluate whether CAC testing leads to improved patient outcomes. To describe the challenges encountered in designing a prototypical screening trial and to examine the impact of uncertainty on power. The VIEW trial was designed as an effectiveness clinical trial to examine the benefit of CAC testing to guide therapy on a primary outcome consisting of a composite of nonfatal myocardial infarction, probable or definite angina with revascularization, resuscitated cardiac arrest, nonfatal stroke (not transient ischemic attack (TIA)), CHD death, stroke death, other atherosclerotic death, or other cardiovascular disease (CVD) death. Many critical choices were faced in designing the trial, including (1) the choice of primary outcome, (2) the choice of therapy, (3) the target population with corresponding ethical issues, (4) specifications of assumptions for sample size calculations, and (5) impact of uncertainty in these assumptions on power/sample size determination. We have proposed a sample size of 30,000 (800 events), which provides 92.7% power. Alternatively, sample sizes of 20,228 (539 events), 23,138 (617 events), and 27,078 (722 events) provide 80%, 85%, and 90% power. We have also allowed for uncertainty in our assumptions by computing average power integrated over specified prior distributions. This relaxation of specificity indicates a reduction in power, dropping to 89.9% (95% confidence interval (CI): 89.8-89.9) for a sample size of 30,000. 
Sample sizes of 20,228, 23,138, and 27,078 provide power of 78.0% (77.9-78.0), 82.5% (82.5-82.6), and 87.2% (87.2-87.3), respectively. These power estimates are dependent on the form and parameters of the prior distributions. Despite the pressing need for a randomized trial to evaluate the utility of CAC testing, conduct of such a trial requires recruiting a large patient population, making efficiency of critical importance. The large sample size is primarily due to targeting a study population at relatively low risk of a CVD event. Our calculations also illustrate the importance of formally considering uncertainty in power calculations of large trials, as standard power calculations may tend to overestimate power.
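The erosion of nominal power when assumptions are uncertain can be sketched with a standard event-driven (Schoenfeld) approximation. The hazard ratio and the prior below are our illustrative assumptions, not the VIEW trial's actual design parameters:

```python
import random
from math import erf, exp, log, sqrt

random.seed(0)

def norm_cdf(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def logrank_power(events, hazard_ratio, z_alpha=1.959964):
    """Schoenfeld approximation, 1:1 allocation, two-sided alpha = 0.05."""
    return norm_cdf(sqrt(events) * abs(log(hazard_ratio)) / 2.0 - z_alpha)

# Fixed-assumption power for an event-driven design (HR is hypothetical).
fixed = logrank_power(800, 0.786)
print(f"power at 800 events, HR = 0.786: {fixed:.3f}")

# Average power: integrate power over a prior on log(HR) by Monte Carlo.
# The prior's centre and spread are hypothetical.
draws = [random.gauss(log(0.786), 0.05) for _ in range(20_000)]
average = sum(logrank_power(800, exp(lh)) for lh in draws) / len(draws)
print(f"average power over the prior:  {average:.3f}")
```

Because power is concave where the trial is well powered, spreading the effect size over a prior pulls the average below the fixed-assumption value, which is the qualitative behaviour the abstract reports.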

  4. Design of the Value of Imaging in Enhancing the Wellness of Your Heart (VIEW) Trial and the Impact of Uncertainty on Power

    PubMed Central

    Ambrosius, Walter T.; Polonsky, Tamar S.; Greenland, Philip; Goff, David C.; Perdue, Letitia H.; Fortmann, Stephen P.; Margolis, Karen L.; Pajewski, Nicholas M.

    2014-01-01

Background Although observational evidence has suggested that the measurement of coronary artery calcium (CAC) may improve risk stratification for cardiovascular events and thus help guide the use of lipid-lowering therapy, this contention has not been evaluated within the context of a randomized trial. The Value of Imaging in Enhancing the Wellness of Your Heart (VIEW) trial is proposed as a randomized study in participants at low intermediate risk of future coronary heart disease (CHD) events to evaluate whether CAC testing leads to improved patient outcomes. Purpose To describe the challenges encountered in designing a prototypical screening trial and to examine the impact of uncertainty on power. Methods The VIEW trial was designed as an effectiveness clinical trial to examine the benefit of CAC testing to guide therapy on a primary outcome consisting of a composite of non-fatal myocardial infarction, probable or definite angina with revascularization, resuscitated cardiac arrest, non-fatal stroke (not transient ischemic attack (TIA)), CHD death, stroke death, other atherosclerotic death, or other cardiovascular disease (CVD) death. Many critical choices were faced in designing the trial, including: (1) the choice of primary outcome, (2) the choice of therapy, (3) the target population with corresponding ethical issues, (4) specifications of assumptions for sample size calculations, and (5) impact of uncertainty in these assumptions on power/sample size determination. Results We have proposed a sample size of 30,000 (800 events) which provides 92.7% power. Alternatively, sample sizes of 20,228 (539 events), 23,138 (617 events) and 27,078 (722 events) provide 80, 85, and 90% power. We have also allowed for uncertainty in our assumptions by computing average power integrated over specified prior distributions. 
This relaxation of specificity indicates a reduction in power, dropping to 89.9% (95% confidence interval (CI): 89.8 to 89.9) for a sample size of 30,000. Sample sizes of 20,228, 23,138, and 27,078 provide power of 78.0% (77.9 to 78.0), 82.5% (82.5 to 82.6), and 87.2% (87.2 to 87.3), respectively. Limitations These power estimates are dependent on the form and parameters of the prior distributions. Conclusions Despite the pressing need for a randomized trial to evaluate the utility of CAC testing, conduct of such a trial requires recruiting a large patient population, making efficiency of critical importance. The large sample size is primarily due to targeting a study population at relatively low risk of a CVD event. Our calculations also illustrate the importance of formally considering uncertainty in power calculations of large trials, as standard power calculations may tend to overestimate power. PMID:22333998

  5. Online alcohol interventions: a systematic review.

    PubMed

    White, Angela; Kavanagh, David; Stallman, Helen; Klein, Britt; Kay-Lambkin, Frances; Proudfoot, Judy; Drennan, Judy; Connor, Jason; Baker, Amanda; Hines, Emily; Young, Ross

    2010-12-19

There has been a significant increase in the availability of online programs for alcohol problems. A systematic review of the research evidence underpinning these programs is timely. Our objective was to review the efficacy of online interventions for alcohol misuse. Systematic searches of Medline, PsycINFO, Web of Science, and Scopus were conducted for English abstracts (excluding dissertations) published from 1998 onward. Search terms were: (1) Internet, Web*; (2) online, computer*; (3) alcohol*; and (4) effect*, trial*, random* (where * denotes a wildcard). Forward and backward searches from identified papers were also conducted. Articles were included if (1) the primary intervention was delivered and accessed via the Internet, (2) the intervention focused on moderating or stopping alcohol consumption, and (3) the study was a randomized controlled trial of an alcohol-related screen, assessment, or intervention. The literature search initially yielded 31 randomized controlled trials (RCTs), 17 of which met inclusion criteria. Of these 17 studies, 12 (70.6%) were conducted with university students, and 11 (64.7%) specifically focused on at-risk, heavy, or binge drinkers. Sample sizes ranged from 40 to 3216 (median 261), with 12 (70.6%) studies predominantly involving brief personalized feedback interventions. Using published data, effect sizes could be extracted from 8 of the 17 studies. In relation to alcohol units per week or month and based on 5 RCTs where a measure of alcohol units per week or month could be extracted, differential effect sizes to posttreatment ranged from 0.02 to 0.81 (mean 0.42, median 0.54). Pre-post effect sizes for brief personalized feedback interventions ranged from 0.02 to 0.81, and in 2 multi-session modularized interventions, a pre-post effect size of 0.56 was obtained in both. Pre-post differential effect sizes for peak blood alcohol concentrations (BAC) ranged from 0.22 to 0.88, with a mean effect size of 0.66. 
The available evidence suggests that users can benefit from online alcohol interventions and that this approach could be particularly useful for groups less likely to access traditional alcohol-related services, such as women, young people, and at-risk users. However, caution should be exercised given the limited number of studies allowing extraction of effect sizes, the heterogeneity of outcome measures and follow-up periods, and the large proportion of student-based studies. More extensive RCTs in community samples are required to better understand the efficacy of specific online alcohol approaches, program dosage, the additive effect of telephone or face-to-face interventions, and effective strategies for their dissemination and marketing.

  6. Online Alcohol Interventions: A Systematic Review

    PubMed Central

    Kavanagh, David; Stallman, Helen; Klein, Britt; Kay-Lambkin, Frances; Proudfoot, Judy; Drennan, Judy; Connor, Jason; Baker, Amanda; Hines, Emily; Young, Ross

    2010-01-01

Background There has been a significant increase in the availability of online programs for alcohol problems. A systematic review of the research evidence underpinning these programs is timely. Objectives Our objective was to review the efficacy of online interventions for alcohol misuse. Systematic searches of Medline, PsycINFO, Web of Science, and Scopus were conducted for English abstracts (excluding dissertations) published from 1998 onward. Search terms were: (1) Internet, Web*; (2) online, computer*; (3) alcohol*; and (4) effect*, trial*, random* (where * denotes a wildcard). Forward and backward searches from identified papers were also conducted. Articles were included if (1) the primary intervention was delivered and accessed via the Internet, (2) the intervention focused on moderating or stopping alcohol consumption, and (3) the study was a randomized controlled trial of an alcohol-related screen, assessment, or intervention. Results The literature search initially yielded 31 randomized controlled trials (RCTs), 17 of which met inclusion criteria. Of these 17 studies, 12 (70.6%) were conducted with university students, and 11 (64.7%) specifically focused on at-risk, heavy, or binge drinkers. Sample sizes ranged from 40 to 3216 (median 261), with 12 (70.6%) studies predominantly involving brief personalized feedback interventions. Using published data, effect sizes could be extracted from 8 of the 17 studies. In relation to alcohol units per week or month and based on 5 RCTs where a measure of alcohol units per week or month could be extracted, differential effect sizes to posttreatment ranged from 0.02 to 0.81 (mean 0.42, median 0.54). Pre-post effect sizes for brief personalized feedback interventions ranged from 0.02 to 0.81, and in 2 multi-session modularized interventions, a pre-post effect size of 0.56 was obtained in both. 
Pre-post differential effect sizes for peak blood alcohol concentrations (BAC) ranged from 0.22 to 0.88, with a mean effect size of 0.66. Conclusions The available evidence suggests that users can benefit from online alcohol interventions and that this approach could be particularly useful for groups less likely to access traditional alcohol-related services, such as women, young people, and at-risk users. However, caution should be exercised given the limited number of studies allowing extraction of effect sizes, the heterogeneity of outcome measures and follow-up periods, and the large proportion of student-based studies. More extensive RCTs in community samples are required to better understand the efficacy of specific online alcohol approaches, program dosage, the additive effect of telephone or face-to-face interventions, and effective strategies for their dissemination and marketing. PMID:21169175

  7. Sample allocation balancing overall representativeness and stratum precision.

    PubMed

    Diaz-Quijano, Fredi Alexander

    2018-05-07

In large-scale surveys, it is often necessary to distribute a preset sample size among a number of strata, and researchers must decide between prioritizing overall representativeness and the precision of stratum estimates. Hence, I evaluated different sample allocation strategies based on stratum size. The strategies evaluated herein included allocation proportional to the stratum population; equal sample for all strata; and allocation proportional to the natural logarithm, cubic root, and square root of the stratum population. This study exploited two facts: for a preset sample size, the dispersion index of the stratum sampling fractions is correlated with the error of the population estimator, and the dispersion index of the stratum-specific sampling errors measures how unequally precision is distributed across strata. Identification of a balanced and efficient strategy was based on comparing these two dispersion indices. Balance and efficiency of the strategies changed depending on overall sample size. As the sample to be distributed increased, the most efficient allocation strategies were, in order, equal sample for each stratum; proportional to the logarithm, the cubic root, and the square root; and proportional to the stratum population. Depending on sample size, each of the strategies evaluated could be considered in optimizing the sample to maintain both overall representativeness and stratum-specific precision. Copyright © 2018 Elsevier Inc. All rights reserved.
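The five allocation rules the abstract compares are easy to compute side by side. The stratum populations and the total sample of 1,000 below are hypothetical:

```python
from math import log, sqrt

# Hypothetical stratum populations and a preset total sample.
strata_pops = [50_000, 20_000, 8_000, 2_000, 500]
TOTAL = 1_000

def allocate(weights, total=TOTAL):
    """Split the total sample proportionally to the given stratum weights."""
    s = sum(weights)
    return [round(total * w / s) for w in weights]

rules = {
    "proportional": strata_pops,
    "equal":        [1.0] * len(strata_pops),
    "logarithm":    [log(p) for p in strata_pops],
    "cubic root":   [p ** (1.0 / 3.0) for p in strata_pops],
    "square root":  [sqrt(p) for p in strata_pops],
}
for name, weights in rules.items():
    print(f"{name:12s} -> {allocate(weights)}")
```

Moving from proportional toward equal allocation progressively shifts sample away from large strata and toward small ones, which is exactly the representativeness-versus-stratum-precision trade-off the study quantifies with dispersion indices.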

  8. Inadequacy of Conventional Grab Sampling for Remediation Decision-Making for Metal Contamination at Small-Arms Ranges.

    PubMed

    Clausen, J L; Georgian, T; Gardner, K H; Douglas, T A

    2018-01-01

Research shows grab sampling is inadequate for evaluating military ranges contaminated with energetics because of their highly heterogeneous distribution. Similar studies assessing the heterogeneous distribution of metals at small-arms ranges (SAR) are lacking. To address this, we evaluated whether grab sampling provides appropriate data for performing risk analysis at metal-contaminated SARs characterized with 30-48 grab samples. We evaluated the extractable Cu, Pb, Sb, and Zn content of the field data using a Monte Carlo random resampling with replacement (bootstrapping) simulation approach. Results indicate the 95% confidence interval of the mean for Pb (432 mg/kg) at one site was 200-700 mg/kg, with a data range of 5-4500 mg/kg. Considering that the U.S. Environmental Protection Agency screening level for lead is 400 mg/kg, the necessity of cleanup at this site is unclear. Resampling based on populations of 7 and 15 samples, sample sizes more realistic for the area, yielded high false-negative rates.
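The bootstrap the study applied can be sketched directly. The 20 concentrations below are hypothetical, chosen only to mimic the skewed range the abstract reports (roughly 5-4500 mg/kg):

```python
import random
from statistics import mean

random.seed(42)

# Hypothetical grab-sample Pb concentrations (mg/kg).
pb_mg_kg = [5, 12, 40, 75, 90, 120, 150, 210, 300, 380,
            410, 450, 520, 600, 700, 850, 990, 1200, 2100, 4500]

def bootstrap_ci(data, reps=5000, alpha=0.05):
    """Percentile CI for the mean via resampling with replacement."""
    means = sorted(mean(random.choices(data, k=len(data)))
                   for _ in range(reps))
    return means[int(reps * alpha / 2)], means[int(reps * (1 - alpha / 2))]

lo, hi = bootstrap_ci(pb_mg_kg)
print(f"mean = {mean(pb_mg_kg):.0f} mg/kg, 95% CI = ({lo:.0f}, {hi:.0f})")
# When a CI this wide brackets a screening level (e.g. 400 mg/kg for Pb),
# a cleanup decision based on the grab-sample mean alone is unreliable.
```

Repeating the exercise with resamples of size 7 or 15 widens the interval further, which is how the study demonstrates the high false-negative risk of small grab-sample campaigns.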

  9. Towards a Comparative Index of Seaport Climate-Risk: Development of Indicators from Open Data

    NASA Astrophysics Data System (ADS)

    McIntosh, R. D.; Becker, A.

    2016-02-01

    Seaports represent an example of coastal infrastructure that is at once critical to global trade, constrained to the land-sea interface, and exposed to weather and climate hazards. Seaports face impacts associated with projected changes in sea level, sedimentation, ocean chemistry, wave dynamics, temperature, precipitation, and storm frequency and intensity. Port decision-makers have the responsibility to enhance resilience against these impacts. At the multi-port (regional or national) scale, policy-makers must prioritize adaptation efforts to maximize the efficiency of limited physical and financial resources. Prioritization requires comparing across seaports, and comparison requires a standardized assessment method, but efforts to date have either been limited in scope to exposure-only assessments or limited in scale to evaluate one port in isolation from a system of ports. In order to better understand the distribution of risk across ports and to inform transportation resilience policy, we are developing a comparative assessment method to measure the relative climate-risk faced by a sample of ports. Our mixed-methods approach combines a quantitative, data-driven, indicator-based assessment with qualitative data collected via expert-elicitation. In this presentation, we identify and synthesize over 120 potential risk indicators from open data sources. Indicators represent exposure, sensitivity, and adaptive capacity for a pilot sample of 20 ports. Our exploratory data analysis, including Principal Component Analysis, uncovered sources of variance between individual ports and between indicators. Next steps include convening an expert panel representing the perspectives of multiple transportation system agencies to find consensus on a suite of robust indicators and metrics for maritime freight node climate risk assessment. The index will be refined based on expert feedback, the sample size expanded, and additional indicators sought from closed data sources. 
Developing standardized indicators from available data is an essential step in risk assessment, as robust indicators can help policy-makers monitor resilience strategy implementation, target and justify resource expenditure for adaptation schemes, communicate adaptation to stakeholders, and benchmark progress.
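One common way to turn exposure, sensitivity, and adaptive-capacity indicators into a comparable index is min-max rescaling followed by a signed aggregation; a minimal sketch in which the ports, values, and equal weighting are all hypothetical, not the study's method:

```python
# Higher exposure and sensitivity raise the index; higher adaptive
# capacity lowers it, so larger scores mean higher relative climate-risk.
ports = {
    "Port A": {"exposure": 3.2, "sensitivity": 0.61, "adaptive": 0.8},
    "Port B": {"exposure": 1.1, "sensitivity": 0.42, "adaptive": 0.3},
    "Port C": {"exposure": 2.5, "sensitivity": 0.77, "adaptive": 0.5},
}

def rescale(values):
    """Min-max rescale a list of raw indicator values to [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

names = list(ports)
scaled = {k: rescale([ports[p][k] for p in names])
          for k in ("exposure", "sensitivity", "adaptive")}
index = {p: scaled["exposure"][i] + scaled["sensitivity"][i]
            - scaled["adaptive"][i]
         for i, p in enumerate(names)}
for port in sorted(index, key=index.get, reverse=True):
    print(f"{port}: {index[port]:+.2f}")
```

In practice the weights would come from the expert panel the abstract describes rather than being equal, and the rescaling choice itself affects rankings, which is one reason indicator selection needs expert validation.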

  10. A cluster-randomised, controlled trial to assess the impact of a workplace osteoporosis prevention intervention on the dietary and physical activity behaviours of working women: study protocol.

    PubMed

    Tan, Ai May; Lamontagne, Anthony D; Sarmugam, Rani; Howard, Peter

    2013-04-29

Osteoporosis is a debilitating disease and its risk can be reduced through adequate calcium consumption and physical activity. This protocol paper describes a workplace-based intervention targeting behaviour change in premenopausal women working in sedentary occupations. A cluster-randomised design was used, comparing the efficacy of a tailored intervention to standard care. Workplaces were the clusters and units of randomisation and intervention. Sample size calculations incorporated the cluster design. The final number of clusters was determined to be 16, based on a cluster size of 20 and calcium intake parameters (effect size 250 mg, ICC 0.5 and standard deviation 290 mg), as this outcome required the highest number of clusters. Sixteen workplaces were recruited from a pool of 97 workplaces and randomly assigned to intervention and control arms (eight in each). Women meeting specified inclusion criteria were then recruited to participate. Workplaces in the intervention arm received three participatory workshops and organisation-wide educational activities. Workplaces in the control/standard care arm received print resources. Intervention workshops were guided by self-efficacy theory and included participatory activities such as goal setting, problem solving, local food sampling, exercise trials, group discussion and behaviour feedback. Outcome measures were calcium intake (milligrams/day) and physical activity level (duration: minutes/week), measured at baseline, four weeks and six months post intervention. This study addresses the current lack of evidence for behaviour change interventions focussing on osteoporosis prevention. It addresses missed opportunities of using workplaces as a platform to target high-risk individuals with sedentary occupations. The intervention was designed to modify behaviour levels to bring about risk reduction. 
It is the first to address dietary and physical activity components, each with unique intervention strategies, in the context of osteoporosis prevention. The intervention used locally relevant behavioural strategies previously shown to support good outcomes in other countries. The combination of these elements has not been incorporated in similar studies in the past, supporting the study hypothesis that the intervention will be more efficacious than standard practice in osteoporosis prevention through improvements in calcium intake and physical activity.
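The cluster calculation behind such protocols follows the standard design-effect inflation. The effect size, SD, ICC, and cluster size below come from the abstract, but the alpha and power are our assumptions, so this sketch will not necessarily reproduce the protocol's figure of eight clusters per arm:

```python
from math import ceil

# Assumed: alpha = 0.05 two-sided, 80% power.
# From the abstract: effect = 250 mg, SD = 290 mg, ICC = 0.5, m = 20.
Z_A, Z_B = 1.959964, 0.841621
delta, sd, icc, m = 250.0, 290.0, 0.5, 20

# Per-arm sample size for two independent means, ignoring clustering.
n_individual = 2 * (Z_A + Z_B) ** 2 * (sd / delta) ** 2
deff = 1 + (m - 1) * icc            # design effect for equal cluster sizes
n_clustered = n_individual * deff   # per-arm n inflated for clustering
clusters_per_arm = ceil(n_clustered / m)

print(f"per-arm n ignoring clustering: {n_individual:.1f}")
print(f"design effect: {deff}")
print(f"clusters per arm: {clusters_per_arm}")
```

With an ICC as high as 0.5, the design effect dominates: each additional participant within a cluster adds little information, so the number of clusters, not the cluster size, drives power.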

  11. A cluster-randomised, controlled trial to assess the impact of a workplace osteoporosis prevention intervention on the dietary and physical activity behaviours of working women: study protocol

    PubMed Central

    2013-01-01

Background Osteoporosis is a debilitating disease and its risk can be reduced through adequate calcium consumption and physical activity. This protocol paper describes a workplace-based intervention targeting behaviour change in premenopausal women working in sedentary occupations. Method/Design A cluster-randomised design was used, comparing the efficacy of a tailored intervention to standard care. Workplaces were the clusters and units of randomisation and intervention. Sample size calculations incorporated the cluster design. The final number of clusters was determined to be 16, based on a cluster size of 20 and calcium intake parameters (effect size 250 mg, ICC 0.5 and standard deviation 290 mg), as this outcome required the highest number of clusters. Sixteen workplaces were recruited from a pool of 97 workplaces and randomly assigned to intervention and control arms (eight in each). Women meeting specified inclusion criteria were then recruited to participate. Workplaces in the intervention arm received three participatory workshops and organisation-wide educational activities. Workplaces in the control/standard care arm received print resources. Intervention workshops were guided by self-efficacy theory and included participatory activities such as goal setting, problem solving, local food sampling, exercise trials, group discussion and behaviour feedback. Outcome measures were calcium intake (milligrams/day) and physical activity level (duration: minutes/week), measured at baseline, four weeks and six months post intervention. Discussion This study addresses the current lack of evidence for behaviour change interventions focussing on osteoporosis prevention. It addresses missed opportunities of using workplaces as a platform to target high-risk individuals with sedentary occupations. The intervention was designed to modify behaviour levels to bring about risk reduction. 
It is the first to address dietary and physical activity components, each with unique intervention strategies, in the context of osteoporosis prevention. The intervention used locally relevant behavioural strategies previously shown to support good outcomes in other countries. The combination of these elements has not been incorporated in similar studies in the past, supporting the study hypothesis that the intervention will be more efficacious than standard practice in osteoporosis prevention through improvements in calcium intake and physical activity. PMID:23627684

  12. Sample size determination in group-sequential clinical trials with two co-primary endpoints

    PubMed Central

    Asakura, Koko; Hamasaki, Toshimitsu; Sugimoto, Tomoyuki; Hayashi, Kenichi; Evans, Scott R; Sozu, Takashi

    2014-01-01

We discuss sample size determination in group-sequential designs with two endpoints as co-primary. We derive the power and sample size within two decision-making frameworks. One is to claim the test intervention’s benefit relative to control when superiority is achieved for the two endpoints at the same interim timepoint of the trial; the other, when superiority is achieved for the two endpoints at any interim timepoint, not necessarily simultaneously. We evaluate the behaviors of sample size and power with varying design elements and provide a real example to illustrate the proposed sample size methods. In addition, we discuss sample size recalculation based on observed data and evaluate the impact on the power and Type I error rate. PMID:24676799
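The basic co-primary phenomenon, that the trial succeeds only if both tests cross the critical value, so joint power depends on the correlation between endpoints, can be sketched with a single-look Monte Carlo. The effect sizes and sample size are hypothetical, and this simplifies away the group-sequential monitoring the paper treats:

```python
import random
from math import sqrt

random.seed(3)

def copower(n, d1, d2, rho, z_alpha=1.644854, sims=20_000):
    """P(both one-sided z-tests reject) for two correlated endpoints.

    n is per-group sample size; d1, d2 are standardized mean differences;
    rho is the correlation between the two endpoints' test statistics.
    """
    wins = 0
    for _ in range(sims):
        z1 = random.gauss(0, 1)
        z2 = rho * z1 + sqrt(1 - rho ** 2) * random.gauss(0, 1)
        t1 = z1 + d1 * sqrt(n / 2)  # noncentral test statistics
        t2 = z2 + d2 * sqrt(n / 2)
        wins += (t1 > z_alpha) and (t2 > z_alpha)
    return wins / sims

for rho in (0.0, 0.5, 0.8):
    print(f"rho = {rho}: joint power = {copower(64, 0.5, 0.5, rho):.3f}")
```

With independent endpoints the joint power is roughly the product of the marginal powers; positive correlation raises it, which is why a sample size computed assuming independence is conservative.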

  13. Planning Community-Based Assessments of HIV Educational Intervention Programs in Sub-Saharan Africa

    ERIC Educational Resources Information Center

    Kelcey, Ben; Shen, Zuchao

    2017-01-01

    A key consideration in planning studies of community-based HIV education programs is identifying a sample size large enough to ensure a reasonable probability of detecting program effects if they exist. Sufficient sample sizes for community- or group-based designs are proportional to the correlation or similarity of individuals within communities.…

  14. Sample size considerations for paired experimental design with incomplete observations of continuous outcomes.

    PubMed

    Zhu, Hong; Xu, Xiaohan; Ahn, Chul

    2017-01-01

Paired experimental design is widely used in clinical and health behavioral studies, where each study unit contributes a pair of observations. Investigators often encounter incomplete observations of paired outcomes in the data collected. Some study units contribute complete pairs of observations, while the others contribute either pre- or post-intervention observations. Statistical inference for paired experimental design with incomplete observations of continuous outcomes has been extensively studied in the literature. However, sample size methods for such study designs are sparsely available. We derive a closed-form sample size formula based on the generalized estimating equation approach by treating the incomplete observations as missing data in a linear model. The proposed method properly accounts for the impact of the mixed structure of the observed data: a combination of paired and unpaired outcomes. The sample size formula is flexible enough to accommodate different missing patterns, magnitudes of missingness, and correlation parameter values. We demonstrate that under complete observations, the proposed generalized estimating equation sample size estimate is the same as that based on the paired t-test. In the presence of missing data, the proposed method leads to a more accurate sample size estimate compared with the crude adjustment. Simulation studies are conducted to evaluate the finite-sample performance of the generalized estimating equation sample size formula. A real application example is presented for illustration.
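The baseline paired calculation and the "crude" missing-data adjustment the abstract contrasts with its GEE-based formula can be sketched as follows. The effect size, SD of differences, and dropout rate are hypothetical; alpha = 0.05 (two-sided) and 90% power are our assumptions:

```python
from math import ceil

Z_ALPHA, Z_BETA = 1.959964, 1.281552
delta, sd_diff, dropout = 5.0, 12.0, 0.20

# Paired t-test (normal-approximation) sample size in complete pairs.
n_pairs = ceil((Z_ALPHA + Z_BETA) ** 2 * (sd_diff / delta) ** 2)

# Crude adjustment: inflate for the expected fraction of incomplete pairs.
n_crude = ceil(n_pairs / (1 - dropout))

print("complete pairs needed:", n_pairs)
print("crude adjusted n:     ", n_crude)
# The crude inflation discards the information carried by the unpaired
# pre-only or post-only observations, which the GEE approach retains,
# so the GEE-based sample size is generally smaller.
```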

  15. Estimating population size with correlated sampling unit estimates

    Treesearch

    David C. Bowden; Gary C. White; Alan B. Franklin; Joseph L. Ganey

    2003-01-01

    Finite population sampling theory is useful in estimating total population size (abundance) from abundance estimates of each sampled unit (quadrat). We develop estimators that allow correlated quadrat abundance estimates, even for quadrats in different sampling strata. Correlated quadrat abundance estimates based on mark–recapture or distance sampling methods occur...

  16. Risk factors for lower extremity injury: a review of the literature

    PubMed Central

    Murphy, D; Connolly, D; Beynnon, B

    2003-01-01

Prospective studies on risk factors for lower extremity injury are reviewed. Many intrinsic and extrinsic risk factors have been implicated; however, there is little agreement with respect to the findings. Future prospective studies are needed using sufficient sample sizes of males and females, including collection of exposure data, and using established methods for identifying and classifying injury severity to conclusively determine additional risk factors for lower extremity injury. PMID:12547739

  17. Reconciling PM10 analyses by different sampling methods for Iron King Mine tailings dust.

    PubMed

    Li, Xu; Félix, Omar I; Gonzales, Patricia; Sáez, Avelino Eduardo; Ela, Wendell P

    2016-03-01

The overall project objective at the Iron King Mine Superfund site is to determine the level and potential risk of heavy metal exposure of the proximate population to dust emanating from the site's tailings pile. To provide sufficient size-fractioned dust for multi-discipline research studies, a dust generator was built and is now being used to generate size-fractioned dust samples for toxicity investigations using in vitro cell culture and animal exposure experiments, as well as studies on geochemical characterization and bioassay solubilization with simulated lung and gastric fluid extractants. The objective of this study is to provide a robust method for source identification by comparing tailings samples produced by the dust generator with dust collected by a MOUDI sampler. The As and Pb concentrations of the PM10 fraction in the MOUDI samples were much lower than in tailings samples produced by the dust generator, indicating dilution of Iron King tailings dust by dust from other sources. For source apportionment purposes, a single-element concentration method was used, based on the assumption that the PM10 fraction comes from a background source plus the Iron King tailings source. The method's conclusion that nearly all arsenic and lead in the PM10 dust fraction originated from the tailings substantiates the conclusion of our previous Pb and Sr isotope study. As and Pb showed a similar mass fraction from Iron King at all sites, suggesting that the two elements share the same major emission source. Further validation of this simple source apportionment method is needed, based on other elements and sites.
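The single-element apportionment the abstract describes is a two-source mixing model; a minimal sketch with wholly hypothetical concentrations:

```python
# Ambient PM10 is modeled as a mix of tailings dust and background dust:
#   C_ambient = f * C_tailings + (1 - f) * C_background,
# solved for f, the tailings dust mass fraction.
def tailings_mass_fraction(c_ambient, c_tailings, c_background):
    return (c_ambient - c_background) / (c_tailings - c_background)

# Hypothetical As concentrations (mg/kg) in ambient PM10, tailings
# source material, and background dust.
c_amb, c_tail, c_bg = 95.0, 3000.0, 8.0

f = tailings_mass_fraction(c_amb, c_tail, c_bg)
as_share = f * c_tail / c_amb  # share of ambient As carried by tailings dust

print(f"tailings dust mass fraction in PM10: {f:.1%}")
print(f"share of ambient As from tailings:   {as_share:.0%}")
```

Even a small tailings mass fraction can account for nearly all of the ambient As when the source material is strongly enriched, which is the pattern the study reports.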

  18. Determination of Minimum Training Sample Size for Microarray-Based Cancer Outcome Prediction–An Empirical Assessment

    PubMed Central

    Cheng, Ningtao; Wu, Leihong; Cheng, Yiyu

    2013-01-01

The promise of microarray technology in providing prediction classifiers for cancer outcome estimation has been confirmed by a number of demonstrable successes. However, the reliability of prediction results relies heavily on the accuracy of the statistical parameters involved in classifiers, and that accuracy cannot be reliably estimated with only a small number of training samples. Therefore, it is of vital importance to determine the minimum number of training samples needed to ensure the clinical value of microarrays in cancer outcome prediction. We evaluated the impact of training sample size on model performance extensively based on 3 large-scale cancer microarray datasets provided by the second phase of the MicroArray Quality Control project (MAQC-II). An SSNR-based (scale of signal-to-noise ratio) protocol was proposed in this study for minimum training sample size determination. External validation results based on another 3 cancer datasets confirmed that the SSNR-based approach could not only determine the minimum number of training samples efficiently, but also provide a valuable strategy for estimating the underlying performance of classifiers in advance. Once translated into clinical routine applications, the SSNR-based protocol would offer a convenient way to improve classifier reliability in microarray-based cancer outcome prediction. PMID:23861920

  19. 77 FR 26292 - Risk Evaluation and Mitigation Strategy Assessments: Social Science Methodologies to Assess Goals...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-03

    ... determine endpoints; questionnaire design and analyses; and presentation of survey results. To date, FDA has..., the workshop will invest considerable time in identifying best methodological practices for conducting... sample, sample size, question design, process, and endpoints. Panel 2 will focus on alternatives to...

  20. Shame, pride, and suicidal ideation in a military clinical sample.

    PubMed

    Bryan, Craig J; Ray-Sannerud, Bobbie; Morrow, Chad E; Etienne, Neysa

    2013-05-01

    Suicide risk among U.S. military personnel has been increasing over the past decade. Fluid vulnerability theory (FVT; Rudd, 2006) posits that acute suicidal episodes increase in severity when trait-based (e.g., shame) and state-based (e.g., hopelessness) risk factors interact, especially among individuals who have been previously suicidal. In contrast, trait-based protective factors (e.g., pride) should buffer the deleterious effects of risk factors. A total of 77 active duty military personnel (95% Air Force; 58.4% male, 39.0% female; 67.5% Caucasian, 19.5% African-American, 1.3% Native American, 1.3% Native Hawaiian/Pacific Islander, 1.3% Asian, and 5.2% other) engaged in outpatient mental health treatment completed self-report surveys of shame, hopelessness, pride, and suicidal ideation. Generalized multiple regression was used to test the associations and interactive effects of shame, hopelessness, and worst-point past suicidal ideation on severity of current suicidal ideation. Shame significantly interacted with hopelessness (B=-0.013, SE=0.004, p<0.001) and worst-point suicidal ideation (B=0.027, SE=0.010, p=0.010), augmenting each variable's effect on severity of current suicidal ideation. A significant three-way interaction among shame, worst-point suicidal ideation, and pride was also observed (B=-0.010, SE=0.0043, p=0.021), indicating that pride buffered the interactive effects of shame with worst-point suicidal ideation. Limitations include the small sample size, cross-sectional design, and primarily Air Force sample. Among military outpatients with histories of severe suicidal episodes, pride buffers the effects of hopelessness on current suicidal ideation. Results are consistent with FVT. Copyright © 2013 Elsevier B.V. All rights reserved.

  1. Methodological approach for substantiating disease freedom in a heterogeneous small population. Application to ovine scrapie, a disease with a strong genetic susceptibility.

    PubMed

    Martinez, Marie-José; Durand, Benoit; Calavas, Didier; Ducrot, Christian

    2010-06-01

    Demonstrating disease freedom is becoming important in several fields, including animal disease control. Most methods consider sampling only from a homogeneous population in which each animal has the same probability of becoming infected. In this paper, we propose a new methodology to calculate the probability of detecting the disease if it is present in a heterogeneous population of small size with potentially different risk groups, differences in risk being defined using relative risks. To calculate this probability, for each possible arrangement of the infected animals in the different groups, the probability that all the animals tested are test-negative given this arrangement is multiplied by the probability that this arrangement occurs. The probability formula is developed using the assumption of a perfect test and hypergeometric sampling for finite, small populations. The methodology is applied to scrapie, a disease affecting small ruminants and characterized in sheep by a strong genetic susceptibility defining different risk groups. It illustrates that the genotypes of the tested animals heavily influence the confidence level of detecting scrapie. The results present the statistical power for substantiating disease freedom in a small heterogeneous population as a function of the design prevalence, the structure of the sample tested, the structure of the herd and the associated relative risks. (c) 2010 Elsevier B.V. All rights reserved.
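
    The calculation described above can be sketched as follows, assuming a Fisher-type noncentral hypergeometric weighting of arrangements by relative risk (the paper's exact arrangement probability may differ). Exhaustive enumeration keeps this practical only for the small populations the method targets:

```python
from itertools import product
from math import comb

def detection_probability(N, r, n, D):
    """P(at least one test-positive animal) under a perfect test.

    N: group sizes, r: relative risks per group, n: animals tested per group,
    D: total number of infected animals (design prevalence x herd size).
    Each arrangement d = (d_1, ..., d_G) of the D infected animals is weighted
    by prod_g C(N_g, d_g) * r_g**d_g; given an arrangement, the probability of
    missing the disease is the hypergeometric chance that every tested animal
    comes from the uninfected part of its group.
    """
    weight_sum = 0.0
    miss_sum = 0.0
    for d in product(*(range(min(D, Ng) + 1) for Ng in N)):
        if sum(d) != D:
            continue
        w, miss = 1.0, 1.0
        for Ng, rg, ng, dg in zip(N, r, n, d):
            w *= comb(Ng, dg) * rg ** dg
            # all ng sampled animals must come from the Ng - dg uninfected ones
            miss *= comb(Ng - dg, ng) / comb(Ng, ng) if Ng - dg >= ng else 0.0
        weight_sum += w
        miss_sum += w * miss
    return 1.0 - miss_sum / weight_sum
```

    With a single homogeneous group this reduces to the classical hypergeometric freedom-from-disease formula; concentrating the sample in the high-relative-risk group (e.g., the susceptible genotypes in scrapie) raises the detection probability for the same total sample size.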

  2. Tungsten Carbide Grain Size Computation for WC-Co Dissimilar Welds

    NASA Astrophysics Data System (ADS)

    Zhou, Dongran; Cui, Haichao; Xu, Peiquan; Lu, Fenggui

    2016-06-01

    A "two-step" image processing method based on electron backscatter diffraction in scanning electron microscopy was used to compute the tungsten carbide (WC) grain size distribution for tungsten inert gas (TIG) welds and laser welds. Twenty-four images were collected on randomly set fields per sample located at the top, middle, and bottom of a cross-sectional micrograph. Each field contained 500 to 1500 WC grains. The images were recognized through clustering-based image segmentation and WC grain growth recognition. According to the WC grain size computation and experiments, a simple WC-WC interaction model was developed to explain the WC dissolution, grain growth, and aggregation in welded joints. The WC-WC interaction and blunt corners were characterized using scanning and transmission electron microscopy. The WC grain size distribution and the effects of heat input E on grain size distribution for the laser samples were discussed. The results indicate that (1) the grain size distribution follows a Gaussian distribution. Grain sizes at the top of the weld were larger than those near the middle and weld root because of power attenuation. (2) Significant WC grain growth occurred during welding as observed in the as-welded micrographs. The average grain size was 11.47 μm in the TIG samples, which was much larger than that in base metal 1 (BM1 2.13 μm). The grain size distribution curves for the TIG samples revealed a broad particle size distribution without fine grains. The average grain size (1.59 μm) in laser samples was larger than that in base metal 2 (BM2 1.01 μm). (3) WC-WC interaction exhibited complex plane, edge, and blunt corner characteristics during grain growth. A WC ( { 1 {bar{{1}}}00} ) to WC ( {0 1 1 {bar{{0}}}} ) edge disappeared and became a blunt plane WC ( { 10 1 {bar{{0}}}} ) , several grains with two- or three-sided planes and edges disappeared into a multi-edge, and a WC-WC merged.

  3. Characterization of pathogenic SORL1 genetic variants for association with Alzheimer’s disease: a clinical interpretation strategy

    PubMed Central

    Holstege, Henne; van der Lee, Sven J; Hulsman, Marc; Wong, Tsz Hang; van Rooij, Jeroen GJ; Weiss, Marjan; Louwersheimer, Eva; Wolters, Frank J; Amin, Najaf; Uitterlinden, André G; Hofman, Albert; Ikram, M Arfan; van Swieten, John C; Meijers-Heijboer, Hanne; van der Flier, Wiesje M; Reinders, Marcel JT; van Duijn, Cornelia M; Scheltens, Philip

    2017-01-01

    Accumulating evidence suggests that genetic variants in the SORL1 gene are associated with Alzheimer disease (AD), but a strategy to identify which variants are pathogenic is lacking. In a discovery sample of 115 SORL1 variants detected in 1908 Dutch AD cases and controls, we identified the variant characteristics associated with SORL1 variant pathogenicity. Findings were replicated in an independent sample of 103 SORL1 variants detected in 3193 AD cases and controls. In a combined sample of the discovery and replication samples, comprising 181 unique SORL1 variants, we developed a strategy to classify SORL1 variants into five subtypes ranging from pathogenic to benign. We tested this pathogenicity screen in SORL1 variants reported in two independent published studies. SORL1 variant pathogenicity is defined by the Combined Annotation Dependent Depletion (CADD) score and the minor allele frequency (MAF) reported by the Exome Aggregation Consortium (ExAC) database. Variants predicted to be strongly damaging (CADD score >30) that are also extremely rare (ExAC-MAF <1 × 10−5) increased AD risk by 12-fold (95% CI 4.2–34.3; P=5 × 10−9). Protein-truncating SORL1 mutations were all unknown to ExAC and occurred exclusively in AD cases. More common SORL1 variants (ExAC-MAF≥1 × 10−5) were not associated with increased AD risk, even when predicted to be strongly damaging. Findings were independent of gender and the APOE-ε4 allele. High-risk SORL1 variants were observed in a substantial proportion of the AD cases analyzed (2%). Based on their effect size, we propose to consider high-risk SORL1 variants next to variants in APOE, PSEN1, PSEN2 and APP for personalized risk assessments in clinical practice. PMID:28537274
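
    The headline decision rule reported above (strongly damaging and extremely rare) can be written down directly. This sketch covers only the high-risk end of the authors' five-subtype classification; the remaining subtype thresholds are not given in the abstract:

```python
def sorl1_high_risk(cadd_phred, exac_maf):
    """Flag a SORL1 variant as high-risk per the abstract's headline rule:
    strongly damaging (CADD score > 30) AND extremely rare (ExAC MAF < 1e-5).
    Variants absent from ExAC (exac_maf is None) are treated as MAF 0,
    consistent with protein-truncating variants being 'unknown to ExAC'."""
    maf = 0.0 if exac_maf is None else exac_maf
    return cadd_phred > 30 and maf < 1e-5
```
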

  4. Distributions of observed death tolls govern sensitivity to human fatalities

    PubMed Central

    Olivola, Christopher Y.; Sagara, Namika

    2009-01-01

    How we react to humanitarian crises, epidemics, and other tragic events involving the loss of human lives depends largely on the extent to which we are moved by the size of their associated death tolls. Many studies have demonstrated that people generally exhibit a diminishing sensitivity to the number of human fatalities and, equivalently, a preference for risky (vs. sure) alternatives in decisions under risk involving human losses. However, the reason for this tendency remains unknown. Here we show that the distributions of event-related death tolls that people observe govern their evaluations of, and risk preferences concerning, human fatalities. In particular, we show that our diminishing sensitivity to human fatalities follows from the fact that these death tolls are approximately power-law distributed. We further show that, by manipulating the distribution of mortality-related events that people observe, we can alter their risk preferences in decisions involving fatalities. Finally, we show that the tendency to be risk-seeking in mortality-related decisions is lower in countries in which high-mortality events are more frequently observed. Our results support a model of magnitude evaluation based on memory sampling and relative judgment. This model departs from the utility-based approaches typically encountered in psychology and economics in that it does not rely on stable, underlying value representations to explain valuation and choice, or on choice behavior to derive value functions. Instead, preferences concerning human fatalities emerge spontaneously from the distributions of sampled events and the relative nature of the evaluation process. PMID:20018778

  5. Size exclusion chromatography-gradients, an alternative approach to polymer gradient chromatography: 2. Separation of poly(meth)acrylates using a size exclusion chromatography-solvent/non-solvent gradient.

    PubMed

    Schollenberger, Martin; Radke, Wolfgang

    2011-10-28

    A gradient ranging from methanol to tetrahydrofuran (THF) was applied to a series of poly(methyl methacrylate) (PMMA) standards, using the recently developed concept of SEC-gradients. In contrast to conventional gradients, the samples eluted before the solvent, i.e. within the elution range typical for separations by SEC; however, the high molar mass PMMAs were retarded as compared to experiments on the same column using pure THF as the eluent. The molar mass dependence on retention volume showed a complex behaviour, with a nearly molar-mass-independent elution for high molar masses. This molar mass dependence was explained in terms of solubility and size exclusion effects. The solubility-based SEC-gradient was proven useful to separate PMMA and poly(n-butyl acrylate) (PnBuA) from a poly(t-butyl acrylate) (PtBuA) sample. These samples could be separated neither by SEC in THF, due to their very similar hydrodynamic volumes, nor by an SEC-gradient at adsorbing conditions, due to a too low selectivity. The example shows that SEC-gradients can be applied not only in adsorption/desorption mode, but also in precipitation/dissolution mode without risking blocked capillaries or breakthrough peaks. Thus, the new approach is a valuable alternative to conventional gradient chromatography. Copyright © 2011 Elsevier B.V. All rights reserved.

  6. Diabetes mellitus is an independent risk for gastroesophageal reflux disease among urban African Americans.

    PubMed

    Natalini, J; Palit, A; Sankineni, A; Friedenberg, F K

    2015-07-01

    An association between gastroesophageal reflux disease (GERD) and diabetes mellitus (DM) has been reported. Previous studies have not been population-based and have failed to include a representative sample of African American subjects. The aim of the study was to determine if DM is independently associated with GERD among urban African Americans. Single-center, population-based survey utilizing a complex, stratified sampling design. To obtain a simple random sample of the entire African American community, targeted survey zones were identified and invitations were hand-delivered. Participating subjects had to be self-described African American, age ≥18. Surveys were completed at a computer terminal assisted by a research coordinator. Four hundred nineteen subjects participated (weighted sample size of 21 264 [20 888-23 930]). GERD prevalence was 23.7% (95% confidence interval [CI] 23.2-23.9). GERD prevalence was 41.5% in those with DM versus 20.6% for those without (P < 0.001). Those with GERD had DM longer but had lower glycohemoglobin levels. The prevalence of ≥2 DM comorbidities was higher in those with GERD (odds ratio [OR] = 2.06; 95% CI 1.71-2.48). In the final model, age >40, DM, increasing body mass index, harmful drinking, and increasing smoking dependence were independently associated with GERD. For DM, there was significant effect modification by gender. In males, the risk was (OR = 4.63; 95% CI 3.96-5.40), while in females, the risk was markedly attenuated (OR = 1.79; 95% CI 1.61-2.00). Among urban African Americans, there is an independent association between DM and GERD that appears to be stronger in men. More information is needed to understand this association. © 2014 International Society for Diseases of the Esophagus.

  7. A blood-based proteomic classifier for the molecular characterization of pulmonary nodules.

    PubMed

    Li, Xiao-jun; Hayward, Clive; Fong, Pui-Yee; Dominguez, Michel; Hunsucker, Stephen W; Lee, Lik Wee; McLean, Matthew; Law, Scott; Butler, Heather; Schirm, Michael; Gingras, Olivier; Lamontagne, Julie; Allard, Rene; Chelsky, Daniel; Price, Nathan D; Lam, Stephen; Massion, Pierre P; Pass, Harvey; Rom, William N; Vachani, Anil; Fang, Kenneth C; Hood, Leroy; Kearney, Paul

    2013-10-16

    Each year, millions of pulmonary nodules are discovered by computed tomography and subsequently biopsied. Because most of these nodules are benign, many patients undergo unnecessary and costly invasive procedures. We present a 13-protein blood-based classifier that differentiates malignant and benign nodules with high confidence, thereby providing a diagnostic tool to avoid invasive biopsy on benign nodules. Using a systems biology strategy, we identified 371 protein candidates and developed a multiple reaction monitoring (MRM) assay for each. The MRM assays were applied in a three-site discovery study (n = 143) on plasma samples from patients with benign and stage IA lung cancer matched for nodule size, age, gender, and clinical site, producing a 13-protein classifier. The classifier was validated on an independent set of plasma samples (n = 104), exhibiting a negative predictive value (NPV) of 90%. Validation performance on samples from a nondiscovery clinical site showed an NPV of 94%, indicating the general effectiveness of the classifier. A pathway analysis demonstrated that the classifier proteins are likely modulated by a few transcription regulators (NF2L2, AHR, MYC, and FOS) that are associated with lung cancer, lung inflammation, and oxidative stress networks. The classifier score was independent of patient nodule size, smoking history, and age, which are risk factors used for clinical management of pulmonary nodules. Thus, this molecular test provides a potential complementary tool to help physicians in lung cancer diagnosis.
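
    The negative predictive value reported above is simply the fraction of classifier-negative (benign-call) nodules that are truly benign. A minimal illustration with hypothetical validation counts:

```python
def negative_predictive_value(true_negatives, false_negatives):
    """NPV = TN / (TN + FN): of all nodules the classifier calls benign,
    the fraction that really are benign. A high NPV is what justifies
    sparing a classifier-negative patient an invasive biopsy."""
    return true_negatives / (true_negatives + false_negatives)
```

    For example, 90 correctly cleared benign nodules against 10 missed cancers among the classifier-negative calls (hypothetical counts, not the study's) gives an NPV of 0.90, matching the 90% figure reported for the validation set.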

  8. Prevalence of Mild Cognitive Impairment and Dementia in Saudi Arabia: A Community-Based Study.

    PubMed

    Alkhunizan, Muath; Alkhenizan, Abdullah; Basudan, Loay

    2018-01-01

    The age of the population in Saudi Arabia is shifting toward the elderly, which can lead to an increased risk of mild cognitive impairment (MCI) and dementia. The aim of this study is to determine the prevalence of cognitive impairment (MCI and dementia) among elderly patients in a community-based setting in Riyadh, Saudi Arabia. In this cross-sectional study, we included patients aged 60 years and above who were seen in the Family Medicine Clinics affiliated with King Faisal Specialist Hospital and Research Centre. Patients with delirium, active depression, or a history of severe head trauma in the past 3 months were excluded. Patients were interviewed during their regular visit by a trained physician to collect demographic data and to administer the validated Arabic version of the Montreal Cognitive Assessment (MoCA) test. One hundred seventy-one Saudi patients were recruited based on a calculated sample size for the aim of this study. The mean age of the included sample was 67 ± 6 years. The prevalence of cognitive impairment was 45%: the prevalence of MCI was 38.6% and the prevalence of dementia was 6.4%. Age, low level of education, hypertension, and cardiovascular disease were risk factors for cognitive impairment. The prevalence of MCI and dementia in Saudi Arabia measured using the MoCA was in the upper range compared to developed and developing countries. The high rate of risk factors for cognitive impairment in Saudi Arabia contributes to this finding.

  9. Integrative assessment of multiple pesticides as risk factors for non-Hodgkin's lymphoma among men

    PubMed Central

    De Roos, A J; Zahm, S; Cantor, K; Weisenburger, D; Holmes, F; Burmeister, L; Blair, A

    2003-01-01

    Methods: During the 1980s, the National Cancer Institute conducted three case-control studies of NHL in the midwestern United States. These pooled data were used to examine pesticide exposures in farming as risk factors for NHL in men. The large sample size (n = 3417) allowed analysis of 47 pesticides simultaneously, controlling for potential confounding by other pesticides in the model, and adjusting the estimates based on a prespecified variance to make them more stable. Results: Reported use of several individual pesticides was associated with increased NHL incidence, including the organophosphate insecticides coumaphos, diazinon, and fonofos, the insecticides chlordane, dieldrin, and copper acetoarsenite, and the herbicides atrazine, glyphosate, and sodium chlorate. A subanalysis of these "potentially carcinogenic" pesticides suggested a positive trend of risk with exposure to increasing numbers of them. Conclusion: Consideration of multiple exposures is important in accurately estimating specific effects and in evaluating realistic exposure scenarios. PMID:12937207

  10. Social Cognition in Individuals at Ultra-High Risk for Psychosis: A Meta-Analysis

    PubMed Central

    van Donkersgoed, R. J. M.; Wunderink, L.; Nieboer, R.; Aleman, A.; Pijnenborg, G. H. M.

    2015-01-01

    Objective Treatment in the ultra-high risk stage for a psychotic episode is critical to the course of symptoms. Markers for the development of psychosis have been studied, to optimize the detection of people at risk of psychosis. One possible marker for the transition to psychosis is social cognition. To estimate effect sizes for social cognition based on a quantitative integration of the published evidence, we conducted a meta-analysis of social cognitive performance in people at ultra high risk (UHR). Methods A literature search (1970-July 2015) was performed in PubMed, PsychINFO, Medline, Embase, and ISI Web of Science, using the search terms ‘social cognition’, ‘theory of mind’, ‘emotion recognition’, ‘attributional style’, ‘social knowledge’, ‘social perception’, ‘empathy’, ‘at risk mental state’, ‘clinical high risk’, ‘psychosis prodrome’, and ‘ultra high risk’. The pooled effect size (Cohen’s d) and the effect sizes for each domain of social cognition were calculated. A random effects model with 95% confidence intervals was used. Results Seventeen studies were included in the analysis. The overall significant effect was of medium magnitude (d = 0.52, 95% CI = 0.38–0.65). No moderator effects were found for age, gender and sample size. Sub-analyses demonstrated that individuals in the UHR phase show significant moderate deficits in affect recognition and affect discrimination in faces as well as in voices and in verbal Theory of Mind (TOM). Due to an insufficient number of studies, we did not calculate an effect size for attributional bias and social perception/knowledge. A majority of studies did not find a correlation between social cognition deficits and transition to psychosis, which may suggest that social cognition in general is not a useful marker for the development of psychosis. However some studies suggest the possible predictive value of verbal TOM and the recognition of specific emotions in faces for the transition into psychosis. 
More research is needed on these subjects. Conclusion The published literature indicates consistent general impairments in social cognition in people in the UHR phase, but only very specific impairments seem to predict transition to psychosis. PMID:26510175
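
    A pooled effect size under a random effects model, as used above, is conventionally computed with the DerSimonian-Laird estimator of between-study variance. A self-contained sketch (the abstract does not state which meta-analysis software was actually used):

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooled effect size with a 95% CI (DerSimonian-Laird).

    effects: per-study effect sizes (e.g. Cohen's d);
    variances: their within-study variances.
    """
    w = [1.0 / v for v in variances]
    fixed = sum(wi * ei for wi, ei in zip(w, effects)) / sum(w)
    q = sum(wi * (ei - fixed) ** 2 for wi, ei in zip(w, effects))  # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)  # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * ei for wi, ei in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)
```

    When studies agree perfectly, tau² is zero and the estimate collapses to the fixed-effect (inverse-variance) result; heterogeneous studies inflate tau² and widen the interval.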

  11. High-Density Genotyping of Immune Loci in Koreans and Europeans Identifies Eight New Rheumatoid Arthritis Risk Loci

    PubMed Central

    Kim, Kwangwoo; Bang, So-Young; Lee, Hye-Soon; Cho, Soo-Kyung; Choi, Chan-Bum; Sung, Yoon-Kyoung; Kim, Tae-Hwan; Jun, Jae-Bum; Yoo, Dae Hyun; Kang, Young Mo; Kim, Seong-Kyu; Suh, Chang-Hee; Shim, Seung-Cheol; Lee, Shin-Seok; Lee, Jisoo; Chung, Won Tae; Choe, Jung-Yoon; Shin, Hyoung Doo; Lee, Jong-Young; Han, Bok-Ghee; Nath, Swapan K.; Eyre, Steve; Bowes, John; Pappas, Dimitrios A.; Kremer, Joel M.; Gonzalez-Gay, Miguel A; Rodriguez-Rodriguez, Luis; Ärlestig, Lisbeth; Okada, Yukinori; Diogo, Dorothée; Liao, Katherine P.; Karlson, Elizabeth W.; Raychaudhuri, Soumya; Rantapää-Dahlqvist, Solbritt; Martin, Javier; Klareskog, Lars; Padyukov, Leonid; Gregersen, Peter K.; Worthington, Jane; Greenberg, Jeffrey D.; Plenge, Robert M.; Bae, Sang-Cheol

    2015-01-01

    Objective A highly polygenic etiology and high degree of allele-sharing between ancestries have been well-elucidated in genetic studies of rheumatoid arthritis. Recently, the high-density genotyping array Immunochip for immune disease loci identified 14 new rheumatoid arthritis risk loci among individuals of European ancestry. Here, we aimed to identify new rheumatoid arthritis risk loci using Korean-specific Immunochip data. Methods We analyzed Korean rheumatoid arthritis case-control samples using the Immunochip and GWAS array to search for new risk alleles of rheumatoid arthritis with anti-citrullinated peptide antibodies. To increase power, we performed a meta-analysis of Korean data with previously published European Immunochip and GWAS data, for a total sample size of 9,299 Korean and 45,790 European case-control samples. Results We identified 8 new rheumatoid arthritis susceptibility loci (TNFSF4, LBH, EOMES, ETS1–FLI1, COG6, RAD51B, UBASH3A and SYNGR1) that passed a genome-wide significance threshold (p<5×10−8), with evidence for three independent risk alleles at 1q25/TNFSF4. The risk alleles from the 7 new loci other than TNFSF4 (monomorphic in Koreans), together with risk alleles from previously established RA risk loci, exhibited a high correlation of effect sizes between ancestries. Further, we refined the number of SNPs that represent potentially causal variants through a trans-ethnic comparison of densely genotyped SNPs. Conclusion This study demonstrates the advantage of dense mapping and trans-ancestral analysis for identification of potentially causal SNPs. In addition, our findings support the importance of T cells in the pathogenesis and the frequent overlap of risk loci among diverse autoimmune diseases. PMID:24532676

  12. Fall prevention in high-risk patients.

    PubMed

    Shuey, Kathleen M; Balch, Christine

    2014-12-01

    In the oncology population, disease process and treatment factors place patients at risk for falls. Fall bundles provide a framework for developing comprehensive fall programs in oncology. The small sample sizes of interventional studies and their focus on ambulatory and geriatric populations limit the applicability of results. Additional research is needed. Copyright © 2014 Elsevier Inc. All rights reserved.

  13. "Magnitude-based inference": a statistical review.

    PubMed

    Welsh, Alan H; Knight, Emma J

    2015-04-01

    We consider "magnitude-based inference" and its interpretation by examining in detail its use in the problem of comparing two means. We extract from the spreadsheets, which are provided to users of the analysis (http://www.sportsci.org/), a precise description of how "magnitude-based inference" is implemented. We compare the implemented version of the method with general descriptions of it and interpret the method in familiar statistical terms. We show that "magnitude-based inference" is not a progressive improvement on modern statistics. The additional probabilities introduced are not directly related to the confidence interval but, rather, are interpretable either as P values for two different nonstandard tests (for different null hypotheses) or as approximate Bayesian calculations, which also lead to a type of test. We also discuss sample size calculations associated with "magnitude-based inference" and show that the substantial reduction in sample sizes claimed for the method (30% of the sample size obtained from standard frequentist calculations) is not justifiable so the sample size calculations should not be used. Rather than using "magnitude-based inference," a better solution is to be realistic about the limitations of the data and use either confidence intervals or a fully Bayesian analysis.

  14. Red blood cell distribution width and mortality risk in a community-based prospective cohort: NHANES III

    PubMed Central

    Perlstein, Todd S; Weuve, Jennifer; Pfeffer, Marc A; Beckman, Joshua A

    2011-01-01

    Background The red cell distribution width (RDW), an automated measure of red blood cell size heterogeneity (i.e., anisocytosis) that is largely overlooked, is a newly recognized risk marker in patients with established cardiovascular disease (CVD). It is unknown whether RDW is associated with mortality in the general population, or whether this association is specific to CVD. Methods We examined the association of RDW with all-cause mortality, as well as cardiovascular, cancer, and chronic lower respiratory disease mortality among 15,852 adult participants in The Third National Health and Nutrition Examination Survey (1988–1994), a nationally representative sample of the United States population. Mortality status was obtained by matching to the National Death Index, with follow-up through December 31, 2000. Results Estimated mortality rates increased 5-fold from the lowest to highest quintile of RDW after accounting for age, and 2-fold after multivariable adjustment (each Ptrend < 0.001). A 1-standard-deviation increment in RDW (0.98) was associated with a 23% greater risk of all-cause mortality (hazard ratio (HR) 1.23, 95% confidence interval (CI) 1.18–1.28) after multivariable adjustment. RDW was also associated with risk of death due to cardiovascular disease (HR 1.22, 95% CI 1.14–1.31), cancer (HR 1.28, 95% CI 1.21–1.36), and chronic lower respiratory disease (HR 1.32, 95% CI 1.17–1.49). Conclusions Higher RDW was associated with increased mortality risk in this large, community-based sample, an association not specific to CVD. Study of anisocytosis may therefore yield novel pathophysiological insights, and measurement of RDW may contribute to risk assessment. PMID:19307522

  15. Prevalence of risk factors for hepatitis C and associated factors: a population-based study in southern Brazil.

    PubMed

    Kvitko, David Timm; Bastos, Gisele Alsina Nader; Pinto, Maria Eugênia Bresolin

    2013-04-01

    Hepatitis C is a severe public health problem worldwide because of its consequences. Studies that aim at determining the prevalence of its risk factors are important for understanding the problem. The objective was to estimate the prevalence of, and factors associated with, some risk factors for the disease in a community called Restinga, located in the city of Porto Alegre, RS, Brazil. This paper is based on a population-based cross-sectional study, with systematic sampling proportional to the size of census tracts, in which 3,391 adults answered a standardized questionnaire. The prevalence of blood transfusion among the people interviewed was 14.98%, and 60.83% of those transfusions occurred before 1993. A total of 16.16% of the people had a tattoo, 7.23% wore a piercing, 1.09% said they had already injected illicit drugs, and 12.39% reported previous hospitalization. Prevalence ratios showed that tattoos were more common among young people, piercings among women, and illicit drug use among men. In summary, the recognition of risk factors for hepatitis C enables proper screening of possible carriers of the hepatitis C virus, and thus a reduction in virus transmission. This, however, is only possible if health services are prepared to deal with the hepatitis C virus, through education and public awareness.

  16. Estimation of PAHs dry deposition and BaP toxic equivalency factors (TEFs) study at Urban, Industry Park and rural sampling sites in central Taiwan, Taichung.

    PubMed

    Fang, Guor-Cheng; Chang, Kuan-Foo; Lu, Chungsying; Bai, Hsunling

    2004-05-01

    The concentrations of polycyclic aromatic hydrocarbons (PAHs) in the gas phase and bound to particles were measured simultaneously at industrial (INDUSTRY), urban (URBAN), and rural (RURAL) sites in Taichung, Taiwan. The PAH concentrations, size distributions, estimated dry deposition fluxes, and associated health risks in the ambient air of central Taiwan are discussed in this study. Total PAH concentrations at the INDUSTRY, URBAN, and RURAL sampling sites were found to be 1650 +/- 1240, 1220 +/- 520, and 831 +/- 427 ng/m3, respectively. The results indicated that PAH concentrations were higher at the INDUSTRY and URBAN sampling sites than at the RURAL site because of the greater number of industrial processes, traffic exhausts, and human activities. The estimated dry deposition and size distribution of PAHs were also studied. The estimated dry deposition fluxes of total PAHs were 58.5, 48.8, and 38.6 microg/m2/day at INDUSTRY, URBAN, and RURAL, respectively. The BaP equivalency results indicated that the health risk of gas-phase PAHs was higher than that of the particle phase at all three sampling sites. However, compared with BaP equivalency results from studies conducted in factories, this study indicated that the health risk of PAHs in the ambient air of central Taiwan was acceptable.

  17. HIV Risks, Testing, and Treatment in the Former Soviet Union: Challenges and Future Directions in Research and Methodology.

    PubMed

    Saadat, Victoria M

    2015-01-01

    The dissolution of the USSR resulted in independence for its constituent republics but left them battling unstable economic environments and healthcare systems. Increases in injection drug use, prostitution, and migration were all widespread responses to this transition and have contributed to the emergence of an HIV epidemic in the countries of the former Soviet Union. Researchers have begun to identify the risks of HIV infection as well as the barriers to HIV testing and treatment in the former Soviet Union. Significant methodological challenges have arisen and need to be addressed. The objective of this review is to determine common threads in HIV research in the former Soviet Union and provide useful recommendations for future research studies. In this systematic review of the literature, Pubmed was searched for English-language studies using the key search terms "HIV", "AIDS", "human immunodeficiency virus", "acquired immune deficiency syndrome", "Central Asia", "Kazakhstan", "Kyrgyzstan", "Uzbekistan", "Tajikistan", "Turkmenistan", "Russia", "Ukraine", "Armenia", "Azerbaijan", and "Georgia". Studies were evaluated against eligibility criteria for inclusion. Thirty-nine studies were identified across the two main topic areas of HIV risk and barriers to testing and treatment, themes subsequently referred to as "risk" and "barriers". Study design was predominantly cross-sectional. The most frequently used sampling methods were peer-to-peer and non-probabilistic sampling. The most frequently reported risks were condom misuse, risky intercourse, and unsafe practices among injection drug users. Common barriers to testing included that testing was inconvenient and that results would not remain confidential. Frequent barriers to treatment were based on a distrust in the treatment system. The findings of this review reveal methodological limitations that span the existing studies. 
Small sample sizes, cross-sectional designs, and non-probabilistic sampling methods were frequently reported limitations. Future work should examine barriers to testing and treatment and should include longitudinal studies of HIV risk over time in most-at-risk populations.

  18. Distribution of the two-sample t-test statistic following blinded sample size re-estimation.

    PubMed

    Lu, Kaifeng

    2016-05-01

    We consider blinded sample size re-estimation based on the simple one-sample variance estimator at an interim analysis. We characterize the exact distribution of the standard two-sample t-test statistic at the final analysis. We describe a simulation algorithm for evaluating the probability of rejecting the null hypothesis at a given treatment effect. We compare the blinded sample size re-estimation method with two unblinded methods with respect to the empirical type I error, the empirical power, and the empirical distribution of the standard deviation estimator and final sample size. We characterize the type I error inflation across the range of standardized non-inferiority margins for non-inferiority trials, and derive the adjusted significance level to ensure type I error control for a given sample size of the internal pilot study. We show that the adjusted significance level increases as the sample size of the internal pilot study increases. Copyright © 2016 John Wiley & Sons, Ltd.
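
    The re-estimation rule in this abstract can be sketched with the standard normal-approximation sample-size formula. This is an illustrative sketch, not the authors' code; the function name and the defaults (one-sided alpha of 0.025, 90% power) are assumptions.

```python
from math import ceil
from statistics import NormalDist, variance

def blinded_reestimated_n(pooled_obs, delta, alpha=0.025, power=0.9):
    """Re-estimate the per-arm sample size at an interim analysis using the
    simple one-sample variance of the pooled observations (treatment labels
    stay hidden, so the blind is preserved)."""
    s2 = variance(pooled_obs)  # blinded one-sample variance estimate
    z = NormalDist()
    z_a, z_b = z.inv_cdf(1 - alpha), z.inv_cdf(power)
    # normal-approximation per-arm n for a two-sample comparison
    return ceil(2 * (z_a + z_b) ** 2 * s2 / delta ** 2)
```

    The point the paper formalizes is that the blinded one-sample variance overstates the within-arm variance whenever a true treatment difference exists, so the final two-sample t-test statistic no longer follows its nominal distribution exactly.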

  19. Sample size guidelines for fitting a lognormal probability distribution to censored most probable number data with a Markov chain Monte Carlo method.

    PubMed

    Williams, Michael S; Cao, Yong; Ebel, Eric D

    2013-07-15

    Levels of pathogenic organisms in food and water have steadily declined in many parts of the world. A consequence of this reduction is that the proportion of samples that test positive for the most contaminated product-pathogen pairings has fallen to less than 0.1. While this is unequivocally beneficial to public health, datasets with very few enumerated samples present an analytical challenge because a large proportion of the observations are censored values. One application of particular interest to risk assessors is the fitting of a statistical distribution function to datasets collected at some point in the farm-to-table continuum. The fitted distribution forms an important component of an exposure assessment. A number of studies have compared different fitting methods and proposed lower limits on the proportion of samples where the organisms of interest are identified and enumerated, with the recommended lower limit of enumerated samples being 0.2. This recommendation may not be applicable to food safety risk assessments for a number of reasons, which include the development of new Bayesian fitting methods, the use of highly sensitive screening tests, and the generally larger sample sizes found in surveys of food commodities. This study evaluates the performance of a Markov chain Monte Carlo fitting method when used in conjunction with a screening test and enumeration of positive samples by the Most Probable Number technique. The results suggest that levels of contamination for common product-pathogen pairs, such as Salmonella on poultry carcasses, can be reliably estimated with the proposed fitting method and sample sizes in excess of 500 observations. The results do, however, demonstrate that simple guidelines for this application, such as the proportion of positive samples, cannot be provided. Published by Elsevier B.V.
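
    The fitting approach described here can be illustrated with a minimal random-walk Metropolis sampler over the lognormal parameters, where left-censored observations contribute their CDF below the limit of detection. This is a generic sketch under flat priors, not the authors' implementation; the proposal scales, starting values, and function names are arbitrary assumptions.

```python
import math
import random

def loglik(mu, sigma, detects, n_censored, lod):
    """Log-likelihood of lognormal(mu, sigma) concentrations when samples
    below the limit of detection (lod) are left-censored."""
    if sigma <= 0:
        return -math.inf
    ll = 0.0
    for x in detects:  # enumerated (MPN) values use the lognormal density
        z = (math.log(x) - mu) / sigma
        ll -= math.log(x * sigma * math.sqrt(2 * math.pi)) + 0.5 * z * z
    if n_censored:     # each censored sample contributes Pr(X < lod)
        zc = (math.log(lod) - mu) / sigma
        ll += n_censored * math.log(0.5 * (1 + math.erf(zc / math.sqrt(2))))
    return ll

def metropolis(detects, n_censored, lod, n_iter=2000, seed=7):
    """Random-walk Metropolis over (mu, sigma) with flat priors."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0
    cur = loglik(mu, sigma, detects, n_censored, lod)
    chain = []
    for _ in range(n_iter):
        mu_p = mu + rng.gauss(0, 0.1)
        sigma_p = sigma + rng.gauss(0, 0.05)
        prop = loglik(mu_p, sigma_p, detects, n_censored, lod)
        if math.log(rng.random()) < prop - cur:  # accept/reject step
            mu, sigma, cur = mu_p, sigma_p, prop
        chain.append((mu, sigma))
    return chain
```

    Posterior summaries (means, credible intervals) would then be taken from the chain after discarding a burn-in period.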

  20. Risk forewarning model for rice grain Cd pollution based on Bayes theory.

    PubMed

    Wu, Bo; Guo, Shuhai; Zhang, Lingyan; Li, Fengmei

    2018-03-15

    Cadmium (Cd) pollution of rice grain caused by Cd-contaminated soils is a common problem in southwest and central south China. In this study, utilizing the advantages of the Bayes classification statistical method, we established a risk forewarning model for rice grain Cd pollution and put forward two parameters (the prior probability factor and the data variability factor). The sensitivity analysis of the model parameters illustrated that sample size and standard deviation influenced the accuracy and applicable range of the model. The accuracy of the model was improved by self-renewal of the model, which adds the posterior data to the prior data. Furthermore, this method can be used to predict the risk probability of rice grain Cd pollution under similar soil, tillage, and rice variety conditions. The Bayes approach thus represents a feasible method for risk forewarning of heavy metal pollution of agricultural products caused by contaminated soils. Copyright © 2017 Elsevier B.V. All rights reserved.
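
    As a sketch of the Bayes classification idea, the posterior probability of the "polluted" class given a soil Cd measurement can be computed with Gaussian class-conditional densities. The function names, class statistics, and prior below are illustrative assumptions, not values from the study.

```python
import math

def gauss_pdf(x, mean, sd):
    """Gaussian density, used as the class-conditional likelihood."""
    z = (x - mean) / sd
    return math.exp(-0.5 * z * z) / (sd * math.sqrt(2.0 * math.pi))

def pollution_posterior(soil_cd, prior_polluted, polluted_stats, safe_stats):
    """Posterior probability that grain Cd will exceed the limit, given the
    soil Cd level, via Bayes' rule. prior_polluted plays the role of the
    prior probability factor; the (mean, sd) pairs for each class would come
    from prior survey data."""
    like_p = gauss_pdf(soil_cd, *polluted_stats) * prior_polluted
    like_s = gauss_pdf(soil_cd, *safe_stats) * (1.0 - prior_polluted)
    return like_p / (like_p + like_s)
```

    The model's self-renewal then amounts to re-estimating the class (mean, sd) pairs and the prior after folding newly observed field data back into the training set.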

  1. A single baseline ultrasound assessment of fibroid presence and size is strongly predictive of future uterine procedure: 8-year follow-up of randomly sampled premenopausal women aged 35-49 years.

    PubMed

    Baird, D D; Saldana, T M; Shore, D L; Hill, M C; Schectman, J M

    2015-12-01

    How well can a single baseline ultrasound assessment of fibroid burden (presence or absence of fibroids and size of largest, if present) predict future probability of having a major uterine procedure? During an 8-year follow-up period, the risk of having a major uterine procedure was 2% for those without fibroids and increased with fibroid size for those with fibroids, reaching 47% for those with fibroids ≥ 4 cm in diameter at baseline. Uterine fibroids are a leading indication for hysterectomy. However, when fibroids are found, there are few available data to help clinicians advise patients about disease progression. Women who were 35-49 years old were randomly selected from the membership of a large urban health plan; 80% of those determined to be eligible were enrolled and screened with ultrasound for fibroids ≥ 0.5 cm in diameter. African-American and white premenopausal participants who responded to at least one follow-up interview (N = 964, 85% of those eligible) constituted the study cohort. During follow-up (5822 person-years), participants self-reported any major uterine procedure (67% hysterectomies). Life-table analyses and Cox regression (with censoring for menopause) were used to estimate the risk of having a uterine procedure for women with no fibroids, small (<2 cm in diameter), medium (2-3.9 cm), and large fibroids (≥ 4 cm). Differences between African-American and white women, importance of a clinical diagnosis of fibroids prior to study enrollment, and the impact of submucosal fibroids on risk were investigated. There was a greater loss to follow-up for African-Americans than whites (19 versus 11%). For those with follow-up data, 64% had fibroids at baseline, 33% of whom had had a prior diagnosis. Of those with fibroids, 27% had small fibroids (<2 cm in diameter), 46% had medium (largest fibroid 2-3.9 cm in diameter), and 27% had large fibroids (largest ≥ 4 cm in diameter). Twenty-one percent had at least one submucosal fibroid. 
Major uterine procedures were reported by 115 women during follow-up. The estimated risk of having a procedure in any given year of follow-up for those with fibroids compared with those without fibroids increased markedly with fibroid-size category (from 4-fold, confidence interval (CI) (1.4-11.1) for the small fibroids to 10-fold, CI (4.4-24.8) for the medium fibroids, to 27-fold, CI (11.5-65.2) for the large fibroids). This influence of fibroid size on risk did not differ between African-Americans and whites (P-value for interaction = 0.88). Once fibroid size at enrollment was accounted for, having a prior diagnosis at the time of ultrasound screening was not predictive of having a procedure. Exclusion of women with a submucosal fibroid had little influence on the results. The 8-year risk of a procedure based on lifetable analyses was 2% for women with no fibroids, 8, 23, and 47%, respectively, for women who had small, medium or large fibroids at enrollment. Given the strong association of fibroid size with subsequent risk of a procedure, these findings are unlikely to be due to chance. Despite a large sample size, the number of women having procedures during follow-up was relatively small. Thus, covariates such as BMI, which were not important in our analyses, may have associations that were too small to detect with our sample size. Another limitation is that the medical procedures were self-reported. However, we attempted to retrieve medical records when participants agreed, and 77% of the total procedures reported were verified. Our findings are likely to be generalizable to other African-American and white premenopausal women in their late 30s and 40s, but other ethnic groups have not been studied. Though further studies are needed to confirm and extend the results, our findings provide an initial estimate of disease progression that will be helpful to clinicians and their patients. 
Published by Oxford University Press on behalf of the European Society of Human Reproduction and Embryology 2015. This work is written by (a) US Government employee(s) and is in the public domain in the US.

  2. Sampling guidelines for oral fluid-based surveys of group-housed animals.

    PubMed

    Rotolo, Marisa L; Sun, Yaxuan; Wang, Chong; Giménez-Lirola, Luis; Baum, David H; Gauger, Phillip C; Harmon, Karen M; Hoogland, Marlin; Main, Rodger; Zimmerman, Jeffrey J

    2017-09-01

    Formulas and software for calculating sample size for surveys based on individual animal samples are readily available. However, sample size formulas are not available for oral fluids and other aggregate samples that are increasingly used in production settings. Therefore, the objective of this study was to develop sampling guidelines for oral fluid-based porcine reproductive and respiratory syndrome virus (PRRSV) surveys in commercial swine farms. Oral fluid samples were collected in 9 weekly samplings from all pens in 3 barns on one production site beginning shortly after placement of weaned pigs. Samples (n=972) were tested by real-time reverse-transcription PCR (RT-rtPCR) and the binary results analyzed using a piecewise exponential survival model for interval-censored, time-to-event data with misclassification. Thereafter, simulation studies were used to study the barn-level probability of PRRSV detection as a function of sample size, sample allocation (simple random sampling vs fixed spatial sampling), assay diagnostic sensitivity and specificity, and pen-level prevalence. These studies provided estimates of the probability of detection by sample size and within-barn prevalence. Detection using fixed spatial sampling was as good as, or better than, simple random sampling. Sampling multiple barns on a site increased the probability of detection with the number of barns sampled. These results are relevant to PRRSV control or elimination projects at the herd, regional, or national levels, but the results are also broadly applicable to contagious pathogens of swine for which oral fluid tests of equivalent performance are available. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
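
    A simplified version of the detection question studied here, assuming independent samples rather than the paper's interval-censored survival model, is the classic probability of at least one positive result; the function names and the 95% target are illustrative assumptions.

```python
def detection_probability(n_samples, prevalence, sensitivity=1.0):
    """Probability that at least one sample tests positive, assuming
    independent samples from pens with the given positive-pen prevalence
    and an assay with the given diagnostic sensitivity."""
    p_hit = prevalence * sensitivity
    return 1 - (1 - p_hit) ** n_samples

def samples_needed(prevalence, target=0.95, sensitivity=1.0):
    """Smallest sample size reaching the target detection probability."""
    n = 1
    while detection_probability(n, prevalence, sensitivity) < target:
        n += 1
    return n
```

    For example, at 10% pen-level prevalence and a perfect assay, 29 samples give 95% detection probability; an imperfect assay raises the requirement, which is why assay sensitivity enters the paper's simulations.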

  3. Sequential sampling: a novel method in farm animal welfare assessment.

    PubMed

    Heath, C A E; Main, D C J; Mullan, S; Haskell, M J; Browne, W J

    2016-02-01

    Lameness in dairy cows is an important welfare issue. As part of a welfare assessment, herd level lameness prevalence can be estimated from scoring a sample of animals, where higher levels of accuracy are associated with larger sample sizes. As the financial cost is related to the number of cows sampled, smaller samples are preferred. Sequential sampling schemes have been used for informing decision making in clinical trials. Sequential sampling involves taking samples in stages, where sampling can stop early depending on the estimated lameness prevalence. When welfare assessment is used for a pass/fail decision, a similar approach could be applied to reduce the overall sample size. The sampling schemes proposed here apply the principles of sequential sampling within a diagnostic testing framework. This study develops three sequential sampling schemes of increasing complexity to classify 80 fully assessed UK dairy farms, each with known lameness prevalence. Using the Welfare Quality herd-size-based sampling scheme, the first 'basic' scheme involves two sampling events. At the first sampling event half the Welfare Quality sample size is drawn, and then depending on the outcome, sampling either stops or is continued and the same number of animals is sampled again. In the second 'cautious' scheme, an adaptation is made to ensure that correctly classifying a farm as 'bad' is done with greater certainty. The third scheme is the only scheme to go beyond lameness as a binary measure and investigates the potential for increasing accuracy by incorporating the number of severely lame cows into the decision. The three schemes are evaluated with respect to accuracy and average sample size by running 100 000 simulations for each scheme, and a comparison is made with the fixed size Welfare Quality herd-size-based sampling scheme. All three schemes performed almost as well as the fixed size scheme but with much smaller average sample sizes. 
For the third scheme, an overall association between lameness prevalence and the proportion of lame cows that were severely lame on a farm was found. However, as this association was found to not be consistent across all farms, the sampling scheme did not prove to be as useful as expected. The preferred scheme was therefore the 'cautious' scheme for which a sampling protocol has also been developed.
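
    The 'basic' two-stage scheme can be sketched as follows; the early-stopping thresholds here are illustrative placeholders, not the decision limits used in the study.

```python
import random

def two_stage_classify(herd, full_n, threshold, rng):
    """'Basic' two-stage scheme sketch: score half the Welfare Quality
    sample size, stop early only if the first-stage lameness prevalence is
    decisive, otherwise sample the same number of animals again.
    herd: per-cow lameness indicators (1 = lame)."""
    half = full_n // 2
    first = rng.sample(herd, half)
    p1 = sum(first) / half
    if p1 <= threshold / 2:          # clearly below threshold: pass early
        return "pass", half
    if p1 >= 2 * threshold:          # clearly above threshold: fail early
        return "fail", half
    second = rng.sample(herd, half)  # borderline: take the second stage
    p = (sum(first) + sum(second)) / (2 * half)
    return ("fail" if p >= threshold else "pass"), 2 * half
```

    Averaged over many simulated herds, decisive farms stop at half the fixed sample size, which is the source of the reported reduction in average sample size.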

  4. Prevalence of eating disorder risk and body image distortion among National Collegiate Athletic Association Division I varsity equestrian athletes.

    PubMed

    Torres-McGehee, Toni M; Monsma, Eva V; Gay, Jennifer L; Minton, Dawn M; Mady-Foster, Ashley N

    2011-01-01

    Participation in appearance-based sports, particularly at the collegiate level, may place additional pressure on female athletes to be thin, which may increase the likelihood of their resorting to drastic weight control measures, such as disordered eating behaviors. (1) To estimate the prevalence and sources of eating disorder risk classification by academic status (freshman, sophomore, junior, or senior) and riding discipline (English and Western), (2) to examine riding style and academic status variations in body mass index (BMI) and silhouette type, and (3) to examine these variations across eating disorder risk classification type (eg, body image disturbances). Cross-sectional study. Seven universities throughout the United States. A total of 138 participants volunteered (mean age = 19.88 ± 1.29 years). They represented 2 equestrian disciplines: English riding (n = 91) and Western riding (n = 47). Participants self-reported menstrual cycle history, height, and weight. We screened for eating disorder risk behaviors with the Eating Attitudes Test and for body image disturbance with sex-specific BMI silhouettes. Based on the Eating Attitudes Test, estimated eating disorder prevalence was 42.0% in the total sample, 38.5% among English riders, and 48.9% among Western riders. No BMI or silhouette differences were found across academic status or discipline in disordered eating risk. Overall, participants perceived their body images as significantly larger than their actual physical sizes (self-reported BMI) and wanted to be significantly smaller in both normal clothing and competitive uniforms. Disordered eating risk prevalence among equestrian athletes was similar to that reported in other aesthetic sports and lower than that in nonaesthetic sports. Athletic trainers working with these athletes should be sensitive to these risks and refer athletes as needed to clinicians knowledgeable about disordered eating. 
Professionals working with this population should avoid making negative comments about physical size and appearance.

  5. Sample Size in Clinical Cardioprotection Trials Using Myocardial Salvage Index, Infarct Size, or Biochemical Markers as Endpoint.

    PubMed

    Engblom, Henrik; Heiberg, Einar; Erlinge, David; Jensen, Svend Eggert; Nordrehaug, Jan Erik; Dubois-Randé, Jean-Luc; Halvorsen, Sigrun; Hoffmann, Pavel; Koul, Sasha; Carlsson, Marcus; Atar, Dan; Arheden, Håkan

    2016-03-09

    Cardiac magnetic resonance (CMR) can quantify myocardial infarct (MI) size and myocardium at risk (MaR), enabling assessment of myocardial salvage index (MSI). We assessed how MSI impacts the number of patients needed to reach statistical power in clinical cardioprotection trials, in relation to MI size alone and levels of biochemical markers, and how scan day affects sample size. Controls (n=90) from the recent CHILL-MI and MITOCARE trials were included. MI size, MaR, and MSI were assessed from CMR. High-sensitivity troponin T (hsTnT) and creatine kinase isoenzyme MB (CKMB) levels were assessed in CHILL-MI patients (n=50). Utilizing the distributions of these variables, 100 000 clinical trials were simulated for calculation of the sample size required to reach sufficient power. For a treatment effect of 25% decrease in outcome variables, 50 patients were required in each arm using MSI compared to 93, 98, 120, 141, and 143 for MI size alone, hsTnT (area under the curve [AUC] and peak), and CKMB (AUC and peak) in order to reach a power of 90%. If the average CMR scan day differed by 1 day between treatment and control arms, the sample size would need to be increased by 54% (77 vs 50) to avoid scan-day bias masking a treatment effect of 25%. Sample size in cardioprotection trials can be reduced by 46% to 65% without compromising statistical power when using MSI by CMR as an outcome variable instead of MI size alone or biochemical markers. It is essential to ensure lack of bias in scan day between treatment and control arms to avoid compromising statistical power. © 2016 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.

  6. Momentary assessment of PTSD symptoms and sexual risk behavior in male OEF/OIF/OND Veterans.

    PubMed

    Black, Anne C; Cooney, Ned L; Justice, Amy C; Fiellin, Lynn E; Pietrzak, Robert H; Lazar, Christina M; Rosen, Marc I

    2016-01-15

    Post-traumatic stress disorder (PTSD) in Veterans is associated with increased sexual risk behaviors, but the nature of this association is not well understood. Typical PTSD measurement, deriving a summary estimate of symptom severity over a period of time, precludes inferences about symptom variability and about whether momentary changes in symptom severity predict risk behavior. We assessed the feasibility of measuring daily PTSD symptoms, substance use, and high-risk sexual behavior in Veterans using ecological momentary assessment (EMA). Feasibility indicators were survey completion, PTSD symptom variability, and variability in rates of substance use and sexual risk behavior. Nine male Veterans completed web-based questionnaires by cell phone three times per day for 28 days. Median within-day survey completion rates remained near 90%, and PTSD symptoms showed high within-person variability, ranging up to 59 points on the 80-point scale. Six Veterans reported alcohol or substance use, and substance users reported use of more than one drug. Eight Veterans reported 1 to 28 high-risk sexual events. Heightened PTSD-related negative affect and externalizing behaviors preceded high-risk sexual events. Greater PTSD symptom instability was associated with having multiple sexual partners in the 28-day period. These results are preliminary, given the small sample size and multiple comparisons, and should be verified with larger Veteran samples. Results support the feasibility and utility of using EMA to better understand the relationship between PTSD symptoms and sexual risk behavior in Veterans. Specific antecedent-risk behavior patterns provide promise for focused clinical interventions. Published by Elsevier B.V.

  7. Forestry inventory based on multistage sampling with probability proportional to size

    NASA Technical Reports Server (NTRS)

    Lee, D. C. L.; Hernandez, P., Jr.; Shimabukuro, Y. E.

    1983-01-01

    A multistage sampling technique, with probability proportional to size, is developed for a forest volume inventory using remote sensing data. LANDSAT data, panchromatic aerial photographs, and field data were collected. Based on age and homogeneity, pine and eucalyptus classes are identified. Selection of tertiary sampling units is made through aerial photographs to minimize field work. The sampling errors for eucalyptus and pine ranged from 8.34 to 21.89 percent and from 7.18 to 8.60 percent, respectively.
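
    Probability-proportional-to-size (PPS) selection with replacement, together with the Hansen-Hurwitz estimator commonly paired with it, can be sketched as follows; this is a textbook illustration of the sampling principle, not the inventory's actual multistage design, and the unit names are made up.

```python
import random

def pps_with_replacement(units, sizes, k, rng):
    """Draw k unit indices with probability proportional to size,
    with replacement (the simplest PPS design)."""
    return rng.choices(range(len(units)), weights=sizes, k=k)

def hansen_hurwitz_total(y, sizes, draws):
    """Unbiased estimate of the population total of y from a
    PPS-with-replacement sample: average of y_i / p_i over draws,
    where p_i = size_i / total_size."""
    total_size = sum(sizes)
    return sum(y[i] * total_size / sizes[i] for i in draws) / len(draws)
```

    When the study variable is exactly proportional to the size measure, every draw yields the same estimate and the sampling variance vanishes, which is why PPS designs are efficient when stand volume tracks stand area.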

  8. Dogmatism as a mediating influence on the perception of risk in consumer choice decisions.

    PubMed

    Durand, R M; Davis, D L; Bearden, W O

    1977-01-01

    The risk perceived by individual consumers when faced with an unfamiliar purchase situation was examined across three groups of females for three product categories. Group membership was determined on the basis of high, medium, and low scores on the Trodahl-Powell dogmatism instrument. Ss were 155 housewives in a medium-sized midwestern city in the United States, surveyed as part of a two-tiered sampling process. The results of a multivariate analysis of variance supported the hypothesis that less dogmatic consumers perceive lower levels of risk in unfamiliar purchase situations than more dogmatic individuals. The implication for management is that the likelihood of successful new product introductions may be substantially enhanced by reducing perceived risk in dogmatic consumer segments through direct testimonial promotional themes stressing product acceptance, in support of more traditional and informative advertising messages. The feasibility of this approach rests on the premise that the behavior of dogmatic individuals is more frequently affected by pressure from peers and significant others, whereas the behavior of individuals low in dogmatism is generally based on more factual and relevant information.

  9. Swiss Canine Cancer Registry 1955-2008: Occurrence of the Most Common Tumour Diagnoses and Influence of Age, Breed, Body Size, Sex and Neutering Status on Tumour Development.

    PubMed

    Grüntzig, K; Graf, R; Boo, G; Guscetti, F; Hässig, M; Axhausen, K W; Fabrikant, S; Welle, M; Meier, D; Folkers, G; Pospischil, A

    2016-01-01

    This study is based on the Swiss Canine Cancer Registry, comprising 121,963 diagnostic records of dogs compiled between 1955 and 2008, in which 63,214 (51.83%) animals were diagnosed with tumour lesions through microscopical investigation. Adenoma/adenocarcinoma (n = 12,293, 18.09%) was the most frequent tumour diagnosis. Other common tumour diagnoses were: mast cell tumour (n = 4,415, 6.50%), lymphoma (n = 2,955, 4.35%), melanocytic tumours (n = 2,466, 3.63%), fibroma/fibrosarcoma (n = 2,309, 3.40%), haemangioma/haemangiosarcoma (n = 1,904, 2.80%), squamous cell carcinoma (n = 1,324, 1.95%) and osteoma/osteosarcoma (n = 842, 1.24%). The relative occurrence over time and the most common body locations of those tumour diagnoses are presented. Analyses of the influence of age, breed, body size, sex and neutering status on tumour development were carried out using multiple logistic regression. In certain breeds/breed categories the odds ratios (ORs) for particular tumours were outstandingly high: the boxer had higher ORs for mast cell tumour and haemangioma/haemangiosarcoma, as did the shepherd group for haemangioma/haemangiosarcoma, the schnauzer for squamous cell carcinoma and the rottweiler for osteoma/osteosarcoma. In small dogs, the risk of developing mammary tumours was three times higher than in large dogs. However, small dogs were less likely to be affected by many other tumour types (e.g. tumours of the skeletal system). Examination of the influence of sex and neutering status on tumour prevalence showed that the results depend on the examination method. In all sampling groups the risk for female dogs of developing adenoma/adenocarcinoma was higher than for male dogs. Females had a lower risk of developing haemangioma/haemangiosarcoma and squamous cell carcinoma than males. Neutered animals were at higher risk of developing specific tumours outside the genital organs than intact animals. 
The sample size allows detailed insight into the influences of age, breed, body size, sex and neutering status on canine tumour development. In many cases, the analysis confirms the findings of other authors. In some cases, the results are unique or contradict other studies, implying that further investigations are necessary. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  10. Evaluating manta ray mucus as an alternative DNA source for population genetics study: underwater-sampling, dry-storage and PCR success.

    PubMed

    Kashiwagi, Tom; Maxwell, Elisabeth A; Marshall, Andrea D; Christensen, Ana B

    2015-01-01

    Sharks and rays are increasingly being identified as high-risk species for extinction, prompting urgent assessments of their local or regional populations. Advanced genetic analyses can contribute relevant information on effective population size and connectivity among populations, although acquiring sufficient regional sample sizes can be challenging. DNA is typically amplified from tissue samples, which are collected by hand spears with modified biopsy punch tips. This technique is not always popular, due mainly to a perception that invasive sampling might harm the rays, change their behaviour, or have a negative impact on tourism. To explore alternative methods, we evaluated the yields and PCR success of DNA template prepared from manta ray mucus collected underwater and captured and stored on a Whatman FTA™ Elute card. The pilot study demonstrated that mucus can be effectively collected underwater using a toothbrush. DNA stored on cards was found to be reliable for PCR-based population genetics studies. We successfully amplified mtDNA ND5, nuclear DNA RAG1, and microsatellite loci for all samples and confirmed that the sequences and genotypes were those of the target species. As the yields of DNA with the tested method were low, further improvements are desirable for assays that may require larger amounts of DNA, such as population genomic studies using emerging next-gen sequencing.

  11. Evaluating manta ray mucus as an alternative DNA source for population genetics study: underwater-sampling, dry-storage and PCR success

    PubMed Central

    Maxwell, Elisabeth A.; Marshall, Andrea D.; Christensen, Ana B.

    2015-01-01

    Sharks and rays are increasingly being identified as high-risk species for extinction, prompting urgent assessments of their local or regional populations. Advanced genetic analyses can contribute relevant information on effective population size and connectivity among populations, although acquiring sufficient regional sample sizes can be challenging. DNA is typically amplified from tissue samples, which are collected by hand spears with modified biopsy punch tips. This technique is not always popular, due mainly to a perception that invasive sampling might harm the rays, change their behaviour, or have a negative impact on tourism. To explore alternative methods, we evaluated the yields and PCR success of DNA template prepared from manta ray mucus collected underwater and captured and stored on a Whatman FTA™ Elute card. The pilot study demonstrated that mucus can be effectively collected underwater using a toothbrush. DNA stored on cards was found to be reliable for PCR-based population genetics studies. We successfully amplified mtDNA ND5, nuclear DNA RAG1, and microsatellite loci for all samples and confirmed that the sequences and genotypes were those of the target species. As the yields of DNA with the tested method were low, further improvements are desirable for assays that may require larger amounts of DNA, such as population genomic studies using emerging next-gen sequencing. PMID:26413431

  12. "PowerUp"!: A Tool for Calculating Minimum Detectable Effect Sizes and Minimum Required Sample Sizes for Experimental and Quasi-Experimental Design Studies

    ERIC Educational Resources Information Center

    Dong, Nianbo; Maynard, Rebecca

    2013-01-01

    This paper and the accompanying tool are intended to complement existing power analysis tools by offering a tool based on the framework of Minimum Detectable Effect Sizes (MDES) formulae that can be used to determine sample size requirements and to estimate minimum detectable effect sizes for a range of individual- and…
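
    The MDES idea can be illustrated for the simplest design, a two-arm individual-level RCT, using the normal-approximation multiplier; the PowerUp! formulae proper use t-distribution degrees of freedom and cover many more designs, so this is a simplified sketch with assumed defaults.

```python
from math import sqrt
from statistics import NormalDist

def mdes_individual_rct(n, p_treat=0.5, r2=0.0, alpha=0.05, power=0.8):
    """Minimum detectable (standardized) effect size for a two-arm
    individual RCT: multiplier M times the standard error of the
    standardized impact estimate. r2 is the variance explained by
    covariates; p_treat is the proportion assigned to treatment."""
    z = NormalDist()
    m = z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)  # two-tailed multiplier
    return m * sqrt((1 - r2) / (p_treat * (1 - p_treat) * n))
```

    With n = 400, balanced assignment, and no covariates, the MDES is about 0.28 standard deviations; quadrupling n halves it, the familiar square-root law.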

  13. Sampling studies to estimate the HIV prevalence rate in female commercial sex workers.

    PubMed

    Pascom, Ana Roberta Pati; Szwarcwald, Célia Landmann; Barbosa Júnior, Aristides

    2010-01-01

    We investigated the sampling methods being used to estimate the HIV prevalence rate among female commercial sex workers. The studies were classified according to whether the sample size was adequate to estimate the HIV prevalence rate and according to the sampling method (probabilistic or convenience). We identified 75 studies that estimated the HIV prevalence rate among female sex workers. Most of the studies employed convenience samples. The sample size was not adequate to estimate the HIV prevalence rate in 35 studies. The use of convenience samples limits statistical inference for the whole group. The number of published studies has increased since 2005, as has the number of studies using probabilistic samples. This represents a large advance in the monitoring of risk behavior practices and the HIV prevalence rate in this group.
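
    Sample-size adequacy for a prevalence estimate is usually judged against the standard normal-approximation formula; this sketch assumes simple random sampling, and the 95% confidence default is a conventional choice, not a threshold from this review.

```python
from math import ceil
from statistics import NormalDist

def prevalence_sample_size(p_expected, margin, conf=0.95):
    """Minimum n to estimate a prevalence within ±margin at the given
    confidence level: n = z^2 * p(1-p) / d^2."""
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    return ceil(z * z * p_expected * (1 - p_expected) / margin ** 2)
```

    The conservative choice p_expected = 0.5 with a ±5% margin gives the familiar n = 385; a study with far fewer subjects would be classified here as having an inadequate sample size.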

  14. Field size, length, and width distributions based on LACIE ground truth data. [large area crop inventory experiment

    NASA Technical Reports Server (NTRS)

    Pitts, D. E.; Badhwar, G.

    1980-01-01

    The development of agricultural remote sensing systems requires knowledge of agricultural field size distributions so that the sensors, sampling frames, image interpretation schemes, registration systems, and classification systems can be properly designed. Malila et al. (1976) studied the field size distribution for wheat and all other crops in two Kansas LACIE (Large Area Crop Inventory Experiment) intensive test sites using ground observations of the crops and measurements of their field areas based on current year rectified aerial photomaps. The field area and size distributions reported in the present investigation are derived from a representative subset of a stratified random sample of LACIE sample segments. In contrast to previous work, the obtained results indicate that most field-size distributions are not log-normally distributed. The most common field size observed in this study was 10 acres for most crops studied.

  15. Association of genetic and non-genetic risk factors with the development of prostate cancer in Malaysian men.

    PubMed

    Munretnam, Khamsigan; Alex, Livy; Ramzi, Nurul Hanis; Chahil, Jagdish Kaur; Kavitha, I S; Hashim, Nikman Adli Nor; Lye, Say Hean; Velapasamy, Sharmila; Ler, Lian Wee

    2014-01-01

    There is growing global interest in stratifying men into different levels of risk of developing prostate cancer, so it is important to identify common genetic variants that confer this risk. Although many studies have identified more than a dozen common genetic variants highly associated with prostate cancer, none has been conducted in the Malaysian population. To determine the association of such variants in Malaysian men with prostate cancer, we evaluated a panel of 768 SNPs previously found associated with various cancers, including prostate-specific SNPs, in a population-based case-control study (51 case subjects with prostate cancer and 51 control subjects) in Malaysian men of Malay, Chinese and Indian ethnicity. We identified 21 SNPs significantly associated with prostate cancer. Among these, 12 SNPs were strongly associated with increased risk of prostate cancer while the remaining nine SNPs were associated with reduced risk. However, data analysis based on ethnic stratification left only five SNPs in Malays and three SNPs in Chinese that remained significant. This could be due to the small sample size in each ethnic group. Significant non-genetic risk factors were also identified for their association with prostate cancer. Our study is the first to investigate the involvement of multiple variants in susceptibility to prostate cancer in Malaysian men using a genotyping approach. The identified SNPs and non-genetic risk factors have a significant association with prostate cancer.

  16. Sample size, confidence, and contingency judgement.

    PubMed

    Clément, Mélanie; Mercier, Pierre; Pastò, Luigi

    2002-06-01

    According to statistical models, the acquisition function of contingency judgement is due to confidence increasing with sample size. According to associative models, the function reflects the accumulation of associative strength on which the judgement is based. Which view is right? Thirty university students assessed the relation between a fictitious medication and a symptom of skin discoloration in conditions that varied sample size (4, 6, 8 or 40 trials) and contingency (delta P = .20, .40, .60 or .80). Confidence was also collected. Contingency judgement was lower for smaller samples, while confidence level correlated inversely with sample size. This dissociation between contingency judgement and confidence contradicts the statistical perspective.

  17. External validation of Vascular Study Group of New England risk predictive model of mortality after elective abdominal aorta aneurysm repair in the Vascular Quality Initiative and comparison against established models.

    PubMed

    Eslami, Mohammad H; Rybin, Denis V; Doros, Gheorghe; Siracuse, Jeffrey J; Farber, Alik

    2018-01-01

    The purpose of this study is to externally validate a recently reported Vascular Study Group of New England (VSGNE) risk predictive model of postoperative mortality after elective abdominal aortic aneurysm (AAA) repair and to compare its predictive ability across different patient risk categories and against established risk predictive models using the Vascular Quality Initiative (VQI) AAA sample. The VQI AAA database (2010-2015) was queried for patients who underwent elective AAA repair. The VSGNE cases were excluded from the VQI sample. The external validation of the recently published VSGNE AAA risk predictive model, which includes only preoperative variables (age, gender, history of coronary artery disease, chronic obstructive pulmonary disease, cerebrovascular disease, creatinine levels, and aneurysm size) and planned type of repair, was performed using the VQI elective AAA repair sample. The predictive value of the model was assessed via the C-statistic. The Hosmer-Lemeshow method was used to assess calibration and goodness of fit. The model was then compared with the Medicare model, the Vascular Governance Northwest model, and the Glasgow Aneurysm Score for predicting mortality in the VQI sample. The Vuong test was performed to compare model fit between the models. Model discrimination was assessed across VQI risk quintiles. Data from 4431 cases from the VSGNE sample, with an overall mortality rate of 1.4%, were used to develop the model. The internally validated VSGNE model showed very high discriminating ability in predicting mortality (C = 0.822) and good model fit (Hosmer-Lemeshow P = .309) in the VSGNE elective AAA repair sample. External validation on 16,989 VQI cases with an overall 0.9% mortality rate showed very robust predictive ability for mortality (C = 0.802). Vuong tests yielded a significant fit difference favoring the VSGNE model over the Medicare model (C = 0.780), the Vascular Governance Northwest model (C = 0.774), and the Glasgow Aneurysm Score (C = 0.639).
Across the 5 risk quintiles, the VSGNE model predicted observed mortality with great accuracy. This simple VSGNE AAA risk predictive model showed very high discriminative ability in predicting mortality after elective AAA repair in a large, independent external sample of AAA cases performed by a diverse array of physicians nationwide. The risk score based on this simple VSGNE model can reliably stratify patients according to their risk of mortality after elective AAA repair better than other established models. Copyright © 2017 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.
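The discrimination figures quoted above (C = 0.822, 0.802, etc.) are concordance statistics. As an illustration only (not the authors' code), a C-statistic can be computed from predicted risks and observed outcomes as the fraction of event/non-event pairs ranked correctly:

```python
def c_statistic(risks, outcomes):
    """Concordance (C) statistic: the probability that a randomly chosen
    patient with the event received a higher predicted risk than a
    randomly chosen patient without it (ties count as 1/2)."""
    events = [r for r, y in zip(risks, outcomes) if y == 1]
    nonevents = [r for r, y in zip(risks, outcomes) if y == 0]
    concordant = 0.0
    for e in events:
        for n in nonevents:
            if e > n:
                concordant += 1.0
            elif e == n:
                concordant += 0.5
    return concordant / (len(events) * len(nonevents))

# Perfectly separated predictions give C = 1.0; random ones hover near 0.5.
print(c_statistic([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0]))  # 1.0
```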

  18. Can Urbanization, Social and Spatial Disparities Help to Understand the Rise of Cardiometabolic Risk Factors in Bobo-Dioulasso? A Study in a Secondary City of Burkina Faso, West Africa

    PubMed Central

    Zeba, Augustin Nawidimbasba; Yaméogo, Marceline Téné; Tougouma, Somnoma Jean-Baptiste; Kassié, Daouda; Fournet, Florence

    2017-01-01

    Background: Unplanned urbanization plays a key role in chronic disease growth. This population-based cross-sectional study assessed the occurrence of cardiometabolic risk factors in Bobo-Dioulasso and their association with urbanization conditions. Methods: Through spatial sampling, four Bobo-Dioulasso sub-spaces were selected for a population survey to measure the adult health status. Yéguéré, Dogona, Tounouma and Secteur 25 had very different urbanization conditions (position within the city; time of creation and healthcare structure access). The sample size was estimated at 1000 households (250 for each sub-space), in each of which one adult (aged 35 to 59 years) was randomly selected. Finally, 860 adults were surveyed. Anthropometric, socioeconomic and clinical data were collected. Arterial blood pressure was measured and blood samples were collected to assess glycemia. Results: Weight, body mass index and waist circumference (mean values) and serum glycemia (83.4 mg/dL, i.e., 4.62 mmol/L) were significantly higher in Tounouma, Dogona, and Secteur 25 than in Yéguéré, the poorest and most rural-like sub-space (p = 0.001). Overall, 43.2%, 40.5%, 5.3% and 60.9% of participants had overweight, hypertension, hyperglycemia and one or more cardiometabolic risk markers, respectively. Conclusions: Bobo-Dioulasso is unprepared to face this public health issue and urgent responses are needed to reduce the health risks associated with unplanned urbanization. PMID:28375173

  19. Can Urbanization, Social and Spatial Disparities Help to Understand the Rise of Cardiometabolic Risk Factors in Bobo-Dioulasso? A Study in a Secondary City of Burkina Faso, West Africa.

    PubMed

    Zeba, Augustin Nawidimbasba; Yaméogo, Marceline Téné; Tougouma, Somnoma Jean-Baptiste; Kassié, Daouda; Fournet, Florence

    2017-04-04

    Background: Unplanned urbanization plays a key role in chronic disease growth. This population-based cross-sectional study assessed the occurrence of cardiometabolic risk factors in Bobo-Dioulasso and their association with urbanization conditions. Methods: Through spatial sampling, four Bobo-Dioulasso sub-spaces were selected for a population survey to measure the adult health status. Yéguéré, Dogona, Tounouma and Secteur 25 had very different urbanization conditions (position within the city; time of creation and healthcare structure access). The sample size was estimated at 1000 households (250 for each sub-space), in each of which one adult (aged 35 to 59 years) was randomly selected. Finally, 860 adults were surveyed. Anthropometric, socioeconomic and clinical data were collected. Arterial blood pressure was measured and blood samples were collected to assess glycemia. Results: Weight, body mass index and waist circumference (mean values) and serum glycemia (83.4 mg/dL, i.e., 4.62 mmol/L) were significantly higher in Tounouma, Dogona, and Secteur 25 than in Yéguéré, the poorest and most rural-like sub-space (p = 0.001). Overall, 43.2%, 40.5%, 5.3% and 60.9% of participants had overweight, hypertension, hyperglycemia and one or more cardiometabolic risk markers, respectively. Conclusions: Bobo-Dioulasso is unprepared to face this public health issue and urgent responses are needed to reduce the health risks associated with unplanned urbanization.

  20. Soil factors of ecosystems' disturbance risk reduction under the impact of rocket fuel

    NASA Astrophysics Data System (ADS)

    Krechetov, Pavel; Koroleva, Tatyana; Sharapova, Anna; Chernitsova, Olga

    2016-04-01

    Environmental impacts occur at all stages of a space rocket launch. One of the most dangerous consequences of a missile launch is pollution by components of rocket fuel (unsymmetrical dimethylhydrazine, UDMH). The areas subjected to falls of the used stages of carrier rockets launched from the Baikonur cosmodrome occupy thousands of square kilometers of different natural landscapes: from the dry steppes of Kazakhstan to the taiga of West Siberia and the mountains of the Altai-Sayany region. This study aims to assess the environmental risk of adverse effects of rocket fuel on the soil. Experimental studies were performed on soil and rock samples with specified parameters of material composition. The effect of organic matter, acid-base properties, particle size distribution, and mineralogy on the decrease in the concentration of UDMH in equilibrium solutions was studied. The soil factors were found to be arranged in the following series according to their effect on UDMH mobility: acid-base properties > organic matter content > clay fraction mineralogy > particle size distribution. The rate of self-purification of contaminated soil was also estimated. The experimental study of the behavior of UDMH in soil allowed us to define a model for calculating critical loads of UDMH in terrestrial ecosystems.

  1. A cautionary note on Bayesian estimation of population size by removal sampling with diffuse priors.

    PubMed

    Bord, Séverine; Bioche, Christèle; Druilhet, Pierre

    2018-05-01

    We consider the problem of estimating a population size by removal sampling when the sampling rate is unknown. Bayesian methods are now widespread and allow prior knowledge to be included in the analysis. However, we show that Bayes estimates based on default improper priors lead to improper posteriors or infinite estimates. Similarly, weakly informative priors give unstable estimators that are sensitive to the choice of hyperparameters. By examining the likelihood, we show that population size estimates can be stabilized by penalizing small values of the sampling rate or large values of the population size. Based on theoretical results and simulation studies, we propose some recommendations on the choice of the prior. We then applied our results to real datasets. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
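As a rough illustration of the model behind this abstract (a sketch under my own naming, not the authors' implementation), the removal-sampling likelihood treats the catch on each pass as binomial in the animals still present, and the population-size estimate can be stabilized by a Beta-style penalty that discourages small sampling rates:

```python
import math

def log_lik(N, p, catches):
    """Removal-sampling log-likelihood: the catch on pass i is
    Binomial(N - removed_so_far, p)."""
    removed, ll = 0, 0.0
    for c in catches:
        avail = N - removed
        if c > avail:
            return float("-inf")
        ll += (math.lgamma(avail + 1) - math.lgamma(c + 1)
               - math.lgamma(avail - c + 1)
               + c * math.log(p) + (avail - c) * math.log(1 - p))
        removed += c
    return ll

def penalized_estimate(catches, a=2.0, b=1.0):
    """Grid-search MAP estimate of (N, p) with a Beta(a, b)-style
    penalty on p; a > 1 penalizes small sampling rates, which is one
    way to keep the population-size estimate finite and stable."""
    total = sum(catches)
    best, best_val = None, float("-inf")
    for N in range(total, 10 * total + 1):
        for p in (i / 100 for i in range(1, 100)):
            val = (log_lik(N, p, catches)
                   + (a - 1) * math.log(p) + (b - 1) * math.log(1 - p))
            if val > best_val:
                best_val, best = val, (N, p)
    return best
```

For steadily declining catches such as [60, 24, 10], the penalized estimate of N lands a little above the 94 animals actually removed, with a sampling rate near 0.6.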

  2. Factor XII assay

    MedlinePlus

    ... your provider about the meaning of your specific test results. What Abnormal Results ... vary in size so it may be harder to take a blood sample from one person than another. Other slight risks from having ...

  3. Evaluation of a model of violence risk assessment among forensic psychiatric patients.

    PubMed

    Douglas, Kevin S; Ogloff, James R P; Hart, Stephen D

    2003-10-01

    This study tested the interrater reliability and criterion-related validity of structured violence risk judgments made by using one application of the structured professional judgment model of violence risk assessment, the HCR-20 violence risk assessment scheme, which assesses 20 key risk factors in three domains: historical, clinical, and risk management. The HCR-20 was completed for a sample of 100 forensic psychiatric patients who had been found not guilty by reason of a mental disorder and were subsequently released to the community. Violence in the community was determined from multiple file-based sources. Interrater reliability of structured final risk judgments of low, moderate, or high violence risk made on the basis of the structured professional judgment model was acceptable (weighted kappa=.61). Structured final risk judgments were significantly predictive of postrelease community violence, yielding moderate to large effect sizes. Event history analyses showed that final risk judgments made with the structured professional judgment model added incremental validity to the HCR-20 used in an actuarial (numerical) sense. The findings support the structured professional judgment model of risk assessment as well as the HCR-20 specifically and suggest that clinical judgment, if made within a structured context, can contribute in meaningful ways to the assessment of violence risk.
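For context, the weighted kappa reported above penalizes disagreements by their distance on the ordinal low/moderate/high scale. A minimal pure-Python sketch (illustrative only, not the study's software; the function name is mine):

```python
def linear_weighted_kappa(rater1, rater2, k=3):
    """Linear-weighted kappa for two raters assigning ordinal codes
    0..k-1 (e.g. 0 = low, 1 = moderate, 2 = high violence risk)."""
    n = len(rater1)
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(rater1, rater2):
        obs[a][b] += 1 / n
    p1 = [sum(row) for row in obs]                             # rater 1 marginals
    p2 = [sum(obs[i][j] for i in range(k)) for j in range(k)]  # rater 2 marginals
    disagree = expected = 0.0
    for i in range(k):
        for j in range(k):
            w = abs(i - j) / (k - 1)   # linear disagreement weight
            disagree += w * obs[i][j]
            expected += w * p1[i] * p2[j]
    return 1 - disagree / expected

# Perfect agreement yields kappa = 1.0.
print(linear_weighted_kappa([0, 1, 2, 0, 1, 2], [0, 1, 2, 0, 1, 2]))  # 1.0
```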

  4. Simultaneous Determination of Size and Quantification of Gold Nanoparticles by Direct Coupling Thin layer Chromatography with Catalyzed Luminol Chemiluminescence

    PubMed Central

    Yan, Neng; Zhu, Zhenli; He, Dong; Jin, Lanlan; Zheng, Hongtao; Hu, Shenghong

    2016-01-01

    The increasing use of metal-based nanoparticle products has raised concerns in particular for the aquatic environment, and thus the quantification of such nanomaterials released from products should be determined to assess their environmental risks. In this study, a simple, rapid and sensitive method for the determination of the size and mass concentration of gold nanoparticles (AuNPs) in aqueous suspension was established by direct coupling of thin layer chromatography (TLC) with catalyzed luminol-H2O2 chemiluminescence (CL) detection. For this purpose, a moving stage was constructed to scan the chemiluminescence signal from the TLC-separated AuNPs. The proposed TLC-CL method allows the quantification of differently sized AuNPs (13 nm, 41 nm and 100 nm) contained in a mixture. Various experimental parameters affecting the characterization of AuNPs, such as the concentration of H2O2, the concentration and pH of the luminol solution, and the size of the spectrometer aperture, were investigated. Under optimal conditions, the detection limits for the AuNP size fractions of 13 nm, 41 nm and 100 nm were 38.4 μg/L, 35.9 μg/L and 39.6 μg/L, with repeatabilities (RSD, n = 7) of 7.3%, 6.9% and 8.1%, respectively, for 10 mg/L samples. The proposed method was successfully applied to the characterization of AuNP size and concentration in aqueous test samples. PMID:27080702

  5. The role of the RTEL1 rs2297440 polymorphism in the risk of glioma development: a meta-analysis.

    PubMed

    Zhang, Cuiping; Lu, Yu; Zhang, Xiaolian; Yang, Dongmei; Shang, Shuxin; Liu, Denghe; Jiang, Kongmei; Huang, Weiqiang

    2016-07-01

    The regulator of telomere elongation helicase 1 (RTEL1) gene plays a crucial role in the DNA double-strand break repair pathway by maintaining genomic stability. Recent epidemiological studies showed that the rs2297440 polymorphism in the RTEL1 gene is a potential risk locus for glioma development, but the results were inconclusive. To clarify the association between this polymorphism and the risk of glioma, we performed a comprehensive meta-analysis. The PubMed, EMBASE, Web of Science, and China National Knowledge Infrastructure databases were systematically searched to identify all relevant studies published up to 30 August 2015. Four eligible studies were finally included. The pooled results indicated that the RTEL1 rs2297440 polymorphism moderately increased the risk of glioma in all genetic models. A comparison under the dominant model, CT + CC versus TT (OR 1.40; 95% CI 1.24-1.60; p < 0.001), indicated that carrying the C allele conferred a 40% increased risk of developing glioma. In a subgroup analysis based on geographic location (Europe, Asia, and America), there was an association between the rs2297440 polymorphism and the risk of glioma in all three areas. The results of the subgroup analysis based on source of controls indicated an elevated risk of glioma in population-based control studies. This meta-analysis demonstrates that the RTEL1 rs2297440 polymorphism plays a moderate but significant role in the risk of glioma. Further studies with larger sample sizes are necessary to confirm this finding.
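The pooled OR above comes from combining study-level estimates. As a generic sketch of fixed-effect (inverse-variance) pooling, assuming each study reports an OR with a 95% CI (an illustration of the technique, not necessarily the authors' exact procedure, which may have used random effects):

```python
import math

def pooled_or(ors, cis):
    """Fixed-effect (inverse-variance) pooling of study odds ratios.
    Each CI is a (lower, upper) 95% interval; the SE of log(OR) is
    recovered as (log(U) - log(L)) / (2 * 1.96)."""
    num = den = 0.0
    for o, (lo, hi) in zip(ors, cis):
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
        w = 1 / se ** 2          # inverse-variance weight
        num += w * math.log(o)
        den += w
    log_pooled = num / den
    se_pooled = math.sqrt(1 / den)
    return (math.exp(log_pooled),
            math.exp(log_pooled - 1.96 * se_pooled),
            math.exp(log_pooled + 1.96 * se_pooled))
```

A single study pools to itself: `pooled_or([1.4], [(1.24, 1.60)])` returns a point estimate of 1.4 with a CI close to the input interval.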

  6. Planning Risk-Based SQC Schedules for Bracketed Operation of Continuous Production Analyzers.

    PubMed

    Westgard, James O; Bayat, Hassan; Westgard, Sten A

    2018-02-01

    To minimize patient risk, "bracketed" statistical quality control (SQC) is recommended in the new CLSI guidelines for SQC (C24-Ed4). Bracketed SQC requires that a QC event both precedes and follows (brackets) a group of patient samples. In optimizing a QC schedule, the frequency of QC or run size becomes an important planning consideration to maintain quality and also facilitate responsive reporting of results from continuous operation of high production analytic systems. Different plans for optimizing a bracketed SQC schedule were investigated on the basis of Parvin's model for patient risk and CLSI C24-Ed4's recommendations for establishing QC schedules. A Sigma-metric run size nomogram was used to evaluate different QC schedules for processes of different sigma performance. For high Sigma performance, an effective SQC approach is to employ a multistage QC procedure utilizing a "startup" design at the beginning of production and a "monitor" design periodically throughout production. Example QC schedules are illustrated for applications with measurement procedures having 6-σ, 5-σ, and 4-σ performance. Continuous production analyzers that demonstrate high σ performance can be effectively controlled with multistage SQC designs that employ a startup QC event followed by periodic monitoring or bracketing QC events. Such designs can be optimized to minimize the risk of harm to patients. © 2017 American Association for Clinical Chemistry.
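The sigma performance referred to above is conventionally the Westgard sigma metric, computed from the allowable total error (TEa), bias, and imprecision (CV) of the measurement procedure. A one-line sketch (illustrative; the abstract itself does not spell out the formula):

```python
def sigma_metric(tea_pct, bias_pct, cv_pct):
    """Sigma metric: margin between allowable total error and bias,
    expressed in multiples of the assay CV (all inputs in percent)."""
    return (tea_pct - abs(bias_pct)) / cv_pct

# e.g. TEa = 10%, bias = 1%, CV = 1.5%  ->  6-sigma performance
print(sigma_metric(10.0, 1.0, 1.5))  # 6.0
```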

  7. Low seroprevalence of human Lyme disease near a focus of high entomologic risk.

    PubMed

    Rand, P W; Lacombe, E H; Smith, R P; Gensheimer, K; Dennis, D T

    1996-08-01

    To investigate a low rate of reported human Lyme disease adjacent to an area where the vector tick had become well established, we performed human and canine serosurveys and gathered data on environmental factors related to the risk of transmission. In March 1993, we obtained serum samples and administered questionnaires covering outdoor activities, lot size, and frequency of deer sightings to 272 individuals living within a 5-km strip extending 12 km inland from a study site in south coastal Maine where collections revealed an abundant population of deer ticks. Serologic analysis was done using a flagellin-based enzyme-linked immunosorbent assay (ELISA) followed by Western immunoblot of positive and equivocal samples. Sera from 71 unvaccinated dogs within the study area were also analyzed for anti-Borrelia antibodies by ELISA. Human seropositivity was limited to two individuals living within 1.2 km of the coast. The frequency of daily deer sightings decreased sharply outside this area. Canine seropositivity, 100% within the first 0.8 km, decreased to 2% beyond 1.5 km. Canine serology appears to correlate with the entomologic indicators of the risk of Lyme disease transmission. Possible explanations for the low human seroprevalence are offered.

  8. Freezing of gait and fall detection in Parkinson's disease using wearable sensors: a systematic review.

    PubMed

    Silva de Lima, Ana Lígia; Evers, Luc J W; Hahn, Tim; Bataille, Lauren; Hamilton, Jamie L; Little, Max A; Okuma, Yasuyuki; Bloem, Bastiaan R; Faber, Marjan J

    2017-08-01

    Despite the large number of studies that have investigated the use of wearable sensors to detect gait disturbances such as freezing of gait (FOG) and falls, there is little consensus regarding appropriate methodologies for how to optimally apply such devices. Here, an overview of the use of wearable systems to assess FOG and falls in Parkinson's disease (PD), together with their validation performance, is presented. A systematic search of the PubMed and Web of Science databases was performed using a group of concept key words. The final search was performed in January 2017, and articles were selected based upon a set of eligibility criteria. In total, 27 articles were selected. Of those, 23 related to FOG and 4 to falls. FOG studies were performed in either laboratory or home settings, with sample sizes ranging from 1 to 48 patients with PD at Hoehn and Yahr stages 2 to 4. The shin was the most common sensor location and the accelerometer was the most frequently used sensor type. Validity measures ranged from 73% to 100% for sensitivity and 67% to 100% for specificity. Falls and fall risk studies were all home-based, with sample sizes of 1 to 107 patients with PD, mostly using one sensor containing accelerometers, worn at various body locations. Despite the promising validation initiatives reported in these studies, they were all performed on relatively small samples, and there was significant variability in the outcomes measured and results reported. Given these limitations, the validation of sensor-derived assessments of PD features would benefit from more focused research efforts, increased collaboration among researchers, aligned data collection protocols, and shared data sets.

  9. Dispersion and sampling of adult Dermacentor andersoni in rangeland in Western North America.

    PubMed

    Rochon, K; Scoles, G A; Lysyk, T J

    2012-03-01

    A fixed precision sampling plan was developed for off-host populations of adult Rocky Mountain wood tick, Dermacentor andersoni (Stiles) based on data collected by dragging at 13 locations in Alberta, Canada; Washington; and Oregon. In total, 222 site-date combinations were sampled. Each site-date combination was considered a sample, and each sample ranged in size from 86 to 250 10 m2 quadrats. Analysis of simulated quadrats ranging in size from 10 to 50 m2 indicated that the most precise sample unit was the 10 m2 quadrat. Samples taken when abundance < 0.04 ticks per 10 m2 were more likely to not depart significantly from statistical randomness than samples taken when abundance was greater. Data were grouped into ten abundance classes and assessed for fit to the Poisson and negative binomial distributions. The Poisson distribution fit only data in abundance classes < 0.02 ticks per 10 m2, while the negative binomial distribution fit data from all abundance classes. A negative binomial distribution with common k = 0.3742 fit data in eight of the 10 abundance classes. Both the Taylor and Iwao mean-variance relationships were fit and used to predict sample sizes for a fixed level of precision. Sample sizes predicted using the Taylor model tended to underestimate actual sample sizes, while sample sizes estimated using the Iwao model tended to overestimate actual sample sizes. Using a negative binomial with common k provided estimates of required sample sizes closest to empirically calculated sample sizes.
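For a negative binomial count distribution with common dispersion k, the fixed-precision sample size underlying plans like this one follows the standard relation n = (1/D²)(1/m + 1/k), where m is the mean density and D the desired ratio of standard error to mean. A sketch using the paper's common k (the function name is mine, and the 25% precision level is an assumed example, not taken from the abstract):

```python
import math

def nb_sample_size(mean, k, precision):
    """Quadrats required so that SE/mean = `precision`, assuming
    counts follow a negative binomial with dispersion k:
    n = (1/D^2) * (1/m + 1/k)."""
    return math.ceil((1 / precision ** 2) * (1 / mean + 1 / k))

# At 0.04 ticks per 10 m^2 quadrat, common k = 0.3742, 25% precision:
print(nb_sample_size(0.04, 0.3742, 0.25))  # 443
```

Note how sparse populations dominate the cost: at ten times the density, far fewer quadrats are needed for the same precision.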

  10. Accounting for twin births in sample size calculations for randomised trials.

    PubMed

    Yelland, Lisa N; Sullivan, Thomas R; Collins, Carmel T; Price, David J; McPhee, Andrew J; Lee, Katherine J

    2018-05-04

    Including twins in randomised trials leads to non-independence or clustering in the data. Clustering has important implications for sample size calculations, yet few trials take this into account. Estimates of the intracluster correlation coefficient (ICC), or the correlation between outcomes of twins, are needed to assist with sample size planning. Our aims were to provide ICC estimates for infant outcomes, describe the information that must be specified in order to account for clustering due to twins in sample size calculations, and develop a simple tool for performing sample size calculations for trials including twins. ICCs were estimated for infant outcomes collected in four randomised trials that included twins. The information required to account for clustering due to twins in sample size calculations is described. A tool that calculates the sample size based on this information was developed in Microsoft Excel and in R as a Shiny web app. ICC estimates ranged between -0.12, indicating a weak negative relationship, and 0.98, indicating a strong positive relationship between outcomes of twins. Example calculations illustrate how the ICC estimates and sample size calculator can be used to determine the target sample size for trials including twins. Clustering among outcomes measured on twins should be taken into account in sample size calculations to obtain the desired power. Our ICC estimates and sample size calculator will be useful for designing future trials that include twins. Publication of additional ICCs is needed to further assist with sample size planning for future trials. © 2018 John Wiley & Sons Ltd.
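In the simplest case, where every infant in the trial has a co-twin also enrolled, the clustering adjustment reduces to the usual design effect for clusters of size 2, DE = 1 + ICC. A simplified sketch (the authors' Excel/Shiny calculator also handles mixtures of singletons and twins, which this does not):

```python
import math

def inflate_for_twins(n_independent, icc):
    """Sample size when all infants are enrolled as twin pairs:
    the design effect for clusters of size 2 is 1 + ICC."""
    return math.ceil(n_independent * (1 + icc))

# 200 infants needed under independence, ICC = 0.5 between co-twins:
print(inflate_for_twins(200, 0.5))  # 300
```

A negative ICC (the estimates above ranged down to -0.12) actually reduces the required sample size, since co-twin outcomes then partially cancel.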

  11. Small numbers, disclosure risk, security, and reliability issues in Web-based data query systems.

    PubMed

    Rudolph, Barbara A; Shah, Gulzar H; Love, Denise

    2006-01-01

    This article describes the process for developing consensus guidelines and tools for releasing public health data via the Web and highlights approaches leading agencies have taken to balance disclosure risk with public dissemination of reliable health statistics. An agency's choice of statistical methods for improving the reliability of released data for Web-based query systems is based upon a number of factors, including query system design (dynamic analysis vs preaggregated data and tables), population size, cell size, data use, and how data will be supplied to users. The article also describes those efforts that are necessary to reduce the risk of disclosure of an individual's protected health information.

  12. Detecting spatial structures in throughfall data: the effect of extent, sample size, sampling design, and variogram estimation method

    NASA Astrophysics Data System (ADS)

    Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander

    2016-04-01

    In the last three decades, an increasing number of studies analyzed spatial patterns in throughfall to investigate the consequences of rainfall redistribution for biogeochemical and hydrological processes in forests. In the majority of cases, variograms were used to characterize the spatial properties of the throughfall data. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and an appropriate layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation methods on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with heavy outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling), and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. 
Third, studies relying on method-of-moments based variogram estimation may have to employ at least 200 sampling points for reliable variogram estimates. These suggested sample sizes exceed the numbers recommended by studies dealing with Gaussian data by up to 100 %. Given that most previous throughfall studies relied on method-of-moments variogram estimation and sample sizes << 200, our current knowledge about throughfall spatial variability stands on shaky ground.

  13. Detecting spatial structures in throughfall data: The effect of extent, sample size, sampling design, and variogram estimation method

    NASA Astrophysics Data System (ADS)

    Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander

    2016-09-01

    In the last decades, an increasing number of studies analyzed spatial patterns in throughfall by means of variograms. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and a layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation method on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with large outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling) and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments (non-robust and robust estimators) and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. Third, studies relying on method-of-moments based variogram estimation may have to employ at least 200 sampling points for reliable variogram estimates. These suggested sample sizes exceed the number recommended by studies dealing with Gaussian data by up to 100 %. 
Given that most previous throughfall studies relied on method-of-moments variogram estimation and sample sizes ≪200, currently available data are prone to large uncertainties.
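The method-of-moments (Matheron) variogram estimator that both versions of this study scrutinize averages squared differences of throughfall values over point pairs falling in each lag bin. A minimal sketch (names are mine; requires Python 3.8+ for `math.dist`):

```python
import math

def empirical_variogram(points, values, bins):
    """Matheron estimator: gamma(h) = (1 / (2 N(h))) * sum of
    (z_i - z_j)^2 over point pairs whose separation falls in lag
    bin h. `bins` is a list of (lo, hi) lag intervals."""
    sums = [0.0] * len(bins)
    counts = [0] * len(bins)
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            d = math.dist(points[i], points[j])
            for b, (lo, hi) in enumerate(bins):
                if lo <= d < hi:
                    sums[b] += (values[i] - values[j]) ** 2
                    counts[b] += 1
                    break
    return [s / (2 * c) if c else None for s, c in zip(sums, counts)]
```

Because the estimator squares pairwise differences, the heavy outliers typical of throughfall data inflate it sharply, which is why the studies compare robust and likelihood-based alternatives and recommend large sample sizes.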

  14. The influence of occupant anthropometry and seat position on ejection risk in a rollover.

    PubMed

    Atkinson, Theresa; Fras, Andrew; Telehowski, Paul

    2010-08-01

    During rollover crashes, ejection increases an occupant's risk of severe to fatal injury compared with the risk for occupants retained in the vehicle. The current study examined whether occupant anthropometry might influence ejection risk. Factors such as restraint use/disuse, seating position, vehicle type, and roll direction were also considered in the analysis. The study examined occupant ejections in 10 years of National Automotive Sampling System (NASS) single-event rollovers of passenger vehicles and light trucks. Statistical analysis of unweighted and weighted ejection data was carried out. No statistically significant differences in ejection rates were found based on occupant height, age, or body mass index. Drivers were ejected significantly more frequently than other occupants: 62 percent of unrestrained drivers were ejected vs. 51 percent of unrestrained right-front occupants. Second-row unrestrained occupants were ejected at rates similar to right-front-seated occupants. There were no significant differences in ejection rates for near- vs. far-side occupants. These data suggest that assessment of ejection prevention systems using either a 50th or 5th percentile adult anthropomorphic test dummy (ATD) might provide a reasonable measure of system function for a broad range of occupants. They also support the development of ejection mitigation technologies that extend beyond the first row to protect occupants in rear seat positions. Future studies should consider potential interaction effects (i.e., occupant size and vehicle dimensions) and the influence of occupant size on ejection risk in non-single-event rollovers.

  15. Relation of cardiac troponin I and microvascular obstruction following ST-elevation myocardial infarction.

    PubMed

    Hallén, Jonas; Jensen, Jesper K; Buser, Peter; Jaffe, Allan S; Atar, Dan

    2011-03-01

Presence of microvascular obstruction (MVO) following primary percutaneous coronary intervention (pPCI) for ST-elevation myocardial infarction (STEMI) confers higher risk of left-ventricular remodelling and dysfunction. Measurement of cardiac troponin I (cTnI) after STEMI reflects the extent of myocardial destruction. We aimed to explore whether cTnI values were associated with presence of MVO independently of infarct size in STEMI patients receiving pPCI. 175 patients with STEMI were included. cTnI was sampled at 24 and 48 h. MVO and infarct size were determined by delayed enhancement with cardiac magnetic resonance at five to seven days post index event. The presence of MVO following STEMI was associated with larger infarct size and higher values of cTnI at 24 and 48 h. For any given infarct size or cTnI value, there was a greater risk of MVO development in non-anterior infarctions. cTnI was strongly associated with MVO in both anterior and non-anterior infarctions (P < 0.01) after adjustment for covariates (including infarct size); and was reasonably effective in predicting MVO in individual patients (area-under-the-curve ≥0.81). Presence of MVO is reflected in levels of cTnI sampled at an early time-point following STEMI and this association persists after adjustment for infarct size.

  16. An imbalance in cluster sizes does not lead to notable loss of power in cross-sectional, stepped-wedge cluster randomised trials with a continuous outcome.

    PubMed

    Kristunas, Caroline A; Smith, Karen L; Gray, Laura J

    2017-03-07

The current methodology for sample size calculations for stepped-wedge cluster randomised trials (SW-CRTs) is based on the assumption of equal cluster sizes. However, as is often the case in cluster randomised trials (CRTs), the clusters in SW-CRTs are likely to vary in size, which in other designs of CRT leads to a reduction in power. The effect of an imbalance in cluster size on the power of SW-CRTs has not previously been reported, nor what an appropriate adjustment to the sample size calculation should be to allow for any imbalance. We aimed to assess the impact of an imbalance in cluster size on the power of a cross-sectional SW-CRT and recommend a method for calculating the sample size of a SW-CRT when there is an imbalance in cluster size. The effect of varying degrees of imbalance in cluster size on the power of SW-CRTs was investigated using simulations. The sample size was calculated using both the standard method and two proposed adjusted design effects (DEs), based on those suggested for CRTs with unequal cluster sizes. The data were analysed using generalised estimating equations with an exchangeable correlation matrix and robust standard errors. An imbalance in cluster size was not found to have a notable effect on the power of SW-CRTs. The two proposed adjusted DEs resulted in trials that were generally considerably over-powered. We recommend that the standard method of sample size calculation for SW-CRTs be used, provided that the assumptions of the method hold. However, it would be beneficial to investigate, through simulation, what effect the maximum likely degree of inequality in cluster sizes would have on the power of the trial and whether any inflation of the sample size would be required.
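The abstract does not reproduce the adjusted design effects evaluated. As an illustration only (the function names and numbers are ours, not the paper's), one commonly proposed adjustment for CRTs with unequal cluster sizes inflates the usual design effect by the coefficient of variation (CV) of cluster size:

```python
import math

def design_effect_unequal(mean_cluster_size, cv, icc):
    """Adjusted design effect for unequal cluster sizes (illustrative form):
    DE = 1 + (((CV^2 + 1) * m_bar) - 1) * ICC."""
    return 1 + ((cv ** 2 + 1) * mean_cluster_size - 1) * icc

def inflate_sample_size(n_individual, mean_cluster_size, cv, icc):
    """Inflate an individually randomised sample size by the adjusted DE."""
    return math.ceil(n_individual * design_effect_unequal(mean_cluster_size, cv, icc))
```

With CV = 0 (equal clusters) this reduces to the familiar 1 + (m - 1)·ICC, which is one way to sanity-check the adjustment.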

  17. Estimation of sample size and testing power (part 5).

    PubMed

    Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo

    2012-02-01

Estimation of sample size and testing power is an important component of research design. This article introduced methods for sample size and testing power estimation of difference tests for quantitative and qualitative data with the single-group design, the paired design or the crossover design. To be specific, this article introduced formulas for sample size and testing power estimation of difference tests for quantitative and qualitative data with the above three designs, their realization based on the formulas and the POWER procedure of SAS software, and illustrated them with examples, which will help researchers implement the repetition principle.
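The formulas themselves are not reproduced in the abstract; as a hedged illustration of the general pattern (not necessarily the authors' exact formulas), the normal-approximation sample size for a two-sided paired-design difference test is n = ((z₁₋α/₂ + z₁₋β)·σ_d/δ)²:

```python
import math
from statistics import NormalDist

def paired_sample_size(delta, sd_diff, alpha=0.05, power=0.80):
    """Number of pairs for a two-sided paired test (normal approximation):
    n = ((z_{1-alpha/2} + z_{power}) * sd_diff / delta)^2, rounded up."""
    z = NormalDist()
    n = ((z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)) * sd_diff / delta) ** 2
    return math.ceil(n)
```

For example, detecting a mean within-pair difference of half a standard deviation at 80% power requires 32 pairs under this approximation.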

  18. Divorce and Death: A Meta-Analysis and Research Agenda for Clinical, Social, and Health Psychology.

    PubMed

    Sbarra, David A; Law, Rita W; Portley, Robert M

    2011-09-01

    Divorce is a relatively common stressful life event that is purported to increase risk for all-cause mortality. One problem in the literature on divorce and health is that it is fragmented and spread across many disciplines; most prospective studies of mortality are based in epidemiology and sociology, whereas most mechanistic studies are based in psychology. This review integrates research on divorce and death via meta-analysis and outlines a research agenda for better understanding the potential mechanisms linking marital dissolution and risk for all-cause mortality. Random effects meta-analysis with a sample of 32 prospective studies (involving more than 6.5 million people, 160,000 deaths, and over 755,000 divorces in 11 different countries) revealed a significant increase in risk for early death among separated/divorced adults in comparison to their married counterparts. Men and younger adults evidenced significantly greater risk for early death following marital separation/divorce than did women and older adults. Quantification of the overall effect size linking marital separation/divorce to risk for early death reveals a number of important research questions, and this article discusses what remains to be learned about four plausible mechanisms of action: social selection, resource disruptions, changes in health behaviors, and chronic psychological distress. © Association for Psychological Science 2011.
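Random-effects meta-analysis of this kind is commonly implemented with the DerSimonian-Laird between-study variance estimator; a minimal sketch (illustrative, not the authors' exact procedure) pooling generic effect sizes such as log risk ratios:

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooling with the DerSimonian-Laird tau^2 estimate.
    effects: per-study effect sizes (e.g. log risk ratios);
    variances: their within-study sampling variances."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)  # truncated at zero
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    return pooled, math.sqrt(1.0 / sum(w_star)), tau2
```

When the studies are perfectly homogeneous, tau² is truncated to zero and the estimate collapses to the fixed-effect (inverse-variance) result.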

  19. Risks, Assets, and Negative Health Behaviors among Arkansas' Hispanic Adolescents

    ERIC Educational Resources Information Center

    Fitzpatrick, Kevin M.; Choudary, Wendie; Kearney, Anne; Piko, Bettina F.

    2013-01-01

    This study examined the relationship between risk, assets, and negative health behaviors among a large sample of Hispanic adolescents. Data were collected from over 1,000 Hispanic youth in grades 6, 8, 10, and 12 attending school in a moderate size school district in Northwest Arkansas. Logistic regression models examined the variation in the odds…

  20. Weight loss and related behavior changes among lesbians.

    PubMed

    Fogel, Sarah; Young, Laura; Dietrich, Mary; Blakemore, Dana

    2012-01-01

    Overweight and obesity are known risk factors for several modifiable, if not preventable diseases. Growing evidence suggests that lesbians may have higher rates of obesity than other women. This study was designed to describe weight loss and behavior changes related to food choices and exercise habits among lesbians who participated in a predominantly lesbian, mainstream, commercial weight loss program. Behavioral changes were recorded in exercise, quality of food choices, and number of times dining out. Although there were several limitations based on sample size and heterogeneity, the impact of a lesbian-supportive environment for behavior change was upheld.

  1. Diagnostic test accuracy and prevalence inferences based on joint and sequential testing with finite population sampling.

    PubMed

    Su, Chun-Lung; Gardner, Ian A; Johnson, Wesley O

    2004-07-30

The two-test two-population model, originally formulated by Hui and Walter, for estimation of test accuracy and prevalence assumes conditionally independent tests, constant accuracy across populations and binomial sampling. The binomial assumption is incorrect if all individuals in a population (e.g., a child-care centre, a village in Africa, or a cattle herd) are sampled or if the sample size is large relative to the population size. In this paper, we develop statistical methods for evaluating diagnostic test accuracy and prevalence estimation based on finite sample data in the absence of a gold standard. Moreover, two tests are often applied simultaneously for the purpose of obtaining a 'joint' testing strategy that has either higher overall sensitivity or specificity than either of the two tests considered singly. Sequential versions of such strategies are often applied in order to reduce the cost of testing. We thus discuss joint (simultaneous and sequential) testing strategies and inference for them. Using the developed methods, we analyse two real data sets and one simulated data set, and we compare 'hypergeometric' and 'binomial-based' inferences. Our findings indicate that the posterior standard deviations for prevalence (but not sensitivity and specificity) based on finite population sampling tend to be smaller than their counterparts for infinite population sampling. Finally, we make recommendations about how small the sample size should be relative to the population size to warrant use of the binomial model for prevalence estimation. Copyright 2004 John Wiley & Sons, Ltd.
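Full Hui-Walter inference requires sampling sensitivity, specificity and prevalence jointly (typically by MCMC). As a much-simplified illustration of the 'binomial-based' side of the comparison, assuming a hypothetical perfect test, the conjugate Beta-binomial update for prevalence is:

```python
def prevalence_posterior(positives, n, a=1.0, b=1.0):
    """Beta(a, b) prior + binomial likelihood -> Beta posterior for prevalence.
    Simplifying assumption: a perfect test. The full Hui-Walter model also
    estimates sensitivity and specificity without a gold standard."""
    a_post = a + positives
    b_post = b + n - positives
    mean = a_post / (a_post + b_post)
    var = a_post * b_post / ((a_post + b_post) ** 2 * (a_post + b_post + 1.0))
    return mean, var ** 0.5
```

The paper's hypergeometric (finite-population) analogue conditions on the population size as well, which is why its posterior standard deviations for prevalence tend to be smaller.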

  2. Glove Changing When Handling Money: Observational and Microbiological Analysis.

    PubMed

    Basch, Corey H; Wahrman, Miryam Z; Shah, Jay; Guerra, Laura A; MacDonald, Zerlina; Marte, Myladys; Basch, Charles E

    2016-04-01

The purpose of this study was to determine the rate of glove changing by mobile food vendors after monetary transactions, and the presence of bacterial contamination on a sample of dollar bills obtained from 25 food vendors near five hospitals in Manhattan, New York City. During the 495 monetary transactions observed, there were only seven glove changes performed by the workers. Eleven of 34 food workers wore no gloves at all while handling money and food. Nineteen of 25 one-dollar bills collected (76%) had 400 to 42,000 total bacterial colony-forming units. Colonies were of varied morphology and size. Of these 19 samples, 13 were selected (based on level of growth) and tested for the presence of coliform bacteria, which was found in 10 of the 13 samples. Effective strategies to monitor and increase glove wearing and changing habits of mobile food vendors are needed to reduce risk of foodborne illness.

  3. Sample Size and Statistical Conclusions from Tests of Fit to the Rasch Model According to the Rasch Unidimensional Measurement Model (Rumm) Program in Health Outcome Measurement.

    PubMed

    Hagell, Peter; Westergren, Albert

Sample size is a major factor in statistical null hypothesis testing, which is the basis for many approaches to testing Rasch model fit. Few sample size recommendations for testing fit to the Rasch model concern the Rasch Unidimensional Measurement Models (RUMM) software, which features chi-square and ANOVA/F-ratio based fit statistics, including Bonferroni and algebraic sample size adjustments. This paper explores the occurrence of Type I errors with RUMM fit statistics, and the effects of algebraic sample size adjustments. Data simulated to fit the Rasch model for 25-item dichotomous scales, with sample sizes ranging from N = 50 to N = 2500, were analysed with and without algebraically adjusted sample sizes. Results suggest the occurrence of Type I errors with N ≤ 500, and that Bonferroni correction as well as downward algebraic sample size adjustment are useful to avoid such errors, whereas upward adjustment of smaller samples falsely signals misfit. Our observations suggest that sample sizes around N = 250 to N = 500 may provide a good balance for the statistical interpretation of the RUMM fit statistics studied here with respect to Type I errors and under the assumption of Rasch model fit within the examined frame of reference (i.e., about 25 item parameters well targeted to the sample).

  4. Cluster randomised crossover trials with binary data and unbalanced cluster sizes: application to studies of near-universal interventions in intensive care.

    PubMed

    Forbes, Andrew B; Akram, Muhammad; Pilcher, David; Cooper, Jamie; Bellomo, Rinaldo

    2015-02-01

Cluster randomised crossover trials have been utilised in recent years in the health and social sciences. Methods for analysis have been proposed; however, for binary outcomes, these have received little assessment of their appropriateness. In addition, methods for determination of sample size are currently limited to balanced cluster sizes both between clusters and between periods within clusters. This article aims to extend this work to unbalanced situations and to evaluate the properties of a variety of methods for analysis of binary data, with a particular focus on the setting of potential trials of near-universal interventions in intensive care to reduce in-hospital mortality. We derive a formula for sample size estimation for unbalanced cluster sizes, and apply it to the intensive care setting to demonstrate the utility of the cluster crossover design. We conduct a numerical simulation of the design in the intensive care setting and for more general configurations, and we assess the performance of three cluster summary estimators and an individual-data estimator based on binomial-identity-link regression. For settings similar to the intensive care scenario involving large cluster sizes and small intra-cluster correlations, the sample size formulae developed and analysis methods investigated are found to be appropriate, with the unweighted cluster summary method performing well relative to the more optimal but more complex inverse-variance weighted method. More generally, we find that the unweighted and cluster-size-weighted summary methods perform well, with the relative efficiency of each largely determined systematically from the study design parameters. Performance of individual-data regression is adequate with small cluster sizes but becomes inefficient for large, unbalanced cluster sizes. When outcome prevalences are 6% or less and the within-cluster-within-period correlation is 0.05 or larger, all methods display sub-nominal confidence interval coverage, with the less prevalent the outcome the worse the coverage. As with all simulation studies, conclusions are limited to the configurations studied. We confined attention to detecting intervention effects on an absolute risk scale using marginal models and did not explore properties of binary random effects models. Cluster crossover designs with binary outcomes can be analysed using simple cluster summary methods, and sample size in unbalanced cluster size settings can be determined using relatively straightforward formulae. However, caution needs to be applied in situations with low prevalence outcomes and moderate to high intra-cluster correlations. © The Author(s) 2014.
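The unweighted cluster summary method described above can be sketched, under simplifying assumptions, as a paired comparison of within-cluster period proportions (illustrative only; a real trial analysis involves more structure):

```python
import math
from statistics import mean, stdev

def unweighted_cluster_summary(cluster_periods):
    """cluster_periods: one (p_intervention, p_control) proportion pair per
    cluster, e.g. in-hospital mortality in each period of a crossover.
    Returns the mean within-cluster difference (absolute risk scale),
    its standard error, and the paired t statistic (df = clusters - 1)."""
    diffs = [pt - pc for pt, pc in cluster_periods]
    d_bar = mean(diffs)
    se = stdev(diffs) / math.sqrt(len(diffs))
    return d_bar, se, d_bar / se
```

Each cluster contributes equally regardless of its size, which is why this estimator stays simple yet, per the abstract, performs well relative to inverse-variance weighting in the large-cluster, low-ICC intensive care setting.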

  5. Sample sizes and model comparison metrics for species distribution models

    Treesearch

    B.B. Hanberry; H.S. He; D.C. Dey

    2012-01-01

    Species distribution models use small samples to produce continuous distribution maps. The question of how small a sample can be to produce an accurate model generally has been answered based on comparisons to maximum sample sizes of 200 observations or fewer. In addition, model comparisons often are made with the kappa statistic, which has become controversial....

  6. Sample size determination for mediation analysis of longitudinal data.

    PubMed

    Pan, Haitao; Liu, Suyu; Miao, Danmin; Yuan, Ying

    2018-03-27

Sample size planning for longitudinal data is crucial when designing mediation studies because sufficient statistical power is not only required in grant applications and peer-reviewed publications, but is essential to reliable research results. However, sample size determination is not straightforward for mediation analysis of longitudinal designs. To facilitate planning the sample size for longitudinal mediation studies with a multilevel mediation model, this article provides the sample size required to achieve 80% power by simulations under various sizes of the mediation effect, within-subject correlations and numbers of repeated measures. The sample size calculation is based on three commonly used mediation tests: Sobel's method, the distribution of the product method and the bootstrap method. Among the three methods of testing the mediation effects, Sobel's method required the largest sample size to achieve 80% power. Bootstrapping and the distribution of the product method performed similarly and were more powerful than Sobel's method, as reflected by the relatively smaller sample sizes. For all three methods, the sample size required to achieve 80% power depended on the value of the ICC (i.e., within-subject correlation). A larger value of ICC typically required a larger sample size to achieve 80% power. Simulation results also illustrated the advantage of the longitudinal study design. Sample size tables for the scenarios most often encountered in practice have also been published for convenient use. Extensive simulation studies showed that the distribution of the product method and the bootstrap method outperform Sobel's method; the product method is recommended in practice because it requires less computation time than bootstrapping. An R package has been developed for product-method sample size determination in longitudinal mediation study designs.
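Sobel's method, the least powerful of the three tests compared, has a simple closed form, z = ab/√(a²·SE_b² + b²·SE_a²), which explains why it is the usual baseline; a minimal sketch:

```python
import math
from statistics import NormalDist

def sobel_test(a, se_a, b, se_b):
    """Sobel z for the mediated effect a*b, with a two-sided p-value.
    a: path X -> mediator; b: path mediator -> Y (adjusted for X)."""
    se_ab = math.sqrt(a ** 2 * se_b ** 2 + b ** 2 * se_a ** 2)
    z = a * b / se_ab
    return z, 2 * (1 - NormalDist().cdf(abs(z)))
```

The normality assumption on the product a·b is what the distribution-of-the-product and bootstrap methods relax, gaining power at small samples.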

  7. Genetic Polymorphisms of Glutathione-Related Enzymes (GSTM1, GSTT1, and GSTP1) and Schizophrenia Risk: A Meta-Analysis

    PubMed Central

    Kim, Su Kang; Kang, Sang Wook; Chung, Joo-Ho; Park, Hae Jeong; Cho, Kyu Bong; Park, Min-Su

    2015-01-01

The association between polymorphisms of glutathione-related enzyme (GST) genes and the risk of schizophrenia has been investigated in many published studies. However, their results were inconclusive. Therefore, we performed a meta-analysis to explore the association between the GSTM1, GSTT1, and GSTP1 polymorphisms and the risk of schizophrenia. Twelve case-control studies were included in this meta-analysis. The odds ratio (OR) and 95% confidence interval (95% CI) were used to investigate the strength of the association. Our meta-analysis results revealed that the GSTM1, GSTT1, and GSTP1 polymorphisms were not related to risk of schizophrenia (p > 0.05 in each model). In further analyses stratified by ethnicity, the GSTM1 polymorphism showed a weak association with schizophrenia in the East Asian population (OR = 1.314, 95% CI = 1.025–1.684, p = 0.031). In conclusion, our meta-analysis indicated that the GSTM1 polymorphism may be the only genetic risk factor for schizophrenia in the East Asian population. However, meta-analyses with larger sample sizes are needed to provide more precise evidence. PMID:26295386

  8. A feasibility study in adapting Shamos Bickel and Hodges Lehman estimator into T-Method for normalization

    NASA Astrophysics Data System (ADS)

    Harudin, N.; Jamaludin, K. R.; Muhtazaruddin, M. Nabil; Ramlie, F.; Muhamad, Wan Zuki Azman Wan

    2018-03-01

The T-Method is one of the techniques governed under the Mahalanobis Taguchi System, developed specifically for multivariate data prediction. Prediction using the T-Method is possible even with very limited sample sizes. Users of the T-Method must clearly understand the population data trend, since the method does not account for the effect of outliers. Outliers may cause apparent non-normality, under which the classical methods break down. There exist robust parameter estimates that provide satisfactory results when the data contain outliers, as well as when the data are free of them; among these are the robust location and scale estimators of Shamos-Bickel (SB) and Hodges-Lehmann (HL), which serve as robust counterparts to the classical mean and standard deviation. Embedding these into the T-Method normalization stage may help enhance the accuracy of the T-Method and allows the robustness of the T-Method itself to be analysed. However, in the higher-sample-size case study, the T-Method had the lowest average error percentage (3.09%) on data with extreme outliers, while HL and SB had the lowest error percentage (4.67%) on data without extreme outliers, with only a minimal difference from the T-Method. This trend was reversed in the lower-sample-size case study. The results show that with a minimal sample size, where outliers pose little risk, the T-Method performs better, and that with a higher sample size containing extreme outliers the T-Method again predicts better than the alternatives. For the case studies conducted in this research, normalization with the T-Method gave satisfactory results, and it is not worthwhile to adapt HL and SB (or the ordinary mean and standard deviation) into it, since doing so changes the error percentages only minimally. Normalization using the T-Method is still considered to carry lower risk with respect to outlier effects.
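The Hodges-Lehmann location estimator and the Shamos scale estimator mentioned above have standard definitions; a minimal sketch (the 1.1926 consistency constant assumes normally distributed data):

```python
from itertools import combinations, combinations_with_replacement
from statistics import median

def hodges_lehmann(x):
    """Hodges-Lehmann location estimate: the median of all pairwise
    (Walsh) averages, including each point paired with itself."""
    return median((a + b) / 2 for a, b in combinations_with_replacement(x, 2))

def shamos(x):
    """Shamos scale estimate: the median absolute pairwise difference,
    scaled by ~1.1926 for consistency with the SD under normality."""
    return 1.1926 * median(abs(a - b) for a, b in combinations(x, 2))
```

Both estimators tolerate a sizeable fraction of outliers before breaking down, which is precisely the property the study probes against the classical mean and standard deviation.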

  9. Disease-Concordant Twins Empower Genetic Association Studies.

    PubMed

    Tan, Qihua; Li, Weilong; Vandin, Fabio

    2017-01-01

Genome-wide association studies with moderate sample sizes are underpowered, especially when testing SNP alleles with low allele counts, a situation that may lead to high frequency of false-positive results and lack of replication in independent studies. Related individuals, such as twin pairs concordant for a disease, should confer increased power in genetic association analysis because of their genetic relatedness. We conducted a computer simulation study to explore the power advantage of the disease-concordant twin design, which uses singletons from disease-concordant twin pairs as cases and ordinary healthy samples as controls. We examined the power gain of the twin-based design for various scenarios (i.e., cases from monozygotic and dizygotic twin pairs concordant for a disease) and compared the power with the ordinary case-control design with cases collected from the unrelated patient population. Simulation was done by assigning various allele frequencies and allelic relative risks for different modes of genetic inheritance. In general, for achieving a power estimate of 80%, the sample sizes needed for dizygotic and monozygotic twin cases were one half and one fourth of the sample size of an ordinary case-control design, with variations depending on genetic mode. Importantly, the enriched power for dizygotic twins also applies to disease-concordant sibling pairs, which largely extends the application of the concordant twin design. Overall, our simulation revealed a high value of disease-concordant twins in genetic association studies and encourages the use of genetically related individuals for efficiently identifying both common and rare genetic variants underlying human complex diseases without increasing laboratory cost. © 2016 John Wiley & Sons Ltd/University College London.
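A stripped-down version of such a power simulation, assuming a simple carrier-frequency comparison between cases and controls rather than the authors' full genetic models, might look like:

```python
import math
import random

def simulate_power(p_case, p_control, n_cases, n_controls,
                   n_sim=2000, z_crit=1.959964, seed=1):
    """Monte Carlo power of a two-proportion z-test, e.g. comparing
    risk-allele carriage in cases vs. controls (two-sided alpha = 0.05)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sim):
        a = sum(rng.random() < p_case for _ in range(n_cases))
        b = sum(rng.random() < p_control for _ in range(n_controls))
        p_pool = (a + b) / (n_cases + n_controls)
        se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_cases + 1 / n_controls))
        if se > 0 and abs(a / n_cases - b / n_controls) / se > z_crit:
            hits += 1
    return hits / n_sim
```

Under the null (equal frequencies) the rejection rate should hover near 0.05; raising the case frequency, or enriching cases via concordant twins as the study proposes, drives the estimated power toward 1.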

  10. Clinical decision making and the expected value of information.

    PubMed

    Willan, Andrew R

    2007-01-01

The results of the HOPE study, a randomized clinical trial, provide strong evidence that 1) ramipril prevents the composite outcome of cardiovascular death, myocardial infarction or stroke in patients who are at high risk of a cardiovascular event and 2) ramipril is cost-effective at a threshold willingness-to-pay of $10,000 to prevent an event of the composite outcome. In this report the concept of the expected value of information is used to determine if the information provided by the HOPE study is sufficient for decision making in the US and Canada. Using the cost-effectiveness data from a clinical trial, or from a meta-analysis of several trials, one can determine, based on the number of future patients that would benefit from the health technology under investigation, the expected value of sample information (EVSI) of a future trial as a function of proposed sample size. If the EVSI exceeds the cost for any particular sample size, then the current information is insufficient for decision making and a future trial is indicated. If, on the other hand, there is no sample size for which the EVSI exceeds the cost, then there is sufficient information for decision making and no future trial is required. Using the data from the HOPE study, these concepts are applied for various assumptions regarding the fixed and variable cost of a future trial and the number of patients who would benefit from ramipril. Expected value of information methods provide a decision-analytic alternative to the standard likelihood methods for assessing the evidence provided by cost-effectiveness data from randomized clinical trials.

  11. Prevalence of HIV among Aboriginal and Torres Strait Islander Australians: a systematic review and meta-analysis.

    PubMed

    Graham, Simon; O'Connor, Catherine C; Morgan, Stephen; Chamberlain, Catherine; Hocking, Jane

    2017-06-01

Aboriginal and Torres Strait Islanders (Aboriginal) are Australia's first peoples. Between 2006 and 2015, HIV notifications increased among Aboriginal people; however, among non-Aboriginal people, notifications remained relatively stable. This systematic review and meta-analysis aims to examine the prevalence of HIV among Aboriginal people overall and by subgroups. In November 2015, a search of PubMed and Web of Science, grey literature and abstracts from conferences was conducted. A study was included if it reported the number of Aboriginal people tested and those who tested positive for HIV. The following variables were extracted: gender; Aboriginal status; population group (men who have sex with men, people who inject drugs, adults, youth in detention and pregnant females) and geographical location. An assessment of between-study heterogeneity (I² test) and within-study bias (selection, measurement and sample size) was also conducted. Seven studies were included; all were cross-sectional study designs. The overall sample size was 3772 and the prevalence of HIV was 0.1% (I² = 38.3%, P = 0.136). Five studies included convenience samples of people attending Australian Needle and Syringe Program Centres, clinics, hospitals and a youth detention centre, increasing the potential of selection bias. Four studies had a small sample size, decreasing the ability to report pooled estimates. The prevalence of HIV among Aboriginal people in Australia is low. Community-based programs that include both prevention messages for those at risk of infection and culturally appropriate clinical management and support for Aboriginal people living with HIV are needed to prevent HIV increasing among Aboriginal people.

  12. Influence of Polygenic Risk Scores on the Association Between Infections and Schizophrenia.

    PubMed

    Benros, Michael E; Trabjerg, Betina B; Meier, Sandra; Mattheisen, Manuel; Mortensen, Preben B; Mors, Ole; Børglum, Anders D; Hougaard, David M; Nørgaard-Pedersen, Bent; Nordentoft, Merete; Agerbo, Esben

    2016-10-15

Several studies have suggested an important role of infections in the etiology of schizophrenia; however, shared genetic liability toward infections and schizophrenia could influence the association. We therefore investigated the possible effect of polygenic risk scores (PRSs) for schizophrenia on the association between infections and the risk of schizophrenia. We conducted a nested case-control study on a Danish population-based sample born after 1981, comprising 1692 cases diagnosed with schizophrenia between 1994 and 2008 and 1724 matched controls. All individuals were linked utilizing nationwide population-based registers with virtually complete registration of all hospital contacts for infections. PRSs were calculated using discovery effect-size estimates as weights from an independent meta-analysis (34,600 cases and 45,968 control individuals). A prior hospital contact with infection had occurred in 41% of the individuals with schizophrenia and increased the incidence rate ratio (IRR) of schizophrenia by 1.43 (95% confidence interval [CI] = 1.22-1.67). Adding PRS, which was robustly associated with schizophrenia (by an IRR of 1.46 [95% CI = 1.34-1.60] per standard deviation of the score), did not alter the association with infections and the increased risk of schizophrenia remained (IRR = 1.41; 95% CI = 1.20-1.66). Furthermore, there were no interactions between PRS and infections on the risk of developing schizophrenia (p = .554). Neither did PRS affect the risk of acquiring infections among patients with schizophrenia (odds ratio = 1.00; 95% CI = 0.89-1.12) nor among controls (odds ratio = 1.09; 95% CI: 0.96-1.24). PRS and a history of infections have independent effects on the risk for schizophrenia, and the common genetic risk measured by PRS did not account for the association with infection in this sample. Copyright © 2016 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.
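A PRS of this kind is, at its core, a weighted sum of risk-allele counts; a minimal sketch (the weights and genotypes below are hypothetical, not those of the schizophrenia meta-analysis):

```python
from statistics import mean, stdev

def polygenic_risk_score(risk_allele_counts, log_or_weights):
    """PRS for one individual: weighted sum of risk-allele counts (0/1/2),
    with discovery-study log odds ratios as the weights."""
    return sum(g * w for g, w in zip(risk_allele_counts, log_or_weights))

def standardise(scores):
    """Express scores in standard-deviation units, matching the
    'per standard deviation of the score' effect reporting above."""
    m, s = mean(scores), stdev(scores)
    return [(x - m) / s for x in scores]
```

Standardising makes effect estimates such as the IRR of 1.46 per SD comparable across cohorts with different numbers of included SNPs.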

  13. The Precision Efficacy Analysis for Regression Sample Size Method.

    ERIC Educational Resources Information Center

    Brooks, Gordon P.; Barcikowski, Robert S.

    The general purpose of this study was to examine the efficiency of the Precision Efficacy Analysis for Regression (PEAR) method for choosing appropriate sample sizes in regression studies used for precision. The PEAR method, which is based on the algebraic manipulation of an accepted cross-validity formula, essentially uses an effect size to…

  14. Weighting by Inverse Variance or by Sample Size in Random-Effects Meta-Analysis

    ERIC Educational Resources Information Center

    Marin-Martinez, Fulgencio; Sanchez-Meca, Julio

    2010-01-01

    Most of the statistical procedures in meta-analysis are based on the estimation of average effect sizes from a set of primary studies. The optimal weight for averaging a set of independent effect sizes is the inverse variance of each effect size, but in practice these weights have to be estimated, being affected by sampling error. When assuming a…

  15. “Magnitude-based Inference”: A Statistical Review

    PubMed Central

    Welsh, Alan H.; Knight, Emma J.

    2015-01-01

Purpose: We consider “magnitude-based inference” and its interpretation by examining in detail its use in the problem of comparing two means. Methods: We extract from the spreadsheets, which are provided to users of the analysis (http://www.sportsci.org/), a precise description of how “magnitude-based inference” is implemented. We compare the implemented version of the method with general descriptions of it and interpret the method in familiar statistical terms. Results and Conclusions: We show that “magnitude-based inference” is not a progressive improvement on modern statistics. The additional probabilities introduced are not directly related to the confidence interval but, rather, are interpretable either as P values for two different nonstandard tests (for different null hypotheses) or as approximate Bayesian calculations, which also lead to a type of test. We also discuss sample size calculations associated with “magnitude-based inference” and show that the substantial reduction in sample sizes claimed for the method (30% of the sample size obtained from standard frequentist calculations) is not justifiable so the sample size calculations should not be used. Rather than using “magnitude-based inference,” a better solution is to be realistic about the limitations of the data and use either confidence intervals or a fully Bayesian analysis. PMID:25051387

  16. Determining the sample size required to establish whether a medical device is non-inferior to an external benchmark.

    PubMed

    Sayers, Adrian; Crowther, Michael J; Judge, Andrew; Whitehouse, Michael R; Blom, Ashley W

    2017-08-28

    The use of benchmarks to assess the performance of implants such as those used in arthroplasty surgery is a widespread practice. It provides surgeons, patients and regulatory authorities with the reassurance that implants used are safe and effective. However, it is not currently clear how or how many implants should be statistically compared with a benchmark to assess whether or not that implant is superior, equivalent, non-inferior or inferior to the performance benchmark of interest.We aim to describe the methods and sample size required to conduct a one-sample non-inferiority study of a medical device for the purposes of benchmarking. Simulation study. Simulation study of a national register of medical devices. We simulated data, with and without a non-informative competing risk, to represent an arthroplasty population and describe three methods of analysis (z-test, 1-Kaplan-Meier and competing risks) commonly used in surgical research. We evaluate the performance of each method using power, bias, root-mean-square error, coverage and CI width. 1-Kaplan-Meier provides an unbiased estimate of implant net failure, which can be used to assess if a surgical device is non-inferior to an external benchmark. Small non-inferiority margins require significantly more individuals to be at risk compared with current benchmarking standards. A non-inferiority testing paradigm provides a useful framework for determining if an implant meets the required performance defined by an external benchmark. Current contemporary benchmarking standards have limited power to detect non-inferiority, and substantially larger samples sizes, in excess of 3200 procedures, are required to achieve a power greater than 60%. It is clear when benchmarking implant performance, net failure estimated using 1-KM is preferential to crude failure estimated by competing risk models. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. 
No commercial use is permitted unless otherwise expressly granted.
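    The one-sample benchmarking setup above can be illustrated with a standard normal-approximation sample-size formula for a one-sided non-inferiority test of a failure proportion. This is a minimal sketch, not the paper's simulation-based method (which compares z-test, 1-Kaplan-Meier and competing-risks estimators); the function name and the example benchmark and margin values are assumptions.

```python
from math import ceil
from statistics import NormalDist

def noninferiority_n(p_bench, margin, alpha=0.05, power=0.80):
    """Approximate n for a one-sided, one-sample non-inferiority test of a
    failure proportion against a fixed external benchmark.
    H0: p >= p_bench + margin  vs  H1: p < p_bench + margin,
    assuming the true failure rate equals the benchmark."""
    nd = NormalDist()
    z_a = nd.inv_cdf(1 - alpha)   # one-sided test
    z_b = nd.inv_cdf(power)
    return ceil((z_a + z_b) ** 2 * p_bench * (1 - p_bench) / margin ** 2)

# e.g. a 5% benchmark failure rate with a 2-percentage-point margin
n = noninferiority_n(0.05, 0.02)
```

    The margin enters the denominator squared, which is why halving the non-inferiority margin roughly quadruples the number of procedures that must be at risk.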

  17. Effects of sources of variability on sample sizes required for RCTs, applied to trials of lipid-altering therapies on carotid artery intima-media thickness.

    PubMed

    Gould, A Lawrence; Koglin, Joerg; Bain, Raymond P; Pinto, Cathy-Anne; Mitchel, Yale B; Pasternak, Richard C; Sapre, Aditi

    2009-08-01

    Studies measuring progression of carotid artery intima-media thickness (cIMT) have been used to estimate the effect of lipid-modifying therapies on cardiovascular event risk. The likelihood that future cIMT clinical trials will detect a true treatment effect is estimated by leveraging results from prior studies. The present analyses assess the impact of between- and within-study variability, based on currently published data from prior clinical studies, on the likelihood that ongoing or future cIMT trials will detect the true treatment effect of lipid-modifying therapies. Published data from six contemporary cIMT studies (ASAP, ARBITER 2, RADIANCE 1, RADIANCE 2, ENHANCE, and METEOR), including data from a total of 3563 patients, were examined. Bayesian and frequentist methods were used to assess the impact of between-study variability on the likelihood of detecting true treatment effects on 1-year cIMT progression/regression and to provide a sample size estimate that would specifically compensate for the effect of between-study variability. In addition to the well-described within-study variability, there is considerable between-study variability associated with the measurement of annualized change in cIMT. Accounting for the additional between-study variability decreases the power of existing study designs. To account for the added between-study variability, it is likely that future cIMT studies would require a large increase in sample size to have a substantial probability (≥90%) of achieving 90% power to detect a true treatment effect. Limitation: analyses are based on study-level data. Future meta-analyses incorporating patient-level data would be useful for confirmation. 
Due to substantial within- and between-study variability in the measure of 1-year change of cIMT, as well as uncertainty about progression rates in contemporary populations, future study designs evaluating the effect of new lipid-modifying therapies on atherosclerotic disease progression are likely to be challenged by large sample sizes in order to demonstrate a true treatment effect.
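    The power loss described above can be sketched with a simple two-arm power approximation in which between-study variability enters as an additive variance component that larger sample sizes cannot shrink. All numeric values here are assumed for illustration, not taken from the six trials.

```python
from math import sqrt
from statistics import NormalDist

def power_two_arm(delta, sigma_w, n_per_arm, tau=0.0, alpha=0.05):
    """Approximate power for comparing mean annualized cIMT change between
    two arms; sigma_w is the within-study SD and tau the between-study SD,
    an irreducible variance component that increasing n cannot remove."""
    nd = NormalDist()
    se = sqrt(2 * sigma_w ** 2 / n_per_arm + 2 * tau ** 2)
    return nd.cdf(abs(delta) / se - nd.inv_cdf(1 - alpha / 2))

# Illustrative (assumed) values: same design with and without tau
p0 = power_two_arm(delta=0.01, sigma_w=0.05, n_per_arm=500)
p1 = power_two_arm(delta=0.01, sigma_w=0.05, n_per_arm=500, tau=0.004)
```

    Because the `2 * tau ** 2` term does not decay with `n_per_arm`, a nonzero between-study SD caps the achievable power no matter how many patients are enrolled.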

  18. Maximum type I error rate inflation from sample size reassessment when investigators are blind to treatment labels.

    PubMed

    Żebrowska, Magdalena; Posch, Martin; Magirr, Dominic

    2016-05-30

    Consider a parallel group trial for the comparison of an experimental treatment to a control, where the second-stage sample size may depend on the blinded primary endpoint data as well as on additional blinded data from a secondary endpoint. For the setting of normally distributed endpoints, we demonstrate that this may lead to an inflation of the type I error rate if the null hypothesis holds for the primary but not the secondary endpoint. We derive upper bounds for the inflation of the type I error rate, both for trials that employ random allocation and for those that use block randomization. We illustrate the worst-case sample size reassessment rule in a case study. For both randomization strategies, the maximum type I error rate increases with the effect size in the secondary endpoint and the correlation between endpoints. The maximum inflation increases with smaller block sizes if information on the block size is used in the reassessment rule. Based on our findings, we do not question the well-established use of blinded sample size reassessment methods with nuisance parameter estimates computed from the blinded interim data of the primary endpoint. However, we demonstrate that the type I error rate control of these methods relies on the application of specific, binding, pre-planned and fully algorithmic sample size reassessment rules and does not extend to general or unplanned sample size adjustments based on blinded data. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  19. Assessment of sampling stability in ecological applications of discriminant analysis

    USGS Publications Warehouse

    Williams, B.K.; Titus, K.

    1988-01-01

    A simulation study was undertaken to assess the sampling stability of the variable loadings in linear discriminant function analysis. A factorial design was used for the factors of multivariate dimensionality, dispersion structure, configuration of group means, and sample size. A total of 32,400 discriminant analyses were conducted, based on data from simulated populations with appropriate underlying statistical distributions. Based on these results, the authors recommend that ecologists obtain group sample sizes that are at least three times as large as the number of variables measured. A review of 60 published studies and 142 individual analyses indicated that sample sizes in ecological studies often have met that requirement; however, individual group sample sizes frequently were very unequal, and checks of assumptions usually were not reported.

  20. Advancing Methods for U.S. Transgender Health Research

    PubMed Central

    Reisner, Sari L.; Deutsch, Madeline B.; Bhasin, Shalender; Bockting, Walter; Brown, George R.; Feldman, Jamie; Garofalo, Rob; Kreukels, Baudewijntje; Radix, Asa; Safer, Joshua D.; Tangpricha, Vin; T’Sjoen, Guy; Goodman, Michael

    2016-01-01

    Purpose of Review To describe methodological challenges, gaps, and opportunities in U.S. transgender health research. Recent Findings Lack of large prospective observational studies and intervention trials, limited data on risks and benefits of gender affirmation (e.g., hormones and surgical interventions), and inconsistent use of definitions across studies hinder evidence-based care for transgender people. Systematic high-quality observational and intervention-testing studies may be carried out using several approaches, including general population-based, health systems-based, clinic-based, venue-based, and hybrid designs. Each of these approaches has its strengths and limitations; however, harmonization of research efforts is needed. Ongoing development of evidence-based clinical recommendations will benefit from a series of observational and intervention studies aimed at identification, recruitment, and follow-up of transgender people of different ages, from different racial, ethnic, and socioeconomic backgrounds and with diverse gender identities. Summary Transgender health research faces challenges that include standardization of lexicon, agreed-upon population definitions, study design, sampling, measurement, outcome ascertainment, and sample size. Application of existing and new methods is needed to fill existing gaps, increase the scientific rigor and reach of transgender health research, and inform evidence-based prevention and care for this underserved population. PMID:26845331

  1. A community trial of the impact of improved sexually transmitted disease treatment on the HIV epidemic in rural Tanzania: 2. Baseline survey results.

    PubMed

    Grosskurth, H; Mosha, F; Todd, J; Senkoro, K; Newell, J; Klokke, A; Changalucha, J; West, B; Mayaud, P; Gavyole, A

    1995-08-01

    To determine baseline HIV prevalence in a trial of improved sexually transmitted disease (STD) treatment, and to investigate risk factors for HIV. To assess comparability of intervention and comparison communities with respect to HIV/STD prevalence and risk factors. To assess adequacy of sample size. Twelve communities in Mwanza Region, Tanzania: one matched pair of roadside communities, four pairs of rural communities, and one pair of island communities. One community from each pair was randomly allocated to receive the STD intervention following the baseline survey. Approximately 1000 adults aged 15-54 years were randomly sampled from each community. Subjects were interviewed, and HIV and syphilis serology performed. Men with a positive leucocyte esterase dipstick test on urine, or reporting a current STD, were tested for urethral infections. A total of 12,534 adults were enrolled. Baseline HIV prevalences were 7.7% (roadside), 3.8% (rural) and 1.8% (islands). Associations were observed with marital status, injections, education, travel, history of STD and syphilis serology. Prevalence was higher in circumcised men, but not significantly after adjusting for confounders. Intervention and comparison communities were similar in the prevalence of HIV (3.8 versus 4.4%), active syphilis (8.7 versus 8.2%), and most recorded risk factors. Within-pair variability in HIV prevalence was close to the value assumed for sample size calculations. The trial cohort was successfully established. Comparability of intervention and comparison communities at baseline was confirmed for most factors. Matching appears to have achieved a trial of adequate sample size. The apparent lack of a protective effect of male circumcision contrasts with other studies in Africa.
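    The remark that within-pair variability matched the sample size assumptions can be connected to the standard formula for community-randomized trials, in which between-community variation enters through a coefficient of variation k. This is a hedged sketch in the Hayes-Bennett style; the illustrative prevalences, k, and per-community sample size are assumptions, not the trial's actual design inputs.

```python
from math import ceil
from statistics import NormalDist

def clusters_per_arm(p0, p1, n, k, alpha=0.05, power=0.80):
    """Communities per arm needed to detect a difference in proportions
    p0 vs p1, with n individuals sampled per community and between-community
    coefficient of variation k (unmatched-design approximation)."""
    nd = NormalDist()
    z = nd.inv_cdf(1 - alpha / 2) + nd.inv_cdf(power)
    var = p0 * (1 - p0) / n + p1 * (1 - p1) / n + k ** 2 * (p0 ** 2 + p1 ** 2)
    return ceil(2 + z ** 2 * var / (p0 - p1) ** 2)

# e.g. detecting 4% vs 2% prevalence with 1000 adults per community, k = 0.25
c = clusters_per_arm(0.04, 0.02, 1000, 0.25)
```

    The k² term shows why confirming a small within-pair variability at baseline matters: a larger between-community coefficient of variation inflates the required number of communities directly.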

  2. Risk factors for placental malaria and associated adverse pregnancy outcomes in Rufiji, Tanzania: a hospital based cross sectional study.

    PubMed

    Ndeserua, Rabi; Juma, Adinan; Mosha, Dominic; Chilongola, Jaffu

    2015-09-01

    Prevention and treatment of malaria during pregnancy is crucial for reduction of malaria in pregnancy and its adverse outcomes. The spread of parasite resistance to Sulphadoxine-Pyrimethamine (SP), used for Intermittent Preventive Treatment for malaria in pregnancy (IPTp), particularly in East Africa, has raised concerns about the usefulness and reliability of the IPTp regimen. We aimed to assess the effectiveness of two doses of SP in treating and preventing the occurrence of adverse pregnancy outcomes. The study was an analytical cross-sectional study which enrolled 350 pregnant women from Kibiti Health Centre, South Eastern Tanzania. Structured questionnaires were used to obtain participants' previous obstetric and medical history, verified by reviewing antenatal clinic cards. Maternal placental blood samples for microscopic examination of malaria parasites were collected after delivery. Data were analyzed for associations between SP dosage, risk of PM and pregnancy outcome. Sample size was estimated based on precision. Prevalence of placental malaria (PM) was 8% among pregnant women (95% CI, 4.4-13.1%). Factors associated with increased risk of PM were primigravidity (P<0.001) and history of fever during pregnancy (P=0.02). Use of at least 2 doses of SP for IPTp during pregnancy was not significantly associated with a reduced risk of PM (P=0.08), low birth weight (P=0.73) or maternal anemia (P=0.71), but was significantly associated with a reduced risk of preterm birth (P<0.001). Two doses of SP for the IPTp regimen were ineffective in preventing and treating PM and adverse pregnancy outcomes. Hence, a review of the current IPTp regimen should be considered, with the possibility of integrating it with other malaria control strategies.

  3. SW-846 Test Method 3511: Organic Compounds in Water by Microextraction

    EPA Pesticide Factsheets

    SW-846 Test Method 3511 describes a procedure for extracting selected volatile and semivolatile organic compounds from water. The microscale approach minimizes sample size and solvent usage, thereby reducing supply costs, health and safety risks, and waste generated.

  4. Speeding Up Non-Parametric Bootstrap Computations for Statistics Based on Sample Moments in Small/Moderate Sample Size Applications

    PubMed Central

    Chaibub Neto, Elias

    2015-01-01

    In this paper we propose a vectorized implementation of the non-parametric bootstrap for statistics based on sample moments. Basically, we adopt the multinomial sampling formulation of the non-parametric bootstrap, and compute bootstrap replications of sample moment statistics by simply weighting the observed data according to multinomial counts instead of evaluating the statistic on a resampled version of the observed data. Using this formulation we can generate a matrix of bootstrap weights and compute the entire vector of bootstrap replications with a few matrix multiplications. Vectorization is particularly important for matrix-oriented programming languages such as R, where matrix/vector calculations tend to be faster than scalar operations implemented in a loop. We illustrate the application of the vectorized implementation in real and simulated data sets, when bootstrapping Pearson’s sample correlation coefficient, and compared its performance against two state-of-the-art R implementations of the non-parametric bootstrap, as well as a straightforward one based on a for loop. Our investigations spanned varying sample sizes and number of bootstrap replications. The vectorized bootstrap compared favorably against the state-of-the-art implementations in all cases tested, and was remarkably/considerably faster for small/moderate sample sizes. The same results were observed in the comparison with the straightforward implementation, except for large sample sizes, where the vectorized bootstrap was slightly slower than the straightforward implementation due to increased time expenditures in the generation of weight matrices via multinomial sampling. PMID:26125965
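    The multinomial-weighting idea described in the abstract can be sketched as follows for Pearson's correlation: instead of materializing resampled datasets, draw a B × n matrix of multinomial weights and obtain every bootstrap replicate from a handful of matrix products over weighted sample moments. The variable names and test data below are ours, not the paper's.

```python
import numpy as np

def boot_corr_vectorized(x, y, B=1000, seed=0):
    """Bootstrap Pearson's correlation without resampling rows: each row of
    W holds multinomial counts over the n observations, rescaled to weights,
    and weighted first/second moments give every replicate at once."""
    rng = np.random.default_rng(seed)
    n = len(x)
    W = rng.multinomial(n, np.full(n, 1.0 / n), size=B) / n  # B x n weights
    ex, ey = W @ x, W @ y
    exx, eyy, exy = W @ (x * x), W @ (y * y), W @ (x * y)
    return (exy - ex * ey) / np.sqrt((exx - ex ** 2) * (eyy - ey ** 2))

rng = np.random.default_rng(42)
x = rng.normal(size=200)
y = 0.6 * x + rng.normal(size=200)
reps = boot_corr_vectorized(x, y)
```

    The entire replication vector comes from a few B × n matrix products, which is exactly the property that makes this formulation fast in matrix-oriented languages such as R or NumPy.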

  5. The endothelial sample size analysis in corneal specular microscopy clinical examinations.

    PubMed

    Abib, Fernando C; Holzchuh, Ricardo; Schaefer, Artur; Schaefer, Tania; Godois, Ronialci

    2012-05-01

    To evaluate endothelial cell sample size and statistical error in corneal specular microscopy (CSM) examinations. One hundred twenty examinations were conducted with 4 types of corneal specular microscopes: 30 each with the Bio-Optics, CSO, Konan, and Topcon instruments. All endothelial image data were analyzed by the respective instrument software and also by the Cells Analyzer software, with a method developed in our lab. A reliability degree (RD) of 95% and a relative error (RE) of 0.05 were used as cut-off values to analyze images of the counted endothelial cells, called samples. The sample size mean was the number of cells evaluated on the images obtained with each device. Only examinations with RE < 0.05 were considered statistically correct and suitable for comparisons with future examinations. The Cells Analyzer software was used to calculate the RE and a customized sample size for all examinations. Bio-Optics: sample size, 97 ± 22 cells; RE, 6.52 ± 0.86; only 10% of the examinations had sufficient endothelial cell quantity (RE < 0.05); customized sample size, 162 ± 34 cells. CSO: sample size, 110 ± 20 cells; RE, 5.98 ± 0.98; only 16.6% of the examinations had sufficient endothelial cell quantity (RE < 0.05); customized sample size, 157 ± 45 cells. Konan: sample size, 80 ± 27 cells; RE, 10.6 ± 3.67; none of the examinations had sufficient endothelial cell quantity (RE > 0.05); customized sample size, 336 ± 131 cells. Topcon: sample size, 87 ± 17 cells; RE, 10.1 ± 2.52; none of the examinations had sufficient endothelial cell quantity (RE > 0.05); customized sample size, 382 ± 159 cells. A very high number of CSM examinations had sample errors based on the Cells Analyzer software. The endothelial sample size (examinations) needs to include more cells to be reliable and reproducible. The Cells Analyzer tutorial routine will be useful for CSM examination reliability and reproducibility.
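    A required cell count of the kind the customized sample sizes represent can be sketched with the standard relative-error formula n = (z · CV / RE)². This is a generic approximation, not necessarily the Cells Analyzer method, and the 30% coefficient of variation used below is an assumed value for illustration.

```python
from math import ceil
from statistics import NormalDist

def cells_needed(cv, rel_error=0.05, reliability=0.95):
    """Cells to count so the mean endothelial cell density is estimated
    within a given relative error at the stated reliability, via the
    standard n = (z * CV / RE)^2 normal approximation."""
    z = NormalDist().inv_cdf(1 - (1 - reliability) / 2)
    return ceil((z * cv / rel_error) ** 2)

n = cells_needed(cv=0.30)   # assumed 30% cell-to-cell coefficient of variation
```

    The quadratic dependence on CV is consistent with the pattern in the results above: devices whose fields showed more variable cells needed customized sample sizes several times larger than their default counts.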

  6. Particle-size dependence on metal(loid) distributions in mine wastes: Implications for water contamination and human exposure

    USGS Publications Warehouse

    Kim, C.S.; Wilson, K.M.; Rytuba, J.J.

    2011-01-01

    The mining and processing of metal-bearing ores has resulted in contamination issues where waste materials from abandoned mines remain in piles of untreated and unconsolidated material, posing the potential for waterborne and airborne transport of toxic elements. This study presents a systematic method of particle size separation, mass distribution, and bulk chemical analysis for mine tailings and adjacent background soil samples from the Rand historic mining district, California, in order to assess particle size distribution and related trends in metal(loid) concentration as a function of particle size. Mine tailings produced through stamp milling and leaching processes were found to have both a narrower and finer particle size distribution than background samples, with significant fractions of particles available in a size range (≤250 μm) that could be incidentally ingested. In both tailings and background samples, the majority of trace metal(loid)s display an inverse relationship between concentration and particle size, resulting in higher proportions of As, Cr, Cu, Pb and Zn in finer-sized fractions which are more susceptible to both water- and wind-borne transport as well as ingestion and/or inhalation. Established regulatory screening levels for such elements may, therefore, significantly underestimate potential exposure risk if relying solely on bulk sample concentrations to guide remediation decisions. Correlations in elemental concentration trends (such as between As and Fe) indicate relationships between elements that may be relevant to their chemical speciation. © 2011 Elsevier Ltd.

  7. Detection of an Avian Lineage Influenza A(H7N2) Virus in Air and Surface Samples at a New York City Feline Quarantine Facility.

    PubMed

    Blachere, Francoise M; Lindsley, William G; Weber, Angela M; Beezhold, Donald H; Thewlis, Robert E; Mead, Kenneth R; Noti, John D

    2018-05-16

    In December 2016, an outbreak of low pathogenicity avian influenza (LPAI) A(H7N2) occurred in cats at a New York City animal shelter and quickly spread to other shelters in New York and Pennsylvania. The A(H7N2) virus also spread to an attending veterinarian. In response, 500 cats were transferred from these shelters to a temporary quarantine facility for continued monitoring and treatment. The objective of this study was to assess the occupational risk of A(H7N2) exposure among emergency response workers at the feline quarantine facility. Aerosol and surface samples were collected from inside and outside the isolation zones of the quarantine facility. Samples were screened for A(H7N2) by quantitative RT-PCR and analyzed in embryonated chicken eggs for infectious virus. H7N2 virus was detected by RT-PCR in 28 of 29 aerosol samples collected in the high-risk isolation (hot) zone, with 70.9% on particles with aerodynamic diameters >4 μm, 27.7% in 1-4 μm, and 1.4% in <1 μm. Seventeen of 22 surface samples from the high-risk isolation zone were also H7N2-positive, with an average M1 copy number of 1.3 × 10³. Passage of aerosol and surface samples in eggs confirmed that infectious virus was present throughout the high-risk zones in the quarantine facility. By measuring particle size, distribution, and infectivity, our study suggests that the A(H7N2) virus had the potential to spread by airborne transmission and/or direct contact with virus-laden fomites. These results warranted continued A(H7N2) surveillance and transmission-based precautions during the treatment and care of infected cats. This article is protected by copyright. All rights reserved.

  8. Enterprise size and risk of hospital treated injuries among manual construction workers in Denmark: a study protocol.

    PubMed

    Pedersen, Betina H; Hannerz, Harald; Christensen, Ulla; Tüchsen, Finn

    2011-04-21

    In most countries throughout the world the construction industry continues to account for a disturbingly high proportion of fatal and nonfatal injuries. Research has shown that large enterprises seem to be most actively working for a safe working environment when compared to small and medium-sized enterprises. Also, statistics from Canada, Italy and South Korea suggest that the risk of injury among construction workers decreases with enterprise size, that is, the smaller the enterprise, the greater the risk of injury. This trend, however, is neither confirmed by the official statistics from Eurostat valid for EU-15 + Norway nor by a separate Danish study - although these findings might have missed a trend due to severe underreporting. In addition, none of the above-mentioned studies controlled for the occupational distribution within the enterprises. A part of the declining injury rates observed in Canada, Italy and South Korea therefore might be explained by an increasing proportion of white-collar employees in large enterprises. To investigate the relation between enterprise size and injury rates in the Danish construction industry. All male construction workers in Denmark aged 20-59 years will be followed yearly through national registers from 1999 to 2006 for first hospital treated injury (ICD-10: S00-T98) and linked to data about employment status, occupation and enterprise size. Enterprise size-classes are based on the Danish business pattern where micro (less than 5 employees), small (5-9 employees) and medium-sized (10-19 employees) enterprises will be compared to large enterprises (at least 20 employees). The analyses will be controlled for age (five-year age groups), calendar year (as categorical variable) and occupation. A multi-level Poisson regression will be used where the enterprises will be treated as the subjects while observations within the enterprises will be treated as correlated repeated measurements. 
This follow-up study uses register data that include all people in the target population. Sampling bias and response bias are thereby eliminated. A disadvantage of the study is that only injuries requiring hospital treatment are covered.

  9. Enterprise size and risk of hospital treated injuries among manual construction workers in Denmark: a study protocol

    PubMed Central

    2011-01-01

    Background In most countries throughout the world the construction industry continues to account for a disturbingly high proportion of fatal and nonfatal injuries. Research has shown that large enterprises seem to be most actively working for a safe working environment when compared to small and medium-sized enterprises. Also, statistics from Canada, Italy and South Korea suggest that the risk of injury among construction workers decreases with enterprise size, that is, the smaller the enterprise, the greater the risk of injury. This trend, however, is neither confirmed by the official statistics from Eurostat valid for EU-15 + Norway nor by a separate Danish study - although these findings might have missed a trend due to severe underreporting. In addition, none of the above-mentioned studies controlled for the occupational distribution within the enterprises. A part of the declining injury rates observed in Canada, Italy and South Korea therefore might be explained by an increasing proportion of white-collar employees in large enterprises. Objective To investigate the relation between enterprise size and injury rates in the Danish construction industry. Methods/Design All male construction workers in Denmark aged 20-59 years will be followed yearly through national registers from 1999 to 2006 for first hospital treated injury (ICD-10: S00-T98) and linked to data about employment status, occupation and enterprise size. Enterprise size-classes are based on the Danish business pattern where micro (less than 5 employees), small (5-9 employees) and medium-sized (10-19 employees) enterprises will be compared to large enterprises (at least 20 employees). The analyses will be controlled for age (five-year age groups), calendar year (as categorical variable) and occupation. A multi-level Poisson regression will be used where the enterprises will be treated as the subjects while observations within the enterprises will be treated as correlated repeated measurements. 
Discussion This follow-up study uses register data that include all people in the target population. Sampling bias and response bias are thereby eliminated. A disadvantage of the study is that only injuries requiring hospital treatment are covered. PMID:21510851

  10. Mercury risk to avian piscivores across western United States and Canada

    USGS Publications Warehouse

    Jackson, Allyson K.; Evers, David C.; Eagles-Smith, Collin A.; Ackerman, Joshua T.; Willacker, James J.; Elliott, John E.; Lepak, Jesse M.; Vander Pol, Stacy S.; Bryan, Colleen E.

    2016-01-01

    The widespread distribution of mercury (Hg) threatens wildlife health, particularly piscivorous birds. Western North America is a diverse region that provides critical habitat to many piscivorous bird species, and also has a well-documented history of mercury contamination from legacy mining and atmospheric deposition. The diversity of landscapes in the west limits the distribution of avian piscivore species, complicating broad comparisons across the region. Mercury risk to avian piscivores was evaluated across the western United States and Canada using a suite of avian piscivore species representing a variety of foraging strategies that together occur broadly across the region. Prey fish Hg concentrations were size-adjusted to the preferred size class of the diet for each avian piscivore (Bald Eagle = 36 cm, Osprey = 30 cm, Common and Yellow-billed Loon = 15 cm, Western and Clark's Grebe = 6 cm, and Belted Kingfisher = 5 cm) across each species' breeding range. Using a combination of field and lab-based studies of Hg effects in a variety of species, wet-weight blood estimates were grouped into five relative risk categories: background (< 0.5 μg/g), low (0.5–1 μg/g), moderate (1–2 μg/g), high (2–3 μg/g), and extra high (> 3 μg/g). These risk categories were used to estimate potential mercury risk to avian piscivores across the west at a 1 degree-by-1 degree grid cell resolution. Avian piscivores foraging on larger-sized fish generally were at higher relative risk from Hg. Habitats with relatively high risk included wetland complexes (e.g., prairie potholes in Saskatchewan), river deltas (e.g., San Francisco Bay, Puget Sound, Columbia River), and arid lands (Great Basin and central Arizona). These results indicate that more intensive avian piscivore sampling is needed across western North America to generate a more robust assessment of exposure risk.

  11. Effects of cooking on radiocesium in fish from the Savannah River: exposure differences for the public.

    PubMed

    Burger, Joanna; Gaines, Karen F; Boring, C Shane; Snodgrass, J; Stephens, W L; Gochfeld, M

    2004-02-01

    Understanding the factors that contribute to the risk from fish consumption is an important public health concern because of potential adverse effects of radionuclides, organochlorines, other pesticides, and mercury. Risk from consumption is normally computed on the basis of contaminant levels in fish, meal frequency, and meal size, yet cooking practices may also affect risk. This study examines the effect of deep-frying on radiocesium (137Cs) levels and risk to people fishing along the Savannah River. South Carolina and Georgia have issued consumption advisories for the Savannah River, based partly on 137Cs. 137Cs levels were significantly higher in the cooked fish compared to the raw fish on a wet weight basis. Mean 137Cs levels were 0.61 pCi/g (wet weight basis) in raw fish, 0.81 pCi/g in cooked-breaded, and 0.99 pCi/g in cooked-unbreaded fish. Deep-frying with and without breading resulted in a weight loss of 25 and 39%, while 137Cs levels increased by 32 and 62%, respectively. Therefore, the differences were due mainly to weight loss during cooking. However, the data suggest that risk assessments should be based on cooked portion size for contaminant analysis, or the risk from 137Cs in fish will be underestimated. People are likely to estimate the amounts of fish they eat based on a meal size of the cooked portion, while risk assessors determine 137Cs levels in raw fish. A conversion factor of at least two for 137Cs increase during cooking is reasonable and conservative, given the variability in 137Cs levels. The data also suggest that surveys determining consumption should specifically ask about portion size before or after cooking and state which was used in their methods.
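    The reported increases are roughly what a simple mass balance predicts if the ¹³⁷Cs activity stays in the fillet while cooking drives off water: a 25% weight loss implies a concentration factor of about 1.33 and a 39% loss about 1.64, close to the observed 32% and 62% increases (the small shortfall is consistent with minor losses during frying). A minimal check of that arithmetic, as a sketch rather than the study's own calculation:

```python
def concentration_factor(weight_loss):
    """If activity in the fillet is conserved while cooking removes a
    fraction of its mass, wet-weight concentration rises by 1/(1 - loss)."""
    return 1.0 / (1.0 - weight_loss)

breaded = concentration_factor(0.25)    # 25% weight loss -> ~1.33x
unbreaded = concentration_factor(0.39)  # 39% weight loss -> ~1.64x
```

    This is why a conversion factor of about two for cooked-portion risk assessments is conservative: it covers the mass-balance increase with margin for variability in cooking losses.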

  12. Discriminant Analysis of Defective and Non-Defective Field Pea (Pisum sativum L.) into Broad Market Grades Based on Digital Image Features.

    PubMed

    McDonald, Linda S; Panozzo, Joseph F; Salisbury, Phillip A; Ford, Rebecca

    2016-01-01

    Field peas (Pisum sativum L.) are generally traded based on seed appearance, which subjectively defines broad market-grades. In this study, we developed an objective Linear Discriminant Analysis (LDA) model to classify market grades of field peas based on seed colour, shape and size traits extracted from digital images. Seeds were imaged in a high-throughput system consisting of a camera and laser positioned over a conveyor belt. Six colour intensity digital images were captured (under 405, 470, 530, 590, 660 and 850nm light) for each seed, and surface height was measured at each pixel by laser. Colour, shape and size traits were compiled across all seed in each sample to determine the median trait values. Defective and non-defective seed samples were used to calibrate and validate the model. Colour components were sufficient to correctly classify all non-defective seed samples into correct market grades. Defective samples required a combination of colour, shape and size traits to achieve 87% and 77% accuracy in market grade classification of calibration and validation sample-sets respectively. Following these results, we used the same colour, shape and size traits to develop an LDA model which correctly classified over 97% of all validation samples as defective or non-defective.
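    A classification pipeline of the kind described, LDA over per-sample median image traits, can be sketched with scikit-learn. The synthetic features below are random stand-ins for the colour, shape and size traits, not the study's data, and the class separation is chosen only to make the example work.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(7)

# Synthetic stand-ins for per-sample median image traits (colour, shape,
# size): two "market grades" whose trait means are shifted apart.
n, d = 200, 6
X = np.vstack([rng.normal(0.0, 1.0, (n, d)),
               rng.normal(1.5, 1.0, (n, d))])
y = np.array([0] * n + [1] * n)

lda = LinearDiscriminantAnalysis()
lda.fit(X, y)
accuracy = lda.score(X, y)   # training accuracy on well-separated classes
```

    In the study itself, the colour components alone separated non-defective samples into market grades, while defective samples needed the full colour + shape + size feature set; the same `fit`/`score` pattern applies, only the feature matrix changes.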

  13. Discriminant Analysis of Defective and Non-Defective Field Pea (Pisum sativum L.) into Broad Market Grades Based on Digital Image Features

    PubMed Central

    McDonald, Linda S.; Panozzo, Joseph F.; Salisbury, Phillip A.; Ford, Rebecca

    2016-01-01

    Field peas (Pisum sativum L.) are generally traded based on seed appearance, which subjectively defines broad market-grades. In this study, we developed an objective Linear Discriminant Analysis (LDA) model to classify market grades of field peas based on seed colour, shape and size traits extracted from digital images. Seeds were imaged in a high-throughput system consisting of a camera and laser positioned over a conveyor belt. Six colour intensity digital images were captured (under 405, 470, 530, 590, 660 and 850nm light) for each seed, and surface height was measured at each pixel by laser. Colour, shape and size traits were compiled across all seed in each sample to determine the median trait values. Defective and non-defective seed samples were used to calibrate and validate the model. Colour components were sufficient to correctly classify all non-defective seed samples into correct market grades. Defective samples required a combination of colour, shape and size traits to achieve 87% and 77% accuracy in market grade classification of calibration and validation sample-sets respectively. Following these results, we used the same colour, shape and size traits to develop an LDA model which correctly classified over 97% of all validation samples as defective or non-defective. PMID:27176469

  14. Early physiological markers of cardiovascular risk in community based adolescents with a depressive disorder.

    PubMed

    Waloszek, Joanna M; Byrne, Michelle L; Woods, Michael J; Nicholas, Christian L; Bei, Bei; Murray, Greg; Raniti, Monika; Allen, Nicholas B; Trinder, John

    2015-04-01

Depression is recognised as an independent cardiovascular risk factor in adults. Identifying this relationship early in life is potentially important for the prevention of cardiovascular disease (CVD). This study investigated whether clinical depression is associated with multiple physiological markers of CVD risk in adolescents from the general community. Participants aged 12-18 years were recruited from the general community and screened for depressive symptoms. Individuals with high and low depressive symptoms were administered a diagnostic interview. Fifty participants, 25 with a current depressive episode and 25 matched healthy controls, subsequently completed cardiovascular assessments. Variables assessed were automatic brachial and continuous beat-to-beat finger arterial blood pressure, heart rate, vascular functioning by pulse amplitude tonometry following reactive hyperaemia, and pulse transit time (PTT) at rest. Blood samples were collected to measure cholesterol, glucose and glycohaemoglobin levels, and an index of cumulative risk across traditional cardiovascular risk factors was calculated. Depressed adolescents had a significantly lower reactive hyperaemia index and shorter PTT, suggesting deterioration in vascular integrity and structure. Higher fasting glucose and triglyceride levels were also observed in the depressed group, which also had higher cumulative risk scores indicative of increased engagement in unhealthy behaviours and higher probability of advanced atherosclerotic lesions. The sample size and the number of males who completed all cardiovascular measures were small. Clinically depressed adolescents had poorer vascular functioning and increased CVD risk compared to controls, highlighting the need for early identification and intervention for the prevention of CVD in depressed youth. Copyright © 2015 Elsevier B.V. All rights reserved.

  15. An empirical comparison of respondent-driven sampling, time location sampling, and snowball sampling for behavioral surveillance in men who have sex with men, Fortaleza, Brazil.

    PubMed

    Kendall, Carl; Kerr, Ligia R F S; Gondim, Rogerio C; Werneck, Guilherme L; Macena, Raimunda Hermelinda Maia; Pontes, Marta Kerr; Johnston, Lisa G; Sabin, Keith; McFarland, Willi

    2008-07-01

Obtaining samples of populations at risk for HIV challenges surveillance, prevention planning, and evaluation. Methods used include snowball sampling, time location sampling (TLS), and respondent-driven sampling (RDS). Few studies have made side-by-side comparisons to assess their relative advantages. We compared snowball, TLS, and RDS surveys of men who have sex with men (MSM) in Fortaleza, Brazil, comparing the socio-economic status (SES) and risk behaviors of the samples to each other, to known AIDS cases, and to the general population. RDS produced a sample with wider inclusion of lower SES than snowball sampling or TLS, a finding of public health significance given that the majority of AIDS cases reported among MSM in the state were low SES. RDS also achieved the sample size faster and at lower cost. For reasons of inclusion and cost-efficiency, RDS is the sampling methodology of choice for HIV surveillance of MSM in Fortaleza.

  16. Value of information methods to design a clinical trial in a small population to optimise a health economic utility function.

    PubMed

    Pearce, Michael; Hee, Siew Wan; Madan, Jason; Posch, Martin; Day, Simon; Miller, Frank; Zohar, Sarah; Stallard, Nigel

    2018-02-08

Most confirmatory randomised controlled clinical trials (RCTs) are designed with specified power, usually 80% or 90%, for a hypothesis test conducted at a given significance level, usually 2.5% for a one-sided test. Approval of the experimental treatment by regulatory agencies is then based on the result of such a significance test with other information to balance the risk of adverse events against the benefit of the treatment to future patients. In the setting of a rare disease, recruiting sufficient patients to achieve conventional error rates for clinically reasonable effect sizes may be infeasible, suggesting that the decision-making process should reflect the size of the target population. We considered the use of a decision-theoretic value of information (VOI) method to obtain the optimal sample size and significance level for confirmatory RCTs in a range of settings. We assume the decision maker represents society. For simplicity we assume the primary endpoint to be normally distributed with unknown mean following some normal prior distribution representing information on the anticipated effectiveness of the therapy available before the trial. The method is illustrated by an application in an RCT in haemophilia A. We explicitly specify the utility in terms of improvement in primary outcome and compare this with the costs of treating patients, both financial and in terms of potential harm, during the trial and in the future. The optimal sample size for the clinical trial decreases as the size of the population decreases. For non-zero cost of treating future patients, either monetary or in terms of potential harmful effects, stronger evidence is required for approval as the population size increases, though this is not the case if the costs of treating future patients are ignored. Decision-theoretic VOI methods offer a flexible approach with both type I error rate and power (or equivalently trial sample size) depending on the size of the future population for whom the treatment under investigation is intended. This might be particularly suitable for small populations when there is considerable information about the patient population.
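The trade-off described above, trial cost now against benefit to a population of known size later, can be sketched with a toy Monte Carlo value-of-information search over per-arm sample sizes. All parameters (prior, costs, value per unit of effect) are hypothetical, and this is not the authors' utility specification:

```python
import math
import random

def expected_net_benefit(n, pop_size, prior_mu=0.3, prior_sd=0.2,
                         sigma=1.0, cost_per_patient=1.0, value_per_unit=10.0,
                         sims=8000, seed=1):
    # Draw a true effect from the prior, simulate a two-arm trial with n
    # patients per arm, "approve" on a one-sided z-test at 2.5%, and value
    # treating pop_size future patients at value_per_unit per unit of effect.
    rng = random.Random(seed)
    z_crit = 1.96
    se = sigma * math.sqrt(2.0 / n)
    total = 0.0
    for _ in range(sims):
        theta = rng.gauss(prior_mu, prior_sd)
        estimate = rng.gauss(theta, se)
        approved = estimate / se > z_crit
        benefit = pop_size * value_per_unit * theta if approved else 0.0
        total += benefit - cost_per_patient * 2 * n
    return total / sims

def optimal_n(pop_size, grid=range(10, 500, 20)):
    # Per-arm sample size maximising expected net benefit over a grid
    return max(grid, key=lambda n: expected_net_benefit(n, pop_size))
```

In line with the abstract's main finding, the optimum shrinks as the future population shrinks: with a small population the per-patient trial cost dominates quickly, while a large population rewards stronger evidence.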

  17. Low‐pathogenic notifiable avian influenza serosurveillance and the risk of infection in poultry – a critical review of the European Union active surveillance programme (2005–2007)

    PubMed Central

    Gonzales, J. L.; Elbers, A. R. W.; Bouma, A.; Koch, G.; De Wit, J. J.; Stegeman, J. A.

    2010-01-01

Please cite this paper as: Gonzales et al. (2010) Low-pathogenic notifiable avian influenza serosurveillance and the risk of infection in poultry – a critical review of the European Union active surveillance programme (2005–2007). Influenza and Other Respiratory Viruses 4(2), 91–99. Background: Since 2003, Member States (MS) of the European Union (EU) have implemented serosurveillance programmes for low-pathogenic notifiable avian influenza (LPNAI) in poultry. There is now a need to evaluate the surveillance activity in order to optimize the programme's design. Objectives: To evaluate MS sampling operations [sample size and targeted poultry types (PTs)] and their relation to the probability of detection, and to estimate each PT's relative risk (RR) of being infected. Methods: Reported data from the surveillance carried out from 2005 to 2007 were analyzed using (i) descriptive indicators to characterize both MS sampling operations and their relation to the probability of detection and the LPNAI epidemiological situation, and (ii) multivariable methods to estimate each PT's RR of being infected. Results: Member States sampling more holdings than recommended by the EU had a significantly higher probability of detection. Duck & goose, game-bird, ratite and "other" poultry types had a significantly higher RR of being seropositive than chicken categories. The seroprevalence in duck & goose and game-bird holdings appears to be higher than 5%, the EU-recommended design prevalence (DP), while in chicken and turkey categories the seroprevalence was considerably lower than 5%, with the attendant risk of missing LPNAI-seropositive holdings. Conclusion: It is recommended that the European Commission discuss with its MS whether the results of this evaluation call for refinement of surveillance characteristics such as sampling frequency, the between-holding DP and MS sampling operation strategies. PMID:20167049
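The design prevalence mentioned above plugs into the standard "freedom from infection" sample-size formula. A minimal sketch under the usual simplifying assumptions (perfect test, large population; not the programme's actual design calculation):

```python
import math

def detection_sample_size(design_prevalence, confidence=0.95):
    # Number of holdings to sample so that, if the true prevalence equals
    # the design prevalence, at least one positive is detected with the
    # stated confidence: n = ln(1 - c) / ln(1 - p).
    return math.ceil(math.log(1 - confidence) / math.log(1 - design_prevalence))
```

`detection_sample_size(0.05)` gives the familiar 59 holdings for a 5% design prevalence at 95% confidence; tightening confidence to 99% raises it to 90.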

  18. S187. SEARCHING FOR BRAIN CO-EXPRESSION MODULES THAT CONTRIBUTE DISPROPORTIONATELY TO THE COMMON POLYGENIC RISK FOR SCHIZOPHRENIA

    PubMed Central

    Costas, Javier; Paramo, Mario; Arrojo, Manuel

    2018-01-01

Background: Genomic research has revealed that schizophrenia is a highly polygenic disease. Recent estimates indicate that at least 71% of genomic segments of 1 Mb include one or more risk loci for schizophrenia (Loh et al., Nature Genet 2015). This extremely high polygenicity represents a challenge to deciphering the biological basis of schizophrenia, as any sufficiently large set of SNPs is expected to be associated with the disorder. Among the different gene sets available for study (such as those from Gene Ontology, KEGG pathways, Reactome pathways or protein-protein interaction datasets), those based on brain co-expression networks represent putative functional relationships in the relevant tissue. The aim of this work was to identify brain co-expression networks that contribute disproportionately to the common polygenic risk for schizophrenia, to gain more insight into schizophrenia etiopathology. Methods: We analyzed a case-control dataset consisting of 582 schizophrenia patients from Galicia, NW Spain, and 591 ancestrally matched controls, genotyped with the Illumina PsychArray. Using as discovery sample the summary results from the largest GWAS of schizophrenia to date (Psychiatric Genomics Consortium, SCZ2), we generated polygenic risk scores (PRS) in our sample based on SNPs located at genes belonging to brain co-expression modules determined by the CommonMind Consortium (Fromer et al., Nature Neurosci 2016). PRS were generated using the clumping procedure of PLINK, considering several different thresholds to select SNPs from the discovery sample. In order to test whether any specific module increased schizophrenia risk more than expected by its size, we generated up to 10,000 random permutations of the same number of SNPs, matched by frequency, distance to nearest gene, number of SNPs in LD and gene density, using SNPsnap. Results: As expected, most modules with a sufficient number of independent SNPs showed a significant increase in Nagelkerke’s R2 in our case-control sample after the addition of the module-specific PRS to a logistic regression model. Our permutation strategy revealed that most modules did not show an excess of risk, measured by increase in Nagelkerke’s R2, in comparison to an equal number of SNPs with similar characteristics. But one module, M2c from Fromer et al., remained highly significant after multiple-testing correction. Reactome pathway analysis revealed an over-representation of genes involved in “Neuronal System” and “Axon guidance” among genes from this module. Using the same protocol, we detected that the 84 genes from the Neuronal System pathway in this module, representing less than 6% of the genes from the module, explained a higher level of risk than expected. “Voltage-gated potassium channels” and “Neurexins and neuroligins” are overrepresented among the Neuronal System genes from module M2c. Discussion: Here, we show that, in spite of the high polygenicity of schizophrenia, it is possible to identify gene sets contributing disproportionately to total risk, as was the case for the M2c module from Fromer et al. These authors previously reported that the M2c module was enriched in GWAS signals, as well as CNVs and rare variants associated with schizophrenia. Therefore, this module makes a disproportionate contribution to schizophrenia risk. Study supported by Grant PI14/01020 from Instituto de Salud Carlos III, Ministry of Health, Spanish Government.
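The module-specific PRS and permutation test described above can be sketched as follows; the SNPsnap-style matching on frequency, LD and gene density is omitted, so this is a simplified illustration rather than the study's pipeline:

```python
import random

def prs(dosages, betas):
    # Polygenic risk score: risk-allele dosages (0/1/2) weighted by
    # discovery-GWAS effect sizes.
    return sum(d * b for d, b in zip(dosages, betas))

def permutation_p(observed, stat_fn, snp_pool, k, n_perm=1000, seed=0):
    # Empirical p-value: how often a random size-k SNP set matches or beats
    # the module's observed statistic.
    rng = random.Random(seed)
    hits = sum(stat_fn(rng.sample(snp_pool, k)) >= observed
               for _ in range(n_perm))
    return (hits + 1) / (n_perm + 1)
```

In the study, `stat_fn` would be the gain in Nagelkerke's R2 from adding the PRS of the drawn SNP set to the case-control logistic model.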

  19. Sampling procedures for throughfall monitoring: A simulation study

    NASA Astrophysics Data System (ADS)

    Zimmermann, Beate; Zimmermann, Alexander; Lark, Richard Murray; Elsenbeer, Helmut

    2010-01-01

    What is the most appropriate sampling scheme to estimate event-based average throughfall? A satisfactory answer to this seemingly simple question has yet to be found, a failure which we attribute to previous efforts' dependence on empirical studies. Here we try to answer this question by simulating stochastic throughfall fields based on parameters for statistical models of large monitoring data sets. We subsequently sampled these fields with different sampling designs and variable sample supports. We evaluated the performance of a particular sampling scheme with respect to the uncertainty of possible estimated means of throughfall volumes. Even for a relative error limit of 20%, an impractically large number of small, funnel-type collectors would be required to estimate mean throughfall, particularly for small events. While stratification of the target area is not superior to simple random sampling, cluster random sampling involves the risk of being less efficient. A larger sample support, e.g., the use of trough-type collectors, considerably reduces the necessary sample sizes and eliminates the sensitivity of the mean to outliers. Since the gain in time associated with the manual handling of troughs versus funnels depends on the local precipitation regime, the employment of automatically recording clusters of long troughs emerges as the most promising sampling scheme. Even so, a relative error of less than 5% appears out of reach for throughfall under heterogeneous canopies. We therefore suspect a considerable uncertainty of input parameters for interception models derived from measured throughfall, in particular, for those requiring data of small throughfall events.
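The dependence of the required number of collectors on event size can be illustrated with the classical simple-random-sampling formula n = (z·CV/ε)². The lognormal fields below are hypothetical stand-ins for simulated throughfall, with small events given the larger relative variability reported above:

```python
import math
import random
import statistics

def required_collectors(field, rel_err=0.20, z=1.96):
    # Simple-random-sampling size so the sample mean falls within rel_err
    # of the field mean at ~95% confidence: n = (z * CV / rel_err)^2.
    cv = statistics.pstdev(field) / statistics.mean(field)
    return math.ceil((z * cv / rel_err) ** 2)

rng = random.Random(42)
# Hypothetical throughfall fields (mm): small events are far more variable
# relative to their mean than large events.
small_event = [rng.lognormvariate(0.0, 1.0) for _ in range(5000)]
large_event = [rng.lognormvariate(2.5, 0.4) for _ in range(5000)]
```

Even this idealised calculation (it ignores spatial correlation and sample support) reproduces the abstract's qualitative point: small, skewed events demand many more funnel-type collectors for the same relative error.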

  20. Variations in BMI and prevalence of health risks in diverse racial and ethnic populations.

    PubMed

    Stommel, Manfred; Schoenborn, Charlotte A

    2010-09-01

    When examining health risks associated with the BMI, investigators often rely on the customary BMI thresholds of the 1995 World Health Organization report. However, within-interval variations in morbidity and mortality can be substantial, and the thresholds do not necessarily correspond to identifiable risk increases. Comparing the prevalence of hypertension, diabetes, coronary heart disease (CHD), asthma, and arthritis among non-Hispanic whites, blacks, East Asians and Hispanics, we examine differences in the BMI-health-risk relationships for small BMI increments. The analysis is based on 11 years of data of the National Health Interview Survey (NHIS), with a sample size of 337,375 for the combined 1997-2007 Sample Adult. The analysis uses multivariate logistic regression models, employing a nonparametric approach to modeling the BMI-health-risk relationship, while relying on narrowly defined BMI categories. Rising BMI levels are associated with higher levels of chronic disease burdens in four major racial and ethnic groups, even after adjusting for many socio-demographic characteristics and three important health-related behaviors (smoking, physical activity, alcohol consumption). For all population groups, except East Asians, a modestly higher disease risk was noted for persons with a BMI <20 compared with persons with BMI in the range of 20-21. Using five chronic conditions as risk criteria, a categorization of the BMI into normal weight, overweight, or obesity appears arbitrary. Although the prevalence of disease risks differs among racial and ethnic groups regardless of BMI levels, the evidence presented here does not support the notion that the BMI-health-risk profile of East Asians and others warrants race-specific BMI cutoff points.
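The nonparametric approach described, narrowly defined BMI categories entered as indicators in a logistic model, amounts to dummy coding. A sketch with hypothetical cut points (the study's own category edges are not reproduced here):

```python
def bmi_bin_dummies(bmi, edges=(18.5, 20, 22, 24, 26, 28, 30, 35)):
    # One-hot indicators for narrow BMI categories, letting a logistic
    # regression trace the BMI-risk curve without assuming a functional form.
    k = sum(bmi >= e for e in edges)
    return [1 if i == k else 0 for i in range(len(edges) + 1)]
```

One category is dropped as the reference level when the indicators enter the regression, so each coefficient is a log-odds contrast against that reference bin.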

  1. Risk Factors for Chronic Disease in Viet Nam: A Review of the Literature

    PubMed Central

    Rao, Chalapati; Nhung, Nguyen Thi Trang; Marks, Geoffrey; Hoa, Nguyen Phuong

    2013-01-01

Introduction: Chronic diseases account for most of the disease burden in low- and middle-income countries, particularly those in Asia. We reviewed literature on chronic disease risk factors in Viet Nam to identify patterns and data gaps. Methods: All population-based studies published from 2000 to 2012 that reported chronic disease risk factors were considered. We used standard chronic disease terminology to search PubMed and assessed titles, abstracts, and articles for eligibility for inclusion. We summarized relevant study information in tables listing available studies, risk factors measured, and the prevalence of these risk factors. Results: We identified 23 studies conducted before 2010. The most common age range studied was 25 to 64 years. Sample sizes varied, and sample frames were national in 5 studies. A combination of behavioral, physical, and biological risk factors was studied. Being overweight or obese was the most common risk factor studied (n = 14), followed by high blood pressure (n = 11) and tobacco use (n = 10). Tobacco and alcohol use were high among men, and tobacco use may be increasing among Vietnamese women. High blood pressure is common; however, people’s knowledge that they have high blood pressure may be low. A high proportion of diets do not meet international criteria for fruit and vegetable consumption. Prevalence of overweight and obesity is increasing. None of the studies evaluated measured dietary patterns or total caloric intake, and only 1 study measured dietary salt intake. Conclusion: Risk factors for chronic diseases are common in Viet Nam; however, more recent and context-specific information is required for planning and monitoring interventions to reduce risk factors and chronic disease in this country. PMID:23306076

  2. On the validity of within-nuclear-family genetic association analysis in samples of extended families.

    PubMed

    Bureau, Alexandre; Duchesne, Thierry

    2015-12-01

Splitting extended families into their component nuclear families to apply a genetic association method designed for nuclear families is a widespread practice in familial genetic studies. Dependence among genotypes and phenotypes of nuclear families from the same extended family arises because of genetic linkage of the tested marker with a risk variant or because of familial specificity of genetic effects due to gene-environment interaction. This raises concerns about the validity of inference conducted under the assumption of independence of the nuclear families. We indeed prove theoretically that, in a conditional logistic regression analysis applicable to disease cases and their genotyped parents, the naive model-based estimator of the variance of the coefficient estimates underestimates the true variance. However, simulations with realistic effect sizes of risk variants and variation of this effect from family to family reveal that the underestimation is negligible. The simulations also show the greater efficiency of the model-based variance estimator compared to a robust empirical estimator. Our recommendation is therefore to use the model-based estimator of variance for inference on effects of genetic variants.

  3. An opportunity cost approach to sample size calculation in cost-effectiveness analysis.

    PubMed

    Gafni, A; Walter, S D; Birch, S; Sendi, P

    2008-01-01

The inclusion of economic evaluations as part of clinical trials has led to concerns about the adequacy of trial sample size to support such analysis. The analytical tool of cost-effectiveness analysis is the incremental cost-effectiveness ratio (ICER), which is compared with a threshold value (lambda) as a method to determine the efficiency of a health-care intervention. Accordingly, many of the methods suggested for calculating the sample size requirements for the economic component of clinical trials are based on the properties of the ICER. However, use of the ICER and a threshold value as a basis for determining efficiency has been shown to be inconsistent with the economic concept of opportunity cost. As a result, the validity of the ICER-based approaches to sample size calculations can be challenged. Alternative methods for determining improvements in efficiency have been presented in the literature that do not depend upon ICER values. In this paper, we develop an opportunity cost approach to calculating sample size for economic evaluations alongside clinical trials, and illustrate the approach using a numerical example. We compare the sample size requirement of the opportunity cost method with the ICER threshold method. In general, either method may yield the larger required sample size. However, the opportunity cost approach, although simple to use, has additional data requirements. We believe that the additional data requirements represent a small price to pay for being able to perform an analysis consistent with both the concept of opportunity cost and the problem faced by decision makers. Copyright (c) 2007 John Wiley & Sons, Ltd.
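For contrast with the ICER-threshold approach criticized above, a common sample-size formulation works on the net-monetary-benefit scale, where the decision rule is linear. This is an illustrative sketch of that standard alternative, not the authors' opportunity-cost method; all inputs are hypothetical:

```python
import math

def nmb_sample_size(lam, d_effect, d_cost, sd_effect, sd_cost,
                    rho=0.0, alpha_z=1.959964, power_z=1.281552):
    # Per-arm n for a two-sample z-test on the incremental net monetary
    # benefit b = lam * dE - dC, with Var(NMB) built from the effect and
    # cost standard deviations and their correlation rho.
    var_nmb = (lam * sd_effect) ** 2 + sd_cost ** 2 \
        - 2 * rho * lam * sd_effect * sd_cost
    b = lam * d_effect - d_cost
    return math.ceil(2 * (alpha_z + power_z) ** 2 * var_nmb / b ** 2)
```

With, say, lambda = 50,000 per QALY, an incremental effect of 0.1 QALYs at an incremental cost of 2,000, the formula returns the per-arm size needed to detect a positive net benefit at 2.5% one-sided alpha and 90% power.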

  4. sGD: software for estimating spatially explicit indices of genetic diversity.

    PubMed

    Shirk, A J; Cushman, S A

    2011-09-01

    Anthropogenic landscape changes have greatly reduced the population size, range and migration rates of many terrestrial species. The small local effective population size of remnant populations favours loss of genetic diversity leading to reduced fitness and adaptive potential, and thus ultimately greater extinction risk. Accurately quantifying genetic diversity is therefore crucial to assessing the viability of small populations. Diversity indices are typically calculated from the multilocus genotypes of all individuals sampled within discretely defined habitat patches or larger regional extents. Importantly, discrete population approaches do not capture the clinal nature of populations genetically isolated by distance or landscape resistance. Here, we introduce spatial Genetic Diversity (sGD), a new spatially explicit tool to estimate genetic diversity based on grouping individuals into potentially overlapping genetic neighbourhoods that match the population structure, whether discrete or clinal. We compared the estimates and patterns of genetic diversity using patch or regional sampling and sGD on both simulated and empirical populations. When the population did not meet the assumptions of an island model, we found that patch and regional sampling generally overestimated local heterozygosity, inbreeding and allelic diversity. Moreover, sGD revealed fine-scale spatial heterogeneity in genetic diversity that was not evident with patch or regional sampling. These advantages should provide a more robust means to evaluate the potential for genetic factors to influence the viability of clinal populations and guide appropriate conservation plans. © 2011 Blackwell Publishing Ltd.
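sGD's neighbourhood summaries rest on standard diversity indices. One such index, unbiased expected heterozygosity, is easy to sketch from allele counts (illustrative; not sGD's actual code):

```python
def expected_heterozygosity(allele_counts):
    # Unbiased gene diversity at one locus from the allele counts observed
    # in a genetic neighbourhood: He = n/(n-1) * (1 - sum p_i^2),
    # where p_i are allele frequencies and n is the number of gene copies.
    n = sum(allele_counts)
    return (n / (n - 1)) * (1 - sum((c / n) ** 2 for c in allele_counts))
```

In the sGD approach this statistic would be computed per overlapping neighbourhood rather than per discrete patch, which is what exposes the fine-scale spatial heterogeneity the abstract describes.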

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jomekian, A.; Faculty of Chemical Engineering, Iran University of Science and Technology; Behbahani, R.M., E-mail: behbahani@put.ac.ir

Ultra-porous ZIF-8 particles were synthesized using PEO/PA6-based poly(ether-block-amide) (Pebax 1657) as a structure-directing agent. Structural properties of ZIF-8 samples prepared under different synthesis parameters were investigated by laser particle size analysis, XRD, N₂ adsorption analysis, and BJH and BET tests. The overall results showed that: (1) the mean pore size of all ZIF-8 samples increased remarkably (from 0.34 nm to 1.1–2.5 nm) compared to conventionally synthesized ZIF-8; (2) an exceptional BET surface area of 1869 m²/g was obtained for a ZIF-8 sample with a mean pore size of 2.5 nm; (3) applying high concentrations of Pebax 1657 to the synthesis solution led to higher surface area, larger pore size and smaller particle size for the ZIF-8 samples; (4) both an increase in temperature and a decrease in the MeIM/Zn²⁺ molar ratio increased the particle size, pore size, pore volume, crystallinity and BET surface area of all investigated samples. - Highlights: • The pore size of ZIF-8 samples synthesized with Pebax 1657 increased remarkably. • A BET surface area of 1869 m²/g was obtained for a ZIF-8 sample synthesized with Pebax. • An increase in temperature enhanced the textural properties of the ZIF-8 samples. • A decrease in MeIM/Zn²⁺ enhanced the textural properties of the ZIF-8 samples.

  6. Risk prediction models for major adverse cardiac event (MACE) following percutaneous coronary intervention (PCI): A review

    NASA Astrophysics Data System (ADS)

    Manan, Norhafizah A.; Abidin, Basir

    2015-02-01

Five percent of patients who went through Percutaneous Coronary Intervention (PCI) experienced Major Adverse Cardiac Events (MACE) after the procedure. Risk prediction of MACE following PCI is therefore helpful. This work describes a review of such prediction models currently in use. A literature search was done on the PubMed and SCOPUS databases. Thirty publications were found, but only 4 studies were chosen based on the data used, design, and outcome of the study. Particular emphasis was given to the study design, population, sample size, modeling method, predictors, outcomes, and the discrimination and calibration of each model. All the models had acceptable discrimination ability (C-statistic >0.7) and good calibration (Hosmer-Lemeshow P-value >0.05). The most common modeling method was multivariate logistic regression, and the most popular predictor was age.
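The discrimination criterion quoted above (C-statistic > 0.7) is the probability that a model ranks an event patient above a non-event patient. A minimal sketch with hypothetical risk scores:

```python
def c_statistic(scores_events, scores_nonevents):
    # Concordance: probability a randomly chosen MACE patient receives a
    # higher predicted risk than a randomly chosen non-event patient,
    # with ties counting one half. Equivalent to the ROC AUC.
    pairs = len(scores_events) * len(scores_nonevents)
    wins = sum(1.0 if e > ne else 0.5 if e == ne else 0.0
               for e in scores_events for ne in scores_nonevents)
    return wins / pairs
```

A value of 0.5 corresponds to no discrimination (coin-flip ranking) and 1.0 to perfect separation of event and non-event patients.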

  7. An abundance of rare functional variants in 202 drug target genes sequenced in 14,002 people

    PubMed Central

    Nelson, Matthew R.; Wegmann, Daniel; Ehm, Margaret G.; Kessner, Darren; St. Jean, Pamela; Verzilli, Claudio; Shen, Judong; Tang, Zhengzheng; Bacanu, Silviu-Alin; Fraser, Dana; Warren, Liling; Aponte, Jennifer; Zawistowski, Matthew; Liu, Xiao; Zhang, Hao; Zhang, Yong; Li, Jun; Li, Yun; Li, Li; Woollard, Peter; Topp, Simon; Hall, Matthew D.; Nangle, Keith; Wang, Jun; Abecasis, Gonçalo; Cardon, Lon R.; Zöllner, Sebastian; Whittaker, John C.; Chissoe, Stephanie L.; Novembre, John; Mooser, Vincent

    2015-01-01

    Rare genetic variants contribute to complex disease risk; however, the abundance of rare variants in human populations remains unknown. We explored this spectrum of variation by sequencing 202 genes encoding drug targets in 14,002 individuals. We find rare variants are abundant (one every 17 bases) and geographically localized, such that even with large sample sizes, rare variant catalogs will be largely incomplete. We used the observed patterns of variation to estimate population growth parameters, the proportion of variants in a given frequency class that are putatively deleterious, and mutation rates for each gene. Overall we conclude that, due to rapid population growth and weak purifying selection, human populations harbor an abundance of rare variants, many of which are deleterious and have relevance to understanding disease risk. PMID:22604722

  8. The choice of sample size: a mixed Bayesian / frequentist approach.

    PubMed

    Pezeshk, Hamid; Nematollahi, Nader; Maroufy, Vahed; Gittins, John

    2009-04-01

    Sample size computations are largely based on frequentist or classical methods. In the Bayesian approach the prior information on the unknown parameters is taken into account. In this work we consider a fully Bayesian approach to the sample size determination problem which was introduced by Grundy et al. and developed by Lindley. This approach treats the problem as a decision problem and employs a utility function to find the optimal sample size of a trial. Furthermore, we assume that a regulatory authority, which is deciding on whether or not to grant a licence to a new treatment, uses a frequentist approach. We then find the optimal sample size for the trial by maximising the expected net benefit, which is the expected benefit of subsequent use of the new treatment minus the cost of the trial.

  9. Bayesian analysis of time-series data under case-crossover designs: posterior equivalence and inference.

    PubMed

    Li, Shi; Mukherjee, Bhramar; Batterman, Stuart; Ghosh, Malay

    2013-12-01

    Case-crossover designs are widely used to study short-term exposure effects on the risk of acute adverse health events. While the frequentist literature on this topic is vast, there is no Bayesian work in this general area. The contribution of this paper is twofold. First, the paper establishes Bayesian equivalence results that require characterization of the set of priors under which the posterior distributions of the risk ratio parameters based on a case-crossover and time-series analysis are identical. Second, the paper studies inferential issues under case-crossover designs in a Bayesian framework. Traditionally, a conditional logistic regression is used for inference on risk-ratio parameters in case-crossover studies. We consider instead a more general full likelihood-based approach which makes less restrictive assumptions on the risk functions. Formulation of a full likelihood leads to growth in the number of parameters proportional to the sample size. We propose a semi-parametric Bayesian approach using a Dirichlet process prior to handle the random nuisance parameters that appear in a full likelihood formulation. We carry out a simulation study to compare the Bayesian methods based on full and conditional likelihood with the standard frequentist approaches for case-crossover and time-series analysis. The proposed methods are illustrated through the Detroit Asthma Morbidity, Air Quality and Traffic study, which examines the association between acute asthma risk and ambient air pollutant concentrations. © 2013, The International Biometric Society.
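The semi-parametric piece above, a Dirichlet process prior over the nuisance parameters, is often implemented via the stick-breaking construction. A minimal sketch of the truncated weights (illustrative, not the paper's sampler):

```python
import random

def dp_stick_breaking(alpha, n_atoms, rng):
    # Truncated stick-breaking construction of Dirichlet-process weights:
    # break a unit stick with Beta(1, alpha) fractions; smaller alpha
    # concentrates mass on fewer atoms.
    weights, remaining = [], 1.0
    for _ in range(n_atoms):
        v = rng.betavariate(1.0, alpha)
        weights.append(remaining * v)
        remaining *= 1.0 - v
    return weights
```

Each weight would be paired with an atom drawn from the base measure over the nuisance parameters; truncation at `n_atoms` leaves a small remainder of unassigned mass.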

  10. High prevalence of Legionella in non-passenger merchant vessels.

    PubMed

    Collins, S L; Stevenson, D; Mentasti, M; Shaw, A; Johnson, A; Crossley, L; Willis, C

    2017-03-01

    There is a paucity of information on the risk from potable water in non-passenger merchant vessels (NPMVs) particularly with regard to Legionella and other bacteria. This retrospective study examined water samples from 550 NPMVs docked in eight UK ports. A total of 1027 samples from 412 NPMVs were examined for total aerobic colony counts (ACC), coliforms, Escherichia coli and enterococci; 41% of samples yielded ACC above the action level (>1 × 103 c.f.u./ml) and 4·5% contained actionable levels (>1 c.f.u./100 ml) of faecal indicator bacteria. Eight hundred and three samples from 360 NPMVs were cultured specifically for Legionella and 58% of vessels proved positive for these organisms with 27% of samples showing levels greater than the UK upper action limit of 1 × 103 c.f.u./l. Cabin showers (49%) and hospital shower (45%) were frequently positive. A subset of 106 samples was analysed by quantitative polymerase chain reaction for Legionella and identified a further 11 Legionella-positive NPMVs, returning a negative predictive value of 100%. There was no correlation between NPMV age or size and any microbial parameters (P > 0·05). Legionella pneumophila serogroup 1 was isolated from 46% of NPMVs and sequence-based typing of 17 isolates revealed four sequence types (STs) previously associated with human disease. These data raise significant concerns regarding the management of microbial and Legionella risks on board NPMVs and suggest that better guidance and compliance are required to improve control.

  11. Aeromechanics and Vehicle Configuration Demonstrations. Volume 2: Understanding Vehicle Sizing, Aeromechanics and Configuration Trades, Risks, and Issues for Next-Generations Access to Space Vehicles

    DTIC Science & Technology

    2014-01-01

and proportional correctors. The weighting function evaluates nearby data samples to determine the utility of each correction style, eliminating the...sparse methods may be of use. As for other multi-fidelity techniques, true cokriging in the style described by geo-statisticians [93] is beyond the...sampling style between sampling points predicted to fall near the contour and sampling points predicted to be farther from the contour but with

  12. Operational Risk Measurement of Chinese Commercial Banks Based on Extreme Value Theory

    NASA Astrophysics Data System (ADS)

    Song, Jiashan; Li, Yong; Ji, Feng; Peng, Cheng

    Financial institutions and supervisory bodies agree on the need to strengthen the measurement and management of operational risk. This paper builds a model of operational risk losses based on the Peaks Over Threshold (POT) approach, with emphasis on a weighted least squares technique that improves Hill's estimation method in the small-sample setting, and fixes the sample threshold more objectively based on media-published data on operational risk losses at major Chinese banks from 1994 to 2007.
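The POT machinery referenced above rests on estimating the tail index of the loss distribution. Hill's classical estimator, the baseline that the paper refines with weighted least squares, can be sketched as follows; the synthetic Pareto data are illustrative, not the paper's bank-loss data:

```python
import math

def hill_estimator(losses, threshold):
    """Hill's estimator of the tail index: the mean of log(x / threshold)
    over the losses that exceed the threshold."""
    exceedances = [x for x in losses if x > threshold]
    if not exceedances:
        raise ValueError("no losses exceed the threshold")
    return sum(math.log(x / threshold) for x in exceedances) / len(exceedances)

# Synthetic data: exact quantiles of a Pareto distribution with tail
# index 0.5, so the estimator should recover roughly 0.5.
n = 1000
losses = [((i + 0.5) / n) ** -0.5 for i in range(n)]
xi_hat = hill_estimator(losses, threshold=2.0)
```

The weighted-least-squares refinement discussed in the paper replaces this simple average with a regression fit over the upper order statistics, which is less sensitive to threshold choice in small samples.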

  13. Do conscientious individuals live longer? A quantitative review.

    PubMed

    Kern, Margaret L; Friedman, Howard S

    2008-09-01

    Following up on growing evidence that higher levels of conscientiousness are associated with greater health protection, the authors conducted a meta-analysis of the association between conscientiousness-related traits and longevity. Using a random-effects analysis model, the authors statistically combined 20 independent samples. In addition, the authors used fixed-effects analyses to examine specific facets of conscientiousness and study characteristics as potential moderators of this relationship. Effect sizes were computed for each individual sample as the correlation coefficient r, based on the relationship between conscientiousness and mortality risk (all-cause mortality risk, longevity, or length of survival). Higher levels of conscientiousness were significantly and positively related to longevity (r = .11, 95% confidence interval = .05-.17). Associations were strongest for the achievement (persistent, industrious) and order (organized, disciplined) facets of conscientiousness. Results strongly support the importance of conscientiousness-related traits to health across the life span. Future research and interventions should consider how individual differences in conscientiousness may cause and be shaped by health-relevant biopsychosocial events across many years. PsycINFO Database Record (c) 2008 APA, all rights reserved.
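Pooling correlation coefficients as in the meta-analysis above is conventionally done on Fisher's z scale. A minimal fixed-effect sketch (the r and n values below are illustrative, not the review's 20 samples):

```python
import math

def pool_correlations(rs, ns):
    """Fixed-effect pooling of correlations on Fisher's z scale.

    Each r maps to z = atanh(r) with variance 1/(n - 3); the weighted
    mean z and its 95% CI are back-transformed with tanh."""
    ws = [n - 3 for n in ns]
    z_bar = sum(w * math.atanh(r) for w, r in zip(ws, rs)) / sum(ws)
    se = 1.0 / math.sqrt(sum(ws))
    return (math.tanh(z_bar),
            (math.tanh(z_bar - 1.96 * se), math.tanh(z_bar + 1.96 * se)))

r_pool, (lo, hi) = pool_correlations([0.10, 0.20], [100, 200])
```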

  14. Risk profiling of cattle farms as a potential tool in risk-based surveillance for Mycobacterium bovis infection among cattle in tuberculosis-free areas.

    PubMed

    Ribeiro-Lima, Joao; Schwabenlander, Stacey; Oakes, Michael; Thompson, Beth; Wells, Scott J

    2016-06-15

    OBJECTIVE To develop a cattle herd risk-profiling system that could potentially inform risk-based surveillance strategies for Mycobacterium bovis infection in cattle and provide information that could be used to help direct resource allocation by a state agency for this purpose. DESIGN Cross-sectional study. SAMPLE Records for any size movement (importation) of cattle into Minnesota from other US states during 2009 (n = 7,185) and 2011 (8,107). PROCEDURES Data from certificates of veterinary inspection were entered into a spreadsheet. Movement data were summarized at premises and county levels, and for each level, the distribution of cattle moved and number of movements were evaluated. Risk profiling (assessment and categorization of risk for disease introduction) for each import movement was performed on the basis of known risk factors. Latent class analysis was used to assign movements to risk classifications with adjustment on the basis of expert opinions from personnel knowledgeable about bovine tuberculosis; these data were used to classify premises as very high, high, medium, or low risk for disease introduction. RESULTS In each year, approximately 1,500 premises imported cattle, typically beef and feeder types, with the peak of import movements during the fall season. The risk model identified 4 risk classes for cattle movements. Approximately 500 of the estimated 27,406 (2%) cattle premises in Minnesota were in the very high or high risk groups for either year; greatest density of these premises was in the southeast and southwest regions of the state. CONCLUSIONS AND CLINICAL RELEVANCE A risk-profiling approach was developed that can be applied in targeted surveillance efforts for bovine tuberculosis, particularly in disease-free areas.

  15. Personalizing lung cancer risk prediction and imaging follow-up recommendations using the National Lung Screening Trial dataset.

    PubMed

    Hostetter, Jason M; Morrison, James J; Morris, Michael; Jeudy, Jean; Wang, Kenneth C; Siegel, Eliot

    2017-11-01

    To demonstrate a data-driven method for personalizing lung cancer risk prediction using a large clinical dataset. An algorithm was used to categorize nodules found in the first screening year of the National Lung Screening Trial as malignant or nonmalignant. Risk of malignancy for nodules was calculated based on size criteria according to the Fleischner Society recommendations from 2005, along with the additional discriminators of pack-years smoking history, sex, and nodule location. Imaging follow-up recommendations were assigned according to Fleischner size category malignancy risk. Nodule size correlated with malignancy risk as predicted by the Fleischner Society recommendations. With the additional discriminators of smoking history, sex, and nodule location, significant risk stratification was observed. For example, men with ≥60 pack-years smoking history and upper lobe nodules measuring >4 and ≤6 mm demonstrated significantly increased risk of malignancy at 12.4% compared to the mean of 3.81% for similarly sized nodules (P < .0001). Based on personalized malignancy risk, 54% of nodules >4 and ≤6 mm were reclassified to longer-term follow-up than recommended by Fleischner. Twenty-seven percent of nodules ≤4 mm were reclassified to shorter-term follow-up. Using available clinical datasets such as the National Lung Screening Trial in conjunction with locally collected datasets can help clinicians provide more personalized malignancy risk predictions and follow-up recommendations. By incorporating 3 demographic data points, the risk of lung nodule malignancy within the Fleischner categories can be considerably stratified and more personalized follow-up recommendations can be made. © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com
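The personalization step described above amounts to computing observed malignancy rates within strata defined by sex, smoking history, location and nodule size. A toy sketch with hypothetical records (not the NLST data or the authors' algorithm):

```python
def stratum_rates(records):
    """Observed malignancy rate within each stratum, where a record is a
    (stratum_key, is_malignant) pair."""
    counts = {}
    for stratum, malignant in records:
        total, positives = counts.get(stratum, (0, 0))
        counts[stratum] = (total + 1, positives + int(malignant))
    return {s: positives / total for s, (total, positives) in counts.items()}

# Hypothetical records keyed by (sex, pack-years band, lobe, size band).
records = [(("M", "60+", "upper", "4-6mm"), True),
           (("M", "60+", "upper", "4-6mm"), False)]
rates = stratum_rates(records)
```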

  16. High-density genotyping of immune loci in Koreans and Europeans identifies eight new rheumatoid arthritis risk loci.

    PubMed

    Kim, Kwangwoo; Bang, So-Young; Lee, Hye-Soon; Cho, Soo-Kyung; Choi, Chan-Bum; Sung, Yoon-Kyoung; Kim, Tae-Hwan; Jun, Jae-Bum; Yoo, Dae Hyun; Kang, Young Mo; Kim, Seong-Kyu; Suh, Chang-Hee; Shim, Seung-Cheol; Lee, Shin-Seok; Lee, Jisoo; Chung, Won Tae; Choe, Jung-Yoon; Shin, Hyoung Doo; Lee, Jong-Young; Han, Bok-Ghee; Nath, Swapan K; Eyre, Steve; Bowes, John; Pappas, Dimitrios A; Kremer, Joel M; Gonzalez-Gay, Miguel A; Rodriguez-Rodriguez, Luis; Ärlestig, Lisbeth; Okada, Yukinori; Diogo, Dorothée; Liao, Katherine P; Karlson, Elizabeth W; Raychaudhuri, Soumya; Rantapää-Dahlqvist, Solbritt; Martin, Javier; Klareskog, Lars; Padyukov, Leonid; Gregersen, Peter K; Worthington, Jane; Greenberg, Jeffrey D; Plenge, Robert M; Bae, Sang-Cheol

    2015-03-01

    A highly polygenic aetiology and high degree of allele-sharing between ancestries have been well elucidated in genetic studies of rheumatoid arthritis. Recently, the high-density genotyping array Immunochip for immune disease loci identified 14 new rheumatoid arthritis risk loci among individuals of European ancestry. Here, we aimed to identify new rheumatoid arthritis risk loci using Korean-specific Immunochip data. We analysed Korean rheumatoid arthritis case-control samples using the Immunochip and genome-wide association studies (GWAS) array to search for new risk alleles of rheumatoid arthritis with anticitrullinated peptide antibodies. To increase power, we performed a meta-analysis of Korean data with previously published European Immunochip and GWAS data for a total sample size of 9299 Korean and 45,790 European case-control samples. We identified eight new rheumatoid arthritis susceptibility loci (TNFSF4, LBH, EOMES, ETS1-FLI1, COG6, RAD51B, UBASH3A and SYNGR1) that passed a genome-wide significance threshold (p < 5 × 10⁻⁸), with evidence for three independent risk alleles at 1q25/TNFSF4. The risk alleles from the seven new loci other than the TNFSF4 locus (monomorphic in Koreans), together with risk alleles from previously established RA risk loci, exhibited a high correlation of effect sizes between ancestries. Further, we refined the number of single nucleotide polymorphisms (SNPs) that represent potentially causal variants through a trans-ethnic comparison of densely genotyped SNPs. This study demonstrates the advantage of dense-mapping and trans-ancestral analysis for identification of potentially causal SNPs. In addition, our findings support the importance of T cells in the pathogenesis of rheumatoid arthritis and the frequent overlap of risk loci among diverse autoimmune diseases. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
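The trans-ancestral meta-analysis step described above is typically a fixed-effect inverse-variance combination of per-study log odds ratios, tested against the genome-wide threshold. A minimal sketch with made-up effect estimates (not the study's summary statistics):

```python
import math

def inverse_variance_meta(betas, ses):
    """Fixed-effect inverse-variance meta-analysis of per-study log odds
    ratios; returns the pooled estimate, its SE and a two-sided p-value."""
    ws = [1.0 / se ** 2 for se in ses]
    beta = sum(w * b for w, b in zip(ws, betas)) / sum(ws)
    se = 1.0 / math.sqrt(sum(ws))
    z = beta / se
    p = math.erfc(abs(z) / math.sqrt(2.0))  # two-sided normal p-value
    return beta, se, p

# Hypothetical Korean and European log odds ratios for one SNP.
beta, se, p = inverse_variance_meta([0.20, 0.25], [0.03, 0.04])
genome_wide = p < 5e-8  # the significance threshold used in the study
```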

  17. The effectiveness of problem-based learning on development of nursing students' critical thinking: a systematic review and meta-analysis.

    PubMed

    Kong, Ling-Na; Qin, Bo; Zhou, Ying-qing; Mou, Shao-yu; Gao, Hui-Ming

    2014-03-01

    The objective of this systematic review and meta-analysis was to estimate the effectiveness of problem-based learning in developing nursing students' critical thinking. Searches of PubMed, EMBASE, Cumulative Index to Nursing and Allied Health Literature (CINAHL), Proquest, Cochrane Central Register of Controlled Trials (CENTRAL) and China National Knowledge Infrastructure (CNKI) were undertaken to identify randomized controlled trials from 1965 to December 2012, comparing problem-based learning with traditional lectures on the effectiveness of development of nursing students' critical thinking, with no language limitation. The MeSH terms or key words used in the search were problem-based learning, thinking, critical thinking, nursing, nursing education, nurse education, nurse students, nursing students and pupil nurse. Two reviewers independently assessed eligibility and extracted data. Quality assessment was conducted independently by two reviewers using the Cochrane Collaboration's Risk of Bias Tool. We analyzed critical thinking scores (continuous outcomes) using a standardized mean difference (SMD) or weighted mean difference (WMD) with 95% confidence intervals (CIs). Heterogeneity was assessed using Cochran's Q and I² statistics. Publication bias was assessed by means of funnel plot and Egger's test of asymmetry. Nine articles representing eight randomized controlled trials were included in the meta-analysis. Most studies were at low risk of bias. The pooled effect size showed problem-based learning was able to improve nursing students' critical thinking (overall critical thinking scores SMD=0.33, 95%CI=0.13-0.52, P=0.0009), compared with traditional lectures. There was low heterogeneity (overall critical thinking scores I² = 45%, P=0.07) in the meta-analysis. No significant publication bias was observed regarding overall critical thinking scores (P=0.536). Sensitivity analysis showed that the result of our meta-analysis was reliable.
Most effect sizes for subscales of the California Critical Thinking Dispositions Inventory (CCTDI) and Bloom's Taxonomy favored problem-based learning, while effect sizes for all subscales of the California Critical Thinking Skills Test (CCTST) and most subscales of the Watson-Glaser Critical Thinking Appraisal (WGCTA) were inconclusive. The results of the current meta-analysis indicate that problem-based learning might help nursing students to improve their critical thinking. More research with larger sample sizes and high quality in different nursing educational contexts is required. Copyright © 2013 Elsevier Ltd. All rights reserved.
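The random-effects pooling and I² heterogeneity statistic reported above can be sketched with the DerSimonian-Laird method; the effect sizes and variances below are illustrative inputs, not the review's extracted data:

```python
import math

def dersimonian_laird(effects, variances):
    """DerSimonian-Laird random-effects pooling with Cochran's Q and I²."""
    ws = [1.0 / v for v in variances]
    fixed = sum(w * e for w, e in zip(ws, effects)) / sum(ws)
    q = sum(w * (e - fixed) ** 2 for w, e in zip(ws, effects))
    df = len(effects) - 1
    c = sum(ws) - sum(w * w for w in ws) / sum(ws)
    tau2 = max(0.0, (q - df) / c)          # between-study variance
    ws_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(w * e for w, e in zip(ws_re, effects)) / sum(ws_re)
    se = 1.0 / math.sqrt(sum(ws_re))
    i2 = max(0.0, (q - df) / q * 100.0) if q > 0 else 0.0
    return pooled, se, i2

pooled, se, i2 = dersimonian_laird([0.30, 0.40], [0.01, 0.01])
```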

  18. The Influence of Framing on Risky Decisions: A Meta-analysis.

    PubMed

    Kühberger

    1998-07-01

    In framing studies, logically equivalent choice situations are differently described and the resulting preferences are studied. A meta-analysis of framing effects is presented for risky choice problems which are framed either as gains or as losses. This evaluates the finding that highlighting the positive aspects of formally identical problems does lead to risk aversion and that highlighting their equivalent negative aspects does lead to risk seeking. Based on a data pool of 136 empirical papers that reported framing experiments with nearly 30,000 participants, we calculated 230 effect sizes. Results show that the overall framing effect between conditions is of small to moderate size and that profound differences exist between research designs. Potentially relevant characteristics were coded for each study. The most important characteristics were whether framing is manipulated by changing reference points or by manipulating outcome salience, and response mode (choice vs. rating/judgment). Further important characteristics were whether options differ qualitatively or quantitatively in risk, whether there is one or multiple risky events, whether framing is manipulated by gain/loss or by task-responsive wording, whether dependent variables are measured between- or within- subjects, and problem domains. Sample (students vs. target populations) and unit of analysis (individual vs. group) was not influential. It is concluded that framing is a reliable phenomenon, but that outcome salience manipulations, which constitute a considerable amount of work, have to be distinguished from reference point manipulations and that procedural features of experimental settings have a considerable effect on effect sizes in framing experiments. Copyright 1998 Academic Press.

  19. Noninferiority trial designs for odds ratios and risk differences.

    PubMed

    Hilton, Joan F

    2010-04-30

    This study presents constrained maximum likelihood derivations of the design parameters of noninferiority trials for binary outcomes with the margin defined on the odds ratio (ψ) or risk-difference (δ) scale. The derivations show that, for trials in which the group-specific response rates are equal under the point-alternative hypothesis, the common response rate, π(N), is a fixed design parameter whose value lies between the control and experimental rates hypothesized at the point-null, {π(C), π(E)}. We show that setting π(N) equal to the value of π(C) that holds under H(0) underestimates the overall sample size requirement. Given {π(C), ψ} or {π(C), δ} and the type I and II error rates, our algorithm finds clinically meaningful design values of π(N), and the corresponding minimum asymptotic sample size, N=n(E)+n(C), and optimal allocation ratio, γ=n(E)/n(C). We find that optimal allocations are increasingly imbalanced as ψ increases, with γ(ψ)<1 and γ(δ)≈1/γ(ψ), and that ranges of allocation ratios map to the minimum sample size. The latter characteristic allows trialists to consider trade-offs between optimal allocation at a smaller N and a preferred allocation at a larger N. For designs with relatively large margins (e.g. ψ>2.5), trial results that are presented on both scales will differ in power, with more power lost if the study is designed on the risk-difference scale and reported on the odds ratio scale than vice versa. 2010 John Wiley & Sons, Ltd.
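For orientation only: the paper derives its design parameters by constrained maximum likelihood, but the familiar normal-approximation sample size for a noninferiority trial on the risk-difference scale with 1:1 allocation is a useful reference point:

```python
import math

def ni_sample_size_rd(p_c, p_e, margin):
    """Per-group sample size for a noninferiority trial on the risk-
    difference scale: 1:1 allocation, one-sided alpha 0.025, power 0.90.
    Simple normal approximation, not the paper's constrained-ML method."""
    z_alpha = 1.959964  # Phi^-1(0.975)
    z_beta = 1.281552   # Phi^-1(0.90)
    variance = p_c * (1 - p_c) + p_e * (1 - p_e)
    return math.ceil((z_alpha + z_beta) ** 2 * variance
                     / (p_e - p_c + margin) ** 2)

# Equal true response rates of 0.80 and a margin of 0.10.
n_per_group = ni_sample_size_rd(0.80, 0.80, 0.10)
```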

  20. Assessment of region, farming system, irrigation source and sampling time as food safety risk factors for tomatoes.

    PubMed

    Pagadala, Sivaranjani; Marine, Sasha C; Micallef, Shirley A; Wang, Fei; Pahl, Donna M; Melendez, Meredith V; Kline, Wesley L; Oni, Ruth A; Walsh, Christopher S; Everts, Kathryne L; Buchanan, Robert L

    2015-03-02

    In the mid-Atlantic region of the United States, small- and medium-sized farmers use varied farm management methods and water sources to produce tomatoes. It is unclear whether these practices affect the food safety risk for tomatoes. This study was conducted to determine the prevalence of, and assess risk factors for, Salmonella enterica, Shiga toxin-producing Escherichia coli (STEC) and bacterial indicators in pre-harvest tomatoes and their production areas. A total of 24 organic and conventional, small- to medium-sized farms were sampled for six weeks in Maryland (MD), Delaware (DE) and New Jersey (NJ) between July and September 2012, and analyzed for indicator bacteria, Salmonella and STEC. A total of 422 samples--tomato fruit, irrigation water, compost, field soil and pond sediment samples--were collected, 259 of which were tomato samples. Low levels of the Salmonella-specific invA gene and Shiga toxin genes (stx1 or stx2) were detected, but no Salmonella or STEC isolates were recovered. Of the 422 samples analyzed, 9.5% were positive for generic E. coli, found in 5.4% (n=259) of tomato fruits, 22.5% (n=102) of irrigation water, 8.9% (n=45) of soil, 3/9 of pond sediment and 0/7 of compost samples. For tomato fruit, farming system (organic versus conventional) was not a significant factor for levels of indicator bacteria. However, the total number of organic tomato samples positive for generic E. coli (1.6%; 2/129) was significantly lower than for conventional tomatoes (6.9%; 9/130) (χ²(1) = 4.60, p = 0.032). Region was a significant factor for levels of Total Coliforms (TC) (p=0.046), although differences were marginal, with western MD having the highest TC counts (2.6 log CFU/g) and NJ having the lowest (2.0 log CFU/g). Tomatoes touching the ground or plastic mulch harbored significantly higher levels of TC compared to vine tomatoes, signaling a potential risk factor.
Source of irrigation water was a significant factor for all indicator bacteria (p<0.0001), and groundwater had lower bacterial levels than surface water. End of line surface water samples were not significantly different from source water samples, but end of line groundwater samples had significantly higher bacterial counts than source (p<0.0001), suggesting that Good Agricultural Practices that focus on irrigation line maintenance might be beneficial. In general, local effects other than cropping practices, including topography, land use and adjacent industries, might be important factors contributing to microbiological inputs on small- and medium-sized farms in the mid-Atlantic region. Copyright © 2014 Elsevier B.V. All rights reserved.
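The organic-versus-conventional comparison above uses a Pearson chi-square test on a 2×2 table; recomputing it from the reported counts (2 of 129 organic vs. 9 of 130 conventional positives) reproduces the quoted statistic:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic (1 df, no continuity correction)
    for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Rows: organic (2 positive, 127 negative), conventional (9 positive, 121 negative).
stat = chi_square_2x2(2, 127, 9, 121)  # about 4.60, as reported
```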

  1. Probabilistic Risk Assessment of a Turbine Disk

    NASA Astrophysics Data System (ADS)

    Carter, Jace A.; Thomas, Michael; Goswami, Tarun; Fecke, Ted

    Current Federal Aviation Administration (FAA) rotor design certification practices assess risk using a probabilistic framework focused only on the life-limiting defect location of a component. This method generates conservative approximations of the operational risk. The first section of this article covers a discretization method, which allows for a transition from this relative risk to an absolute risk where the component is discretized into regions called zones. General guidelines were established for the zone-refinement process based on the stress gradient topology in order to reach risk convergence. The second section covers a risk assessment method for predicting the total fatigue life due to fatigue-induced damage. The total fatigue life incorporates a dual-mechanism approach including the crack initiation life and propagation life while simultaneously determining the associated initial flaw sizes. A microstructure-based model was employed to address uncertainties in material response and relate crack initiation life with crack size, while propagation life was characterized by large crack growth laws. The two proposed methods were applied to a representative Inconel 718 turbine disk. The zone-based method reduces the conservatism of current approaches, while showing the effects of feature-based inspection on the risk assessment. In the fatigue damage assessment, the predicted initial crack distribution was found to be the most sensitive probabilistic parameter and can be used to establish enhanced inspection planning.
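One step of a zone-based assessment is aggregating per-zone failure probabilities into a component-level risk. A minimal sketch, assuming independent zone failure events (an assumption made here for illustration, not stated in the abstract):

```python
def component_risk(zone_failure_probs):
    """Probability that at least one zone fails, assuming independent
    zone failure events: 1 - product of per-zone survival probabilities."""
    survive = 1.0
    for p in zone_failure_probs:
        survive *= 1.0 - p
    return 1.0 - survive

# Two hypothetical zones with failure probabilities 0.1 and 0.2.
risk = component_risk([0.1, 0.2])
```

Zone refinement converges when further subdivision no longer changes this aggregate appreciably.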

  2. ENHANCEMENT OF LEARNING ON SAMPLE SIZE CALCULATION WITH A SMARTPHONE APPLICATION: A CLUSTER-RANDOMIZED CONTROLLED TRIAL.

    PubMed

    Ngamjarus, Chetta; Chongsuvivatwong, Virasakdi; McNeil, Edward; Holling, Heinz

    2017-01-01

    Sample size determination is usually taught based on theory and is difficult to understand. Using a smartphone application to teach sample size calculation ought to be more attractive to students than using lectures only. This study compared levels of understanding of sample size calculations for research studies between participants attending a lecture only versus lecture combined with using a smartphone application to calculate sample sizes, to explore factors affecting level of post-test score after training sample size calculation, and to investigate participants' attitude toward a sample size application. A cluster-randomized controlled trial involving a number of health institutes in Thailand was carried out from October 2014 to March 2015. A total of 673 professional participants were enrolled and randomly allocated to one of two groups, namely, 341 participants in 10 workshops to the control group and 332 participants in 9 workshops to the intervention group. Lectures on sample size calculation were given in the control group, while lectures combined with a smartphone application were given to the intervention group. Participants in the intervention group had better learning of sample size calculation (2.7 points out of a maximum of 10, 95% CI: 2.4 - 2.9) than the participants in the control group (1.6 points, 95% CI: 1.4 - 1.8). Participants doing research projects had a higher post-test score than those who did not have a plan to conduct research projects (0.9 point, 95% CI: 0.5 - 1.4). The majority of the participants had a positive attitude towards the use of a smartphone application for learning sample size calculation.
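Sample size teaching for cluster-randomized designs like this trial usually covers the design effect, which inflates a simple-random-sample size to account for within-cluster (here, within-workshop) correlation. A small illustrative sketch; the cluster size and ICC values are hypothetical:

```python
import math

def design_effect(cluster_size, icc):
    """Design effect 1 + (m - 1) * ICC: the factor by which within-cluster
    correlation inflates the required sample size."""
    return 1.0 + (cluster_size - 1) * icc

def cluster_adjusted_n(n_srs, cluster_size, icc):
    """Total sample size after inflating a simple-random-sample size."""
    return math.ceil(n_srs * design_effect(cluster_size, icc))

deff = design_effect(9, 0.125)          # 9 participants per cluster
n_total = cluster_adjusted_n(100, 9, 0.125)
```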

  3. Risk of second primary cancers after testicular cancer in East and West Germany: A focus on contralateral testicular cancers

    PubMed Central

    Rusner, Carsten; Streller, Brigitte; Stegmaier, Christa; Trocchi, Pietro; Kuss, Oliver; McGlynn, Katherine A; Trabert, Britton; Stang, Andreas

    2014-01-01

    Testicular cancer survival rates improved dramatically after cisplatin-based therapy was introduced in the 1970s. However, chemotherapy and radiation therapy are potentially carcinogenic. The purpose of this study was to estimate the risk of developing second primary cancers, including the risk associated with primary histologic type (seminoma and non-seminoma), among testicular cancer survivors in Germany. We identified 16 990 and 1401 cases of testicular cancer in population-based cancer registries of East Germany (1961–1989 and 1996–2008) and Saarland (a federal state in West Germany; 1970–2008), respectively. We estimated the risk of a second primary cancer using standardized incidence ratios (SIRs) with 95% confidence intervals (95% CIs). To determine trends, we plotted model-based estimated annual SIRs. In East Germany, a total of 301 second primary cancers of any location were observed between 1961 and 1989 (SIR: 1.9; 95% CI: 1.7–2.1), and 159 cancers (any location) were observed between 1996 and 2008 (SIR: 1.7; 95% CI: 1.4–2.0). The SIRs for contralateral testicular cancer were increased in both registries, ranging from 6.0 in Saarland to 13.9 in East Germany. The SIR for seminoma, in particular, was higher in East Germany compared to the other registries. We observed constant trends in the model-based SIRs for contralateral testicular cancers. The majority of reported SIRs for other cancer sites, including histology-specific risks, showed low precision, likely due to small sample sizes. Testicular cancer patients are at increased risk, especially for cancers of the contralateral testis, and should receive intensive follow-ups. PMID:24407180
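The SIRs and confidence intervals above are ratios of observed to expected case counts. A sketch using Byar's approximation to the exact Poisson limits; the expected count is back-calculated from the reported SIR purely for illustration:

```python
import math

def sir_with_ci(observed, expected, z=1.96):
    """Standardized incidence ratio O/E with a 95% CI from Byar's
    approximation to the exact Poisson limits on the observed count."""
    o, e = observed, expected
    lo = o * (1 - 1 / (9 * o) - z / (3 * math.sqrt(o))) ** 3 / e
    hi = (o + 1) * (1 - 1 / (9 * (o + 1)) + z / (3 * math.sqrt(o + 1))) ** 3 / e
    return o / e, lo, hi

# 301 observed second cancers (as in the abstract); expected count of
# 158.4 is a back-calculated illustration, not a published figure.
ratio, lo, hi = sir_with_ci(301, 158.4)
```

With these inputs the interval reproduces the reported 1.9 (1.7–2.1).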

  4. The role of FKBP5 genotype in moderating long-term effectiveness of exposure-based psychotherapy for posttraumatic stress disorder.

    PubMed

    Wilker, S; Pfeiffer, A; Kolassa, S; Elbert, T; Lingenfelder, B; Ovuga, E; Papassotiropoulos, A; de Quervain, D; Kolassa, I-T

    2014-06-24

    Exposure-based therapies are considered the state-of-the-art treatment for Posttraumatic Stress Disorder (PTSD). Yet, a substantial number of PTSD patients do not recover after therapy. In the light of the well-known gene × environment interactions on the risk for PTSD, research on individual genetic factors that influence treatment success is warranted. The gene encoding FK506-binding protein 51 (FKBP5), a co-chaperone of the glucocorticoid receptor (GR), has been associated with stress reactivity and PTSD risk. As FKBP5 single-nucleotide polymorphism rs1360780 has a putative functional role in the regulation of FKBP5 expression and GR sensitivity, we hypothesized that this polymorphism influences PTSD treatment success. We investigated the effects of FKBP5 rs1360780 genotype on Narrative Exposure Therapy (NET) outcome, an exposure-based short-term therapy, in a sample of 43 survivors of the rebel war in Northern Uganda. PTSD symptom severity was assessed before and 4 and 10 months after treatment completion. At the 4-month follow-up, there were no genotype-dependent differences in therapy outcome. However, the FKBP5 genotype significantly moderated the long-term effectiveness of exposure-based psychotherapy. At the 10-month follow-up, carriers of the rs1360780 risk (T) allele were at increased risk of symptom relapse, whereas non-carriers showed continuous symptom reduction. This effect was reflected in a weaker treatment effect size (Cohen's D=1.23) in risk allele carriers compared with non-carriers (Cohen's D=3.72). Genetic factors involved in stress response regulation seem to not only influence PTSD risk but also responsiveness to psychotherapy and could hence represent valuable targets for accompanying medication.
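The treatment effect sizes quoted above (Cohen's D) divide a difference in means by a pooled standard deviation. An illustrative computation with made-up score samples, not the study's PTSD severity data:

```python
import math

def cohens_d(group_a, group_b):
    """Cohen's d: difference in means divided by the pooled SD
    (sample variances with n - 1 denominators)."""
    n1, n2 = len(group_a), len(group_b)
    m1, m2 = sum(group_a) / n1, sum(group_b) / n2
    v1 = sum((x - m1) ** 2 for x in group_a) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group_b) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

d = cohens_d([10, 12, 14], [4, 6, 8])  # hypothetical pre/post scores
```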

  5. The role of FKBP5 genotype in moderating long-term effectiveness of exposure-based psychotherapy for posttraumatic stress disorder

    PubMed Central

    Wilker, S; Pfeiffer, A; Kolassa, S; Elbert, T; Lingenfelder, B; Ovuga, E; Papassotiropoulos, A; de Quervain, D; Kolassa, I-T

    2014-01-01

    Exposure-based therapies are considered the state-of-the-art treatment for Posttraumatic Stress Disorder (PTSD). Yet, a substantial number of PTSD patients do not recover after therapy. In the light of the well-known gene × environment interactions on the risk for PTSD, research on individual genetic factors that influence treatment success is warranted. The gene encoding FK506-binding protein 51 (FKBP5), a co-chaperone of the glucocorticoid receptor (GR), has been associated with stress reactivity and PTSD risk. As FKBP5 single-nucleotide polymorphism rs1360780 has a putative functional role in the regulation of FKBP5 expression and GR sensitivity, we hypothesized that this polymorphism influences PTSD treatment success. We investigated the effects of FKBP5 rs1360780 genotype on Narrative Exposure Therapy (NET) outcome, an exposure-based short-term therapy, in a sample of 43 survivors of the rebel war in Northern Uganda. PTSD symptom severity was assessed before and 4 and 10 months after treatment completion. At the 4-month follow-up, there were no genotype-dependent differences in therapy outcome. However, the FKBP5 genotype significantly moderated the long-term effectiveness of exposure-based psychotherapy. At the 10-month follow-up, carriers of the rs1360780 risk (T) allele were at increased risk of symptom relapse, whereas non-carriers showed continuous symptom reduction. This effect was reflected in a weaker treatment effect size (Cohen's D=1.23) in risk allele carriers compared with non-carriers (Cohen's D=3.72). Genetic factors involved in stress response regulation seem to not only influence PTSD risk but also responsiveness to psychotherapy and could hence represent valuable targets for accompanying medication. PMID:24959896

  6. Science-based decision making in a high-risk energy production environment

    NASA Astrophysics Data System (ADS)

    Weiser, D. A.

    2016-12-01

    Energy production practices that may induce earthquakes require decisions about acceptable risk before projects begin. How much ground shaking, structural damage, infrastructure damage, or delay of geothermal power and other operations is tolerable? I review a few mitigation strategies as well as existing protocol in several U.S. states. Timely and accurate scientific information can assist in determining the costs and benefits of altering production parameters. These issues can also be addressed with probability estimates of adverse effects ("costs"), frequency of earthquakes of different sizes, and associated impacts of different magnitude earthquakes. When risk management decisions based on robust science are well-communicated to stakeholders, mitigation efforts benefit. Effective communications elements include a) the risks and benefits of different actions (e.g. using a traffic light protocol); b) the factors to consider when determining acceptable risk; and c) the probability of different magnitude events. I present a case example for The Geysers geothermal field in California, to discuss locally "acceptable" and "unacceptable" earthquakes and share nearby communities' responses to smaller and larger magnitude earthquakes. I use the USGS's "Did You Feel It?" data archive to sample how often felt events occur, and how many of those are above acceptable magnitudes (to both local residents and operators). Using this information, I develop a science-based decision-making framework, in the case of potentially risky earthquakes, for lessening seismic risk and other negative consequences. This includes assessing future earthquake probabilities based on past earthquake records. One of my goals is to help characterize uncertainties in a way that they can be managed; to this end, I present simple and accessible approaches that can be used in the decision making process.
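Estimating "how often felt events occur" typically starts from a Gutenberg-Richter magnitude-frequency relation combined with a Poisson occurrence model. A minimal sketch; the a and b parameters are illustrative, not fitted values for The Geysers:

```python
import math

def gr_annual_rate(a, b, magnitude):
    """Gutenberg-Richter annual rate of events at or above a magnitude:
    log10 N(>=M) = a - b * M."""
    return 10.0 ** (a - b * magnitude)

def prob_at_least_one(rate, years):
    """Poisson probability of one or more events in the time window."""
    return 1.0 - math.exp(-rate * years)

rate_m4 = gr_annual_rate(4.0, 1.0, 4.0)      # one M>=4 event per year
p_one_year = prob_at_least_one(rate_m4, 1.0)
```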

  7. An Ai Chi-based aquatic group improves balance and reduces falls in community-dwelling adults: A pilot observational cohort study.

    PubMed

    Skinner, Elizabeth H; Dinh, Tammy; Hewitt, Melissa; Piper, Ross; Thwaites, Claire

    2016-11-01

    Falls are associated with morbidity, loss of independence, and mortality. While land-based group exercise and Tai Chi programs reduce the risk of falls, aquatic therapy may allow patients to complete balance exercises with less pain and fear of falling; however, limited data exist. The objective of the study was to pilot the implementation of an aquatic group based on Ai Chi principles (Aquabalance) and to evaluate the safety, intervention acceptability, and intervention effect sizes. Pilot observational cohort study. Forty-two outpatients underwent a single 45-minute weekly group aquatic Ai Chi-based session for eight weeks (Aquabalance). Safety was monitored using organizational reporting systems. Patient attendance, satisfaction, and self-reported falls were also recorded. Balance measures included the Timed Up and Go (TUG) test, the Four Square Step Test (FSST), and the unilateral Step Tests. Forty-two patients completed the program. It was feasible to deliver Aquabalance, as evidenced by the median (IQR) attendance rate of 8.0 (7.8, 8.0) out of 8. No adverse events occurred and participants reported high satisfaction levels. Improvements were noted on the TUG, 10-meter walk test, the Functional Reach Test, the FSST, and the unilateral step tests (p < 0.05). The proportion of patients defined as high falls risk reduced from 38% to 21%. The study was limited by its small sample size, single-center nature, and the absence of a control group. Aquabalance was safe, well-attended, and acceptable to participants. A randomized controlled assessor-blinded trial is required.

  8. Active animal health surveillance in European Union Member States: gaps and opportunities.

    PubMed

    Bisdorff, B; Schauer, B; Taylor, N; Rodríguez-Prieto, V; Comin, A; Brouwer, A; Dórea, F; Drewe, J; Hoinville, L; Lindberg, A; Martinez Avilés, M; Martínez-López, B; Peyre, M; Pinto Ferreira, J; Rushton, J; VAN Schaik, G; Stärk, K D C; Staubach, C; Vicente-Rubiano, M; Witteveen, G; Pfeiffer, D; Häsler, B

    2017-03-01

    Animal health surveillance enables the detection and control of animal diseases including zoonoses. Under the EU-FP7 project RISKSUR, a survey was conducted in 11 EU Member States and Switzerland to describe active surveillance components in 2011 managed by the public or private sector and identify gaps and opportunities. Information was collected about hazard, target population, geographical focus, legal obligation, management, surveillance design, risk-based sampling, and multi-hazard surveillance. Two countries were excluded due to incompleteness of data. Most of the 664 components targeted cattle (26.7%), pigs (17.5%) or poultry (16.0%). The most common surveillance objectives were demonstrating freedom from disease (43.8%) and case detection (26.8%). Over half of components applied risk-based sampling (57.1%), but mainly focused on a single population stratum (targeted risk-based) rather than differentiating between risk levels of different strata (stratified risk-based). About a third of components were multi-hazard (37.3%). Both risk-based sampling and multi-hazard surveillance were used more frequently in privately funded components. The study identified several gaps (e.g. lack of systematic documentation, inconsistent application of terminology) and opportunities (e.g. stratified risk-based sampling). The greater flexibility provided by the new EU Animal Health Law means that systematic evaluation of surveillance alternatives will be required to optimize cost-effectiveness.

  9. Evidence on existing caries risk assessment systems: are they predictive of future caries?

    PubMed

    Tellez, M; Gomez, J; Pretty, I; Ellwood, R; Ismail, A I

    2013-02-01

    To critically appraise evidence for the prediction of caries using four caries risk assessment (CRA) systems/guidelines (Cariogram, Caries Management by Risk Assessment (CAMBRA), American Dental Association (ADA), and American Academy of Pediatric Dentistry (AAPD)). This review focused on prospective cohort studies or randomized controlled trials. A systematic search strategy was developed to locate papers published in the Medline Ovid and Cochrane databases. The search identified 539 scientific reports; after title and abstract review, 137 were selected for full review, and 14 met the following inclusion criteria: (i) used caries incidence/increment as the validating criterion, (ii) involved human subjects and natural carious lesions, and (iii) published in peer-reviewed journals. In addition, papers were excluded if they met one or more of the following criteria: (i) incomplete description of sample selection, outcomes, or small sample size and (ii) not meeting the criteria for best evidence under the prognosis category of the Oxford Centre for Evidence-Based Medicine. There are wide variations among the systems in terms of definitions of caries risk categories, type and number of risk factors/markers, and disease indicators. The Cariogram's combined sensitivity and specificity for predicting caries in the permanent dentition ranges from 110 to 139, and it is the only system for which prospective studies have been conducted to assess its validity. The Cariogram had limited prediction utility in preschool children, and moderate to good performance for sorting elderly individuals into caries risk groups. One retrospective analysis of CAMBRA's CRA reported a higher incidence of cavitated lesions among those assessed as extreme-risk patients when compared with those at low risk. The evidence on the validity of existing systems for CRA is limited. 
It is unknown if the identification of high-risk individuals can lead to more effective long-term patient management that prevents caries initiation and arrests or reverses the progression of lesions. There is an urgent need to develop valid and reliable methods for caries risk assessment that are based on best evidence for prediction and disease management rather than opinions of experts.

  10. PTEN IDENTIFIED AS IMPORTANT RISK FACTOR OF CHRONIC OBSTRUCTIVE PULMONARY DISEASE

    PubMed Central

    Hosgood, H Dean; Menashe, Idan; He, Xingzhou; Chanock, Stephen; Lan, Qing

    2009-01-01

    Common genetic variation may play an important role in altering chronic obstructive pulmonary disease (COPD) risk. In Xuanwei, China, the COPD rate is more than twice the Chinese national average, and COPD is strongly associated with in-home coal use. To identify genetic variation that may be associated with COPD in a population with substantial in-home coal smoke exposures, we evaluated 1,261 single nucleotide polymorphisms (SNPs) in 380 candidate genes potentially relevant for cancer and other human diseases in a population-based case-control study in Xuanwei (53 cases; 107 controls). PTEN was the gene most significantly associated with COPD in a minP analysis using 20,000 permutations (P = 0.00005). SNP-based analyses found that homozygote variant carriers of PTEN rs701848 (OR for the TT genotype = 0.12, 95% CI = 0.03-0.47) had a significantly decreased risk of COPD. PTEN, or phosphatase and tensin homolog, is an important regulator of cell cycle progression and cellular survival via the AKT signaling pathway. Our exploratory analysis suggests that genetic variation in PTEN may be an important risk factor for COPD in Xuanwei. However, due to the small sample size, additional studies are needed to evaluate these associations within Xuanwei and other populations with coal smoke exposures. PMID:19625176

  11. SOARCA Peach Bottom Atomic Power Station Long-Term Station Blackout Uncertainty Analysis: Convergence of the Uncertainty Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bixler, Nathan E.; Osborn, Douglas M.; Sallaberry, Cedric Jean-Marie

    2014-02-01

    This paper describes the convergence of MELCOR Accident Consequence Code System, Version 2 (MACCS2) probabilistic results of offsite consequences for the uncertainty analysis of the State-of-the-Art Reactor Consequence Analyses (SOARCA) unmitigated long-term station blackout scenario at the Peach Bottom Atomic Power Station. The consequence metrics evaluated are individual latent-cancer fatality (LCF) risk and individual early fatality risk. Consequence results are presented as conditional risk (i.e., assuming the accident occurs, risk per event) to individuals of the public as a result of the accident. In order to verify convergence for this uncertainty analysis, as recommended by the Nuclear Regulatory Commission's Advisory Committee on Reactor Safeguards, a 'high' source term from the original population of Monte Carlo runs has been selected to be used for: (1) a study of the distribution of consequence results stemming solely from epistemic uncertainty in the MACCS2 parameters (i.e., separating the effect from the source term uncertainty), and (2) a comparison between Simple Random Sampling (SRS) and Latin Hypercube Sampling (LHS) in order to validate the original results obtained with LHS. Three replicates (each using a different random seed) of size 1,000 each using LHS and another set of three replicates of size 1,000 using SRS are analyzed. The results show that the LCF risk results are well converged with either LHS or SRS sampling. The early fatality risk results are less well converged at radial distances beyond 2 miles, and this is expected due to the sparse data (predominance of "zero" results).
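
    The replicate-based convergence check described above can be illustrated with a toy model. The sketch below compares three replicates of Simple Random Sampling against three replicates of Latin Hypercube Sampling for a stand-in consequence function with a known mean of 1/3; it assumes nothing about MACCS2 itself, and the function and sizes are chosen only for illustration:

```python
import random
import statistics

random.seed(7)

def srs(n):
    """Simple random sample of n points on [0, 1)."""
    return [random.random() for _ in range(n)]

def lhs(n):
    """Latin hypercube sample: one point jittered inside each of n equal
    strata of [0, 1), then shuffled."""
    pts = [(i + random.random()) / n for i in range(n)]
    random.shuffle(pts)
    return pts

def consequence(u):
    """Toy stand-in for a consequence model; the true mean of u**2 is 1/3."""
    return u ** 2

n, reps = 1000, 3
srs_means = [statistics.mean(consequence(u) for u in srs(n)) for _ in range(reps)]
lhs_means = [statistics.mean(consequence(u) for u in lhs(n)) for _ in range(reps)]
```

    With the same replicate size, the stratification in LHS typically yields replicate means that scatter far less around the true value than SRS means, which is why agreement between the two samplers helps validate the original LHS results.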

  12. Integrative assessment of multiple pesticides as risk factors for non-Hodgkin's lymphoma among men.

    PubMed

    De Roos, A J; Zahm, S H; Cantor, K P; Weisenburger, D D; Holmes, F F; Burmeister, L F; Blair, A

    2003-09-01

    An increased rate of non-Hodgkin's lymphoma (NHL) has been repeatedly observed among farmers, but identification of specific exposures that explain this observation has proven difficult. During the 1980s, the National Cancer Institute conducted three case-control studies of NHL in the midwestern United States. These pooled data were used to examine pesticide exposures in farming as risk factors for NHL in men. The large sample size (n = 3417) allowed analysis of 47 pesticides simultaneously, controlling for potential confounding by other pesticides in the model, and adjusting the estimates based on a prespecified variance to make them more stable. Reported use of several individual pesticides was associated with increased NHL incidence, including the organophosphate insecticides coumaphos, diazinon, and fonofos; the insecticides chlordane, dieldrin, and copper acetoarsenite; and the herbicides atrazine, glyphosate, and sodium chlorate. A subanalysis of these "potentially carcinogenic" pesticides suggested a positive trend of risk with exposure to increasing numbers of these pesticides. Consideration of multiple exposures is important in accurately estimating specific effects and in evaluating realistic exposure scenarios.

  13. Fast and precise dense grid size measurement method based on coaxial dual optical imaging system

    NASA Astrophysics Data System (ADS)

    Guo, Jiping; Peng, Xiang; Yu, Jiping; Hao, Jian; Diao, Yan; Song, Tao; Li, Ameng; Lu, Xiaowei

    2015-10-01

    Test sieves with dense grid structures are widely used in many fields, and accurate grid size calibration is critical for the success of grading analysis and test sieving. Traditional calibration methods, however, suffer from low measurement efficiency and sample too few grids, which introduces risk into quality judgments. Here, a fast and precise test sieve inspection method is presented. First, a coaxial imaging system with low- and high-magnification optical probes is designed to capture grid images of the test sieve. Then, a scaling ratio between the low- and high-magnification probes is obtained from the corresponding grids in the captured images. With this ratio, all grid dimensions in the low-magnification image can be obtained with high accuracy by measuring a few corresponding grids in the high-magnification image. Finally, by scanning the stage of the tri-axis platform of the measuring apparatus, the whole surface of the test sieve can be quickly inspected. Experimental results show that the proposed method measures test sieves more efficiently than traditional methods: it can measure 0.15 million grids (grid size 0.1 mm) within 60 seconds, and it can precisely measure grid sizes ranging from 20 μm to 5 mm. In summary, the presented method calibrates the grid size of a test sieve automatically with high efficiency and accuracy, enabling surface evaluation based on statistical methods and more reasonable quality judgments.

  14. Study design requirements for RNA sequencing-based breast cancer diagnostics.

    PubMed

    Mer, Arvind Singh; Klevebring, Daniel; Grönberg, Henrik; Rantalainen, Mattias

    2016-02-01

    Sequencing-based molecular characterization of tumors provides information required for individualized cancer treatment. There are well-defined molecular subtypes of breast cancer that provide improved prognostication compared to routine biomarkers. However, molecular subtyping is not yet implemented in routine breast cancer care. Clinical translation is dependent on subtype prediction models providing high sensitivity and specificity. In this study we evaluate sample size and RNA-sequencing read requirements for breast cancer subtyping to facilitate rational design of translational studies. We applied subsampling to ascertain the effect of training sample size and the number of RNA sequencing reads on classification accuracy of molecular subtype and routine biomarker prediction models (unsupervised and supervised). Subtype classification accuracy improved with increasing sample size up to N = 750 (accuracy = 0.93), although with a modest improvement beyond N = 350 (accuracy = 0.92). Prediction of routine biomarkers achieved accuracy of 0.94 (ER) and 0.92 (Her2) at N = 200. Subtype classification improved with RNA-sequencing library size up to 5 million reads. Development of molecular subtyping models for cancer diagnostics requires well-designed studies. Sample size and the number of RNA sequencing reads directly influence accuracy of molecular subtyping. Results in this study provide key information for rational design of translational studies aiming to bring sequencing-based diagnostics to the clinic.
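
    The subsampling design used here, training on progressively larger subsets and scoring a fixed test set, can be sketched with a toy nearest-centroid classifier on synthetic two-class data. This stands in for, and is much simpler than, the study's RNA-sequencing subtype models; all data and sizes below are artificial:

```python
import random

random.seed(0)

def make_data(n):
    """Balanced synthetic two-class data: 5 noisy features whose means
    differ by 1.0 between classes (a toy stand-in for expression profiles)."""
    data = []
    for i in range(n):
        label = i % 2
        features = [random.gauss(label, 1.0) for _ in range(5)]
        data.append((features, label))
    return data

def centroid(rows):
    dim = len(rows[0])
    return [sum(r[d] for r in rows) / len(rows) for d in range(dim)]

def train(samples):
    """Nearest-centroid classifier: store one centroid per class."""
    c0 = centroid([x for x, y in samples if y == 0])
    c1 = centroid([x for x, y in samples if y == 1])
    return c0, c1

def accuracy(model, samples):
    c0, c1 = model
    dist2 = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    hits = sum((dist2(x, c1) < dist2(x, c0)) == (y == 1) for x, y in samples)
    return hits / len(samples)

test_set = make_data(500)
curve = {n: accuracy(train(make_data(n)), test_set) for n in (10, 50, 200, 750)}
```

    Plotting `curve` against its keys gives a learning curve analogous to the accuracy-versus-N results reported above: accuracy climbs steeply at small training sizes and flattens as N grows.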

  15. Assessing diet in populations at risk for konzo and neurolathyrism.

    PubMed

    Dufour, Darna L

    2011-03-01

    Although both konzo and neurolathyrism are diseases associated with diet, we know surprisingly little about the diets of the groups at risk. The objective of this paper is to discuss methods for assessing dietary intake in populations at risk for konzo and lathyrism. These methods include weighed food records and interview-based techniques like 24-h recalls and food frequency questionnaires (FFQs). Food records have the potential to provide accurate information on food quantities, and are generally the method of choice. Interview-based methods provide less precise information on the quantities of foods ingested, and are subject to recall bias, but may be useful in some studies or for surveillance. Sample size needs to be adequate to account for day-to-day and seasonal variability in food intake, and differences between age and sex groups. Adequate data on the composition of foods, as actually consumed, are needed to evaluate the food intake information. This is especially important in the case of cassava and grass pea, where the toxin content of the diet is a function of processing. Biomarkers for assessing cyanogen exposure from cassava-based diets are available; biomarkers for β-ODAP exposure from grass pea diets need development. Copyright © 2010 Elsevier Ltd. All rights reserved.

  16. Fecal parasite risk in the endangered proboscis monkey is higher in an anthropogenically managed forest environment compared to a riparian rain forest in Sabah, Borneo

    PubMed Central

    Klaus, Annette; Strube, Christina; Röper, Kathrin Monika; Radespiel, Ute; Schaarschmidt, Frank; Nathan, Senthilvel; Goossens, Benoit

    2018-01-01

    Understanding determinants shaping infection risk of endangered wildlife is a major topic in conservation medicine. The proboscis monkey, Nasalis larvatus, an endemic primate flagship species for conservation in Borneo, is endangered through habitat loss, but can still be found in riparian lowland and mangrove forests, and in some protected areas. To assess socioecological and anthropogenic influence on intestinal helminth infections in N. larvatus, 724 fecal samples of harem and bachelor groups, varying in size and the number of juveniles, were collected between June and October 2012 from two study sites in Malaysian Borneo: 634 samples were obtained from groups inhabiting the Lower Kinabatangan Wildlife Sanctuary (LKWS), 90 samples were collected from groups of the Labuk Bay Proboscis Monkey Sanctuary (LBPMS), where monkeys are fed on stationary feeding platforms. Parasite risk was quantified by intestinal helminth prevalence, host parasite species richness (PSR), and eggs per gram feces (epg). Generalized linear mixed effect models were applied to explore whether study site, group type, group size, the number of juveniles per group, and sampling month predict parasite risk. At the LBPMS, prevalence and epg of Trichuris spp., strongylids, and Strongyloides spp. but not Ascaris spp., as well as host PSR were significantly elevated. Only for Strongyloides spp., prevalence showed significant changes between months; at both sites, the beginning rainy season with increased precipitation was linked to higher prevalence, suggesting the external life cycle of Strongyloides spp. to benefit from humidity. Higher prevalence, epgs, and PSR within the LBPMS suggest that anthropogenic factors shape host infection risk more than socioecological factors, most likely via higher re-infection rates and chronic stress. Noninvasive measurement of fecal parasite stages is an important tool for assessing transmission dynamics and infection risks for endangered tropical wildlife. 
Findings will contribute to healthcare management in nature and in anthropogenically managed environments. PMID:29630671

  18. A Probabilistic Asteroid Impact Risk Model

    NASA Technical Reports Server (NTRS)

    Mathias, Donovan L.; Wheeler, Lorien F.; Dotson, Jessie L.

    2016-01-01

    Asteroid threat assessment requires the quantification of both the impact likelihood and resulting consequence across the range of possible events. This paper presents a probabilistic asteroid impact risk (PAIR) assessment model developed for this purpose. The model incorporates published impact frequency rates with state-of-the-art consequence assessment tools, applied within a Monte Carlo framework that generates sets of impact scenarios from uncertain parameter distributions. Explicit treatment of atmospheric entry is included to produce energy deposition rates that account for the effects of thermal ablation and object fragmentation. These energy deposition rates are used to model the resulting ground damage, and affected populations are computed for the sampled impact locations. The results for each scenario are aggregated into a distribution of potential outcomes that reflect the range of uncertain impact parameters, population densities, and strike probabilities. As an illustration of the utility of the PAIR model, the results are used to address the question of what minimum size asteroid constitutes a threat to the population. To answer this question, complete distributions of results are combined with a hypothetical risk tolerance posture to provide the minimum size, given sets of initial assumptions. Model outputs demonstrate how such questions can be answered and provide a means for interpreting the effect that input assumptions and uncertainty can have on final risk-based decisions. Model results can be used to prioritize investments to gain knowledge in critical areas or, conversely, to identify areas where additional data has little effect on the metrics of interest.
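
    The Monte Carlo core of such a model, sampling scenarios from uncertain parameter distributions and aggregating the outcomes into a distribution, can be sketched as follows. The distributions, parameter ranges, and energy metric below are hypothetical stand-ins, and the sketch omits atmospheric entry, fragmentation, and population exposure entirely:

```python
import math
import random

random.seed(42)

def sample_scenario():
    """Draw one impactor from hypothetical uncertainty distributions."""
    diameter = random.lognormvariate(math.log(50.0), 0.5)   # meters
    density = random.uniform(2000.0, 3500.0)                # kg/m^3
    velocity = random.uniform(12e3, 25e3)                   # m/s
    return diameter, density, velocity

def impact_energy_mt(diameter, density, velocity):
    """Kinetic energy of a spherical impactor, in megatons of TNT."""
    mass = density * math.pi / 6.0 * diameter ** 3
    joules = 0.5 * mass * velocity ** 2
    return joules / 4.184e15                                # J per Mt TNT

energies = sorted(impact_energy_mt(*sample_scenario()) for _ in range(10_000))
median_mt = energies[len(energies) // 2]
p95_mt = energies[int(0.95 * len(energies))]
```

    Percentiles of the aggregated `energies` distribution can then be compared against a risk tolerance posture, in the same spirit as the minimum-size question addressed by the full PAIR model.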

  19. Characteristics of individuals who make impulsive suicide attempts.

    PubMed

    Spokas, Megan; Wenzel, Amy; Brown, Gregory K; Beck, Aaron T

    2012-02-01

    Previous research has identified only a few variables that have been associated with making an impulsive suicide attempt. The aim of the current study was to compare individuals who made an impulsive suicide attempt with those who made a premeditated attempt on both previously examined and novel characteristics. Participants were classified as making an impulsive or premeditated attempt based on the Suicide Intent Scale (Beck et al., 1974a) and were compared on a number of characteristics relevant to suicidality, psychiatric history, and demographics. Individuals who made an impulsive attempt expected that their attempts would be less lethal; yet the actual lethality of both groups' attempts was similar. Those who made an impulsive attempt were less depressed and hopeless than those who made a premeditated attempt. Participants who made an impulsive attempt were less likely to report a history of childhood sexual abuse and more likely to be diagnosed with an alcohol use disorder than those who made a premeditated attempt. Although the sample size was adequate for bivariate statistics, future studies using larger sample sizes will allow for multivariate analyses of characteristics that differentiate individuals who make impulsive and premeditated attempts. Clinicians should not minimize the significance of impulsive attempts, as they are associated with a similar level of lethality as premeditated attempts. Focusing mainly on depression and hopelessness as indicators of suicide risk has the potential to under-identify those who are at risk for making impulsive attempts. Copyright © 2011 Elsevier B.V. All rights reserved.

  20. The female stalker.

    PubMed

    Meloy, J Reid; Mohandie, Kris; Green, Mila

    2011-01-01

    A study of 143 female stalkers was conducted, part of a large North American sample of stalkers (N=1005) gathered from law enforcement, prosecutorial, and entertainment corporate security files (Mohandie, Meloy, Green McGowan, & Williams, 2006). The typical female stalker was a single, separated, or divorced woman in her mid-30s with a psychiatric diagnosis, most often a mood disorder. She was more likely to pursue a male acquaintance, stranger, or celebrity, rather than a prior sexual intimate. When compared with male stalkers, the female stalkers had significantly less frequent criminal histories, and were significantly less threatening and violent. Their pursuit behavior was less proximity based, and their communications were more benign than those of the males. The average duration of stalking was 17 months, but the modal duration was two months. Stalking recidivism was 50%, with a modal time of one day between intervention and re-contact with the victim. Any prior actual relationship (sexual intimate or acquaintance) significantly increased the frequency of threats and violence, with large effect sizes for the entire female sample. The most dangerous subgroup was the prior sexually intimate stalkers, of whom the majority both threatened and were physically violent. The least dangerous were the female stalkers of Hollywood celebrities. Two of the McEwan, Mullen, MacKenzie, and Ogloff (2009b) predictor variables for stalking violence among men were externally validated with moderate effect sizes for the women: threats were associated with increased risk of violence, and letter writing was associated with decreased risk of violence. Copyright © 2011 John Wiley & Sons, Ltd.

  1. Innovative recruitment using online networks: lessons learned from an online study of alcohol and other drug use utilizing a web-based, respondent-driven sampling (webRDS) strategy.

    PubMed

    Bauermeister, José A; Zimmerman, Marc A; Johns, Michelle M; Glowacki, Pietreck; Stoddard, Sarah; Volz, Erik

    2012-09-01

    We used a web version of Respondent-Driven Sampling (webRDS) to recruit a sample of young adults (ages 18-24) and examined whether this strategy would result in alcohol and other drug (AOD) prevalence estimates comparable to national estimates (National Survey on Drug Use and Health [NSDUH]). We recruited 22 initial participants (seeds) via Facebook to complete a web survey examining AOD risk correlates. Sequential, incentivized recruitment continued until our desired sample size was achieved. After correcting for webRDS clustering effects, we contrasted our AOD prevalence estimates (past 30 days) to NSDUH estimates by comparing the 95% confidence intervals of prevalence estimates. We found comparable AOD prevalence estimates between our sample and NSDUH for the past 30 days for alcohol, marijuana, cocaine, Ecstasy (3,4-methylenedioxymethamphetamine, or MDMA), and hallucinogens. Cigarette use was lower than NSDUH estimates. WebRDS may be a suitable strategy to recruit young adults online. We discuss the unique strengths and challenges that may be encountered by public health researchers using webRDS methods.

  2. Stratifying empiric risk of schizophrenia among first degree relatives using multiple predictors in two independent Indian samples.

    PubMed

    Bhatia, Triptish; Gettig, Elizabeth A; Gottesman, Irving I; Berliner, Jonathan; Mishra, N N; Nimgaonkar, Vishwajit L; Deshpande, Smita N

    2016-12-01

    Schizophrenia (SZ) has an estimated heritability of 64-88%, with the higher values based on twin studies. Conventionally, family history of psychosis is the best individual-level predictor of risk, but reliable risk estimates are unavailable for Indian populations. Genetic, environmental, and epigenetic factors are equally important and should be considered when predicting risk in 'at risk' individuals. Our aim was to estimate risk based on an Indian schizophrenia participant's family history combined with selected demographic factors. To incorporate variables in addition to family history, and to stratify risk, we constructed a regression equation that included demographic variables in addition to family history. The equation was tested in two independent Indian samples: (i) an initial sample of SZ participants (N=128) with one sibling or offspring; (ii) a second, independent sample consisting of multiply affected families (N=138 families, with two or more sibs/offspring affected with SZ). The overall estimated risk was 4.31±0.27 (mean±standard deviation). In the initial sample, 19 individuals (14.8%) were in the high-risk group, 75 (58.6%) in the moderate-risk group, and 34 (26.6%) in the above-average-risk group. In the validation sample, risks were distributed as high (45%), moderate (38%), and above average (17%). Consistent risk estimates were obtained from both samples using the regression equation. Familial risk can be combined with demographic factors to estimate risk for SZ in India. If replicated, the proposed stratification of risk may be easier and more realistic for family members. Copyright © 2016. Published by Elsevier B.V.

  3. What is the optimum sample size for the study of peatland testate amoeba assemblages?

    PubMed

    Mazei, Yuri A; Tsyganov, Andrey N; Esaulov, Anton S; Tychkov, Alexander Yu; Payne, Richard J

    2017-10-01

    Testate amoebae are widely used in ecological and palaeoecological studies of peatlands, particularly as indicators of surface wetness. To ensure data are robust and comparable it is important to consider methodological factors which may affect results. One significant question which has not been directly addressed in previous studies is how sample size (expressed here as number of Sphagnum stems) affects data quality. In three contrasting locations in a Russian peatland we extracted samples of differing size, analysed testate amoebae and calculated a number of widely-used indices: species richness, Simpson diversity, compositional dissimilarity from the largest sample and transfer function predictions of water table depth. We found that there was a trend for larger samples to contain more species across the range of commonly-used sample sizes in ecological studies. Smaller samples sometimes failed to produce counts of testate amoebae often considered minimally adequate. It seems likely that analyses based on samples of different sizes may not produce consistent data. Decisions about sample size need to reflect trade-offs between logistics, data quality, spatial resolution and the disturbance involved in sample extraction. For most common ecological applications we suggest that samples of more than eight Sphagnum stems are likely to be desirable. Copyright © 2017 Elsevier GmbH. All rights reserved.
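
    The core question, how observed species richness depends on the amount of material sampled, can be illustrated with a rarefaction-style simulation on a hypothetical community. The species count and abundance distribution below are invented for illustration and do not come from the study:

```python
import random

random.seed(1)

# Hypothetical community: 30 species with geometrically decreasing abundances.
pool = []
for rank in range(30):
    pool.extend([rank] * max(1, int(1000 * 0.8 ** rank)))

def observed_richness(n_individuals):
    """Species richness in a random sample of individuals from the pool."""
    return len(set(random.sample(pool, n_individuals)))

richness_curve = {n: observed_richness(n) for n in (10, 50, 200, 1000)}
```

    Richness rises quickly at small sample sizes and saturates toward the true species count, which is why small samples tend to under-report species and why counts from different sample sizes are hard to compare directly.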

  4. General Factors of the Korean Exposure Factors Handbook

    PubMed Central

    Kim, So-Yeon; Kim, Sun-Ja; Lee, Kyung-Eun; Cheong, Hae-Kwan; Kim, Eun-Hye; Choi, Kyung-Ho; Kim, Young-Hee

    2014-01-01

    Risk assessment considers the situations and characteristics of the exposure environment and host. Various physiological variables of the human body reflect characteristics of the population that can directly influence risk exposure. Therefore, identification of exposure factors based on the Korean population is required for appropriate risk assessment. It is expected that a handbook of general exposure factors will be used by professionals in many fields as well as by risk assessors in the health department. The process of developing the exposure factors handbook for the Korean population is introduced in this article, with a specific focus on general exposure factors for the Korean population, including life expectancy, body weight, surface area, inhalation rates, amount of water intake, and soil ingestion. The researchers used national databases including the Life Table and the 2005 Time Use Survey from the National Statistical Office. The anthropometric study of body size in Korea used resources provided by the Korean Agency for Technology and Standards. In addition, direct measurement and questionnaire surveys of representative samples were performed to calculate the inhalation rate, drinking water intake, and soil ingestion. PMID:24570802

  5. Risk factors for persistent gestational trophoblastic neoplasia.

    PubMed

    Kuyumcuoglu, Umur; Guzel, Ali Irfan; Erdemoglu, Mahmut; Celik, Yusuf

    2011-01-01

    This retrospective study evaluated the risk factors for persistent gestational trophoblastic neoplasia (GTN) and determined their odds ratios. The study included 100 cases with GTN admitted to our clinic. Possible risk factors recorded were age, gravidity, parity, size of the neoplasia, and beta-human chorionic gonadotropin (beta-hCG) levels before and after the procedure. Statistical analyses consisted of the independent-sample t-test and logistic regression using the statistical package SPSS ver. 15.0 for Windows (SPSS, Chicago, IL, USA). Twenty of the cases had persistent GTN, and the differences between these and the other cases were evaluated. The size of the neoplasia and the histopathological type of GTN had no statistical relationship with persistence, whereas age, gravidity, and beta-hCG levels were significant risk factors for persistent GTN (p < 0.05). The odds ratios (95% confidence interval, CI) for age, gravidity, and pre- and post-evacuation beta-hCG levels determined using logistic regression were 4.678 (0.97-22.44), 7.315 (1.16-46.16), 2.637 (1.41-4.94), and 2.339 (1.52-3.60), respectively. Patient age, gravidity, and beta-hCG levels were risk factors for persistent GTN, whereas the size of the neoplasia and the histopathological type of GTN were not significant risk factors.
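
    As a reminder of the mechanics behind such estimates, the sketch below computes an odds ratio with a Wald 95% confidence interval from a 2x2 table. The counts are hypothetical and do not reproduce the study's data, which used logistic regression rather than a simple table:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
       a = exposed cases,   b = exposed controls,
       c = unexposed cases, d = unexposed controls."""
    oratio = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(oratio) - z * se_log)
    hi = math.exp(math.log(oratio) + z * se_log)
    return oratio, lo, hi

# Hypothetical 2x2 counts for one dichotomized risk factor.
oratio, lo, hi = odds_ratio_ci(14, 30, 6, 50)
```

    An interval that excludes 1.0, as in the gravidity and beta-hCG estimates above, indicates a statistically significant association at the 5% level.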

  6. A Poisson process approximation for generalized K-5 confidence regions

    NASA Technical Reports Server (NTRS)

    Arsham, H.; Miller, D. R.

    1982-01-01

    One-sided confidence regions for continuous cumulative distribution functions are constructed using empirical cumulative distribution functions and the generalized Kolmogorov-Smirnov distance. The width of such bands becomes narrower in the right or left tail of the distribution. To avoid tedious computation of confidence levels and critical values, an approximation based on the Poisson process is introduced. This approximation provides a conservative confidence region; moreover, the approximation error decreases monotonically to 0 as sample size increases. Critical values necessary for implementation are given. Applications are made to the areas of risk analysis, investment modeling, reliability assessment, and analysis of fault tolerant systems.
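
    The construction can be illustrated with a simpler, closely related band. The sketch below is a hedged example, not the paper's method: it builds a one-sided upper confidence band from an empirical CDF using the classical one-sided Kolmogorov-Smirnov critical value (from the Dvoretzky-Kiefer-Wolfowitz inequality); the generalized, tail-narrowing bands of the abstract require the Poisson-approximated critical values tabulated in the paper.

```python
import numpy as np

def one_sided_upper_band(sample, alpha=0.05):
    """Upper one-sided confidence band for a CDF from the empirical CDF.

    Uses the classical one-sided KS critical value d = sqrt(ln(1/alpha)/(2n));
    the generalized, tail-weighted bands discussed in the abstract need
    different (Poisson-approximated) critical values.
    """
    x = np.sort(sample)
    n = len(x)
    ecdf = np.arange(1, n + 1) / n          # empirical CDF at the order statistics
    d = np.sqrt(np.log(1.0 / alpha) / (2.0 * n))
    upper = np.minimum(ecdf + d, 1.0)       # band cannot exceed 1
    return x, ecdf, upper

rng = np.random.default_rng(0)
xs, ecdf, upper = one_sided_upper_band(rng.exponential(size=200))
```

The band is conservative in the same spirit as the abstract's approximation: the true CDF lies below `upper` with probability at least 1 - alpha.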

  7. Modeling of light propagation in canine gingiva

    NASA Astrophysics Data System (ADS)

    Mrotek, Marcin

    2017-08-01

    This study is a preliminary evaluation of the effectiveness of laser-based surgery of the maxillary and mandibular bone in dogs. Current methods of gingival surgery in dogs require the use of general anaesthesia [1, 2]. The proposed laser-surgery methods can be performed on conscious dogs, which substantially reduces the associated risks. Two lasers were evaluated: an Nd:YAG laser and a 930 nm semiconductor laser. The former is already widely used in human laser surgery, while the latter offers an opportunity to decrease the size of the optical setup. The results obtained from the simulations warrant further experiments with the evaluated wavelengths and animal tissue samples.

  8. Micrometeoroid and Orbital Debris Threat Assessment: Mars Sample Return Earth Entry Vehicle

    NASA Technical Reports Server (NTRS)

    Christiansen, Eric L.; Hyde, James L.; Bjorkman, Michael D.; Hoffman, Kevin D.; Lear, Dana M.; Prior, Thomas G.

    2011-01-01

    This report provides results of a Micrometeoroid and Orbital Debris (MMOD) risk assessment of the Mars Sample Return Earth Entry Vehicle (MSR EEV). The assessment was performed using standard risk assessment methodology illustrated in Figure 1-1. Central to the process is the Bumper risk assessment code (Figure 1-2), which calculates the critical penetration risk based on geometry, shielding configurations and flight parameters. The assessment process begins by building a finite element model (FEM) of the spacecraft, which defines the size and shape of the spacecraft as well as the locations of the various shielding configurations. This model is built using the NX I-deas software package from Siemens PLM Software. The FEM is constructed using triangular and quadrilateral elements that define the outer shell of the spacecraft. Bumper-II uses the model file to determine the geometry of the spacecraft for the analysis. The next step of the process is to identify the ballistic limit characteristics for the various shield types. These ballistic limits define the critical size particle that will penetrate a shield at a given impact angle and impact velocity. When the finite element model is built, each individual element is assigned a property identifier (PID) to act as an index for its shielding properties. Using the ballistic limit equations (BLEs) built into the Bumper-II code, the shield characteristics are defined for each PID in the model. The final stage of the analysis is to determine the probability of no penetration (PNP) on the spacecraft. This is done using the micrometeoroid and orbital debris environment definitions that are built into the Bumper-II code. These engineering models take into account orbit inclination, altitude, attitude and analysis date in order to predict an impacting particle flux on the spacecraft. Using the geometry and shielding characteristics previously defined for the spacecraft and combining that information with the environment model calculations, the Bumper-II code calculates a probability of no penetration for the spacecraft.
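
    Bumper-II itself is not reproduced here, but the final PNP step rests on a standard Poisson impact model: if each surface element of area A (m²) sees an expected flux N (impacts per m² per year) of particles above its ballistic limit over a mission of duration T years, then PNP = exp(-Σ A·N·T). A minimal sketch with hypothetical numbers (a generic illustration, not Bumper-II's actual computation):

```python
import math

def probability_of_no_penetration(elements, duration_years):
    """PNP under the usual Poisson impact model.

    Each element is (area_m2, critical_flux), where critical_flux is the
    expected number of impacts per m^2 per year by particles exceeding
    that element's ballistic limit. Generic illustration, not Bumper-II.
    """
    expected_penetrations = sum(area * flux * duration_years
                                for area, flux in elements)
    return math.exp(-expected_penetrations)

# Hypothetical numbers: three shield regions over a 1.5-year mission.
elements = [(2.0, 1e-4), (1.2, 5e-5), (0.8, 2e-4)]
pnp = probability_of_no_penetration(elements, 1.5)
```

Because penetrating impacts are rare, PNP is close to 1 and the risk is approximately the summed expected number of penetrations.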

  9. Small and medium sized HDL particles are protectively associated with coronary calcification in a cross-sectional population-based sample.

    PubMed

    Ditah, Chobufo; Otvos, James; Nassar, Hisham; Shaham, Dorith; Sinnreich, Ronit; Kark, Jeremy D

    2016-08-01

    Failure of trials to observe benefits of elevating plasma high-density lipoprotein cholesterol (HDL-C) has raised serious doubts about HDL-C's atheroprotective properties. We aimed to identify protective HDL biomarkers by examining the association of nuclear magnetic resonance (NMR) measures of total HDL-particle (HDL-P), large HDL-particle, and small and medium-sized HDL-particle (MS-HDL-P) concentrations and average HDL-particle size with coronary artery calcification (CAC), which reflects the burden of coronary atherosclerosis, and comparing with that of HDL-C. Using a cross-sectional design, 504 Jerusalem residents (274 Arabs and 230 Jews), recruited by population-based probability sampling, had HDL measured by NMR spectroscopy. CAC was determined by multidetector helical CT scanning using Agatston scoring. Independent associations between the NMR measures and CAC (comparing scores ≥100 vs. <100) were assessed with multivariable binary logistic models. Comparing tertile 3 vs. tertile 1, we observed protective associations with CAC for HDL-P (multivariable-adjusted OR 0.42, 95% CI 0.22-0.79, p for linear trend = 0.002) and MS-HDL-P (OR 0.36, 95% CI 0.19-0.69, p for linear trend = 0.006), which persisted after further adjustment for HDL-C. HDL-C was not significantly associated with CAC (multivariable-adjusted OR 0.59, 95% CI 0.27-1.29 for tertile 3 vs. 1, p for linear trend = 0.49). Large HDL-P and average particle size (which are highly correlated; r = 0.83) were not associated with CAC: large HDL-P (OR 0.77, 95% CI 0.33-1.83, p for linear trend = 0.29) and average HDL-P size (OR 0.72, 95% CI 0.35-1.48, p for linear trend = 0.58). MS-HDL-P represents a protective subpopulation of HDL particles. HDL-P and MS-HDL-P were more strongly associated with CAC than HDL-C. Based on the accumulating evidence, incorporation of MS-HDL-P or HDL-P into routine prediction of CHD risk should be evaluated. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  10. Sibship structure and risk of infectious mononucleosis: a population-based cohort study.

    PubMed

    Rostgaard, Klaus; Nielsen, Trine Rasmussen; Wohlfahrt, Jan; Ullum, Henrik; Pedersen, Ole; Erikstrup, Christian; Nielsen, Lars Peter; Hjalgrim, Henrik

    2014-10-01

    Present understanding of the increased risk of Epstein-Barr virus (EBV)-related infectious mononucleosis among children of low birth order or from small sibships is mainly based on old and indirect evidence. Societal changes and methodological limitations of previous studies call for new data. We used data from the Danish Civil Registration System and the Danish National Hospital Discharge Register to study incidence rates of inpatient hospitalizations for infectious mononucleosis before the age of 20 years in a cohort of 2,543,225 Danes born between 1971 and 2008, taking individual sibship structure into account. A total of 12,872 cases of infectious mononucleosis were observed during 35.3 million person-years of follow-up. Statistical modelling showed that increasing sibship size was associated with a reduced risk of infectious mononucleosis and that younger siblings conferred more protection from infectious mononucleosis than older siblings. In addition to this general association with younger and older siblings, children aged less than 4 years transiently increased their siblings' infectious mononucleosis risk. Our results were confirmed in an independent sample of blood donors followed up retrospectively for self-reported infectious mononucleosis. Younger siblings, and to a lesser degree older siblings, seem to be important in the transmission of EBV within families. Apparently, the dogma that low birth order within a sibship carries the highest risk of infectious mononucleosis is no longer valid.

  11. Machine learning derived risk prediction of anorexia nervosa.

    PubMed

    Guo, Yiran; Wei, Zhi; Keating, Brendan J; Hakonarson, Hakon

    2016-01-20

    Anorexia nervosa (AN) is a complex psychiatric disease with a moderate to strong genetic contribution. In addition to conventional genome wide association (GWA) studies, researchers have been using machine learning methods in conjunction with genomic data to predict risk of diseases in which genetics play an important role. In this study, we collected whole genome genotyping data on 3940 AN cases and 9266 controls from the Genetic Consortium for Anorexia Nervosa (GCAN), the Wellcome Trust Case Control Consortium 3 (WTCCC3), Price Foundation Collaborative Group and the Children's Hospital of Philadelphia (CHOP), and applied machine learning methods for predicting AN disease risk. The prediction performance is measured by area under the receiver operating characteristic curve (AUC), indicating how well the model distinguishes cases from unaffected control subjects. Logistic regression model with the lasso penalty technique generated an AUC of 0.693, while Support Vector Machines and Gradient Boosted Trees reached AUC's of 0.691 and 0.623, respectively. Using different sample sizes, our results suggest that larger datasets are required to optimize the machine learning models and achieve higher AUC values. To our knowledge, this is the first attempt to assess AN risk based on genome wide genotype level data. Future integration of genomic, environmental and family-based information is likely to improve the AN risk evaluation process, eventually benefitting AN patients and families in the clinical setting.
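
    The AUC figures quoted above can be computed without any modeling machinery once per-subject risk scores exist: AUC equals the probability that a randomly chosen case receives a higher score than a randomly chosen control (the Mann-Whitney rank identity). A minimal, library-free sketch (illustrative only, not the study's pipeline):

```python
import numpy as np

def auc_from_scores(scores, labels):
    """AUC via the rank-sum (Mann-Whitney) identity: the probability that a
    randomly chosen case scores higher than a randomly chosen control."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    n_pos = labels.sum()
    n_neg = (~labels).sum()
    order = scores.argsort()
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    # ties get their average rank
    for s in np.unique(scores):
        mask = scores == s
        ranks[mask] = ranks[mask].mean()
    rank_sum = ranks[labels].sum()
    return (rank_sum - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Perfect separation gives AUC = 1.0; uninformative scores give 0.5.
assert auc_from_scores([0.1, 0.2, 0.8, 0.9], [0, 0, 1, 1]) == 1.0
```

An AUC of 0.693, as reported for the lasso model, means a case outranks a control about 69% of the time.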

  12. Size and modal analyses of fines and ultrafines from some Apollo 17 samples

    NASA Technical Reports Server (NTRS)

    Greene, G. M.; King, D. T., Jr.; Banholzer, G. S., Jr.; King, E. A.

    1975-01-01

    Scanning electron and optical microscopy techniques have been used to determine the grain-size frequency distributions and morphology-based modal analyses of fine and ultrafine fractions of some Apollo 17 regolith samples. There are significant and large differences between the grain-size frequency distributions of the less than 10-micron size fraction of Apollo 17 samples, but there are no clear relations to the local geologic setting from which individual samples have been collected. This may be due to effective lateral mixing of regolith particles in this size range by micrometeoroid impacts. None of the properties of the frequency distributions support the idea of selective transport of any fine grain-size fraction, as has been proposed by other workers. All of the particle types found in the coarser size fractions also occur in the less than 10-micron particles. In the size range from 105 to 10 microns there is a strong tendency for the percentage of regularly shaped glass to increase as the graphic mean grain size of the less than 1-mm size fraction decreases, both probably being controlled by exposure age.

  13. Mosquito species composition and phenology (Diptera, Culicidae) in two German zoological gardens imply different risks of mosquito-borne pathogen transmission.

    PubMed

    Heym, Eva C; Kampen, Helge; Walther, Doreen

    2018-06-01

    Due to their large diversity of potential blood hosts, breeding habitats, and resting sites, zoological gardens represent highly interesting places to study mosquito ecology. In order to better assess the risk of mosquito-borne disease-agent transmission in zoos, potential vector species must be known, as well as the communities in which they occur. For this reason, species composition and dynamics were examined in 2016 in two zoological gardens in Germany. Using different methods for mosquito sampling, a total of 2,257 specimens belonging to 20 taxa were collected. Species spectra depended on the collection method but generally differed between the two zoos, while species compositions and relative abundances varied seasonally in both of them. As both sampled zoos were located in the same climatic region and potential breeding sites within the zoos were similar, the differences in mosquito compositions are attributed to immigration of specimens from surrounding landscapes, although the different sizes of the zoos and the different blood host populations available probably also have an impact. Based on the differences in species composition and the various biological characteristics of the species, the risk of certain pathogens to be transmitted must also be expected to differ between the zoos. © 2018 The Society for Vector Ecology.

  14. Nonparametric analysis of bivariate gap time with competing risks.

    PubMed

    Huang, Chiung-Yu; Wang, Chenguang; Wang, Mei-Cheng

    2016-09-01

    This article considers nonparametric methods for studying recurrent disease and death with competing risks. We first point out that comparisons based on the well-known cumulative incidence function can be confounded by different prevalence rates of the competing events, and that comparisons of the conditional distribution of the survival time given the failure event type are more relevant for investigating the prognosis of different patterns of recurrent disease. We then propose nonparametric estimators for the conditional cumulative incidence function as well as the conditional bivariate cumulative incidence function for the bivariate gap times, that is, the time to disease recurrence and the residual lifetime after recurrence. To quantify the association between the two gap times in the competing risks setting, a modified Kendall's tau statistic is proposed. The proposed estimators for the conditional bivariate cumulative incidence distribution and the association measure account for the induced dependent censoring of the second gap time. Uniform consistency and weak convergence of the proposed estimators are established. Hypothesis testing procedures for two-sample comparisons are discussed. Numerical simulation studies with practical sample sizes are conducted to evaluate the performance of the proposed nonparametric estimators and tests. An application to data from a pancreatic cancer study is presented to illustrate the methods developed in this article. © 2016, The International Biometric Society.
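
    For orientation, the unmodified Kendall's tau that the paper's statistic generalizes can be sketched as below. The paper's version additionally corrects for the induced dependent censoring of the second gap time; this illustration does not attempt that correction.

```python
import numpy as np
from itertools import combinations

def kendalls_tau(x, y):
    """Plain (uncensored) Kendall's tau for paired gap times: the normalized
    excess of concordant over discordant pairs. The paper's modified statistic
    further corrects for dependent censoring, which is not reproduced here."""
    concordant = discordant = 0
    for (xi, yi), (xj, yj) in combinations(zip(x, y), 2):
        s = (xi - xj) * (yi - yj)
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    n_pairs = len(x) * (len(x) - 1) / 2
    return (concordant - discordant) / n_pairs

# Longer time-to-recurrence paired with longer residual lifetime -> tau near +1.
tau = kendalls_tau([2.0, 5.0, 9.0], [1.0, 3.0, 7.0])
```

Tau ranges from -1 (perfectly discordant) to +1 (perfectly concordant), with 0 indicating no association between the two gap times.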

  15. High-risk human papilloma virus prevalence and its relation with abnormal cervical cytology among Turkish women.

    PubMed

    Özcan, E S; Taşkin, S; Ortaç, F

    2011-10-01

    In this study we aimed to investigate the prevalence of high-risk human papilloma virus (hrHPV) among Turkish women. Cervical samples were collected from 501 women for cytological screening and hrHPV testing by Digene Hybrid Capture 2. hrHPV prevalence and its relation to cytological results and epidemiologic data were analysed with SPSS. The prevalence of hrHPV was 4.2% (21 of the 501 women). Women with abnormal cytological screening results had a significantly higher rate of hrHPV positivity than women with normal cytological results (19% vs 3.5%; p ≤ 0.01). HPV infection was associated only with the number of sexual partners; there was no association with age, contraception method, or age at first sexual intercourse. The prevalence of hrHPV among histologically confirmed cervical intraepithelial neoplasia (CIN) 1, CIN 2, and normal cases was 37.5%, 25%, and 25%, respectively. The prevalence of cervical hrHPV infection is 4.2% in our population, which seems lower than rates reported from other regions. Pending confirmation in studies with larger sample sizes, reflex cytology based on hrHPV positivity should be considered for our national cervical cancer screening programme.

  16. Earthquake Hoax in Ghana: Exploration of the Cry Wolf Hypothesis

    PubMed Central

    Aikins, Moses; Binka, Fred

    2012-01-01

    This paper investigated, in the context of the Cry Wolf hypothesis, whether people would believe news of an impending earthquake from any source, as well as news of any other imminent disaster from any source. We were also interested in the correlation between preparedness, risk perception, and antecedents. This explorative study consisted of interviews and reviews of the literature and Internet sources. Simple random sampling was used, stratified by sex and residence type; further stratification was based on the residential classification used by the municipalities. The sample (N=400) consisted of 195 males and 205 females. The study revealed that a person would believe news of an impending earthquake from any source (64.4%; model significance P=0.000) and news of any other impending disaster from any source (73.1%; P=0.003). There is an association between background, risk perception, and preparedness. Emergency preparedness is weak, and earthquake awareness needs to be reinforced. There is a critical need for public education on earthquake preparedness. The authors recommend developing an emergency response program for earthquakes and standard operating procedures for national risk communication through all media, including instant bulk messaging. PMID:28299086

  17. Accuracy and Cost-Effectiveness of Cervical Cancer Screening by High-Risk HPV DNA Testing of Self-Collected Vaginal Samples

    PubMed Central

    Balasubramanian, Akhila; Kulasingam, Shalini L.; Baer, Atar; Hughes, James P.; Myers, Evan R.; Mao, Constance; Kiviat, Nancy B.; Koutsky, Laura A.

    2010-01-01

    Objective Estimate the accuracy and cost-effectiveness of cervical cancer screening strategies based on high-risk HPV DNA testing of self-collected vaginal samples. Materials and Methods A subset of 1,665 women (18-50 years of age) participating in a cervical cancer screening study were screened by liquid-based cytology and by high-risk HPV DNA testing of both self-collected vaginal swab samples and clinician-collected cervical samples. Women with positive/abnormal screening test results and a subset of women with negative screening test results were triaged to colposcopy. Based on individual and combined test results, five screening strategies were defined. Estimates of sensitivity and specificity for cervical intraepithelial neoplasia grade 2 or worse were calculated and a Markov model was used to estimate the incremental cost-effectiveness ratios (ICERs) for each strategy. Results Compared with cytology-based screening, high-risk HPV DNA testing of self-collected vaginal samples was more sensitive (85%, 95% CI = 76%-94%, versus 68%, 95% CI = 58%-78% for cytology) but less specific (73%, 95% CI = 67%-79%, versus 89%, 95% CI = 86%-91% for cytology). A strategy of high-risk HPV DNA testing of self-collected vaginal samples followed by cytology triage of HPV-positive women was comparably sensitive (75%, 95% CI = 64%-86%) and specific (88%, 95% CI = 85%-92%) to cytology-based screening. In-home self-collection for high-risk HPV DNA detection followed by in-clinic cytology triage had a slightly lower lifetime cost and a slightly higher quality-adjusted life expectancy than did cytology-based screening (ICERs of triennial screening compared to no screening were $9,871/QALY and $12,878/QALY, respectively). Conclusions Triennial screening by high-risk HPV DNA testing of in-home, self-collected vaginal samples followed by in-clinic cytology triage was cost-effective. PMID:20592553
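
    The ICERs quoted are, by definition, incremental cost divided by incremental quality-adjusted life expectancy between two strategies. A trivial sketch with hypothetical inputs (the study's Markov model inputs are not reproduced here):

```python
def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-effectiveness ratio: additional cost per QALY gained
    by the new strategy relative to the comparator."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical numbers: a screening strategy vs. no screening.
ratio = icer(cost_new=1800.0, qaly_new=24.05, cost_old=1500.0, qaly_old=24.02)
```

A strategy with a lower ICER buys each additional quality-adjusted life-year more cheaply, which is the sense in which the self-collection strategy above compares favorably.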

  18. Nonsyndromic cleft palate: An association study at GWAS candidate loci in a multiethnic sample.

    PubMed

    Ishorst, Nina; Francheschelli, Paola; Böhmer, Anne C; Khan, Mohammad Faisal J; Heilmann-Heimbach, Stefanie; Fricker, Nadine; Little, Julian; Steegers-Theunissen, Regine P M; Peterlin, Borut; Nowak, Stefanie; Martini, Markus; Kruse, Teresa; Dunsche, Anton; Kreusch, Thomas; Gölz, Lina; Aldhorae, Khalid; Halboub, Esam; Reutter, Heiko; Mossey, Peter; Nöthen, Markus M; Rubini, Michele; Ludwig, Kerstin U; Knapp, Michael; Mangold, Elisabeth

    2018-06-01

    Nonsyndromic cleft palate only (nsCPO) is a common and multifactorial form of orofacial clefting. In contrast to successes achieved for the other common form of orofacial clefting, that is, nonsyndromic cleft lip with/without cleft palate (nsCL/P), genome-wide association studies (GWAS) of nsCPO have identified only one genome-wide significant locus. The aim of the present study was to investigate whether common variants contribute to nsCPO and, if so, to identify novel risk loci. We genotyped 33 SNPs at 27 candidate loci from 2 previously published nsCPO GWAS in an independent multiethnic sample. It included: (i) a family-based sample of European ancestry (n = 212); and (ii) two case/control samples of Central European (n = 94/339) and Arabian ancestry (n = 38/231), respectively. A separate association analysis was performed for each genotyped dataset, and meta-analyses were performed. After association analysis and meta-analyses, none of the 33 SNPs showed genome-wide significance. Two variants showed nominally significant association in the imputed GWAS dataset and exhibited a further decrease in p-value in a European and an overall meta-analysis including imputed GWAS data, respectively (rs395572: P(Meta-EU) = 3.16 × 10⁻⁴; rs6809420: P(Meta-All) = 2.80 × 10⁻⁴). Our findings suggest that there is a limited contribution of common variants to nsCPO. However, the individual effect sizes might be too small for detection of further associations at the present sample sizes. Rare variants may play a more substantial role in nsCPO than in nsCL/P, for which GWAS with smaller sample sizes have identified genome-wide significant loci. Whole-exome/genome sequencing studies of nsCPO are now warranted. © 2018 Wiley Periodicals, Inc.

  19. Adolescent precursors of adult borderline personality pathology in a high-risk community sample.

    PubMed

    Conway, Christopher C; Hammen, Constance; Brennan, Patricia A

    2015-06-01

    Longitudinal studies of the exact environmental conditions and personal attributes contributing to the development of borderline personality disorder (BPD) are rare. Furthermore, existing research typically examines risk factors in isolation, limiting our knowledge of the relative effect sizes of different risk factors and how they act in concert to bring about borderline personality pathology. The present study investigated the prospective effects of diverse acute and chronic stressors, proband psychopathology, and maternal psychopathology on BPD features in a high-risk community sample (N = 700) of youth followed from mid-adolescence to young adulthood. Multivariate analyses revealed significant effects of maternal externalizing disorder history, offspring internalizing disorder history, family stressors, and school-related stressors on BPD risk. Contrary to expectations, no interactions between chronically stressful environmental conditions and personal characteristics in predicting borderline personality features were detected. Implications of these findings for etiological theories of BPD and early screening efforts are discussed.

  20. Does the duration and time of sleep increase the risk of allergic rhinitis? Results of the 6-year nationwide Korea youth risk behavior web-based survey.

    PubMed

    Kwon, Jeoung A; Lee, Minjee; Yoo, Ki-Bong; Park, Eun-Cheol

    2013-01-01

    Allergic rhinitis (AR) is the most common chronic disorder in the pediatric population. Although several studies have investigated the correlation between AR and sleep-related issues, the association between the duration and time of sleep and AR has not been analyzed in long-term national data. This study investigated the relationship between sleep time and duration and AR risk in middle- and high-school students (adolescents aged 12-18). We analyzed national data from the Korea Youth Risk Behavior Web-based Survey by the Korea Centers for Disease Control and Prevention from 2007-2012. The sample size was 274,480, with an average response rate of 96.2%. Multivariate logistic regression analyses were conducted to determine the relationship between sleep and AR risk. Furthermore, to determine the best-fitted model among independent variables such as sleep duration, sleep time, and the combination of sleep duration and sleep time, we used Akaike Information Criteria (AIC) to compare models. A total of 43,337 boys and 41,665 girls reported a diagnosis of AR at baseline. The odds ratio increased with age and with higher education and economic status of the parents. Further, students in mid-sized and large cities had stronger relationships to AR than those in small cities. In both genders, AR was associated with depression and suicidal ideation. In the analysis of sleep duration and sleep time, the odds ratio increased in both genders when sleep duration was <7 hours, and when the time of sleep was later than 24:00 hours. Our results indicate an association between sleep time and duration and AR. This study is the first to focus on the relationship between sleep duration and time and AR in national survey data collected over 6 years.
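
    The AIC comparison used to choose among the sleep-duration, sleep-time, and combined models follows the standard definition AIC = 2k - 2 ln L, with the lowest value preferred. A minimal sketch with hypothetical log-likelihoods and parameter counts (not the study's fitted values):

```python
def aic(log_likelihood, n_params):
    """Akaike Information Criterion: AIC = 2k - 2*ln(L). Lower is better;
    a richer model must raise the likelihood enough to pay for its extra
    parameters."""
    return 2 * n_params - 2 * log_likelihood

# Hypothetical fits: duration-only model vs. duration-plus-time model.
aic_duration      = aic(log_likelihood=-10250.0, n_params=5)
aic_duration_time = aic(log_likelihood=-10244.0, n_params=7)
best = min(aic_duration, aic_duration_time)
```

In this hypothetical case the combined model's likelihood gain outweighs its two extra parameters, so it wins the comparison, mirroring how the study selected its best-fitting specification.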

  1. Evidence for Masturbation and Prostate Cancer Risk: Do We Have a Verdict?

    PubMed

    Aboul-Enein, Basil H; Bernstein, Joshua; Ross, Michael W

    2016-07-01

    Prostate cancer (PCa) is one of the leading causes of cancer death in men and remains one of the most diagnosed malignancies worldwide. Ongoing public health efforts continue to promote protective factors, such as diet, physical activity, and other lifestyle modifications, against PCa development. Masturbation is a nearly universal safe sexual activity that transcends societal boundaries and geography yet continues to be met with stigma and controversy in contemporary society. Although previous studies have examined associations between sexual activity and PCa risk, anecdotal relations have been suggested regarding masturbation practice and PCa risk. To provide a summary of the published literature and examine the contemporary evidence for relations between masturbation practice and PCa risk. A survey of the current literature using seven academic electronic databases was conducted using search terms and key words associated with masturbation practice and PCa risk. The practice of masturbation and its relation to PCa risk. The literature search identified study samples (n = 16) published before October 2015. Sample inclusions varied by study type, sample size, and primary objective. Protective relations (n = 7) between ejaculation through masturbation and PCa risk were reported by 44% of the study sample. Age range emerged as a significant variable in the relation between masturbation and PCa. Findings included relations among masturbation, ejaculation frequency, and age range as individual factors of PCa risk. No universally accepted themes were identified across the study sample. Throughout the sample, there was insufficient agreement in survey design and data reporting. Potential avenues for new research include frequency of ejaculation and age range as covarying factors that could lead to more definitive statements about masturbation practice and PCa risk. Copyright © 2016 International Society for Sexual Medicine. Published by Elsevier Inc. All rights reserved.

  2. Examining the Efficacy of HIV Risk-Reduction Counseling on the Sexual Risk Behaviors of a National Sample of Drug Abuse Treatment Clients: Analysis of Subgroups

    PubMed Central

    Metsch, Lisa R.; Pereyra, Margaret R.; Malotte, C. Kevin; Haynes, Louise F.; Douaihy, Antoine; Chally, Jack; Mandler, Raul N.; Feaster, Daniel J.

    2016-01-01

    HIV counseling with testing has been part of HIV prevention in the U.S. since the 1980s. Despite this long-standing history, in 2006 the CDC released HIV testing recommendations for health care settings contesting the benefits of prevention counseling with testing in reducing sexual risk behaviors among HIV-negative individuals. The efficacy of brief HIV risk-reduction counseling (RRC) in decreasing sexual risk among subgroups of substance use treatment clients was examined using multisite RCT data. Interaction tests between RRC and subgroups were performed; multivariable regression evaluated the relationship between RRC (with rapid testing) and sex risk. Subgroups were defined by demographics, risk type and level, attitudes/perceptions, and behavioral history. There was an effect (p < .0028) of counseling on number of sex partners among some subgroups. Certain subgroups may benefit from HIV RRC; this should be examined in studies with larger sample sizes, designed to assess the specific subgroup(s). PMID:26837631

  3. Effects of tree-to-tree variations on sap flux-based transpiration estimates in a forested watershed

    NASA Astrophysics Data System (ADS)

    Kume, Tomonori; Tsuruta, Kenji; Komatsu, Hikaru; Kumagai, Tomo'omi; Higashi, Naoko; Shinohara, Yoshinori; Otsuki, Kyoichi

    2010-05-01

    To estimate forest stand-scale water use, we assessed how sample sizes affect confidence of stand-scale transpiration (E) estimates calculated from sap flux (Fd) and sapwood area (AS_tree) measurements of individual trees. In a Japanese cypress plantation, we measured Fd and AS_tree in all trees (n = 58) within a 20 × 20 m study plot, which was divided into four 10 × 10 subplots. We calculated E from stand AS_tree (AS_stand) and mean stand Fd (JS) values. Using Monte Carlo analyses, we examined potential errors associated with sample sizes in E, AS_stand, and JS by using the original AS_tree and Fd data sets. Consequently, we defined optimal sample sizes of 10 and 15 for AS_stand and JS estimates, respectively, in the 20 × 20 m plot. Sample sizes greater than the optimal sample sizes did not decrease potential errors. The optimal sample sizes for JS changed according to plot size (e.g., 10 × 10 m and 10 × 20 m), while the optimal sample sizes for AS_stand did not. As well, the optimal sample sizes for JS did not change in different vapor pressure deficit conditions. In terms of E estimates, these results suggest that the tree-to-tree variations in Fd vary among different plots, and that plot size to capture tree-to-tree variations in Fd is an important factor. This study also discusses planning balanced sampling designs to extrapolate stand-scale estimates to catchment-scale estimates.
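
    The Monte Carlo logic of the study — repeatedly subsample n trees from the full census and see how estimation error shrinks with n — can be sketched as follows, using synthetic sap-flux values in place of the field data (which are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical sap-flux densities for 58 trees (the census size in the study);
# a lognormal spread stands in for the real tree-to-tree variation.
fd_all = rng.lognormal(mean=2.0, sigma=0.4, size=58)
true_mean = fd_all.mean()

def relative_error(sample_size, n_draws=5000):
    """Monte Carlo estimate of the expected relative error in the stand mean
    when only `sample_size` trees are measured (sampling without replacement
    from the full census)."""
    errors = np.empty(n_draws)
    for i in range(n_draws):
        sub = rng.choice(fd_all, size=sample_size, replace=False)
        errors[i] = abs(sub.mean() - true_mean) / true_mean
    return errors.mean()

# Error shrinks as more trees are sampled, then flattens: the plateau point
# is the "optimal sample size" in the sense used by the abstract.
errs = {n: relative_error(n) for n in (5, 10, 15, 30)}
```

Plotting `errs` against sample size reproduces the qualitative finding of the study: beyond the plateau, additional trees no longer reduce the potential error appreciably.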

  4. The local environment of ice particles in arctic mixed-phase clouds

    NASA Astrophysics Data System (ADS)

    Schlenczek, Oliver; Fugal, Jacob P.; Schledewitz, Waldemar; Borrmann, Stephan

    2015-04-01

    During the RACEPAC field campaign in April and May 2014, research flights were made with the Polar 5 and Polar 6 aircraft from the Alfred Wegener Institute in Arctic clouds near Inuvik, Northwest Territories, Canada. One flight with the Polar 6 aircraft, on May 16, 2014, passed under precipitating, stratiform, mid-level clouds with several penetrations through cloud base. Measurements with HALOHolo, an airborne digital in-line holographic instrument for cloud particles, show ice particles in a field of other cloud particles in a local three-dimensional sample volume (~14 × 19 × 130 mm³, or ~35 cm³). Each holographic sample volume is a snapshot of a 3-dimensional piece of cloud at the cm-scale with typically thousands of cloud droplets per sample volume, so each sample volume yields a statistically significant droplet size distribution. Holograms are recorded six times per second, which provides one volume sample approximately every 12 meters along the flight path. The size resolution limit for cloud droplets is better than 1 µm due to advanced sizing algorithms. Preliminary results are shown for (1) the ice/liquid water partitioning at cloud base and the distribution of water droplets around each ice particle, and (2) the spatial and temporal variability of the cloud droplet size distributions at cloud base.

  5. An efficient sampling strategy for selection of biobank samples using risk scores.

    PubMed

    Björk, Jonas; Malmqvist, Ebba; Rylander, Lars; Rignell-Hydbom, Anna

    2017-07-01

    The aim of this study was to suggest a new sample-selection strategy based on risk scores in case-control studies with biobank data. An ongoing Swedish case-control study on fetal exposure to endocrine disruptors and overweight in early childhood was used as the empirical example. Cases were defined as children with a body mass index (BMI) ⩾18 kg/m² (n = 545) at four years of age, and controls as children with a BMI of ⩽17 kg/m² (n = 4472 available). The risk of being overweight was modelled using logistic regression based on available covariates from the health examination and prior to selecting samples from the biobank. A risk score was estimated for each child and categorised as low (0-5%), medium (6-13%) or high (⩾14%) risk of being overweight. The final risk-score model, with smoking during pregnancy (p = 0.001), birth weight (p < 0.001), BMI of both parents (p < 0.001 for both), type of residence (p = 0.04) and economic situation (p = 0.12), yielded an area under the receiver operating characteristic curve of 67% (n = 3945 with complete data). The case group (n = 416) had the following risk-score profile: low (12%), medium (46%) and high risk (43%). Twice as many controls were selected from each risk group, with further matching on sex. Computer simulations showed that the proposed selection strategy with stratification on risk scores yielded consistent improvements in statistical precision. Using risk scores based on available survey or register data as a basis for sample selection may improve possibilities to study heterogeneity of exposure effects in biobank-based studies.
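
    The selection strategy described above — score every subject with a pre-fitted logistic model, bin the scores into risk strata, then draw controls 2:1 per case within each stratum — can be sketched as follows. The coefficients and stratum cut-points here are illustrative placeholders, not the fitted model from the study.

```python
import math
import random

def risk_score(x, coefs, intercept):
    """Predicted probability from a fitted logistic model (coefficients assumed)."""
    z = intercept + sum(c * v for c, v in zip(coefs, x))
    return 1.0 / (1.0 + math.exp(-z))

def risk_stratum(p):
    # Cut-points mirror the study's categories: low 0-5%, medium 6-13%, high >=14%
    if p < 0.06:
        return "low"
    return "medium" if p < 0.14 else "high"

def select_controls(control_scores, case_scores, ratio=2, seed=0):
    """Sample `ratio` controls per case within each risk stratum."""
    rng = random.Random(seed)
    need = {}
    for p in case_scores:
        s = risk_stratum(p)
        need[s] = need.get(s, 0) + ratio
    pools = {}
    for p in control_scores:
        pools.setdefault(risk_stratum(p), []).append(p)
    return {s: rng.sample(pools[s], min(k, len(pools[s])))
            for s, k in need.items()}
```

    In the study the strata were further matched on sex; that refinement is omitted here for brevity.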

  6. Sample size adjustments for varying cluster sizes in cluster randomized trials with binary outcomes analyzed with second-order PQL mixed logistic regression.

    PubMed

    Candel, Math J J M; Van Breukelen, Gerard J P

    2010-06-30

    Adjustments of sample size formulas are given for varying cluster sizes in cluster randomized trials with a binary outcome when testing the treatment effect with mixed effects logistic regression using second-order penalized quasi-likelihood estimation (PQL). Starting from first-order marginal quasi-likelihood (MQL) estimation of the treatment effect, the asymptotic relative efficiency of unequal versus equal cluster sizes is derived. A Monte Carlo simulation study shows this asymptotic relative efficiency to be rather accurate for realistic sample sizes, when employing second-order PQL. An approximate, simpler formula is presented to estimate the efficiency loss due to varying cluster sizes when planning a trial. In many cases sampling 14 per cent more clusters is sufficient to repair the efficiency loss due to varying cluster sizes. Since current closed-form formulas for sample size calculation are based on first-order MQL, planning a trial also requires a conversion factor to obtain the variance of the second-order PQL estimator. In a second Monte Carlo study, this conversion factor turned out to be 1.25 at most. (c) 2010 John Wiley & Sons, Ltd.
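
    A closed-form approximation in the same spirit (taken from the earlier relative-efficiency literature on this problem, not from this paper's second-order PQL derivation) is RE ≈ 1 − CV²·λ(1−λ) with λ = n̄ρ/(n̄ρ + 1 − ρ), where CV is the coefficient of variation of cluster sizes, n̄ the mean cluster size, and ρ the intraclass correlation; dividing the equal-cluster-size requirement by RE repairs the efficiency loss:

```python
import math

def re_unequal_clusters(mean_size, cv, icc):
    """Approximate relative efficiency of unequal vs. equal cluster sizes:
    RE ~= 1 - CV^2 * lam * (1 - lam), lam = n*icc / (n*icc + 1 - icc)."""
    lam = mean_size * icc / (mean_size * icc + 1 - icc)
    return 1.0 - cv**2 * lam * (1.0 - lam)

def clusters_needed(k_equal, mean_size, cv, icc):
    """Clusters required with varying sizes, given the equal-size requirement."""
    return math.ceil(k_equal / re_unequal_clusters(mean_size, cv, icc))

# e.g. 30 clusters of ~20 subjects, cluster-size CV = 0.7, ICC = 0.05
print(clusters_needed(30, 20, 0.7, 0.05))  # 35, i.e. ~14% more clusters
```

    Since λ(1−λ) ≤ 1/4, RE ≥ 1 − CV²/4; for CV around 0.7 the loss is at most about 14%, consistent with the "14 per cent more clusters" rule of thumb quoted in the abstract.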

  7. Effect of resin infiltration on the thermal and mechanical properties of nano-sized silica-based thermal insulation.

    PubMed

    Lee, Jae Chun; Kim, Yun-Il; Lee, Dong-Hun; Kim, Won-Jun; Park, Sung; Lee, Dong Bok

    2011-08-01

    Several kinds of nano-sized silica-based thermal insulation were prepared by dry processing of mixtures consisting of fumed silica, ceramic fiber, and a SiC opacifier. Infiltration of phenolic resin solution into the insulation, followed by hot-pressing, was attempted to improve the mechanical strength of the insulation. More than 22% resin content was necessary to increase the strength of the insulation by a factor of two or more. The structural integrity of the resin-infiltrated samples could be maintained, even after resin burn-out, presumably due to reinforcement from ceramic fibers. For all temperature ranges and similar sample bulk density values, the thermal conductivities of the samples after resin burn-out were consistently higher than those of the samples obtained from the dry process. Mercury intrusion curves indicated that the median size of the nanopores formed by primary silica aggregates in the samples after resin burn-out is consistently larger than that of the sample without resin infiltration.

  8. Body size in early life and risk of breast cancer.

    PubMed

    Shawon, Md Shajedur Rahman; Eriksson, Mikael; Li, Jingmei

    2017-07-21

    Body size in early life is inversely associated with adult breast cancer (BC) risk, but it is unclear whether the associations differ by tumor characteristics. In a pooled analysis of two Swedish population-based studies consisting of 6731 invasive BC cases and 28,705 age-matched cancer-free controls, we examined the associations between body size in early life and BC risk. Self-reported body sizes at ages 7 and 18 years were collected by a validated nine-level pictogram (aggregated into three categories: small, medium and large). Odds ratios (OR) and corresponding 95% confidence intervals (CI) were estimated from multivariable logistic regression models in case-control analyses, adjusting for study, age at diagnosis, age at menarche, number of children, hormone replacement therapy, and family history of BC. Body size change between ages 7 and 18 was also examined in relation to BC risk. Case-only analyses were performed to test whether the associations differed by tumor characteristics. Medium or large body size at ages 7 and 18 was associated with a statistically significant decreased BC risk compared to small body size (pooled OR (95% CI): comparing large to small, 0.78 (0.70-0.86), P trend < 0.001 and 0.72 (0.64-0.80), P trend < 0.001, respectively). The majority of the women (~85%) did not change body size categories between ages 7 and 18. Women who remained medium or large between ages 7 and 18 had significantly decreased BC risk compared to those who remained small. A reduction in body size between ages 7 and 18 was also inversely associated with BC risk (0.90 (0.81-1.00)). No significant association was found between body size at age 7 and tumor characteristics. Body size at age 18 was inversely associated with tumor size (P trend = 0.006), but not with estrogen receptor status or lymph node involvement. For all analyses, the overall inferences did not change appreciably after further adjustment for adult body mass index. Our data provide further support for a strong and independent inverse relationship between early life body size and BC risk. The association between body size at age 18 and tumor size could be mediated by mammographic density.

  9. Inventory implications of using sampling variances in estimation of growth model coefficients

    Treesearch

    Albert R. Stage; William R. Wykoff

    2000-01-01

    Variables based on stand densities or stocking have sampling errors that depend on the relation of tree size to plot size and on the spatial structure of the population. Ignoring the sampling errors of such variables, which include most measures of competition used in both distance-dependent and distance-independent growth models, can bias the predictions obtained from...

  10. Occupation and thyroid cancer: a population-based case-control study in Connecticut

    PubMed Central

    Ba, Yue; Huang, Huang; Lerro, Catherine C.; Li, Shuzhen; Zhao, Nan; Li, Anqi; Ma, Shuangge; Udelsman, Robert; Zhang, Yawei

    2016-01-01

    Objective The study aims to explore the associations between various occupations and thyroid cancer risk. Methods A population-based case-control study involving 462 histologically confirmed incident cases and 498 controls was conducted in Connecticut in 2010–2011. Results A significantly increased risk of thyroid cancer, particularly papillary microcarcinoma, was observed for those working as healthcare practitioners and technical workers, health diagnosing and treating practitioners, and registered nurses. Those working in building and grounds cleaning and maintenance occupations, pest control, retail sales, and customer service also had an increased risk of papillary thyroid cancer. Subjects who worked as cooks, janitors, cleaners, and customer service representatives were at an increased risk of papillary thyroid cancer with tumor size >1 cm. Conclusions Certain occupations were associated with an increased risk of thyroid cancer, with some tumor size and subtype specificity. PMID:26949881

  11. Mindfulness-based stress reduction for treating chronic headache: A systematic review and meta-analysis.

    PubMed

    Anheyer, Dennis; Leach, Matthew J; Klose, Petra; Dobos, Gustav; Cramer, Holger

    2018-01-01

    Background Mindfulness-based stress reduction/cognitive therapy are frequently used for pain-related conditions, but their effects on headache remain uncertain. This review aimed to assess the efficacy and safety of mindfulness-based stress reduction/cognitive therapy in reducing the symptoms of chronic headache. Data sources and study selection MEDLINE/PubMed, Scopus, CENTRAL, and PsycINFO were searched to 16 June 2017. Randomized controlled trials comparing mindfulness-based stress reduction/cognitive therapy with usual care or active comparators for migraine and/or tension-type headache, which assessed headache frequency, duration or intensity as a primary outcome, were eligible for inclusion. Risk of bias was assessed using the Cochrane tool. Results Five randomized controlled trials (two on tension-type headache; one on migraine; two with mixed samples) with a total of 185 participants were included. Compared to usual care, mindfulness-based stress reduction/cognitive therapy did not improve headache frequency (three randomized controlled trials; standardized mean difference = 0.00; 95% confidence interval = -0.33, 0.32) or headache duration (three randomized controlled trials; standardized mean difference = -0.08; 95% confidence interval = -1.03, 0.87). Similarly, no significant difference between groups was found for pain intensity (five randomized controlled trials; standardized mean difference = -0.78; 95% confidence interval = -1.72, 0.16). Conclusions Due to the small number, small scale, and often high or unclear risk of bias of the included randomized controlled trials, the results are imprecise; this may be consistent with either an important or a negligible effect. Therefore, more rigorous trials with larger sample sizes are needed.

  12. Modeling the transport of engineered nanoparticles in saturated porous media - an experimental setup

    NASA Astrophysics Data System (ADS)

    Braun, A.; Neukum, C.; Azzam, R.

    2011-12-01

    The accelerating production and application of engineered nanoparticles is causing concern about their release and fate in the environment. For assessing the risk posed to drinking water resources, it is important to understand the transport and retention mechanisms of engineered nanoparticles in soil and groundwater. In this study, an experimental setup for analyzing the mobility of silver and titanium dioxide nanoparticles in saturated porous media is presented. Batch and column experiments with glass beads and two different soils as matrices are carried out under varied conditions to study the impact of electrolyte concentration and pore water velocity. The analysis of nanoparticles poses several challenges, such as detection and characterization and the preparation of a well-dispersed sample with defined properties, as nanoparticles tend to form agglomerates when suspended in an aqueous medium. The analytical part of the experiments relies mainly on Flow Field-Flow Fractionation (FlFFF). This chromatography-like technique separates a particulate sample according to size and is coupled to a UV/Vis and a light-scattering detector for analyzing the concentration and size distribution of the sample. The advantages of this technique are its gentle sample treatment and its ability to analyze complex environmental samples, such as the effluent of column experiments including soil components. To optimize sample preparation and obtain a first picture of aggregation behavior in soil solutions, sedimentation experiments were used to investigate the effect of ionic strength, sample concentration and surfactant addition on particle or aggregate size and temporal dispersion stability. In general, the samples are more stable the lower the particle concentration. For TiO2 nanoparticles, the addition of a surfactant yielded the most stable samples with the smallest aggregate sizes. Furthermore, the suspension stability increases with electrolyte concentration. Depending on the dispersing medium, the results show that TiO2 nanoparticles tend to form aggregates of 100-200 nm in diameter, while the primary particle size is given as 21 nm by the manufacturer. Aggregate sizes increase with time. The particle size distribution of the silver nanoparticle samples is quite uniform in each medium. The fresh samples show aggregate sizes between 40 and 45 nm, while the primary particle size is 15 nm according to the manufacturer. Aggregate size increases only slightly with time during the sedimentation experiments. These results are used as a reference when analyzing the effluent of column experiments.

  13. Detectable signals of episodic risk effects on acute HIV transmission: Strategies for analyzing transmission systems using genetic data

    PubMed Central

    Alam, Shah Jamal; Zhang, Xinyu; Romero-Severson, Ethan Obie; Henry, Christopher; Zhong, Lin; Volz, Erik M.; Brenner, Bluma G.; Koopman, James S.

    2013-01-01

    Episodic high-risk sexual behavior is common and can have a profound effect on HIV transmission. In a model of HIV transmission among men who have sex with men (MSM), changing the frequency, duration and contact rates of high-risk episodes can take endemic prevalence from zero to 50% and more than double transmissions during acute HIV infection (AHI). Undirected test and treat could be inefficient in the presence of strong episodic risk effects. Partner services approaches that use a variety of control options will be likely to have better effects under these conditions, but the question remains: What data will reveal if a population is experiencing episodic risk effects? HIV sequence data from Montreal reveals genetic clusters whose size distribution stabilizes over time and reflects the size distribution of acute infection outbreaks (AIOs). Surveillance provides complementary behavioral data. In order to use both types of data efficiently, it is essential to examine aspects of models that affect both the episodic risk effects and the shape of transmission trees. As a demonstration, we use a deterministic compartmental model of episodic risk to explore the determinants of the fraction of transmissions during acute HIV infection (AHI) at the endemic equilibrium. We use a corresponding individual-based model to observe AIO size distributions and patterns of transmission within AIO. Episodic risk parameters determining whether AHI transmission trees had longer chains, more clustered transmissions from single individuals, or different mixes of these were explored. Encouragingly for parameter estimation, AIO size distributions reflected the frequency of transmissions from acute infection across divergent parameter sets. Our results show that episodic risk dynamics influence both the size and duration of acute infection outbreaks, thus providing a possible link between genetic cluster size distributions and episodic risk dynamics. PMID:23438430

  14. Who is at risk for long-term sickness absence? A prospective cohort study of Danish employees.

    PubMed

    Lund, Thomas; Labriola, Merete; Villadsen, Ebbe

    2007-01-01

    The aim of this study was to identify who is at risk for long-term sickness absence according to occupation, gender, education, age, business sector, agency size and ownership. The study is based on a sample of 5357 employees aged 18-69, interviewed in 2000. The cohort was followed up in a national register from January 1st 2001 to June 30th 2003, to identify cases with sickness absences that exceeded 8 weeks. During follow-up 486 persons (9.1%) experienced one or more periods of absence that exceeded 8 weeks. Higher risk of long-term sickness absence was associated with gender, age, educational level, and the municipal employment sector. Kindergarten teachers and people employed in day care, health care, janitorial work, food preparation, and unskilled workers were at greatest risk. Managers, computer professionals, technicians and designers, and professionals had lower risks. The health care and social service sectors were also in the high risk category, whereas the private administration sector had a lower risk. The study identifies specific occupational target populations and documents the need to perform job-specific research and tailor interventions if the intended policy of decreasing long-term sickness absence within the Danish labour market is to be realized.

  15. Sampling design for the Study of Cardiovascular Risks in Adolescents (ERICA).

    PubMed

    Vasconcellos, Mauricio Teixeira Leite de; Silva, Pedro Luis do Nascimento; Szklo, Moyses; Kuschnir, Maria Cristina Caetano; Klein, Carlos Henrique; Abreu, Gabriela de Azevedo; Barufaldi, Laura Augusta; Bloch, Katia Vergetti

    2015-05-01

    The Study of Cardiovascular Risk in Adolescents (ERICA) aims to estimate the prevalence of cardiovascular risk factors and metabolic syndrome in adolescents (12-17 years) enrolled in public and private schools of the 273 municipalities with over 100,000 inhabitants in Brazil. The study population was stratified into 32 geographical strata (27 capitals and five sets with other municipalities in each macro-region of the country) and a sample of 1,251 schools was selected with probability proportional to size. In each school three combinations of shift (morning and afternoon) and grade were selected, and within each of these combinations, one class was selected. All eligible students in the selected classes were included in the study. The design sampling weights were calculated by the product of the reciprocals of the inclusion probabilities in each sampling stage, and were later calibrated considering the projections of the numbers of adolescents enrolled in schools located in the geographical strata by sex and age.

  16. Novel joint selection methods can reduce sample size for rheumatoid arthritis clinical trials with ultrasound endpoints.

    PubMed

    Allen, John C; Thumboo, Julian; Lye, Weng Kit; Conaghan, Philip G; Chew, Li-Ching; Tan, York Kiat

    2018-03-01

    To determine whether novel methods of selecting joints through (i) ultrasonography (individualized-ultrasound [IUS] method), or (ii) ultrasonography and clinical examination (individualized-composite-ultrasound [ICUS] method) translate into smaller rheumatoid arthritis (RA) clinical trial sample sizes when compared to existing methods utilizing predetermined joint sites for ultrasonography. Cohen's effect size (ES) was estimated (ÊS) and a 95% CI (ÊS_L, ÊS_U) calculated on a mean change in 3-month total inflammatory score for each method. Corresponding 95% CIs [n_L(ÊS_U), n_U(ÊS_L)] were obtained on a post hoc sample size reflecting the uncertainty in ÊS. Sample size calculations were based on a one-sample t-test as the patient numbers needed to provide 80% power at α = 0.05 to reject a null hypothesis H0: ES = 0 versus alternative hypotheses H1: ES = ÊS, ES = ÊS_L and ES = ÊS_U. We aimed to provide point and interval estimates on projected sample sizes for future studies reflecting the uncertainty in our study ÊS. Twenty-four treated RA patients were followed up for 3 months. Utilizing the 12-joint approach and existing methods, the post hoc sample size (95% CI) was 22 (10-245). Corresponding sample sizes using ICUS and IUS were 11 (7-40) and 11 (6-38), respectively. Utilizing a seven-joint approach, the corresponding sample sizes using ICUS and IUS methods were nine (6-24) and 11 (6-35), respectively. Our pilot study suggests that sample size for RA clinical trials with ultrasound endpoints may be reduced using the novel methods, providing justification for larger studies to confirm these observations. © 2017 Asia Pacific League of Associations for Rheumatology and John Wiley & Sons Australia, Ltd.
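
    The post hoc calculation above — mapping a point estimate and CI for ES onto a point estimate and interval for the required n — can be approximated with a normal z-test in place of the paper's one-sample t-test (the t-based n runs a subject or two higher). The ES values below are made-up placeholders, not the study's estimates.

```python
import math
from statistics import NormalDist

def n_for_effect(es, alpha=0.05, power=0.80):
    """Normal-approximation n for a two-sided one-sample test of
    H0: ES = 0 versus H1: ES = es at the stated power."""
    z = NormalDist().inv_cdf
    return math.ceil(((z(1 - alpha / 2) + z(power)) / es) ** 2)

# A point estimate with a wide CI maps to a wide interval on n.
es_hat, es_lo, es_hi = 0.62, 0.20, 1.05   # hypothetical values
print(n_for_effect(es_hat), (n_for_effect(es_hi), n_for_effect(es_lo)))
```

    Because n scales with 1/ES², even modest uncertainty in the effect size produces a very wide interval on the projected sample size, which is the pattern seen in the study's 22 (10-245) result.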

  17. [Prevalence of high blood pressure in children and adolescents from the city of Maceió, Brazil].

    PubMed

    Moura, Adriana A; Silva, Maria A M; Ferraz, Maria R M T; Rivera, Ivan R

    2004-01-01

    To define the prevalence of high blood pressure in a representative sample of children and adolescents from the city of Maceió, state of Alagoas, Brazil, and to investigate the association of high blood pressure with age, sex and nutritional status. This cross-sectional study was carried out from May 2000 to September 2002. Individuals between 7 and 17 years of age were selected among all the 185,702 students from public and private schools. The size of the sample was defined based on the expected prevalence of hypertension for the age group. After randomization, data were collected through a questionnaire. Blood pressure was measured twice. Weight and height were also measured. High blood pressure was defined as systolic and/or diastolic blood pressure over the 95th percentile in one or in both measures. The final sample included 1,253 students (706 females). One hundred and eighteen students had high blood pressure (mean age 13 years; 44% males). Risk of being overweight and excess weight were identified, respectively, in 9.3 and 4.5% of the students. These variables were significantly associated with high blood pressure. The prevalence of high blood pressure was 9.4%. High blood pressure was significantly more frequent among overweight students and among those at risk for being overweight.

  18. Arsenic, Lead, and Cadmium in U.S. Mushrooms and Substrate in Relation to Dietary Exposure.

    PubMed

    Seyfferth, Angelia L; McClatchy, Colleen; Paukett, Michelle

    2016-09-06

    Wild mushrooms can absorb high quantities of metal(loid)s, yet the concentration, speciation, and localization of As, Pb, and Cd in cultivated mushrooms, particularly in the United States, are unresolved. We collected 40 samples of 12 types of raw mushrooms from 2 geographic locations that produce the majority of marketable U.S. mushrooms and analyzed the total As, Pb, and Cd content, the speciation and localization of As in select samples, and assessed the metal sources and substrate-to-fruit transfer at one representative farm. Cremini mushrooms contained significantly higher total As concentrations than Shiitake and localized the As differently; while As in Cremini was distributed throughout the fruiting body, it was localized to the hymenophore region in Shiitake. Cd was significantly higher in Royal Trumpet than in White Button, Cremini, and Portobello, while no difference was observed in Pb levels among the mushrooms. Concentrations of As, Pb, and Cd were less than 1 μg g(-1) d.w. in all mushroom samples, and the overall risk of As, Cd, and Pb intake from mushroom consumption is low in the U.S. However, higher percentages of tolerable intake levels are observed when calculating risk based on single serving-sizes or when substrate contains elevated levels of metal(loid)s.

  19. Orphan therapies: making best use of postmarket data.

    PubMed

    Maro, Judith C; Brown, Jeffrey S; Dal Pan, Gerald J; Li, Lingling

    2014-08-01

    Postmarket surveillance of the comparative safety and efficacy of orphan therapeutics is challenging, particularly when multiple therapeutics are licensed for the same orphan indication. To make best use of product-specific registry data collected to fulfill regulatory requirements, we propose the creation of a distributed electronic health data network among registries. Such a network could support sequential statistical analyses designed to detect early warnings of excess risks. We use a simulated example to explore the circumstances under which a distributed network may prove advantageous. We perform sample size calculations for sequential and non-sequential statistical studies aimed at comparing the incidence of hepatotoxicity following initiation of two newly licensed therapies for homozygous familial hypercholesterolemia. We calculate the sample size savings ratio, or the proportion of sample size saved if one conducted a sequential study as compared to a non-sequential study. Then, using models to describe the adoption and utilization of these therapies, we simulate when these sample sizes are attainable in calendar years. We then calculate the analytic calendar time savings ratio, analogous to the sample size savings ratio. We repeat these analyses for numerous scenarios. Sequential analyses detect effect sizes earlier or at the same time as non-sequential analyses. The most substantial potential savings occur when the market share is more imbalanced (i.e., 90% for therapy A) and the effect size is closest to the null hypothesis. However, due to low exposure prevalence, these savings are difficult to realize within the 30-year time frame of this simulation for scenarios in which the outcome of interest occurs at or more frequently than one event/100 person-years. 
We illustrate a process to assess whether sequential statistical analyses of registry data performed via distributed networks may prove a worthwhile infrastructure investment for pharmacovigilance.

  20. Ecomorphological selectivity among marine teleost fishes during the end-Cretaceous extinction

    PubMed Central

    Friedman, Matt

    2009-01-01

    Despite the attention focused on mass extinction events in the fossil record, patterns of extinction in the dominant group of marine vertebrates—fishes—remain largely unexplored. Here, I demonstrate ecomorphological selectivity among marine teleost fishes during the end-Cretaceous extinction, based on a genus-level dataset that accounts for lineages predicted on the basis of phylogeny but not yet sampled in the fossil record. Two ecologically relevant anatomical features are considered: body size and jaw-closing lever ratio. Extinction intensity is higher for taxa with large body sizes and jaws consistent with speed (rather than force) transmission; resampling tests indicate that victims represent a nonrandom subset of taxa present in the final stage of the Cretaceous. Logistic regressions of the raw data reveal that this nonrandom distribution stems primarily from the larger body sizes of victims relative to survivors. Jaw mechanics are also a significant factor for most dataset partitions but are always less important than body size. When data are corrected for phylogenetic nonindependence, jaw mechanics show a significant correlation with extinction risk, but body size does not. Many modern large-bodied, predatory taxa currently suffering from overexploitation, such as billfishes and tunas, first occur in the Paleocene, when they appear to have filled the functional space vacated by some extinction victims. PMID:19276106

  1. Ecomorphological selectivity among marine teleost fishes during the end-Cretaceous extinction.

    PubMed

    Friedman, Matt

    2009-03-31

    Despite the attention focused on mass extinction events in the fossil record, patterns of extinction in the dominant group of marine vertebrates-fishes-remain largely unexplored. Here, I demonstrate ecomorphological selectivity among marine teleost fishes during the end-Cretaceous extinction, based on a genus-level dataset that accounts for lineages predicted on the basis of phylogeny but not yet sampled in the fossil record. Two ecologically relevant anatomical features are considered: body size and jaw-closing lever ratio. Extinction intensity is higher for taxa with large body sizes and jaws consistent with speed (rather than force) transmission; resampling tests indicate that victims represent a nonrandom subset of taxa present in the final stage of the Cretaceous. Logistic regressions of the raw data reveal that this nonrandom distribution stems primarily from the larger body sizes of victims relative to survivors. Jaw mechanics are also a significant factor for most dataset partitions but are always less important than body size. When data are corrected for phylogenetic nonindependence, jaw mechanics show a significant correlation with extinction risk, but body size does not. Many modern large-bodied, predatory taxa currently suffering from overexploitation, such as billfishes and tunas, first occur in the Paleocene, when they appear to have filled the functional space vacated by some extinction victims.

  2. Relative efficiency and sample size for cluster randomized trials with variable cluster sizes.

    PubMed

    You, Zhiying; Williams, O Dale; Aban, Inmaculada; Kabagambe, Edmond Kato; Tiwari, Hemant K; Cutter, Gary

    2011-02-01

    The statistical power of cluster randomized trials depends on two sample size components, the number of clusters per group and the numbers of individuals within clusters (cluster size). Variable cluster sizes are common and this variation alone may have significant impact on study power. Previous approaches have taken this into account by either adjusting total sample size using a designated design effect or adjusting the number of clusters according to an assessment of the relative efficiency of unequal versus equal cluster sizes. This article defines a relative efficiency of unequal versus equal cluster sizes using noncentrality parameters, investigates properties of this measure, and proposes an approach for adjusting the required sample size accordingly. We focus on comparing two groups with normally distributed outcomes using t-test, and use the noncentrality parameter to define the relative efficiency of unequal versus equal cluster sizes and show that statistical power depends only on this parameter for a given number of clusters. We calculate the sample size required for an unequal cluster sizes trial to have the same power as one with equal cluster sizes. Relative efficiency based on the noncentrality parameter is straightforward to calculate and easy to interpret. It connects the required mean cluster size directly to the required sample size with equal cluster sizes. Consequently, our approach first determines the sample size requirements with equal cluster sizes for a pre-specified study power and then calculates the required mean cluster size while keeping the number of clusters unchanged. Our approach allows adjustment in mean cluster size alone or simultaneous adjustment in mean cluster size and number of clusters, and is a flexible alternative to and a useful complement to existing methods. Comparison indicated that we have defined a relative efficiency that is greater than the relative efficiency in the literature under some conditions. 
Under other conditions, our measure of relative efficiency might be less than the measure in the literature, underestimating the relative efficiency. The relative efficiency of unequal versus equal cluster sizes defined using the noncentrality parameter suggests a sample size approach that is a flexible alternative to and a useful complement to existing methods.
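
    The final step the abstract describes — keeping the number of clusters fixed and solving for the mean cluster size that preserves power — can be sketched with the standard design-effect identity (a simplification of the paper's noncentrality-parameter treatment): requiring k·m / (1 + (m−1)ρ) = n_equal and solving for m gives m = n_equal(1−ρ)/(k − n_equal·ρ).

```python
def cluster_size_for_power(n_effective, k_clusters, icc):
    """Mean cluster size m so that k clusters of size m match the effective
    sample size of n_effective independent observations:
    k*m / (1 + (m-1)*icc) = n_effective, solved for m."""
    denom = k_clusters - n_effective * icc
    if denom <= 0:
        raise ValueError("too few clusters: no finite cluster size suffices")
    return n_effective * (1 - icc) / denom

# 20 clusters, ICC = 0.05, targeting the power of 100 independent subjects
m = cluster_size_for_power(100, 20, 0.05)
print(round(m, 2))  # 6.33
```

    The guard clause reflects a real design constraint: when k ≤ n_effective·ρ, no cluster size, however large, can reach the target effective sample size.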

  3. Survey of Salmonella contamination in chicken layer farms in three Caribbean countries.

    PubMed

    Adesiyun, Abiodun; Webb, Lloyd; Musai, Lisa; Louison, Bowen; Joseph, George; Stewart-Johnson, Alva; Samlal, Sannandan; Rodrigo, Shelly

    2014-09-01

    This study was conducted to investigate the demography, management, and production practices on layer chicken farms in Trinidad and Tobago, Grenada, and St. Lucia and the frequency of risk factors for Salmonella infection. The frequency of isolation of Salmonella from the layer farm environment, eggs, feeds, hatchery, and imported day-old chicks was determined using standard methods. Of the eight risk factors (farm size, age group of layers, source of day-old chicks, vaccination, sanitation practices, biosecurity measures, presence of pests, and previous disease outbreaks) for Salmonella infection investigated, farm size was the only risk factor significantly associated (P = 0.031) with the prevalence of Salmonella; 77.8% of large farms were positive for this pathogen compared with 33.3 and 26.1% of medium and small farms, respectively. The overall isolation rate of Salmonella from 35 layer farms was 40.0%. Salmonella was isolated at a significantly higher rate (P < 0.05) from farm environments than from the cloacae. Only in Trinidad and Tobago did feeds (6.5% of samples) and pooled egg contents (12.5% of samples) yield Salmonella; however, all egg samples from hotels, hatcheries, and airports in this country were negative. Salmonella Anatum, Salmonella group C, and Salmonella Kentucky were the predominant serotypes in Trinidad and Tobago, Grenada, and St. Lucia, respectively. Although Salmonella infections were found in layer birds sampled, table eggs appear to pose minimal risk to consumers. However, the detection of Salmonella-contaminated farm environments and feeds cannot be ignored. Only 2.9% of the isolates belonged to Salmonella Enteritidis, a finding that may reflect the impact of changes in farm management and poultry production in the region.

  4. Revisiting sample size: are big trials the answer?

    PubMed

    Lurati Buse, Giovanna A L; Botto, Fernando; Devereaux, P J

    2012-07-18

The superiority of the evidence generated in randomized controlled trials over observational data depends on more than randomization alone. Randomized controlled trials require proper design and implementation to provide a reliable effect estimate. Adequate random sequence generation, allocation concealment, analyses based on the intention-to-treat principle, and sufficient power are crucial to the quality of a randomized controlled trial. Power, the probability that a trial will detect a difference when a real difference between treatments exists, depends strongly on sample size. The quality of orthopaedic randomized controlled trials is frequently threatened by limited sample sizes. This paper reviews basic concepts and pitfalls in sample-size estimation and focuses on the importance of large trials in the generation of valid evidence.
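As a concrete illustration of the power–sample size relationship discussed above, the standard two-sided z-test calculation for comparing two proportions can be sketched as follows (illustrative values only; these numbers are not drawn from the paper):

```python
import math
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sided z-test comparing two proportions."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for the two-sided test
    z_beta = NormalDist().inv_cdf(power)           # quantile matching the target power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Detecting a drop in event rate from 10% to 5% at 80% power:
print(n_per_group(0.10, 0.05))  # 432 per group
```

Raising the target power from 80% to 90% pushes the requirement to 578 per group, which illustrates why adequately powered trials of modest effects quickly become large.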

  5. Genome-Wide Association Studies of a Broad Spectrum of Antisocial Behavior.

    PubMed

    Tielbeek, Jorim J; Johansson, Ada; Polderman, Tinca J C; Rautiainen, Marja-Riitta; Jansen, Philip; Taylor, Michelle; Tong, Xiaoran; Lu, Qing; Burt, Alexandra S; Tiemeier, Henning; Viding, Essi; Plomin, Robert; Martin, Nicholas G; Heath, Andrew C; Madden, Pamela A F; Montgomery, Grant; Beaver, Kevin M; Waldman, Irwin; Gelernter, Joel; Kranzler, Henry R; Farrer, Lindsay A; Perry, John R B; Munafò, Marcus; LoParo, Devon; Paunio, Tiina; Tiihonen, Jari; Mous, Sabine E; Pappa, Irene; de Leeuw, Christiaan; Watanabe, Kyoko; Hammerschlag, Anke R; Salvatore, Jessica E; Aliev, Fazil; Bigdeli, Tim B; Dick, Danielle; Faraone, Stephen V; Popma, Arne; Medland, Sarah E; Posthuma, Danielle

    2017-12-01

    Antisocial behavior (ASB) places a large burden on perpetrators, survivors, and society. Twin studies indicate that half of the variation in this trait is genetic. Specific causal genetic variants have, however, not been identified. To estimate the single-nucleotide polymorphism-based heritability of ASB; to identify novel genetic risk variants, genes, or biological pathways; to test for pleiotropic associations with other psychiatric traits; and to reevaluate the candidate gene era data through the Broad Antisocial Behavior Consortium. Genome-wide association data from 5 large population-based cohorts and 3 target samples with genome-wide genotype and ASB data were used for meta-analysis from March 1, 2014, to May 1, 2016. All data sets used quantitative phenotypes, except for the Finnish Crime Study, which applied a case-control design (370 patients and 5850 control individuals). This study adopted relatively broad inclusion criteria to achieve a quantitative measure of ASB derived from multiple measures, maximizing the sample size over different age ranges. The discovery samples comprised 16 400 individuals, whereas the target samples consisted of 9381 individuals (all individuals were of European descent), including child and adult samples (mean age range, 6.7-56.1 years). Three promising loci with sex-discordant associations were found (8535 female individuals, chromosome 1: rs2764450, chromosome 11: rs11215217; 7772 male individuals, chromosome X, rs41456347). Polygenic risk score analyses showed prognostication of antisocial phenotypes in an independent Finnish Crime Study (2536 male individuals and 3684 female individuals) and shared genetic origin with conduct problems in a population-based sample (394 male individuals and 431 female individuals) but not with conduct disorder in a substance-dependent sample (950 male individuals and 1386 female individuals) (R2 = 0.0017 in the most optimal model, P = 0.03). 
A significant inverse genetic correlation of ASB with educational attainment (r = -0.52, P = .005) was detected. The Broad Antisocial Behavior Consortium is the largest collaboration to date on the genetic architecture of ASB, and these first results suggest that ASB may be highly polygenic, with potentially heterogeneous genetic effects across the sexes.

  6. Development and evaluation of a water level proportional water sampler

    NASA Astrophysics Data System (ADS)

    Schneider, P.; Lange, A.; Doppler, T.

    2013-12-01

    We developed and adapted a new type of sampler for time-integrated, water level proportional water quality sampling (e.g. nutrients, contaminants and stable isotopes). Our samplers are designed for sampling small to mid-size streams based on the law of Hagen-Poiseuille, where a capillary (or a valve) limits the sampling aliquot by reducing the air flux out of a submersed plastic (HDPE) sampling container. They are good alternatives to battery-operated automated water samplers when working in remote areas, or at streams that are characterized by pronounced daily discharge variations such as glacier streams. We evaluated our samplers against standard automated water samplers (ISCO 2900 and ISCO 6712) during the snowmelt in the Black Forest and the Alps and tested them in remote glacial catchments in Iceland, Switzerland and Kyrgyzstan. The results clearly showed that our samplers are an adequate tool for time-integrated, water level proportional water sampling at remote test sites, as they do not need batteries, are relatively inexpensive, lightweight, and compact. They are well suited for headwater streams - especially when sampling for stable isotopes - as the sampled water is perfectly protected against evaporation. Moreover, our samplers have a reduced risk of icing in cold environments, as they are installed submersed in water, whereas automated samplers (typically installed outside the stream) may get clogged due to icing of hoses. Based on this study, we find these samplers to be an adequate replacement for automated samplers when time-integrated sampling or solute load estimates are the main monitoring tasks.

  7. Retention of Ancestral Genetic Variation Across Life-Stages of an Endangered, Long-Lived Iteroparous Fish.

    PubMed

    Carson, Evan W; Turner, Thomas F; Saltzgiver, Melody J; Adams, Deborah; Kesner, Brian R; Marsh, Paul C; Pilger, Tyler J; Dowling, Thomas E

    2016-11-01

As with many endangered, long-lived iteroparous fishes, survival of razorback sucker depends on a management strategy that circumvents recruitment failure resulting from predation by non-native fishes. In Lake Mohave, AZ-NV, management of razorback sucker centers on capture of larvae spawned in the lake, rearing them in off-channel habitats, and subsequent release ("repatriation") to the lake when adults are sufficiently large to resist predation. The effects of this strategy on genetic diversity, however, remained uncertain. After correction for differences in sample size among groups, metrics of mitochondrial DNA (mtDNA; number of haplotypes, N_H, and haplotype diversity, H_D) and microsatellite (number of alleles, N_A, and expected heterozygosity, H_E) diversity did not differ significantly between annual samples of repatriated adults and larval year-classes or among pooled samples of repatriated adults, larvae, and wild fish. These findings indicate that the current management program has thus far maintained historical genetic variation of razorback sucker in the lake. Because effective population size, N_e, is closely tied to the small census population size (N_c = ~1500-3000) of razorback sucker in Lake Mohave, this population will remain at genetic as well as demographic risk of extinction unless N_c is increased substantially. © The American Genetic Association 2016. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  8. Total and settling velocity-fractionated pollution potential of sewer sediments in Jiaxing, China.

    PubMed

    Zhou, Yongchao; Zhang, Ping; Zhang, Yiping; Li, Jin; Zhang, Tuqiao; Yu, Tingchao

    2017-10-01

Sewer sediments and their associated contaminants, released during wet-weather discharges, pose potential pollution risks to the environment. This paper characterizes sediments collected in Jiaxing, China. Size distribution and concentrations of volatile solids (VS) and four metals (Pb, Cu, Zn, Cr) were analyzed in sediment samples from seven land use categories. The samples were then graded into five fractions according to settling velocity using a custom-built settling velocity-grading device, and sediment mass and pollution load distributions across these fractions were assessed. The results show relatively high heavy metal loads in the sediments of separated storm drainage systems in Jiaxing, especially in the residential area (RA), road of developed area (RDA), and industrial area (IA) catchments. Although grain size increases with settling velocity, grading by settling velocity is meaningful for stormwater treatment facilities that rely on sedimentation. For all land use categories, pollutant concentrations in the three lowest settling-velocity fractions were relatively consistent and higher than in the other fractions. Combined with the mass distribution, the pollution percentage of each velocity fraction was also evaluated for the seven land use categories. On this basis, design target settling velocities corresponding to different pollution load removal rates are derived, which is helpful in guiding the design of on-site sedimentation separation facilities.

  9. Risk of tuberculosis in a large sample of patients with coeliac disease--a nationwide cohort study.

    PubMed

    Ludvigsson, J F; Sanders, D S; Maeurer, M; Jonsson, J; Grunewald, J; Wahlström, J

    2011-03-01

Research suggests a positive association between coeliac disease and tuberculosis (TB), but that research has often been limited by small sample sizes and restriction to in-patients. We therefore examined the association between TB and coeliac disease. We collected biopsy data from all pathology departments in Sweden (n=28) to identify individuals who were diagnosed with coeliac disease between 1969 and 2007 (Marsh 3: villous atrophy; n=29,026 unique individuals). Population-based sex- and age-matched controls were selected from the Total Population Register. Using Cox regression, we calculated hazard ratios (HRs) for TB from data in the Swedish national health registers. Individuals with coeliac disease were at increased risk of TB (HR=2.0; 95% CI=1.3-3.0) (during follow-up, 31 individuals with coeliac disease and 74 reference individuals had a diagnosis of TB). The absolute risk of TB in patients with coeliac disease was 10/100,000 person-years, with an excess risk of 5/100,000. Risk estimates were highest in the first year. Restricting our outcome to a diagnosis of TB confirmed by (I) a record of TB medication (HR=2.9; 95% CI=1.0-8.3), (II) data in the National Surveillance System for Infectious Diseases in Sweden (HR=2.6; 95% CI=1.3-5.2) or (III) positive TB cultivation (HR=3.3; 95% CI=1.6-6.8) increased the risk estimates. The positive association between coeliac disease and TB was also observed before the coeliac disease diagnosis (odds ratio=1.6; 95% CI=1.2-2.1). We found a moderately increased risk of tuberculosis in patients with coeliac disease. © 2011 Blackwell Publishing Ltd.

  10. 76 FR 28786 - Proposed Data Collections Submitted for Public Comment and Recommendations

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-18

    .... The sample size is based on recommendations related to qualitative interview methods and the research... than 10 employees (CPWR, 2007), and this establishment size experiences the highest fatality rate... out occupational safety and health training. This interview will be administered to a sample of...

  11. 76 FR 44590 - Agency Forms Undergoing Paperwork Reduction Act Review

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-26

    ... health training. This interview will be administered to a sample of approximately 30 owners of construction businesses with 10 or fewer employees from the Greater Cincinnati area. The sample size is based... size experiences the highest fatality rate within construction (U.S. Dept. of Labor, 2008). The need...

  12. Stoffenmanager exposure model: company-specific exposure assessments using a Bayesian methodology.

    PubMed

    van de Ven, Peter; Fransman, Wouter; Schinkel, Jody; Rubingh, Carina; Warren, Nicholas; Tielemans, Erik

    2010-04-01

The web-based tool "Stoffenmanager" was initially developed to assist small- and medium-sized enterprises in the Netherlands to make qualitative risk assessments and to provide advice on control at the workplace. The tool uses a mechanistic model to arrive at a "Stoffenmanager score" for exposure. In a recent study it was shown that variability in exposure measurements given a certain Stoffenmanager score is still substantial. This article discusses an extension to the tool that uses a Bayesian methodology for quantitative workplace/scenario-specific exposure assessment. This methodology allows real exposure data observed in the company of interest to be combined with the prior estimate (based on the Stoffenmanager model). The output of the tool is a company-specific assessment of exposure levels for a scenario for which data are available. The Bayesian approach provides a transparent way of synthesizing different types of information and is especially preferred in situations where available data are sparse, as is often the case in small- and medium-sized enterprises. Real-world examples as well as simulation studies were used to assess how different parameters such as sample size, difference between prior and data, uncertainty in prior, and variance in the data affect the eventual posterior distribution of a Bayesian exposure assessment.
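The kind of prior-plus-data synthesis described above can be sketched as a conjugate normal-normal update on the log scale, where a model-based prior for the log geometric-mean exposure is combined with a company's measurements. This is a minimal sketch with hypothetical numbers and an assumed known measurement SD; the actual Stoffenmanager extension is richer than this:

```python
import math

def posterior_log_exposure(prior_mean, prior_sd, log_measurements, data_sd):
    """Conjugate normal-normal update for the log-scale mean exposure.
    prior_mean/prior_sd: prior on the log-scale mean (e.g. model-derived);
    log_measurements: log-transformed exposure measurements from the company;
    data_sd: assumed known log-scale SD of the measurements."""
    n = len(log_measurements)
    prior_prec = 1.0 / prior_sd ** 2            # precision of the prior
    data_prec = n / data_sd ** 2                # precision contributed by the data
    post_prec = prior_prec + data_prec
    post_mean = (prior_prec * prior_mean
                 + data_prec * (sum(log_measurements) / n)) / post_prec
    return post_mean, 1.0 / math.sqrt(post_prec)

# With sparse data (n=3), the posterior sits between prior (0.0) and data mean (1.0):
m, s = posterior_log_exposure(0.0, 1.0, [1.0, 1.0, 1.0], 1.0)
print(m, s)  # 0.75 0.5
```

As the sample size grows, the data precision dominates and the posterior mean moves toward the observed mean, which is the behavior the simulation studies in the article probe.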

  13. B cells gone rogue: the intersection of diffuse large B cell lymphoma and autoimmune disease.

    PubMed

    Koff, Jean L; Flowers, Christopher R

    2016-06-01

    Diffuse large B cell lymphoma (DLBCL) is characterized by genetic, genomic and clinical heterogeneity. Autoimmune diseases (AIDs) have recently been shown to represent significant risk factors for development of DLBCL. Studies that examined the relationships between AIDs and lymphoma in terms of pathogenesis, genetic lesions, and treatment were identified in the MEDLINE database using combinations of medical subject heading (MeSH) terms. Co-authors independently performed study selection for inclusion based on appropriateness of the study question and nature of the study design and sample size. Expert commentary: Identification of AID as a substantial risk factor for DLBCL raises new questions regarding how autoimmunity influences lymphomagenesis and disease behavior. It will be important to identify whether DLBCL cases arising in the setting of AID harbor inferior prognoses, and, if so, whether they also exhibit certain molecular abnormalities that may be targeted to overcome such a gap in clinical outcomes.

  14. GRADE guidelines: 5. Rating the quality of evidence--publication bias.

    PubMed

    Guyatt, Gordon H; Oxman, Andrew D; Montori, Victor; Vist, Gunn; Kunz, Regina; Brozek, Jan; Alonso-Coello, Pablo; Djulbegovic, Ben; Atkins, David; Falck-Ytter, Yngve; Williams, John W; Meerpohl, Joerg; Norris, Susan L; Akl, Elie A; Schünemann, Holger J

    2011-12-01

    In the GRADE approach, randomized trials start as high-quality evidence and observational studies as low-quality evidence, but both can be rated down if a body of evidence is associated with a high risk of publication bias. Even when individual studies included in best-evidence summaries have a low risk of bias, publication bias can result in substantial overestimates of effect. Authors should suspect publication bias when available evidence comes from a number of small studies, most of which have been commercially funded. A number of approaches based on examination of the pattern of data are available to help assess publication bias. The most popular of these is the funnel plot; all, however, have substantial limitations. Publication bias is likely frequent, and caution in the face of early results, particularly with small sample size and number of events, is warranted. Copyright © 2011 Elsevier Inc. All rights reserved.

  15. Estimation and applications of size-based distributions in forestry

    Treesearch

    Jeffrey H. Gove

    2003-01-01

Size-based distributions arise in several contexts in forestry and ecology. Simple power relationships (e.g., basal area and diameter at breast height) between variables are one such area of interest arising from a modeling perspective. Another, probability proportional to size sampling (PPS), is found in the most widely used methods for sampling standing or dead and...

  16. Power/Sample Size Calculations for Assessing Correlates of Risk in Clinical Efficacy Trials

    PubMed Central

    Gilbert, Peter B.; Janes, Holly E.; Huang, Yunda

    2016-01-01

In a randomized controlled clinical trial that assesses treatment efficacy, a common objective is to assess the association of a measured biomarker response endpoint with the primary study endpoint in the active treatment group, using a case-cohort, case-control, or two-phase sampling design. Methods for power and sample size calculations for such biomarker association analyses typically do not account for the level of treatment efficacy, precluding interpretation of the biomarker association results in terms of biomarker effect modification of treatment efficacy, with the detriment that the power calculations may tacitly and inadvertently assume that the treatment harms some study participants. We develop power and sample size methods accounting for this issue, and the methods also account for inter-individual variability of the biomarker that is not biologically relevant (e.g., due to technical measurement error). We focus on a binary study endpoint and on a biomarker subject to measurement error that is normally distributed or categorical with two or three levels. We illustrate the methods with preventive HIV vaccine efficacy trials, and include an R package implementing the methods. PMID:27037797
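The measurement-error issue can be illustrated with a simple Hsieh-type approximation for logistic regression: classical error attenuates the detectable slope by the reliability ratio and thereby inflates the required sample size. This is an illustrative sketch only, not the authors' method (their R package implements the full approach):

```python
import math
from statistics import NormalDist

def n_logistic_slope(beta, p_event, sigma_true, sigma_err, alpha=0.05, power=0.90):
    """Approximate n to detect a log-odds slope `beta` per SD of the true
    biomarker when it is measured with classical error of SD `sigma_err`
    (Hsieh-style formula for a standardized covariate; illustrative only)."""
    reliability = sigma_true ** 2 / (sigma_true ** 2 + sigma_err ** 2)
    beta_observed = beta * reliability          # attenuated, observable slope
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    return math.ceil(z ** 2 / (p_event * (1 - p_event) * beta_observed ** 2))

# Measurement error (sigma_err = 0.5 vs 0) inflates the required sample size:
print(n_logistic_slope(0.5, 0.2, 1.0, 0.0))   # 263
print(n_logistic_slope(0.5, 0.2, 1.0, 0.5))   # 411
```

With a reliability of 0.8 the required n grows by a factor of 1/0.8² ≈ 1.56, which is why ignoring technically induced biomarker variability leads to underpowered association analyses.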

  17. Brain Tumor Epidemiology: Consensus from the Brain Tumor Epidemiology Consortium (BTEC)

    PubMed Central

    Bondy, Melissa L.; Scheurer, Michael E.; Malmer, Beatrice; Barnholtz-Sloan, Jill S.; Davis, Faith G.; Il’yasova, Dora; Kruchko, Carol; McCarthy, Bridget J.; Rajaraman, Preetha; Schwartzbaum, Judith A.; Sadetzki, Siegal; Schlehofer, Brigitte; Tihan, Tarik; Wiemels, Joseph L.; Wrensch, Margaret; Buffler, Patricia A.

    2010-01-01

Epidemiologists in the Brain Tumor Epidemiology Consortium (BTEC) have prioritized areas for further research. Although many risk factors have been examined over the past several decades, there are few consistent findings, possibly due to small sample sizes in individual studies and differences between studies in subjects, tumor types, and methods of classification. Individual studies have generally lacked sufficient sample size to examine interactions. A major priority based on available evidence and technologies includes expanding research in genetics and molecular epidemiology of brain tumors. BTEC has taken an active role in promoting understudied areas such as pediatric brain tumors, the etiology of rare glioma subtypes such as oligodendroglioma, and meningioma, which, though not uncommon, has only recently been systematically registered in the US. There is also a pressing need to bring more researchers, especially junior investigators, to study brain tumor epidemiology. However, relatively poor funding for brain tumor research has made it difficult to encourage careers in this area. We review the group’s consensus on the current state of scientific findings and present a consensus on research priorities to identify the important areas the science should move to address. PMID:18798534

  18. Diet- and Body Size-Related Attitudes and Behaviors Associated with Vitamin Supplement Use in a Representative Sample of Fourth-Grade Students in Texas

    ERIC Educational Resources Information Center

    George, Goldy C.; Hoelscher, Deanna M.; Nicklas, Theresa A.; Kelder, Steven H.

    2009-01-01

    Objective: To examine diet- and body size-related attitudes and behaviors associated with supplement use in a representative sample of fourth-grade students in Texas. Design: Cross-sectional data from the School Physical Activity and Nutrition study, a probability-based sample of schoolchildren. Children completed a questionnaire that assessed…

  19. Effect of sample area and sieve size on benthic macrofaunal community condition assessments in California enclosed bays and estuaries.

    PubMed

    Hammerstrom, Kamille K; Ranasinghe, J Ananda; Weisberg, Stephen B; Oliver, John S; Fairey, W Russell; Slattery, Peter N; Oakden, James M

    2012-10-01

    Benthic macrofauna are used extensively for environmental assessment, but the area sampled and sieve sizes used to capture animals often differ among studies. Here, we sampled 80 sites using 3 different sized sampling areas (0.1, 0.05, 0.0071 m(2)) and sieved those sediments through each of 2 screen sizes (0.5, 1 mm) to evaluate their effect on number of individuals, number of species, dominance, nonmetric multidimensional scaling (MDS) ordination, and benthic community condition indices that are used to assess sediment quality in California. Sample area had little effect on abundance but substantially affected numbers of species, which are not easily scaled to a standard area. Sieve size had a substantial effect on both measures, with the 1-mm screen capturing only 74% of the species and 68% of the individuals collected in the 0.5-mm screen. These differences, though, had little effect on the ability to differentiate samples along gradients in ordination space. Benthic indices generally ranked sample condition in the same order regardless of gear, although the absolute scoring of condition was affected by gear type. The largest differences in condition assessment were observed for the 0.0071-m(2) gear. Benthic indices based on numbers of species were more affected than those based on relative abundance, primarily because we were unable to scale species number to a common area as we did for abundance. Copyright © 2010 SETAC.
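A side note on why abundance scales to a common area but species counts do not: the expected number of species in a subsample grows nonlinearly with the number of individuals examined. One standard way to make species counts comparable across sample sizes is Hurlbert rarefaction, sketched below (a generic illustration, not a method used in this study):

```python
from math import comb

def rarefied_species(counts, n):
    """Expected number of species in a random subsample of n individuals drawn
    without replacement from a community with the given per-species counts
    (Hurlbert rarefaction). Each term is the probability that species i
    appears at least once in the subsample."""
    N = sum(counts)
    return sum(1 - comb(N - Ni, n) / comb(N, n) for Ni in counts)

# Subsampling 5 of 30 individuals recovers fewer than the full 3 species on average:
print(rarefied_species([10, 10, 10], 30))  # 3.0 (the whole sample)
```

Because this expectation is not proportional to n, dividing a species count by sampled area (as one can do with abundance) does not yield a comparable per-area figure, which matches the authors' observation.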

  20. Sample size requirements for the design of reliability studies: precision consideration.

    PubMed

    Shieh, Gwowen

    2014-09-01

    In multilevel modeling, the intraclass correlation coefficient based on the one-way random-effects model is routinely employed to measure the reliability or degree of resemblance among group members. To facilitate the advocated practice of reporting confidence intervals in future reliability studies, this article presents exact sample size procedures for precise interval estimation of the intraclass correlation coefficient under various allocation and cost structures. Although the suggested approaches do not admit explicit sample size formulas and require special algorithms for carrying out iterative computations, they are more accurate than the closed-form formulas constructed from large-sample approximations with respect to the expected width and assurance probability criteria. This investigation notes the deficiency of existing methods and expands the sample size methodology for the design of reliability studies that have not previously been discussed in the literature.
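For reference, the exact F-based confidence interval for the one-way random-effects ICC (the quantity whose interval width the sample-size procedures control) can be sketched as follows. This is the classical Searle-type interval for a balanced design, shown with scipy rather than the article's own algorithms:

```python
from scipy.stats import f as f_dist

def icc1_ci(msb, msw, n_groups, k, alpha=0.05):
    """Exact CI for the intraclass correlation under the one-way random-effects
    model, given between-group (msb) and within-group (msw) mean squares from
    a balanced ANOVA with n_groups groups of k members each."""
    F = msb / msw
    df1, df2 = n_groups - 1, n_groups * (k - 1)
    fl = F / f_dist.ppf(1 - alpha / 2, df1, df2)   # lower pivot
    fu = F * f_dist.ppf(1 - alpha / 2, df2, df1)   # upper pivot
    return (fl - 1) / (fl + k - 1), (fu - 1) / (fu + k - 1)

# Point estimate (F - 1) / (F + k - 1) = 0.5 here; the interval brackets it:
lo, hi = icc1_ci(msb=10.0, msw=2.0, n_groups=30, k=4)
```

Holding the mean squares fixed, increasing `n_groups` narrows the interval, which is exactly the expected-width behavior that the article's sample-size procedures are designed to guarantee in advance.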

  1. Field substitution of nonresponders can maintain sample size and structure without altering survey estimates-the experience of the Italian behavioral risk factors surveillance system (PASSI).

    PubMed

    Baldissera, Sandro; Ferrante, Gianluigi; Quarchioni, Elisa; Minardi, Valentina; Possenti, Valentina; Carrozzi, Giuliano; Masocco, Maria; Salmaso, Stefania

    2014-04-01

    Field substitution of nonrespondents can be used to maintain the planned sample size and structure in surveys but may introduce additional bias. Sample weighting is suggested as the preferable alternative; however, limited empirical evidence exists comparing the two methods. We wanted to assess the impact of substitution on surveillance results using data from Progressi delle Aziende Sanitarie per la Salute in Italia-Progress by Local Health Units towards a Healthier Italy (PASSI). PASSI is conducted by Local Health Units (LHUs) through telephone interviews of stratified random samples of residents. Nonrespondents are replaced with substitutes randomly preselected in the same LHU stratum. We compared the weighted estimates obtained in the original PASSI sample (used as a reference) and in the substitutes' sample. The differences were evaluated using a Wald test. In 2011, 50,697 units were selected: 37,252 were from the original sample and 13,445 were substitutes; 37,162 persons were interviewed. The initially planned size and demographic composition were restored. No significant differences in the estimates between the original and the substitutes' sample were found. In our experience, field substitution is an acceptable method for dealing with nonresponse, maintaining the characteristics of the original sample without affecting the results. This evidence can support appropriate decisions about planning and implementing a surveillance system. Copyright © 2014 Elsevier Inc. All rights reserved.
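The Wald comparison used above can be sketched generically for two independent proportions. This is a plain (unweighted) version with made-up numbers; PASSI applies the test to survey-weighted estimates:

```python
import math
from statistics import NormalDist

def wald_test_two_proportions(p1, n1, p2, n2):
    """Two-sided Wald z-test for equality of two proportions (generic sketch;
    the surveillance system's version operates on weighted estimates)."""
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    z = (p1 - p2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Identical estimates in the original and substitutes' samples give p = 1:
z, p = wald_test_two_proportions(0.30, 1000, 0.30, 1000)
```

A non-significant p-value here supports the paper's conclusion that substitution left the estimates unchanged, while a small p-value would flag substitution-induced bias.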

  2. Effects of sample size on estimates of population growth rates calculated with matrix models.

    PubMed

    Fiske, Ian J; Bruna, Emilio M; Bolker, Benjamin M

    2008-08-28

    Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda-Jensen's Inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated if sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more-realistic inverse J-shaped population structure exacerbated this bias. However our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities.
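The sampling-variance effect the authors simulate can be sketched in a few lines: estimate stage-specific survival from n sampled individuals, compute lambda as the dominant eigenvalue of the projection matrix, and average over replicates. This uses a toy two-stage matrix with made-up vital rates, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(1)
S1, S2, FEC = 0.5, 0.5, 1.2  # hypothetical juvenile survival, adult survival, fecundity

def lam(s1, s2):
    """Dominant eigenvalue (population growth rate) of a 2-stage projection matrix."""
    A = np.array([[0.0, FEC], [s1, s2]])
    return max(abs(np.linalg.eigvals(A)))

def mean_lambda_hat(n, reps=2000):
    """Average lambda over replicates, with each survival rate estimated from n
    individuals. By Jensen's inequality this need not equal lambda at the true
    rates, so small n can bias the estimate even when the rates are unbiased."""
    s1_hat = rng.binomial(n, S1, reps) / n
    s2_hat = rng.binomial(n, S2, reps) / n
    return float(np.mean([lam(a, b) for a, b in zip(s1_hat, s2_hat)]))

lam_true = lam(S1, S2)                        # ~1.064 for these rates
bias_small = mean_lambda_hat(10) - lam_true   # noticeable at n = 10
bias_large = mean_lambda_hat(500) - lam_true  # shrinks toward zero
```

This mirrors the paper's finding: the bias introduced by sampling variance of the vital rates becomes negligible as per-stage sample sizes grow.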

  3. Clinical Indicators of Child Development in the Capitals of Nine Brazilian States: The Influence of Regional Cultural Factors

    PubMed Central

    de Carvalho, André Laranjeira; da Silva, Luiz Fernando Ferraz; Grisi, Sandra Josefina Ferraz Ellero; de Ulhôa Escobar, Ana Maria

    2008-01-01

OBJECTIVE Evaluating the interaction between mother or caregiver and infant through the Clinical Indicators of Risks in Infant Development and investigating whether local and cultural influences during infant development affect these clinical indicators. INTRODUCTION The Clinical Indicators of Risks in Infant Development was created in order to fully assess infants’ development and the subjective relationship between the babies and their caregivers. The absence of two or more Clinical Indicators of Risks in Infant Development suggests a possibly inadequate mental development. Given the continental size of Brazil and its accentuated cultural differences, one might question how trustworthy these indicators can be when applied to each of the geographical regions of the country. METHODS This was a cross-sectional study with 737 infants from the capitals of 9 Brazilian states. The size of the initial sample population was based on a pilot study carried out in the cities of São Paulo and Brasília. The ages of children were grouped: 0–3 months, 4–7 months, 8–11 months and 12–18 months. The chi-square test was used together with analyses by the statistical software SPSS 13.0. RESULTS Statistical analysis of results from the different municipalities against the total sample did not reveal any statistically significant differences. Municipalities represented were Belém (p=0.486), Brasília (p=0.371), Porto Alegre (p=0.987), Fortaleza (p=0.259), Recife (p=0.630), Salvador (p=0.370), São Paulo (p=0.238), Curitiba (p=0.870), and Rio de Janeiro (p=0.06). DISCUSSION Care for mental development should be considered a public health issue. Its evaluation and follow-up should be part of the already available mother-child assistance programs, which would then be considered to provide “full” care to children. CONCLUSIONS Local habits and culture did not affect the results of the Clinical Indicators of Risks in Infant Development indicators.
Clinical Indicators of Risks in Infant Development proved to be robust despite the specificities of each region. PMID:18297207

  4. Are current UK National Institute for Health and Clinical Excellence (NICE) obesity risk guidelines useful? Cross-sectional associations with cardiovascular disease risk factors in a large, representative English population.

    PubMed

    Tabassum, Faiza; Batty, G David

    2013-01-01

The National Institute for Health and Clinical Excellence (NICE) has recently released obesity guidelines for health risk. For the first time in the UK, we estimate the utility of these guidelines by relating them to the established cardiovascular disease (CVD) risk factors. Health Survey for England (HSE) 2006, a population-based cross-sectional study in England, was used with a sample size of 7225 men and women aged ≥35 years (age range: 35-97 years). The following CVD risk factor outcomes were used: hypertension, diabetes, total and high density lipoprotein cholesterol, glycated haemoglobin, fibrinogen, C-reactive protein and Framingham risk score. Four NICE categories of obesity were created based on body mass index (BMI) and waist circumference (WC): no risk (up to normal BMI and low/high WC); increased risk (normal BMI & very high WC, or obese & low WC); high risk (overweight & very high WC, or obese & high WC); and very high risk (obese I & very high WC, or obese II/III with any levels of WC). Men and women in the very high risk category had the highest odds ratios (OR) of having unfavourable CVD risk factors compared to those in the no risk category. For example, the OR of having hypertension for those in the very high risk category of the NICE obesity groupings was 2.57 (95% confidence interval 2.06 to 3.21) in men, and 2.15 (1.75 to 2.64) in women. Moreover, a dose-response association between the adiposity groups and most of the CVD risk factors was observed, except total cholesterol in men and low HDL in women. Similar results were apparent when the Framingham risk score was the outcome of interest. In conclusion, the current NICE definitions of obesity show utility for a range of CVD risk factors and CVD risk in both men and women.

  5. Are Current UK National Institute for Health and Clinical Excellence (NICE) Obesity Risk Guidelines Useful? Cross-Sectional Associations with Cardiovascular Disease Risk Factors in a Large, Representative English Population

    PubMed Central

    Tabassum, Faiza; Batty, G. David

    2013-01-01

    The National Institute for Health and Clinical Excellence (NICE) has recently released obesity guidelines for health risk. For the first time in the UK, we estimate the utility of these guidelines by relating them to the established cardiovascular disease (CVD) risk factors. Health Survey for England (HSE) 2006, a population-based cross-sectional study in England, was used with a sample size of 7225 men and women aged ≥35 years (age range: 35–97 years). The following CVD risk factor outcomes were used: hypertension, diabetes, total and high density lipoprotein cholesterol, glycated haemoglobin, fibrinogen, C-reactive protein and Framingham risk score. Four NICE categories of obesity were created based on body mass index (BMI) and waist circumference (WC): no risk (up to normal BMI and low/high WC); increased risk (normal BMI & very high WC, or obese & low WC); high risk (overweight & very high WC, or obese & high WC); and very high risk (obese I & very high WC, or obese II/III with any level of WC). Men and women in the very high risk category had the highest odds ratios (OR) of having unfavourable CVD risk factors compared to those in the no risk category. For example, the OR of having hypertension for those in the very high risk category of the NICE obesity groupings was 2.57 (95% confidence interval 2.06 to 3.21) in men, and 2.15 (1.75 to 2.64) in women. Moreover, a dose-response association between the adiposity groups and most of the CVD risk factors was observed, except total cholesterol in men and low HDL in women. Similar results were apparent when the Framingham risk score was the outcome of interest. In conclusion, the current NICE definitions of obesity show utility for a range of CVD risk factors and CVD risk in both men and women. PMID:23844088

  6. Endogenous estrogens and breast cancer risk: the case for prospective cohort studies.

    PubMed Central

    Toniolo, P G

    1997-01-01

    It is generally agreed that estrogens, and possibly androgens, are important in the etiology of breast cancer, but no consensus exists as to the precise estrogenic or androgenic environment that characterizes risk, or the exogenous factors that influence the hormonal milieu. Nearly all the epidemiological studies conducted in the 1970s and 1980s were hospital-based case-control studies in which specimen sampling was performed well after the clinical appearance of the disease. Early prospective cohort studies also had limitations in their small sample sizes or short follow-up periods. However, more recent case-control studies nested within large cohorts, such as the New York University Women's Health Study and the Ormoni e Dieta nell'Eziologia dei Tumori study in Italy, are generating new data indicating that increased levels of estrone, estradiol and bioavailable estradiol, as well as their androgenic precursors, may be associated with a 4- to 6-fold increase in the risk of postmenopausal breast cancer. Further new evidence, which complements and expands the observations from the latter studies, shows that women with the thickest bone density, which may be a surrogate for cumulated exposure to hormones, experience severalfold increased risk of subsequent breast cancer as compared to women with thin bones. These data suggest that endogenous sex hormones are a key factor in the etiology of postmenopausal breast cancer. New prospective cohort studies should be conducted to examine the role of endogenous sex hormones in blood and urine samples obtained early in the natural history of breast cancer jointly with an assessment of bone density and of other important risk factors, such as mammographic density, physical activity, body weight, and markers of individual susceptibility, which may confer increased risk through an effect on the metabolism of endogenous hormones or through specific metabolic responses to Western lifestyle and diet. PMID:9168000

  7. Do scientists and fishermen collect the same size fish? Possible implications for exposure assessment.

    PubMed

    Burger, Joanna; Gochfeld, Michael; Burke, Sean; Jeitner, Christian W; Jewett, Stephen; Snigaroff, Daniel; Snigaroff, Ronald; Stamm, Tim; Harper, Shawn; Hoberg, Max; Chenelot, Heloise; Patrick, Robert; Volz, Conrad D; Weston, James

    2006-05-01

    Recreational and subsistence fishing plays a major role in the lives of many people, although most Americans obtain their fish from supermarkets or other commercial sources. Fish consumption has generally increased in recent years, largely because of the nutritional benefits. Recent concerns about contaminants in fish have prompted federal and state agencies to analyze fish (especially freshwater fish targeted by recreational anglers) for contaminants, such as mercury and polychlorinated biphenyls (PCBs), and to issue fish consumption advisories to help reduce the public health risks, where warranted. Scientists engaged in environmental sampling collect fish by a variety of means, and analyze the contaminants in those fish. Risk assessors use these levels as the basis for their advisories. Two assumptions of this methodology are that scientists collect the same size (and types) of fish that fishermen catch, and that, for some contaminants (such as methylmercury and PCBs), levels increase with the size and age of the fish. While many studies demonstrate a positive relationship between size and mercury levels in a wide range of different species of fish, the assumption that scientists collect the same size fish as fishermen has not been examined. The assumption that scientists collect the same size fish as those caught (and eaten) by recreationalists or subsistence fishermen is extremely important because contaminant levels are different in different size fish. In this article, we test the null hypothesis that there are no differences in the sizes of fish collected by Aleut fishermen, scientists (including divers), and commercial trawlers in the Bering Sea from Adak to Kiska. Aleut fishermen caught fish using rod-and-reel (fishing rods, hook, and fresh bait) from boats, as they would in their Aleutian villages. The scientists collected fish using rod-and-reel, as well as by scuba divers using spears at depths of up to 90 ft. 
A fisheries biologist collected fish from a research/commercial trawler operated under charter to the National Oceanographic and Atmospheric Administration (NOAA). The fish selected for sampling, including those caught commercially in the Bering Sea, represented different trophic levels, and are species regularly caught by Aleuts while fishing near their villages. Not all fish were caught by all three groups. There were no significant differences in length and weight for five species of fish caught by Aleuts, scientists, and fisheries trawls, and for an additional 3 species caught only by the Aleut and scientist teams. There were small, but significant, differences in the sizes of rock greenling (Hexagrammos lagocephalus) and red Irish lord (Hemilepidotus hemilepidotus) caught by the scientist and Aleut fishermen. No scientists caught rock greenling using poles; those speared by the divers were significantly smaller than those caught by the Aleuts. Further, there were no differences in the percent of males in the samples as a function of fishing method or type of fishermen, except for rockfish and red Irish lord. These data suggest that if scientists collect fish in the same manner as subsistence fishermen (in this case, using fishing rods from boats), they can collect the same-sized fish. The implications for exposure and risk assessment are that scientists should either engage subsistence and recreational fishermen to collect fish for analysis, or mimic their fishing methods to ensure that the fish collected are similar in size and weight to those being caught and consumed by these groups. Further, total length, standard length, and weight were highly correlated for all species of fish, suggesting that risk assessors could rely on recreational and commercial fishermen to measure total lengths for the purpose of correlating mercury levels with known size/mercury level relationships. 
    Our data generally demonstrate that the scientists and trawlers can collect the same size fish as those caught by Aleuts, making contaminant analysis, and the subsequent risk assessment, representative of the risks to fish consumers.

  8. Toward Advancing Nano-Object Count Metrology: A Best Practice Framework

    PubMed Central

    Brown, Scott C.; Boyko, Volodymyr; Meyers, Greg; Voetz, Matthias; Wohlleben, Wendel

    2013-01-01

    Background: A movement among international agencies and policy makers to classify industrial materials by their number content of sub–100-nm particles could have broad implications for the development of sustainable nanotechnologies. Objectives: Here we highlight current particle size metrology challenges faced by the chemical industry due to these emerging number percent content thresholds, provide a suggested best-practice framework for nano-object identification, and identify research needs as a path forward. Discussion: Harmonized methods for identifying nanomaterials by size and count for many real-world samples do not currently exist. Although particle size remains the sole discriminating factor for classifying a material as “nano,” inconsistencies in size metrology will continue to confound policy and decision making. Moreover, there are concerns that the casting of a wide net with still-unproven metrology methods may stifle the development and judicious implementation of sustainable nanotechnologies. Based on the current state of the art, we propose a tiered approach for evaluating materials. To enable future risk-based refinements of these emerging definitions, we recommend that this framework also be considered in environmental and human health research involving the implications of nanomaterials. Conclusion: Substantial scientific scrutiny is needed in the area of nanomaterial metrology to establish best practices and to develop suitable methods before implementing definitions based solely on number percent nano-object content for regulatory purposes. Strong cooperation between industry, academia, and research institutions will be required to fully develop and implement detailed frameworks for nanomaterial identification with respect to emerging count-based metrics. Citation: Brown SC, Boyko V, Meyers G, Voetz M, Wohlleben W. 2013. Toward advancing nano-object count metrology: a best practice framework. 
Environ Health Perspect 121:1282–1291; http://dx.doi.org/10.1289/ehp.1306957 PMID:24076973

  9. Reaching and Supporting At-Risk Community Based Seniors: Results of a Multi-church Partnership.

    PubMed

    Ellis, Julie L; Morzinski, Jeffrey A

    2018-04-26

    The purpose of this study was to determine the impact of a nurse-led, church-based educational support group for "at-risk" older African Americans on hospitalization and emergency department use. Study nurses enrolled 81 "at-risk" older adult members of ten churches. Participants completed a trifold pamphlet identifying personal health information and support, and they attended eight monthly educational/support group sessions in their church during the 10-month intervention. Study nurses completed a risk assessment interview with each senior both pre- and post-participation. The study nurse completed post-program assessments with 64 seniors, a 79% retention rate. At the program's conclusion, researchers conducted a focus group with the study RNs and used an anonymous written survey to gather participant appraisals of program elements. Neither hospitalization nor emergency department/urgent care usage was significantly different from pre- to post-program. Session attendance was moderate to high, and over half of the seniors brought a family member or friend to one or more sessions. The majority of seniors initiated positive health changes (e.g., smoking cessation, weight loss, or diet changes). Participants expressed high satisfaction and were pleased to perceive that they were supporting other seniors in their community. We conclude that this intervention was successful in engaging and motivating seniors to initiate health behavior change and contributed to a health-supportive church-based community. To demonstrate a statistically significant difference in hospital and ED usage, however, a stronger intervention or a larger sample size is needed.

  10. [Stroke prevention in atrial fibrillation in Germany. Situational analysis of treatment reality based on retrospective data].

    PubMed

    Mergenthaler, Ulrike; Kostev, Karel; Moosmang, Sven; Thate-Waschke, Inga-Marion; Haas, Sylvia

    2017-12-01

    Guideline-based, risk-adjusted therapy with anticoagulants reduces thromboembolic stroke risk in patients with atrial fibrillation (AF). This study analyzed the use of oral anticoagulation in German AF patients. Anonymized patient records were accessed via the IMS Health Disease Analyzer database (sample size: 113,619 patients with ICD-10 code I48.-; observation period: 11/2010-10/2013). Results were subsequently extrapolated to all general practitioners' (GPs') and cardiological practices in Germany. In 2011, the 12-month AF prevalence was extrapolated to 2.1 million patients (first diagnosed: n = 537,548). In 2012, the AF prevalence rose to 2.2 million cases (first diagnosed: n = 537,548), and in 2013 to 2.8 million (first diagnosed: n = 636,571). The most commonly prescribed oral anticoagulants (OACs) were vitamin K antagonists (VKAs). Unstable INR setting, private health insurance, hospital admission, heart failure, or hypertension increased the probability of a change from VKA to non-vitamin K antagonist oral anticoagulants (NOACs). 17.3-36.5% of patients with a CHA2DS2-VASc score ≥ 2 did not receive any thromboembolism prophylaxis; 38.5% of patients with a CHA2DS2-VASc score = 0 unnecessarily received OACs. For 2013, a potential of 29,749 ischemic strokes in GP practices was calculated that could possibly be avoided by guideline-based thromboembolism prophylaxis. Risk-based anticoagulation showed room for optimization. Use of OACs according to guideline recommendations would minimize bleeding risks, reduce ischemic strokes, and could release resources.

  11. Sampling methods, dispersion patterns, and fixed precision sequential sampling plans for western flower thrips (Thysanoptera: Thripidae) and cotton fleahoppers (Hemiptera: Miridae) in cotton.

    PubMed

    Parajulee, M N; Shrestha, R B; Leser, J F

    2006-04-01

    A 2-yr field study was conducted to examine the effectiveness of two sampling methods (visual and plant washing techniques) for western flower thrips, Frankliniella occidentalis (Pergande), and five sampling methods (visual, beat bucket, drop cloth, sweep net, and vacuum) for cotton fleahopper, Pseudatomoscelis seriatus (Reuter), in Texas cotton, Gossypium hirsutum (L.), and to develop sequential sampling plans for each pest. The plant washing technique gave similar results to the visual method in detecting adult thrips, but it detected a significantly higher number of thrips larvae than visual sampling. Visual sampling detected the highest number of fleahoppers, followed by beat bucket, drop cloth, vacuum, and sweep net sampling, with no significant difference in catch efficiency between the vacuum and sweep net methods. However, based on fixed precision cost reliability, sweep net sampling was the most cost-effective method, followed by vacuum, beat bucket, drop cloth, and visual sampling. Taylor's Power Law analysis revealed that the field dispersion patterns of both thrips and fleahoppers were aggregated throughout the crop growing season. For thrips management decisions based on visual sampling (0.25 precision), 15 plants were estimated to be the minimum sample size when the estimated population density was one thrips per plant, whereas the minimum sample size was nine plants when thrips density approached 10 thrips per plant. The minimum visual sample size for cotton fleahoppers was 16 plants when the density was one fleahopper per plant, but the sample size decreased rapidly with an increase in fleahopper density, requiring only four plants to be sampled when the density was 10 fleahoppers per plant. Sequential sampling plans were developed and validated with independent data for both thrips and cotton fleahoppers.
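
    The fixed-precision logic behind these minimum sample sizes can be sketched with Green's formulation, which combines Taylor's power law coefficients (a, b) with a target precision D (standard error as a fraction of the mean). The coefficients below are hypothetical placeholders, not the values fitted in this study:

```python
import math

def min_sample_size(mean_density: float, a: float, b: float, precision: float) -> int:
    """Minimum sample size under a fixed-precision plan (Green's
    formulation): n = a * m**(b - 2) / D**2, where a and b are Taylor's
    power law coefficients and D is the desired precision (SE/mean)."""
    n = a * mean_density ** (b - 2) / precision ** 2
    return math.ceil(n)

# Hypothetical aggregation parameters; NOT the study's estimates.
a, b = 2.5, 1.4
print(min_sample_size(1.0, a, b, 0.25))
print(min_sample_size(10.0, a, b, 0.25))
```

    Because b < 2 for these aggregated but not extremely clumped distributions, the required sample size falls as density rises, which is the qualitative pattern reported above (fewer plants needed at 10 insects per plant than at one per plant).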

  12. Reliability of risk-adjusted outcomes for profiling hospital surgical quality.

    PubMed

    Krell, Robert W; Hozain, Ahmed; Kao, Lillian S; Dimick, Justin B

    2014-05-01

    Quality improvement platforms commonly use risk-adjusted morbidity and mortality to profile hospital performance. However, given small hospital caseloads and low event rates for some procedures, it is unclear whether these outcomes reliably reflect hospital performance. To determine the reliability of risk-adjusted morbidity and mortality for hospital performance profiling using clinical registry data. A retrospective cohort study was conducted using data from the American College of Surgeons National Surgical Quality Improvement Program, 2009. Participants included all patients (N = 55,466) who underwent colon resection, pancreatic resection, laparoscopic gastric bypass, ventral hernia repair, abdominal aortic aneurysm repair, and lower extremity bypass. Outcomes included risk-adjusted overall morbidity, severe morbidity, and mortality. We assessed reliability (0-1 scale: 0, completely unreliable; and 1, perfectly reliable) for all 3 outcomes. We also quantified the number of hospitals meeting minimum acceptable reliability thresholds (>0.70, good reliability; and >0.50, fair reliability) for each outcome. For overall morbidity, the most common outcome studied, the mean reliability depended on sample size (ie, how high the hospital caseload was) and the event rate (ie, how frequently the outcome occurred). For example, mean reliability for overall morbidity was low for abdominal aortic aneurysm repair (reliability, 0.29; sample size, 25 cases per year; and event rate, 18.3%). In contrast, mean reliability for overall morbidity was higher for colon resection (reliability, 0.61; sample size, 114 cases per year; and event rate, 26.8%). Colon resection (37.7% of hospitals), pancreatic resection (7.1% of hospitals), and laparoscopic gastric bypass (11.5% of hospitals) were the only procedures for which any hospitals met a reliability threshold of 0.70 for overall morbidity. 
Because severe morbidity and mortality are less frequent outcomes, their mean reliability was lower, and even fewer hospitals met the thresholds for minimum reliability. Most commonly reported outcome measures have low reliability for differentiating hospital performance. This is especially important for clinical registries that sample rather than collect 100% of cases, which can limit hospital case accrual. Eliminating sampling to achieve the highest possible caseloads, adjusting for reliability, and using advanced modeling strategies (eg, hierarchical modeling) are necessary for clinical registries to increase their benchmarking reliability.
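
    The dependence of reliability on caseload and event rate described above can be illustrated with a standard signal-to-noise formulation, in which reliability is the share of between-hospital variance in the total variance of a hospital's observed event rate. The between-hospital variance used below is a hypothetical value, not one estimated from the registry:

```python
def outcome_reliability(between_var: float, event_rate: float, caseload: int) -> float:
    """Reliability of an observed hospital event rate: signal variance
    (between hospitals) divided by signal plus binomial sampling noise
    at the given caseload."""
    noise_var = event_rate * (1 - event_rate) / caseload
    return between_var / (between_var + noise_var)

# Hypothetical between-hospital variance for illustration only.
sigma2 = 0.002
low = outcome_reliability(sigma2, 0.183, 25)    # small caseload, as for AAA repair
high = outcome_reliability(sigma2, 0.268, 114)  # larger caseload, as for colon resection
print(round(low, 2), round(high, 2))
```

    The sketch reproduces the qualitative result above: at 25 cases per year the sampling noise swamps the between-hospital signal, while a caseload of 114 pushes reliability much closer to usable thresholds.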

  13. Cancer survival analysis using semi-supervised learning method based on Cox and AFT models with L1/2 regularization.

    PubMed

    Liang, Yong; Chai, Hua; Liu, Xiao-Ying; Xu, Zong-Ben; Zhang, Hai; Leung, Kwong-Sak

    2016-03-01

    One of the most important objectives of clinical cancer research is to diagnose cancer more accurately based on the patients' gene expression profiles. Both the Cox proportional hazards model (Cox) and the accelerated failure time model (AFT) have been widely adopted for high risk and low risk classification or survival time prediction in patients' clinical treatment. Nevertheless, two main dilemmas limit the accuracy of these prediction methods. One is that the small sample size and censored data remain a bottleneck for training a robust and accurate Cox classification model. In addition, tumours with similar phenotypes and prognoses are actually completely different diseases at the genotype and molecular level. Thus, the utility of the AFT model for survival time prediction is limited when such biological differences of the diseases have not been previously identified. To overcome these two main dilemmas, we proposed a novel semi-supervised learning method based on the Cox and AFT models to accurately predict the treatment risk and the survival time of patients. Moreover, we adopted the efficient L1/2 regularization approach in the semi-supervised learning method to select the relevant genes, which are significantly associated with the disease. The results of the simulation experiments show that the semi-supervised learning model can significantly improve the predictive performance of the Cox and AFT models in survival analysis. The proposed procedures have been successfully applied to four real microarray gene expression and artificial evaluation datasets. The advantages of our proposed semi-supervised learning method include: 1) a significant increase in the available training samples from censored data; 2) high capability for identifying the survival risk classes of patients in the Cox model; 3) high predictive accuracy for patients' survival time in the AFT model; 4) strong capability for relevant biomarker selection. Consequently, our proposed semi-supervised learning model is a more appropriate tool for survival analysis in clinical cancer research.

  14. Age as a Risk Factor for Burnout Syndrome in Nursing Professionals: A Meta-Analytic Study.

    PubMed

    Gómez-Urquiza, José L; Vargas, Cristina; De la Fuente, Emilia I; Fernández-Castillo, Rafael; Cañadas-De la Fuente, Guillermo A

    2017-04-01

    Although past research has highlighted the possibility of a direct relationship between the age of nursing professionals and burnout syndrome, results have been far from conclusive. The aim of this study was to conduct a wider analysis of the influence of age on the three dimensions of burnout syndrome (emotional exhaustion, depersonalization, and personal accomplishment) in nurses. We performed a meta-analysis of 51 publications extracted from health sciences and psychology databases that fulfilled the inclusion criteria. There were 47 reports of information on emotional exhaustion in 50 samples, 39 reports on depersonalization for 42 samples, and 31 reports on personal accomplishment in 34 samples. The mean effect sizes indicated that younger age was a significant factor in the emotional exhaustion and depersonalization of nurses, although it was somewhat less influential in the dimension of personal accomplishment. Because of heterogeneity in the effect sizes, moderating variables that might explain the association between age and burnout were also analyzed. Gender, marital status, and study characteristics moderated the relationship between age and burnout and may be crucial for the identification of high-risk groups. More research is needed on other variables for which there were only a small number of studies. Identification of burnout risk factors will facilitate establishment of burnout prevention programs for nurses. © 2016 Wiley Periodicals, Inc.

  15. Obsolete pesticide storage sites and their POP release into the environment--an Armenian case study.

    PubMed

    Dvorská, A; Sír, M; Honzajková, Z; Komprda, J; Cupr, P; Petrlík, J; Anakhasyan, E; Simonyan, L; Kubal, M

    2012-07-01

    Organochlorinated pesticides were widely applied in Armenia until the 1980s, as in all former Soviet Union republics. Subsequently, the problem of areas contaminated by organochlorinated pesticides emerged. Environmental, waste and food samples at one pesticide burial site (Nubarashen) and three former pesticide storage sites (Jrarat, Echmiadzin and Masis) were taken and analysed for the content of organochlorinated pesticides, polychlorinated dibenzo-p-dioxins and furans, and dioxin-like polychlorinated biphenyls. Gradient sampling and diffusivity-based calculations provided information on the contamination release from the hot spots on a local scale. A risk analysis based on samples of locally produced food items characterised the impact of the storage sites on the health of nearby residents. All four sites were found to be seriously contaminated. High pesticide levels and soil and air contamination gradients of several orders of magnitude were confirmed outside the fence of the Nubarashen burial site, confirming pesticide release. A storage facility in Jrarat, which was completely demolished in 1996 and contained numerous damaged bags with pure pesticides until 2011, was found to have polluted the surrounding soils by wind dispersion of pesticide powders, and the air by significant evaporation of lindane and β-endosulfan during this period. Dichlorodiphenyltrichloroethane-contaminated eggs, sampled from hens roaming freely in the immediate surroundings of the Echmiadzin storage site, revealed a health risk for egg consumers above the 1E-5 level. Although small in size and previously almost unknown to the public, storage sites like Echmiadzin, Masis and Jrarat were found to stock considerable amounts of obsolete pesticides and have a significant negative influence on the environment and human health. Multi-stakeholder cooperation proved successful in identifying such sites suspected to be significant sources of persistent organic pollutants.

  16. Estimating the probability that the sample mean is within a desired fraction of the standard deviation of the true mean.

    PubMed

    Schillaci, Michael A; Schillaci, Mario E

    2009-02-01

    The use of small sample sizes in human and primate evolutionary research is commonplace. Estimating how well small samples represent the underlying population, however, is not commonplace. Because the accuracy of determinations of taxonomy, phylogeny, and evolutionary process is dependent upon how well the study sample represents the population of interest, characterizing the uncertainty, or potential error, associated with analyses of small sample sizes is essential. We present a method for estimating the probability that the sample mean is within a desired fraction of the standard deviation of the true mean using small (n < 10) or very small (n ≤ 5) sample sizes. This method can be used by researchers to determine post hoc the probability that their sample is a meaningful approximation of the population parameter. We tested the method using a large craniometric data set commonly used by researchers in the field. Given our results, we suggest that sample estimates of the population mean can be reasonable and meaningful even when based on small, and perhaps even very small, sample sizes.
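
    For a normal population with known standard deviation, the probability in question has a closed form: P(|x̄ − μ| ≤ f·σ) = 2Φ(f√n) − 1 = erf(f·√(n/2)), since the standard error of the mean is σ/√n. A minimal sketch of this normal-theory version follows (the published method may differ in its small-sample details):

```python
import math

def prob_within_fraction(n: int, fraction: float) -> float:
    """P(|sample mean - true mean| <= fraction * sigma) for a normal
    population with known sigma: 2*Phi(fraction*sqrt(n)) - 1, written
    via the error function as erf(fraction * sqrt(n/2))."""
    return math.erf(fraction * math.sqrt(n / 2))

# The probability rises quickly with n even for very small samples.
for n in (2, 5, 10):
    print(n, round(prob_within_fraction(n, 1.0), 3))
```

    Even n = 5 gives better than a 97% chance that the sample mean lies within one standard deviation of the true mean, which is consistent with the authors' conclusion that very small samples can still be meaningful.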

  17. Birth order and Risk of Childhood Cancer: A Pooled Analysis from Five U.S. States

    PubMed Central

    Von Behren, Julie; Spector, Logan G.; Mueller, Beth A.; Carozza, Susan E.; Chow, Eric J.; Fox, Erin E.; Horel, Scott; Johnson, Kimberly J.; McLaughlin, Colleen; Puumala, Susan E.; Ross, Julie A.; Reynolds, Peggy

    2010-01-01

    The causes of childhood cancers are largely unknown. Birth order has been used as a proxy for prenatal and postnatal exposures, such as frequency of infections and in utero hormone exposures. We investigated the association between birth order and childhood cancers in a pooled case-control dataset. The subjects were drawn from population-based registries of cancers and births in California, Minnesota, New York, Texas, and Washington. We included 17,672 cases less than 15 years of age who were diagnosed from 1980-2004 and 57,966 randomly selected controls born 1970-2004, excluding children with Down syndrome. We calculated odds ratios and 95% confidence intervals using logistic regression, adjusted for sex, birth year, maternal race, maternal age, multiple birth, gestational age, and birth weight. Overall, we found an inverse relationship between childhood cancer risk and birth order. For children in the fourth or higher birth order category compared to first-born children, the adjusted OR was 0.87 (95% CI: 0.81, 0.93) for all cancers combined. When we examined risks by cancer type, a decreasing risk with increasing birth order was seen for central nervous system (CNS) tumors, neuroblastoma, bilateral retinoblastoma, Wilms tumor, and rhabdomyosarcoma. We observed increased risks with increasing birth order for acute myeloid leukemia but a slight decrease in risk for acute lymphoid leukemia. These risk estimates were based on a very large sample size, which allowed us to examine rare cancer types with greater statistical power than in most previous studies; however, the biologic mechanisms remain to be elucidated. PMID:20715170

  18. Improving health outcomes for youth living with the human immunodeficiency virus: a multisite randomized trial of a motivational intervention targeting multiple risk behaviors.

    PubMed

    Naar-King, Sylvie; Parsons, Jeffrey T; Murphy, Debra A; Chen, Xinguang; Harris, D Robert; Belzer, Marvin E

    2009-12-01

    To determine if Healthy Choices, a motivational interviewing intervention targeting multiple risk behaviors, improved human immunodeficiency virus (HIV) viral load. A randomized, 2-group repeated measures design with analysis of data from baseline and 6- and 9-month follow-up collected from 2005 to 2007. Five US adolescent medicine HIV clinics. A convenience sample with at least 1 of 3 risk behaviors (nonadherence to HIV medications, substance abuse, and unprotected sex) was enrolled. The sample was aged 16 to 24 years and primarily African American. Of the 205 enrolled, 19 did not complete baseline data collection, for a final sample size of 186. Young people living with HIV were randomized to the intervention plus specialty care (n = 94) or specialty care alone (n = 92). The 3- and 6-month follow-up rates, respectively, were 86% and 82% for the intervention group and 81% and 73% for controls. Intervention: Healthy Choices was a 4-session, individual, clinic-based motivational interviewing intervention delivered during a 10-week period. Motivational interviewing is a method of communication designed to elicit and reinforce intrinsic motivation for change. Outcome Measure: Plasma viral load. Youth randomized to Healthy Choices showed a significant decline in viral load at 6 months postintervention compared with youth in the control condition (beta = -0.36, t = -2.15, P = .03), with those prescribed antiretroviral medications showing the lowest viral loads. Differences were no longer significant at 9 months. A motivational interviewing intervention targeting multiple risk behaviors resulted in short-term improvements in viral load for youth living with HIV. Trial Registration: clinicaltrials.gov Identifier: NCT00103532.

  19. Alzheimer's disease and diet: a systematic review.

    PubMed

    Yusufov, Miryam; Weyandt, Lisa L; Piryatinsky, Irene

    2017-02-01

    Purpose/Aim: Approximately 44 million people worldwide have Alzheimer's disease (AD). Numerous claims have been made regarding the influence of diet on AD development. The aims of this systematic review were to summarize the evidence considering diet as a protective or risk factor for AD, identify methodological challenges and limitations, and provide future research directions. Medline, PsycINFO and PsycARTICLES were searched for articles that examined the relationship between diet and AD. On the basis of the inclusion and exclusion criteria, 64 studies were included, generating a total of 141 dietary patterns or "models". All studies were published between 1997 and 2015, with a total of 132,491 participants. Twelve studies examined the relationship between a Mediterranean (MeDi) diet and AD development, 10 of which revealed a significant association. Findings were inconsistent with respect to sample size, AD diagnosis and food measures. Further, the majority of studies (81.3%) included samples with mean baseline ages that were at risk for AD based on age (>65 years), ranging from 52.0 to 85.4 years. The range of follow-up periods was 1.5-32.0 years. The mean age of the samples poses a limitation in determining the influence of diet on AD, given that AD has a long prodromal phase prior to the manifestation of symptoms and decline. Further studies are necessary to determine whether diet is a risk or protective factor for AD, foster translation of research into clinical practice and elucidate dietary recommendations. Despite the methodological limitations, the finding that 50 of the 64 reviewed studies revealed an association between diet and AD incidence offers promising implications for diet as a modifiable risk factor for AD.

  20. Hierarchical modeling of cluster size in wildlife surveys

    USGS Publications Warehouse

    Royle, J. Andrew

    2008-01-01

    Clusters or groups of individuals are the fundamental unit of observation in many wildlife sampling problems, including aerial surveys of waterfowl, marine mammals, and ungulates. Explicit accounting of cluster size in models for estimating abundance is necessary because detection of individuals within clusters is not independent and detectability of clusters is likely to increase with cluster size. This induces a cluster size bias in which the average cluster size in the sample is larger than in the population at large. Thus, failure to account for the relationship between detectability and cluster size will tend to yield a positive bias in estimates of abundance or density. I describe a hierarchical modeling framework for accounting for cluster-size bias in animal sampling. The hierarchical model consists of models for the observation process conditional on the cluster size distribution and the cluster size distribution conditional on the total number of clusters. Optionally, a spatial model can be specified that describes variation in the total number of clusters per sample unit. Parameter estimation, model selection, and criticism may be carried out using conventional likelihood-based methods. An extension of the model is described for the situation where measurable covariates at the level of the sample unit are available. Several candidate models within the proposed class are evaluated for aerial survey data on mallard ducks (Anas platyrhynchos).
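    The size-biased sampling at the heart of this problem is easy to demonstrate numerically. The sketch below is not Royle's hierarchical model itself, only an illustration of the bias it corrects: cluster sizes are drawn uniformly from 1 to 10, and each cluster is detected with probability proportional to its size (an assumed, illustrative detection model), so the mean size among detected clusters exceeds the population mean.

```python
import random

def simulate_size_bias(n_clusters=20000, max_size=10, seed=42):
    """Illustrate cluster-size bias: when detection probability grows
    with cluster size, the mean size among detected clusters exceeds
    the population mean. Sizes are uniform on 1..max_size; detection
    probability is size/max_size (an illustrative choice, not a
    fitted detection function)."""
    rng = random.Random(seed)
    pop_sizes = [rng.randint(1, max_size) for _ in range(n_clusters)]
    detected = [s for s in pop_sizes if rng.random() < s / max_size]
    pop_mean = sum(pop_sizes) / len(pop_sizes)
    sample_mean = sum(detected) / len(detected)
    return pop_mean, sample_mean
```

    Under this setup the size-biased expectation for the sample is E[S^2]/E[S] = 38.5/5.5 = 7.0 versus a population mean of 5.5, so an abundance estimate built naively from observed cluster sizes would be biased upward.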

  1. What is a species? A new universal method to measure differentiation and assess the taxonomic rank of allopatric populations, using continuous variables

    PubMed Central

    Donegan, Thomas M.

    2018-01-01

    Existing models for assigning species, subspecies, or no taxonomic rank to populations which are geographically separated from one another were analyzed. This was done by subjecting over 3,000 pairwise comparisons of vocal or biometric data based on birds to a variety of statistical tests that have been proposed as measures of differentiation. One current model which aims to test diagnosability (Isler et al. 1998) is highly conservative, applying a hard cut-off, which excludes from consideration differentiation below diagnosis. It also includes non-overlap as a requirement, a measure which penalizes increases to sample size. The "species scoring" model of Tobias et al. (2010) involves less drastic cut-offs, but unlike Isler et al. (1998), does not control adequately for sample size and attributes scores in many cases to differentiation which is not statistically significant. Four different models of assessing effect sizes were analyzed: using both pooled and unpooled standard deviations and controlling for sample size using t-distributions or omitting to do so. Pooled standard deviations produced more conservative effect sizes when uncontrolled for sample size but less conservative effect sizes when so controlled. Pooled models require assumptions to be made that are typically elusive or unsupported for taxonomic studies. Modifications to improve these frameworks are proposed, including: (i) introducing statistical significance as a gateway to attributing any weighting to findings of differentiation; (ii) abandoning non-overlap as a test; (iii) recalibrating Tobias et al. (2010) scores based on effect sizes controlled for sample size using t-distributions. A new universal method is proposed for measuring differentiation in taxonomy using continuous variables and a formula is proposed for ranking allopatric populations.
This is based first on calculating effect sizes using unpooled standard deviations, controlled for sample size using t-distributions, for a series of different variables. All non-significant results are excluded by scoring them as zero. Distance between any two populations is calculated using Euclidean summation of non-zeroed effect size scores. If the score of an allopatric pair exceeds that of a related sympatric pair, then the allopatric population can be ranked as species and, if not, then at most subspecies rank should be assigned. A spreadsheet has been programmed and is being made available which allows this and other tests of differentiation and rank studied in this paper to be rapidly analyzed. PMID:29780266
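    The scoring procedure described above can be sketched from summary statistics alone. The fragment below is a simplified reading of it, not the published implementation: effect sizes use unpooled standard deviations, non-significant comparisons are zeroed, and per-variable scores are combined by Euclidean summation. For brevity it uses the normal critical value 1.96 in place of the t-distribution cut-off the paper calls for, so significance decisions are approximate at small sample sizes.

```python
import math

def effect_size_unpooled(m1, s1, n1, m2, s2, n2, crit=1.96):
    """Effect size with unpooled SDs; returns 0 when the Welch test
    statistic falls below the critical value (non-significant results
    are zeroed, as the method proposes). crit=1.96 is a normal
    approximation to the t cut-off."""
    d = (m1 - m2) / math.sqrt((s1**2 + s2**2) / 2)
    welch_t = (m1 - m2) / math.sqrt(s1**2 / n1 + s2**2 / n2)
    return d if abs(welch_t) > crit else 0.0

def taxonomic_distance(variable_stats):
    """Euclidean summation of non-zeroed effect-size scores over a list
    of (m1, s1, n1, m2, s2, n2) tuples, one tuple per variable."""
    return math.sqrt(sum(effect_size_unpooled(*v) ** 2 for v in variable_stats))
```

    For example, a variable with means 10 vs 14 (SD 2, n = 20 per group) contributes an effect size of 2.0, while a clearly non-significant variable contributes 0, giving a distance of 2.0.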

  2. The intriguing evolution of effect sizes in biomedical research over time: smaller but more often statistically significant.

    PubMed

    Monsarrat, Paul; Vergnes, Jean-Noel

    2018-01-01

    In medicine, effect sizes (ESs) allow the effects of independent variables (including risk/protective factors or treatment interventions) on dependent variables (e.g., health outcomes) to be quantified. Given that many public health decisions and health care policies are based on ES estimates, it is important to assess how ESs are used in the biomedical literature and to investigate potential trends in their reporting over time. Through a big data approach, the text mining process automatically extracted 814 120 ESs from 13 322 754 PubMed abstracts. Eligible ESs were risk ratio, odds ratio, and hazard ratio, along with their confidence intervals. Here we show a remarkable decrease of ES values in PubMed abstracts between 1990 and 2015 while, concomitantly, results become more often statistically significant. Medians of ES values have decreased over time for both "risk" and "protective" values. This trend was found in nearly all fields of biomedical research, with the most marked downward tendency in genetics. Over the same period, the proportion of statistically significant ESs increased regularly: among the abstracts with at least 1 ES, 74% were statistically significant in 1990-1995, vs 85% in 2010-2015. Whereas decreasing ESs could be an intrinsic evolution in biomedical research, the concomitant increase of statistically significant results is more intriguing. Although it is likely that growing sample sizes in biomedical research could explain these results, another explanation may lie in the "publish or perish" context of scientific research, with a possible growing orientation toward sensationalism in research reports. Important provisions must be made to improve the credibility of biomedical research and limit waste of resources. © The Authors 2017. Published by Oxford University Press.
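    The extraction step implies a simple classification rule for each mined estimate: its direction ("risk" vs "protective") comes from which side of 1 it falls, and it counts as statistically significant when its confidence interval excludes 1. A minimal sketch of that rule (illustrative, not the authors' pipeline):

```python
def classify_effect_size(estimate, ci_low, ci_high):
    """Classify a ratio-type effect size (RR, OR, or HR): direction by
    which side of 1 the point estimate falls, significance by whether
    the confidence interval excludes 1."""
    direction = "risk" if estimate > 1 else "protective"
    significant = ci_low > 1 or ci_high < 1
    return direction, significant
```

    For instance, an OR of 2.50 (95% CI 2.03-3.08) classifies as a significant risk estimate, while an OR of 0.99 (0.65-1.49) is a non-significant protective-direction estimate.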

  3. Regular analgesic use and risk of multiple myeloma.

    PubMed

    Moysich, Kirsten B; Bonner, Mathew R; Beehler, Gregory P; Marshall, James R; Menezes, Ravi J; Baker, Julie A; Weiss, Joli R; Chanan-Khan, Asher

    2007-04-01

    Analgesic use has been implicated in the chemoprevention of a number of solid tumors, but to date no previous research has focused on the role of analgesics in the etiology of multiple myeloma (MM). We conducted a hospital-based case-control study of 117 patients with primary, incident MM and 483 age- and residence-matched controls without benign or malignant neoplasms. All participants received medical services at Roswell Park Cancer Institute in Buffalo, NY, and completed a comprehensive epidemiological questionnaire. Participants who reported analgesic use at least once a week for at least 6 months were classified as regular users; individuals who did not use analgesics regularly served as the reference group throughout the analyses. We used unconditional logistic regression analyses to compute crude and adjusted odds ratios (ORs) with corresponding 95% confidence intervals (CIs). Compared to non-users, regular aspirin users were not at reduced risk of MM (adjusted OR=0.99; 95% CI 0.65-1.49), nor were participants with the highest frequency or duration of aspirin use. A significant risk elevation was found for participants who were regular acetaminophen users (adjusted OR=2.95; 95% CI 1.72-5.08). Further, marked increases in risk of MM were noted with both greater frequency (>7 tablets weekly; adjusted OR=4.36; 95% CI 1.70-11.2) and greater duration (>10 years; adjusted OR=3.26; 95% CI 1.52-7.02) of acetaminophen use. We observed no evidence of a chemoprotective effect of aspirin on MM risk, but observed significant risk elevations with various measures of acetaminophen use. Our results warrant further investigation in population-based case-control and cohort studies and should be interpreted with caution in light of the limited sample size and biases inherent in hospital-based studies.
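    Crude odds ratios of the kind reported above can be reproduced from a 2×2 exposure-by-status table with a Woolf (log-scale) confidence interval. A generic sketch with made-up counts; the study's adjusted ORs come from logistic regression, which this does not replicate:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio for a 2x2 table:
        a = exposed cases,    b = unexposed cases,
        c = exposed controls, d = unexposed controls,
    with a Woolf (log-scale) 95% confidence interval."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi
```

    For counts a=10, b=20, c=5, d=40 this gives OR = 4.0 with a wide 95% CI of roughly 1.2 to 13.3, reflecting the small cell counts.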

  4. Estuarine sediment toxicity tests on diatoms: Sensitivity comparison for three species

    NASA Astrophysics Data System (ADS)

    Moreno-Garrido, Ignacio; Lubián, Luis M.; Jiménez, Begoña; Soares, Amadeu M. V. M.; Blasco, Julián

    2007-01-01

    Experimental populations of three marine and estuarine diatoms were exposed to sediments with different levels of pollutants, collected from the Aveiro Lagoon (NW of Portugal). The species selected were Cylindrotheca closterium, Phaeodactylum tricornutum and Navicula sp. Previous experiments were designed to determine the influence of sediment particle size distribution on growth of the assayed species. The percentage of silt-sized sediment affected growth of the selected species under the experimental conditions: the higher the percentage of silt-sized sediment, the lower the growth. Percentages of silt-sized sediment below 10%, however, did not affect growth. In general, C. closterium seems to be slightly more sensitive to the selected sediments than the other two species. Two groups of sediment samples were determined as a function of the general response of the exposed microalgal populations: three of the six samples used were more toxic than the other three. Chemical analysis of the samples was carried out in order to determine the specific cause of differences in toxicity. After statistical analysis, concentrations of Sn, Zn, Hg, Cu and Cr (in order of importance, among all physico-chemical parameters analyzed) were the factors that best separated the two groups of samples (more and less toxic). Benthic diatoms seem to be sensitive organisms in sediment toxicity tests. Toxicity data from bioassays involving microphytobenthos should be taken into account when environmental risks are calculated.

  5. Comparison of cluster-based and source-attribution methods for estimating transmission risk using large HIV sequence databases.

    PubMed

    Le Vu, Stéphane; Ratmann, Oliver; Delpech, Valerie; Brown, Alison E; Gill, O Noel; Tostevin, Anna; Fraser, Christophe; Volz, Erik M

    2018-06-01

    Phylogenetic clustering of HIV sequences from a random sample of patients can reveal epidemiological transmission patterns, but interpretation is hampered by limited theoretical support, and the statistical properties of clustering analysis remain poorly understood. Alternatively, source attribution methods allow fitting of HIV transmission models and thereby quantify aspects of disease transmission. A simulation study was conducted to assess error rates of clustering methods for detecting transmission risk factors. We modeled HIV epidemics among men having sex with men and generated phylogenies comparable to those that can be obtained from HIV surveillance data in the UK. Clustering and source attribution approaches were applied to evaluate their ability to identify patient attributes as transmission risk factors. We find that commonly used methods show a misleading association between cluster size or odds of clustering and covariates that are correlated with time since infection, regardless of their influence on transmission. Clustering methods usually have higher error rates and lower sensitivity than source attribution methods for identifying transmission risk factors. However, neither method provides robust estimates of transmission risk ratios. Source attribution methods can alleviate the drawbacks of phylogenetic clustering, but formal population genetic modeling may be required to estimate quantitative transmission risk factors. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  6. Reproducibility of preclinical animal research improves with heterogeneity of study samples

    PubMed Central

    Vogt, Lucile; Sena, Emily S.; Würbel, Hanno

    2018-01-01

    Single-laboratory studies conducted under highly standardized conditions are the gold standard in preclinical animal research. Using simulations based on 440 preclinical studies across 13 different interventions in animal models of stroke, myocardial infarction, and breast cancer, we compared the accuracy of effect size estimates between single-laboratory and multi-laboratory study designs. Single-laboratory studies generally failed to predict effect size accurately, and larger sample sizes rendered effect size estimates even less accurate. By contrast, multi-laboratory designs including as few as 2 to 4 laboratories increased coverage probability by up to 42 percentage points without a need for larger sample sizes. These findings demonstrate that within-study standardization is a major cause of poor reproducibility. More representative study samples are required to improve the external validity and reproducibility of preclinical animal research and to prevent wasting animals and resources for inconclusive research. PMID:29470495

  7. Integrative genetic risk prediction using non-parametric empirical Bayes classification.

    PubMed

    Zhao, Sihai Dave

    2017-06-01

    Genetic risk prediction is an important component of individualized medicine, but prediction accuracies remain low for many complex diseases. A fundamental limitation is the sample sizes of the studies on which the prediction algorithms are trained. One way to increase the effective sample size is to integrate information from previously existing studies. However, it can be difficult to find existing data that examine the target disease of interest, especially if that disease is rare or poorly studied. Furthermore, individual-level genotype data from these auxiliary studies are typically difficult to obtain. This article proposes a new approach to integrative genetic risk prediction of complex diseases with binary phenotypes. It accommodates possible heterogeneity in the genetic etiologies of the target and auxiliary diseases using a tuning parameter-free non-parametric empirical Bayes procedure, and can be trained using only auxiliary summary statistics. Simulation studies show that the proposed method can provide superior predictive accuracy relative to non-integrative as well as integrative classifiers. The method is applied to a recent study of pediatric autoimmune diseases, where it substantially reduces prediction error for certain target/auxiliary disease combinations. The proposed method is implemented in the R package ssa. © 2016, The International Biometric Society.

  8. Unfolding sphere size distributions with a density estimator based on Tikhonov regularization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weese, J.; Korat, E.; Maier, D.

    1997-12-01

    This report proposes a method for unfolding sphere size distributions, given a sample of radii, that combines the advantages of a density estimator with those of Tikhonov regularization methods. To develop this method, the report discusses the following topics: the relation between the profile and the sphere size distribution; the method for unfolding sphere size distributions; results based on simulations; and a comparison with experimental data.

  9. Addressing small sample size bias in multiple-biomarker trials: Inclusion of biomarker-negative patients and Firth correction.

    PubMed

    Habermehl, Christina; Benner, Axel; Kopp-Schneider, Annette

    2018-03-01

    In recent years, numerous approaches for biomarker-based clinical trials have been developed. One of these developments is multiple-biomarker trials, which aim to investigate multiple biomarkers simultaneously in independent subtrials. For low-prevalence biomarkers, small sample sizes within the subtrials have to be expected, as well as many biomarker-negative patients at the screening stage. The small sample sizes may make it unfeasible to analyze the subtrials individually. This imposes the need to develop new approaches for the analysis of such trials. With an expected large group of biomarker-negative patients, it seems reasonable to explore options to benefit from including them in such trials. We consider advantages and disadvantages of the inclusion of biomarker-negative patients in a multiple-biomarker trial with a survival endpoint. We discuss design options that include biomarker-negative patients in the study and address the issue of small sample size bias in such trials. We carry out a simulation study for a design where biomarker-negative patients are kept in the study and are treated with standard of care. We compare three different analysis approaches based on the Cox model to examine if the inclusion of biomarker-negative patients can provide a benefit with respect to bias and variance of the treatment effect estimates. We apply the Firth correction to reduce the small sample size bias. The results of the simulation study suggest that for small sample situations, the Firth correction should be applied to adjust for the small sample size bias. In addition to the Firth penalty, the inclusion of biomarker-negative patients in the analysis can lead to further but small improvements in bias and standard deviation of the estimates. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Inhalation exposure during spray application and subsequent sanding of a wood sealant containing zinc oxide nanoparticles.

    PubMed

    Cooper, Michael R; West, Gavin H; Burrelli, Leonard G; Dresser, Daniel; Griffin, Kelsey N; Segrave, Alan M; Perrenoud, Jon; Lippy, Bruce E

    2017-07-01

    Nano-enabled construction products have entered into commerce. There are concerns about the safety of manufactured nanomaterials, and exposure assessments are needed for a more complete understanding of risk. This study assessed potential inhalation exposure to ZnO nanoparticles during spray application and power sanding of a commercially available wood sealant and evaluated the effectiveness of local exhaust ventilation in reducing exposure. A tradesperson performed the spraying and sanding inside an environmentally-controlled chamber. Dust control methods during sanding were compared. Filter-based sampling, electron microscopy, and real-time particle counters provided measures of exposure. Airborne nanoparticles above background levels were detected by particle counters for all exposure scenarios. Nanoparticle number concentrations and particle size distributions were similar for sanding of treated versus untreated wood. Very few unbound nanoparticles were detected in aerosol samples via electron microscopy; rather, nano-sized ZnO was contained within, or on the surface of, larger airborne particles. Whether the presence of nanoscale ZnO in these aerosols affects toxicity merits further investigation. Mass-based exposure measurements were below the NIOSH Recommended Exposure Limit for Zn, although there are no established exposure limits for nanoscale ZnO. Local exhaust ventilation was effective, reducing airborne nanoparticle number concentrations by up to 92% and reducing personal exposure to total dust by at least 80% in terms of mass. Given the discrepancies between the particle count data and electron microscopy observations, the chemical identity of the airborne nanoparticles detected by the particle counters remains uncertain. Prior studies attributed the main source of nanoparticle emissions during sanding to copper nanoparticles generated from electric sander motors.
Potentially contrary results are presented suggesting the sander motor may not have been the primary source of nanoparticle emissions in this study. Further research is needed to understand potential risks faced by construction workers exposed to mixed aerosols containing manufactured nanomaterials. Until these risks are better understood, this study demonstrates that engineering controls can reduce exposure to manufactured nanomaterials; doing so may be prudent for protecting worker health.

  11. Power and Sample Size Calculations for Logistic Regression Tests for Differential Item Functioning

    ERIC Educational Resources Information Center

    Li, Zhushan

    2014-01-01

    Logistic regression is a popular method for detecting uniform and nonuniform differential item functioning (DIF) effects. Theoretical formulas for the power and sample size calculations are derived for likelihood ratio tests and Wald tests based on the asymptotic distribution of the maximum likelihood estimators for the logistic regression model.…

  12. What is an adequate sample size? Operationalising data saturation for theory-based interview studies.

    PubMed

    Francis, Jill J; Johnston, Marie; Robertson, Clare; Glidewell, Liz; Entwistle, Vikki; Eccles, Martin P; Grimshaw, Jeremy M

    2010-12-01

    In interview studies, sample size is often justified by interviewing participants until reaching 'data saturation'. However, there is no agreed method of establishing this. We propose principles for deciding saturation in theory-based interview studies (where conceptual categories are pre-established by existing theory). First, specify a minimum sample size for initial analysis (initial analysis sample). Second, specify how many more interviews will be conducted without new ideas emerging (stopping criterion). We demonstrate these principles in two studies, based on the theory of planned behaviour, designed to identify three belief categories (Behavioural, Normative and Control), using an initial analysis sample of 10 and stopping criterion of 3. Study 1 (retrospective analysis of existing data) identified 84 shared beliefs of 14 general medical practitioners about managing patients with sore throat without prescribing antibiotics. The criterion for saturation was achieved for Normative beliefs but not for other beliefs or studywise saturation. In Study 2 (prospective analysis), 17 relatives of people with Paget's disease of the bone reported 44 shared beliefs about taking genetic testing. Studywise data saturation was achieved at interview 17. We propose specification of these principles for reporting data saturation in theory-based interview studies. The principles may be adaptable for other types of studies.
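    The two principles translate directly into a stopping rule. A sketch, assuming each interview has already been coded into a set of belief labels (the coding itself, the hard part, is not modeled here) and reading the stopping criterion as consecutive interviews that add no new belief:

```python
def saturation_point(interviews, initial_sample=10, stopping_criterion=3):
    """Return the 1-based index of the interview at which data
    saturation is declared: after the initial analysis sample, stop
    once `stopping_criterion` consecutive interviews yield no new
    belief. Returns None if saturation is never reached."""
    seen = set()
    consecutive_no_new = 0
    for i, beliefs in enumerate(interviews, start=1):
        new = set(beliefs) - seen
        seen.update(new)
        consecutive_no_new = 0 if new else consecutive_no_new + 1
        if i > initial_sample and consecutive_no_new >= stopping_criterion:
            return i
    return None
```

    With an initial analysis sample of 10 and a stopping criterion of 3, saturation is declared at the first interview after which three in a row have added nothing new; how the criterion interacts with the initial-sample boundary is one design choice this sketch makes explicit.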

  13. Risk Factors for Incident Chronic Insomnia: A General Population Prospective Study

    PubMed Central

    Singareddy, Ravi; Vgontzas, Alexandros N.; Fernandez-Mendoza, Julio; Liao, Duanping; Calhoun, Susan; Shaffer, Michele L.; Bixler, Edward O.

    2012-01-01

    Objective: The few population-based, prospective studies that have examined risk factors of incident insomnia were limited by small sample size, short follow-up, and lack of data on medical disorders or polysomnography. We prospectively examined the associations between demographics, behavioral factors, psychiatric and medical disorders, and polysomnography with incident chronic insomnia. Methods: From a random, general population sample of 1741 individuals of the adult Penn State Sleep Cohort, 1395 were followed up after 7.5 years. Only subjects without chronic insomnia at baseline (n=1246) were included in this study. Structured medical and psychiatric history, personality testing, and 8-hour polysomnography were obtained at baseline. Structured sleep history was obtained at baseline and follow-up. Results: Incidence of chronic insomnia was 9.3%, with a higher incidence in women (12.9%) than in men (6.2%). Younger age (20–35 years), non-white ethnicity, and obesity increased the risk of chronic insomnia. Poor sleep and mental health were stronger predictors of incident chronic insomnia compared to physical health. Higher scores in MMPI-2, indicating maladaptive personality traits, and excessive use of coffee at baseline predicted incident chronic insomnia. Polysomnographic variables, such as short sleep duration or sleep apnea, did not predict incident chronic insomnia. Conclusion: Mental health, poor sleep, and obesity, but not sleep apnea, are significant risk factors for incident chronic insomnia. Focusing on these more vulnerable groups and addressing the modifiable risk factors may help reduce the incidence of chronic insomnia, a common and chronic sleep disorder associated with significant medical and psychiatric morbidity and mortality. PMID:22425576

  14. Interpreting and Reporting Effect Sizes in Research Investigations.

    ERIC Educational Resources Information Center

    Tapia, Martha; Marsh, George E., II

    Since 1994, the American Psychological Association (APA) has advocated the inclusion of effect size indices in reporting research to elucidate the statistical significance of studies based on sample size. In 2001, the fifth edition of the APA "Publication Manual" stressed the importance of including an index of effect size to clarify…

  15. Elemental Concentrations in Roadside Dust Along Two National Highways in Northern Vietnam and the Health-Risk Implication.

    PubMed

    Phi, Thai Ha; Chinh, Pham Minh; Cuong, Doan Danh; Ly, Luong Thi Mai; Van Thinh, Nguyen; Thai, Phong K

    2018-01-01

    There is a need to assess the risk of exposure to metals via roadside dust in Vietnam, where many people live along the roads/highways and are constantly exposed to roadside dust. In this study, we collected dust samples at 55 locations along two major highways in north-east Vietnam, which passed through different land use areas. Samples were sieved into three different particle sizes and analyzed for concentrations of eight metals using an X-ray fluorescence instrument. The concentrations and environmental indices (EF, Igeo) of metals were used to evaluate the degree of pollution in the samples. Among different land uses, industrial areas could be highly polluted with heavy metals in roadside dust, followed by commerce and power plants. Additionally, the traffic density probably played an important role; higher concentrations were observed in samples from Highway No. 5, where traffic is several times higher than on Highway No. 18. According to the risk assessment, Cr poses the highest noncarcinogenic risk even though the health hazard index values of assessed heavy metals in this study were within the acceptable range. Our assessment also found that the risk of exposure to heavy metals through roadside dust is much higher for children than for adults.

  16. Criteria for assessing the ecological risk of nonylphenol for aquatic life in Chinese surface fresh water.

    PubMed

    Zhang, Liangmao; Wei, Caidi; Zhang, Hui; Song, Mingwei

    2017-10-01

    The typical environmental endocrine disruptor nonylphenol is becoming an increasingly common pollutant in both fresh and salt water; it compromises the growth and development of many aquatic organisms. As yet, water quality criteria with respect to nonylphenol pollution have not been established in China. Here, the predicted "no effect concentration" of nonylphenol was derived from an analysis of species sensitivity distribution covering a range of species mainly native to China, as a means of quantifying the ecological risk of nonylphenol in surface fresh water. The resulting model, based on the log-logistic distribution, proved to be robust; the minimum sample sizes required for generating a stable estimate of HC5 were 12 for acute toxicity and 13 for chronic toxicity. The criteria maximum concentration and criteria continuous concentration were, respectively, 18.49 μg L⁻¹ and 1.85 μg L⁻¹. Among the 24 sites surveyed, two were associated with a high ecological risk (risk quotient >1) and 12 with a moderate ecological risk (risk quotient >0.1). The potentially affected fraction ranged from 0.008% to 24.600%. The analysis provides a theoretical basis for both short- and long-term risk assessments with respect to nonylphenol, and also a means to quantify the risk to aquatic ecosystems. Copyright © 2017 Elsevier Ltd. All rights reserved.
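    For a log-logistic species sensitivity distribution, HC5 (the concentration hazardous to 5% of species) is simply the fitted distribution's 5th percentile, available in closed form. A sketch with illustrative parameters rather than the paper's fitted values (alpha is the scale/median, beta the shape):

```python
def loglogistic_quantile(p, alpha, beta):
    """Quantile of the log-logistic distribution with CDF
    F(x) = 1 / (1 + (x/alpha)**(-beta))."""
    return alpha * (p / (1 - p)) ** (1 / beta)

def hc5(alpha, beta):
    """HC5: the concentration hazardous to 5% of species, i.e. the
    5th percentile of the species sensitivity distribution."""
    return loglogistic_quantile(0.05, alpha, beta)
```

    For example, with alpha = 100 and beta = 2, the median sensitivity is 100 and HC5 is about 22.9; criteria are then typically derived by dividing HC5 by an assessment factor.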

  17. Determining the risk of cardiovascular disease using ion mobility of lipoproteins

    DOEpatents

    Benner, W. Henry; Krauss, Ronald M.; Blanche, Patricia J.

    2010-05-11

    A medical diagnostic method and instrumentation system for analyzing noncovalently bonded agglomerated biological particles is described. The method and system comprises: a method of preparation for the biological particles; an electrospray generator; an alpha particle radiation source; a differential mobility analyzer; a particle counter; and data acquisition and analysis means. The medical device is useful for the assessment of human diseases, such as cardiac disease risk and hyperlipidemia, by rapid quantitative analysis of lipoprotein fraction densities. Initially, purification procedures are described to reduce an initial blood sample to an analytical input to the instrument. The measured sizes from the analytical sample are correlated with densities, resulting in a spectrum of lipoprotein densities. The lipoprotein density distribution can then be used to characterize cardiac and other lipid-related health risks.

  18. Conservative Sample Size Determination for Repeated Measures Analysis of Covariance.

    PubMed

    Morgan, Timothy M; Case, L Douglas

    2013-07-05

    In the design of a randomized clinical trial with one pre and multiple post randomized assessments of the outcome variable, one needs to account for the repeated measures in determining the appropriate sample size. Unfortunately, one seldom has a good estimate of the variance of the outcome measure, let alone the correlations among the measurements over time. We show how sample sizes can be calculated by making conservative assumptions regarding the correlations for a variety of covariance structures. The most conservative choice for the correlation depends on the covariance structure and the number of repeated measures. In the absence of good estimates of the correlations, the sample size is often based on a two-sample t-test, making the 'ultra' conservative and unrealistic assumption that there are zero correlations between the baseline and follow-up measures while at the same time assuming there are perfect correlations between the follow-up measures. Compared to the case of taking a single measurement, substantial savings in sample size can be realized by accounting for the repeated measures, even with very conservative assumptions regarding the parameters of the assumed correlation matrix. Assuming compound symmetry, the sample size from the two-sample t-test calculation can be reduced at least 44%, 56%, and 61% for repeated measures analysis of covariance by taking 2, 3, and 4 follow-up measures, respectively. The results offer a rational basis for determining a fairly conservative, yet efficient, sample size for clinical trials with repeated measures and a baseline value.
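    The quoted 44%, 56%, and 61% figures can be reproduced from the compound-symmetry variance factor for ANCOVA with one baseline measure and r follow-ups, maximized over the correlation ρ to obtain the most conservative value. The sketch below assumes the well-known Frison-Pocock-style factor (1 + (r-1)ρ)/r - ρ², which is an assumption on my part rather than quoted from the paper, so treat it as an illustration of the idea:

```python
def ancova_variance_factor(rho, r):
    """Variance factor, relative to a single post measurement, for
    ANCOVA on the mean of r follow-ups with one baseline covariate,
    under compound symmetry with common correlation rho (assumed
    formula in the spirit of Frison and Pocock)."""
    return (1 + (r - 1) * rho) / r - rho**2

def conservative_reduction(r, grid=10000):
    """Sample-size reduction versus a two-sample t test, using the
    most conservative (largest) variance factor over rho in [0, 1)."""
    worst = max(ancova_variance_factor(i / grid, r) for i in range(grid))
    return 1 - worst
```

    Under this formula the worst-case factor occurs at ρ = (r-1)/(2r), and the reductions come out to about 0.44, 0.56, and 0.61 for r = 2, 3, and 4 follow-ups, matching the abstract's figures.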

  19. On estimation of time-dependent attributable fraction from population-based case-control studies.

    PubMed

    Zhao, Wei; Chen, Ying Qing; Hsu, Li

    2017-09-01

    Population attributable fraction (PAF) is widely used to quantify the disease burden associated with a modifiable exposure in a population. It has been extended to a time-varying measure that provides additional information on when and how the exposure's impact varies over time for cohort studies. However, there is no estimation procedure for PAF using data that are collected from population-based case-control studies, which, because of time and cost efficiency, are commonly used for studying genetic and environmental risk factors of disease incidences. In this article, we show that time-varying PAF is identifiable from a case-control study and develop a novel estimator of PAF. Our estimator combines odds ratio estimates from logistic regression models and density estimates of the risk factor distribution conditional on failure times in cases from a kernel smoother. The proposed estimator is shown to be consistent and asymptotically normal with asymptotic variance that can be estimated empirically from the data. Simulation studies demonstrate that the proposed estimator performs well in finite sample sizes. Finally, the method is illustrated by a population-based case-control study of colorectal cancer. © 2017, The International Biometric Society.
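    For orientation, the classical (time-fixed) PAF for case-control data can be written in Miettinen's case-based form, with the odds ratio standing in for the relative risk under a rare-disease assumption. This is background only; the paper's time-dependent estimator additionally involves logistic regression models and kernel-smoothed densities over failure time, which this sketch does not reproduce:

```python
def paf_miettinen(p_exposed_in_cases, odds_ratio):
    """Population attributable fraction via Miettinen's formula:
    PAF = p_c * (RR - 1) / RR, where p_c is the exposure prevalence
    among cases and the odds ratio approximates the relative risk
    (rare-disease assumption)."""
    return p_exposed_in_cases * (odds_ratio - 1) / odds_ratio
```

    For example, if 60% of cases are exposed and the OR is 2.0, PAF = 0.3: removing the exposure would be expected to avert roughly 30% of cases in this population.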

  20. The predictive efficiency of school bullying versus later offending: a systematic/meta-analytic review of longitudinal studies.

    PubMed

    Ttofi, Maria M; Farrington, David P; Lösel, Friedrich; Loeber, Rolf

    2011-04-01

    Although bullying and delinquency share similar risk factors, no previous systematic review has ever been conducted to examine possible links between school bullying and criminal offending later in life. To investigate the extent to which bullying perpetration at school predicts offending later in life, and whether this relation holds after controlling for other major childhood risk factors. Results are based on a thorough systematic review and meta-analysis of studies measuring school bullying and later offending. Effect sizes are based on both published and unpublished studies; longitudinal investigators of 28 studies have conducted specific analyses for our review. The probability of offending up to 11 years later was much higher for school bullies than for non-involved students [odds ratio (OR) = 2.50; 95% confidence interval (CI): 2.03-3.08]. Bullying perpetration was a significant risk factor for later offending, even after controlling for major childhood risk factors (OR = 1.82, 95% CI: 1.55-2.14). Effect sizes were smaller when the follow-up period was longer and larger when bullying was assessed in older children. The age of participants when outcome measures were taken was negatively related to effect sizes. Finally, the summary effect size did not decrease much as the number of controlled risk factors increased. School bullying is a strong and specific risk factor for later offending. Effective anti-bullying programmes should be promoted, and could be viewed as a form of early crime prevention. Such programmes would have a high benefit:cost ratio. Copyright © 2011 John Wiley & Sons, Ltd.
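    Pooled odds ratios of this kind are typically obtained by inverse-variance weighting of per-study log odds ratios. A minimal fixed-effect sketch, assuming each study reports an OR with a 95% CI; the two studies below are hypothetical, not the review's data:

```python
from math import log, exp

def pooled_or(studies):
    """Fixed-effect inverse-variance pooling of odds ratios.
    Each study is (OR, lower 95% CI, upper 95% CI); the standard error of
    log(OR) is recovered from the width of the CI on the log scale."""
    num = den = 0.0
    for or_, lo, hi in studies:
        se = (log(hi) - log(lo)) / (2 * 1.96)
        w = 1.0 / se ** 2          # inverse-variance weight
        num += w * log(or_)
        den += w
    return exp(num / den)

summary = pooled_or([(2.0, 1.5, 2.67), (3.0, 2.0, 4.5)])  # hypothetical studies
```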

  1. A U-statistics based approach to sample size planning of two-arm trials with discrete outcome criterion aiming to establish either superiority or noninferiority.

    PubMed

    Wellek, Stefan

    2017-02-28

    In current practice, the most frequently applied approach to the handling of ties in the Mann-Whitney-Wilcoxon (MWW) test is based on the conditional distribution of the sum of mid-ranks, given the observed pattern of ties. Starting from this conditional version of the testing procedure, a sample size formula was derived and investigated by Zhao et al. (Stat Med 2008). In contrast, the approach we pursue here is a nonconditional one exploiting explicit representations for the variances of and the covariance between the two U-statistics estimators involved in the Mann-Whitney form of the test statistic. The accuracy of both ways of approximating the sample sizes required for attaining a prespecified level of power in the MWW test for superiority with arbitrarily tied data is comparatively evaluated by means of simulation. The key qualitative conclusions to be drawn from these numerical comparisons are as follows: With the sample sizes calculated by means of the respective formula, both versions of the test maintain the level and the prespecified power with about the same degree of accuracy. Despite the equivalence in terms of accuracy, the sample size estimates obtained by means of the new formula are in many cases markedly lower than those calculated for the conditional test. Perhaps a still more important advantage of the nonconditional approach based on U-statistics is that it can also be adopted for noninferiority trials. Copyright © 2016 John Wiley & Sons, Ltd.
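    For orientation, the simplest textbook approximation for MWW sample sizes is Noether's formula, which needs only the assumed win probability P(X < Y) and ignores ties entirely, so it is a rougher tool than either the conditional formula or the paper's U-statistics approach. A sketch:

```python
from math import ceil
from statistics import NormalDist

def noether_total_n(p_xy, alpha=0.05, power=0.80, c=0.5):
    """Noether's approximate TOTAL sample size for the two-sided MWW test,
    where p_xy = P(X < Y) is the assumed win probability and c is the
    fraction of subjects allocated to the first arm."""
    z = NormalDist().inv_cdf
    za, zb = z(1 - alpha / 2), z(power)
    return ceil((za + zb) ** 2 / (12 * c * (1 - c) * (p_xy - 0.5) ** 2))

n_total = noether_total_n(0.65)  # 117 subjects in total under these assumptions
```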

  2. Robustness of methods for blinded sample size re-estimation with overdispersed count data.

    PubMed

    Schneider, Simon; Schmidli, Heinz; Friede, Tim

    2013-09-20

    Counts of events are increasingly common as primary endpoints in randomized clinical trials. With between-patient heterogeneity leading to variances in excess of the mean (referred to as overdispersion), statistical models reflecting this heterogeneity by mixtures of Poisson distributions are frequently employed. Sample size calculation in the planning of such trials requires knowledge of the nuisance parameters, that is, the control (or overall) event rate and the overdispersion parameter. Usually, there is little prior knowledge regarding these parameters in the design phase, resulting in considerable uncertainty regarding the sample size. In this situation internal pilot studies have been found very useful, and very recently several blinded procedures for sample size re-estimation have been proposed for overdispersed count data, one of which is based on an EM-algorithm. In this paper we investigate the EM-algorithm based procedure with respect to aspects of its implementation by studying the algorithm's dependence on the choice of convergence criterion, and find that the procedure is sensitive to the choice of the stopping criterion in scenarios relevant to clinical practice. We also compare the EM-based procedure to other competing procedures regarding their operating characteristics such as sample size distribution and power. Furthermore, the robustness of these procedures to deviations from the model assumptions is explored. We find that some of the procedures are robust to at least moderate deviations. The results are illustrated using data from the US National Heart, Lung and Blood Institute sponsored Asymptomatic Cardiac Ischemia Pilot study. Copyright © 2013 John Wiley & Sons, Ltd.
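    The initial calculation that such re-estimation procedures update mid-trial can be sketched with a standard normal approximation for comparing two overdispersed rates on the log scale. This is a generic planning formula, not the paper's EM-based procedure, and the rates and dispersion below are hypothetical:

```python
from math import ceil, log
from statistics import NormalDist

def nb_n_per_arm(rate_ctrl, rate_trt, phi, t=1.0, alpha=0.05, power=0.80):
    """Approximate per-arm sample size for comparing two negative binomial
    event rates on the log scale; phi is the overdispersion parameter in
    Var = mu + phi * mu**2, and t is the follow-up time per patient."""
    z = NormalDist().inv_cdf
    za, zb = z(1 - alpha / 2), z(power)
    var_term = 1.0 / (rate_ctrl * t) + 1.0 / (rate_trt * t) + 2.0 * phi
    return ceil((za + zb) ** 2 * var_term / log(rate_trt / rate_ctrl) ** 2)

n = nb_n_per_arm(rate_ctrl=2.0, rate_trt=1.5, phi=0.5)  # 206 per arm here
```

    Misjudging phi at the design stage shifts n substantially, which is exactly the uncertainty that blinded re-estimation from an internal pilot is meant to absorb.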

  3. [Attributable risk of co-morbid substance use disorder in poor observance to pharmacological treatment and the occurrence of relapse in schizophrenia].

    PubMed

    Ameller, A; Gorwood, P

    2015-04-01

    There are numerous risk factors involved in poor (incomplete) compliance to pharmacological treatment, and the associated relapse risk, for patients with schizophrenia. Comorbid substance use disorders are considered among the most important, although how much their presence increases the risk of poorer observance (and a higher risk of relapse) has not yet been assessed. This measure would be important, especially if the published literature on the topic provides sufficient material to perform a meta-analysis and to assess different potential biases such as those related to time (new studies are easier to publish when positive) or sample size (small samples might drive the global positive conclusion). A PubMed(®) search was made, screening the following terms between 1996 and August 2014: "Addiction AND (Observance OR Adherence) AND schizophrenia AND (French OR English [Language])" and "(Substance Abuse OR substance dependance) AND Outcome AND schizophrenia AND (French OR English [Language])". Studies were included if they described two patient groups (schizophrenia with and without a present substance use disorder) and assessed the studied outcome. MetaWin(®) version 2 was used for the meta-analysis; the publication time bias was assessed through non-parametric correlation, and the sample size bias through normal quantile plots. An attributable risk was also computed, on the basis of the odds-ratio derived from the meta-analysis and the prevalence of the analyzed trait (associated substance use disorder). Eight studies could be included in the meta-analysis, showing that the presence of a substance use disorder significantly increases the risk of poor observance to pharmacological treatment (OR=2.18 [1.84-2.58]), no significant bias being detected, either linked to time (rho=0.287, P=0.490) or sample size (Kendall's Tau=-0.286, P=0.322). The related attributable risk is 18.50%.
Only three studies could be used for the meta-analysis of the risk of relapse associated with the presence of substance use disorders. The corresponding odds-ratio is 1.52 [1.19-1.94], and the attributable risk is 31.20%, but the search for biases could not be performed because of the small number of studies. These results shed light on the importance of comorbid substance use disorder in explaining the poor observance frequently observed in patients with schizophrenia. Indeed, having an associated substance use disorder doubles the risk of poor compliance to pharmacological treatment, this comorbidity explaining a fifth of all factors involved. Although the number of available studies does not allow definite conclusions, the meta-analysis of prospective studies focusing this time on the risk of relapse requiring hospitalization is also in favor of a significant role of associated substance use disorder. These results argue in favor of developing specific strategies to better treat patients with dual diagnoses, i.e. schizophrenia and substance use disorder. Copyright © 2015 L’Encéphale, Paris. Published by Elsevier Masson SAS. All rights reserved.
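    Attributable risks of this kind follow from the pooled odds ratio and the prevalence of the comorbidity via Levin's formula, with the OR standing in for the relative risk. A minimal sketch; the 19% prevalence below is a hypothetical value chosen for illustration (with the pooled OR of 2.18 it yields a figure close to the reported 18.5%):

```python
def attributable_risk(prevalence, odds_ratio):
    """Levin's population attributable fraction:
    AR = p (OR - 1) / (1 + p (OR - 1)), OR approximating the relative risk."""
    excess = prevalence * (odds_ratio - 1.0)
    return excess / (1.0 + excess)

ar = attributable_risk(0.19, 2.18)  # ~0.183 with this hypothetical prevalence
```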

  4. Improving the quality of biomarker discovery research: the right samples and enough of them.

    PubMed

    Pepe, Margaret S; Li, Christopher I; Feng, Ziding

    2015-06-01

    Biomarker discovery research has yielded few biomarkers that validate for clinical use. A contributing factor may be poor study designs. The goal in discovery research is to identify a subset of potentially useful markers from a large set of candidates assayed on case and control samples. We recommend the PRoBE design for selecting samples. We propose sample size calculations that require specifying: (i) a definition for biomarker performance; (ii) the proportion of useful markers the study should identify (Discovery Power); and (iii) the tolerable number of useless markers amongst those identified (False Leads Expected, FLE). We apply the methodology to a study of 9,000 candidate biomarkers for risk of colon cancer recurrence where a useful biomarker has positive predictive value ≥ 30%. We find that 40 patients with recurrence and 160 without recurrence suffice to filter out 98% of useless markers (2% FLE) while identifying 95% of useful biomarkers (95% Discovery Power). Alternative methods for sample size calculation required more assumptions. Biomarker discovery research should utilize quality biospecimen repositories and include sample sizes that enable markers meeting prespecified performance characteristics for well-defined clinical applications to be identified. The scientific rigor of discovery research should be improved. ©2015 American Association for Cancer Research.
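    The FLE and Discovery Power criteria have a simple expected-count reading: given a per-marker false-positive rate and per-marker power, the expected numbers of useless and useful markers passing the filter follow directly, and the sample size is then chosen so those per-marker rates are achieved. A sketch with a hypothetical split of the 9,000 candidates:

```python
def expected_false_leads(n_useless, alpha):
    """Expected number of useless markers passing the filter (FLE)."""
    return n_useless * alpha

def expected_discoveries(n_useful, power):
    """Expected number of useful markers passing the filter."""
    return n_useful * power

# hypothetical split: 8,950 useless and 50 useful candidates
fle = expected_false_leads(8950, 0.02)   # 2% of useless markers slip through
hits = expected_discoveries(50, 0.95)    # 95% Discovery Power
```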

  5. Spatial variations in annual cycles of body-size spectra of planktonic ciliates and their environmental drivers in marine ecosystems.

    PubMed

    Xu, Henglong; Jiang, Yong; Xu, Guangjian

    2016-11-15

    Body-size spectra have proved to be a useful taxon-free resolution for summarizing community structure for bioassessment. The spatial variations in annual cycles of body-size spectra of planktonic ciliates and their environmental drivers were studied based on an annual dataset. Samples were collected biweekly at five stations in a bay of the Yellow Sea, northern China, during a 1-year cycle. Based on a multivariate approach, the second-stage analysis, it was shown that the annual cycles of the body-size spectra were significantly different among the five sampling stations. Correlation analysis demonstrated that the spatial variations in the body-size spectra were significantly related to changes in environmental conditions, especially dissolved nitrogen, alone or in combination with salinity and dissolved oxygen. Based on these results, it is suggested that nutrients may be the environmental drivers shaping the spatial variations in annual cycles of planktonic ciliates in terms of body-size spectra in marine ecosystems. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. The effective elastic properties of human trabecular bone may be approximated using micro-finite element analyses of embedded volume elements.

    PubMed

    Daszkiewicz, Karol; Maquer, Ghislain; Zysset, Philippe K

    2017-06-01

    Boundary conditions (BCs) and sample size affect the measured elastic properties of cancellous bone. Samples too small to be representative appear stiffer under kinematic uniform BCs (KUBCs) than under periodicity-compatible mixed uniform BCs (PMUBCs). To avoid those effects, we propose to determine the effective properties of trabecular bone using an embedded configuration. Cubic samples of various sizes (2.63, 5.29, 7.96, 10.58 and 15.87 mm) were cropped from [Formula: see text] scans of femoral heads and vertebral bodies. They were converted into [Formula: see text] models and their stiffness tensor was established via six uniaxial and shear load cases. PMUBCs- and KUBCs-based tensors were determined for each sample. "In situ" stiffness tensors were also evaluated for the embedded configuration, i.e. when the loads were transmitted to the samples via a layer of trabecular bone. The Zysset-Curnier model accounting for bone volume fraction and fabric anisotropy was fitted to those stiffness tensors, and model parameters [Formula: see text] (Poisson's ratio) [Formula: see text] and [Formula: see text] (elastic and shear moduli) were compared between sizes. BCs and sample size had little impact on [Formula: see text]. However, KUBCs- and PMUBCs-based [Formula: see text] and [Formula: see text], respectively, decreased and increased with growing size, though convergence was not reached even for our largest samples. Both BCs produced upper and lower bounds for the in situ values that were almost constant across sample dimensions, thus appearing as an approximation of the effective properties. PMUBCs also seem appropriate for mimicking the trabecular core, but they still underestimate its elastic properties (especially in shear) even for nearly orthotropic samples.

  7. Acute Respiratory Distress Syndrome Measurement Error. Potential Effect on Clinical Study Results

    PubMed Central

    Cooke, Colin R.; Iwashyna, Theodore J.; Hofer, Timothy P.

    2016-01-01

    Rationale: Identifying patients with acute respiratory distress syndrome (ARDS) is a recognized challenge. Experts often have only moderate agreement when applying the clinical definition of ARDS to patients. However, no study has fully examined the implications of low reliability measurement of ARDS on clinical studies. Objectives: To investigate how the degree of variability in ARDS measurement commonly reported in clinical studies affects study power, the accuracy of treatment effect estimates, and the measured strength of risk factor associations. Methods: We examined the effect of ARDS measurement error in randomized clinical trials (RCTs) of ARDS-specific treatments and cohort studies using simulations. We varied the reliability of ARDS diagnosis, quantified as the interobserver reliability (κ-statistic) between two reviewers. In RCT simulations, patients identified as having ARDS were enrolled, and when measurement error was present, patients without ARDS could be enrolled. In cohort studies, risk factors as potential predictors were analyzed using reviewer-identified ARDS as the outcome variable. Measurements and Main Results: Lower reliability measurement of ARDS during patient enrollment in RCTs seriously degraded study power. Holding effect size constant, the sample size necessary to attain adequate statistical power increased by more than 50% as reliability declined, although the result was sensitive to ARDS prevalence. In a 1,400-patient clinical trial, the sample size necessary to maintain similar statistical power increased to over 1,900 when reliability declined from perfect to substantial (κ = 0.72). Lower reliability measurement diminished the apparent effectiveness of an ARDS-specific treatment from a 15.2% (95% confidence interval, 9.4–20.9%) absolute risk reduction in mortality to 10.9% (95% confidence interval, 4.7–16.2%) when reliability declined to moderate (κ = 0.51). 
In cohort studies, the effect on risk factor associations was similar. Conclusions: ARDS measurement error can seriously degrade statistical power and effect size estimates of clinical studies. The reliability of ARDS measurement warrants careful attention in future ARDS clinical studies. PMID:27159648
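    The sample-size inflation has a simple mechanism: if only a fraction of enrolled patients truly have ARDS (the positive predictive value of the enrollment diagnosis) and the treatment acts only on them, the observed absolute risk reduction shrinks proportionally and the trial must grow. A deliberately simplified sketch of that dilution; the paper's simulations work from the κ-statistic directly, and the PPV here is a hypothetical stand-in:

```python
from math import ceil
from statistics import NormalDist

def n_two_proportions(p1, p2, alpha=0.05, power=0.80):
    """Per-arm sample size for comparing two proportions (normal approximation)."""
    z = NormalDist().inv_cdf
    za, zb = z(1 - alpha / 2), z(power)
    pbar = (p1 + p2) / 2
    return ceil((za + zb) ** 2 * 2 * pbar * (1 - pbar) / (p1 - p2) ** 2)

def diluted_effect(true_arr, ppv):
    """Observed absolute risk reduction when only a fraction `ppv` of
    enrolled patients truly have ARDS and benefit from treatment."""
    return true_arr * ppv

p_ctrl = 0.40
exact = n_two_proportions(p_ctrl, p_ctrl - 0.152)                       # perfect measurement
noisy = n_two_proportions(p_ctrl, p_ctrl - diluted_effect(0.152, 0.8))  # PPV = 0.8
```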

  8. Overcoming intratumoural heterogeneity for reproducible molecular risk stratification: a case study in advanced kidney cancer.

    PubMed

    Lubbock, Alexander L R; Stewart, Grant D; O'Mahony, Fiach C; Laird, Alexander; Mullen, Peter; O'Donnell, Marie; Powles, Thomas; Harrison, David J; Overton, Ian M

    2017-06-26

    Metastatic clear cell renal cell cancer (mccRCC) portends a poor prognosis and urgently requires better clinical tools for prognostication as well as for prediction of response to treatment. Considerable investment in molecular risk stratification has sought to overcome the performance ceiling encountered by methods restricted to traditional clinical parameters. However, replication of results has proven challenging, and intratumoural heterogeneity (ITH) may confound attempts at tissue-based stratification. We investigated the influence of confounding ITH on the performance of a novel molecular prognostic model, enabled by pathologist-guided multiregion sampling (n = 183) of geographically separated mccRCC cohorts from the SuMR trial (development, n = 22) and the SCOTRRCC study (validation, n = 22). Tumour protein levels quantified by reverse phase protein array (RPPA) were investigated alongside clinical variables. Regularised wrapper selection identified features for Cox multivariate analysis with overall survival as the primary endpoint. The optimal subset of variables in the final stratification model consisted of N-cadherin, EPCAM, Age, mTOR (NEAT). Risk groups from NEAT had a markedly different prognosis in the validation cohort (log-rank p = 7.62 × 10 -7 ; hazard ratio (HR) 37.9, 95% confidence interval 4.1-353.8) and 2-year survival rates (accuracy = 82%, Matthews correlation coefficient = 0.62). Comparisons with established clinico-pathological scores suggest favourable performance for NEAT (Net reclassification improvement 7.1% vs International Metastatic Database Consortium score, 25.4% vs Memorial Sloan Kettering Cancer Center score). Limitations include the relatively small cohorts and associated wide confidence intervals on predictive performance. Our multiregion sampling approach enabled investigation of NEAT validation when limiting the number of samples analysed per tumour, which significantly degraded performance. 
Indeed, sample selection could change risk group assignment for 64% of patients, and prognostication with one sample per patient performed only slightly better than random expectation (median logHR = 0.109). Low grade tissue was associated with 3.5-fold greater variation in predicted risk than high grade (p = 0.044). This case study in mccRCC quantitatively demonstrates the critical importance of tumour sampling for the success of molecular biomarker research where ITH is a factor. The NEAT model shows promise for mccRCC prognostication and warrants follow-up in larger cohorts. Our work evidences actionable parameters to guide sample collection (tumour coverage, size, grade) to inform the development of reproducible molecular risk stratification methods.

  9. A probabilistic asteroid impact risk model: assessment of sub-300 m impacts

    NASA Astrophysics Data System (ADS)

    Mathias, Donovan L.; Wheeler, Lorien F.; Dotson, Jessie L.

    2017-06-01

    A comprehensive asteroid threat assessment requires the quantification of both the impact likelihood and resulting consequence across the range of possible events. This paper presents a probabilistic asteroid impact risk (PAIR) assessment model developed for this purpose. The model incorporates published impact frequency rates with state-of-the-art consequence assessment tools, applied within a Monte Carlo framework that generates sets of impact scenarios from uncertain input parameter distributions. Explicit treatment of atmospheric entry is included to produce energy deposition rates that account for the effects of thermal ablation and object fragmentation. These energy deposition rates are used to model the resulting ground damage, and affected populations are computed for the sampled impact locations. The results for each scenario are aggregated into a distribution of potential outcomes that reflect the range of uncertain impact parameters, population densities, and strike probabilities. As an illustration of the utility of the PAIR model, the results are used to address the question of what minimum size asteroid constitutes a threat to the population. To answer this question, complete distributions of results are combined with a hypothetical risk tolerance posture to provide the minimum size, given sets of initial assumptions for objects up to 300 m in diameter. Model outputs demonstrate how such questions can be answered and provide a means for interpreting the effect that input assumptions and uncertainty can have on final risk-based decisions. Model results can be used to prioritize investments to gain knowledge in critical areas or, conversely, to identify areas where additional data have little effect on the metrics of interest.
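    The Monte Carlo structure described above can be sketched in miniature: draw uncertain impactor properties, propagate each draw to a consequence metric, and aggregate the resulting distribution. The toy below samples only the diameter and reports impact energy; every parameter value (density, velocity, size range) is an illustrative assumption, and the actual PAIR model additionally treats entry physics, ground damage, and population exposure:

```python
import math
import random

def impact_energy_mt(diameter_m, density=2600.0, velocity_ms=17000.0):
    """Kinetic energy of a spherical impactor, in megatons of TNT."""
    mass = density * math.pi / 6.0 * diameter_m ** 3
    joules = 0.5 * mass * velocity_ms ** 2
    return joules / 4.184e15  # 1 Mt TNT = 4.184e15 J

def energy_distribution(n_draws, seed=1):
    """Aggregate a toy outcome distribution over an uncertain diameter."""
    rng = random.Random(seed)
    return sorted(impact_energy_mt(rng.uniform(50.0, 300.0))
                  for _ in range(n_draws))
```

    Percentiles of such a distribution, combined with a risk tolerance posture, are what turn the simulation into statements like "what minimum size constitutes a threat".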

  10. The effects of sample size on population genomic analyses--implications for the tests of neutrality.

    PubMed

    Subramanian, Sankar

    2016-02-20

    One of the fundamental measures of molecular genetic variation is Watterson's estimator (θ), which is based on the number of segregating sites. The estimation of θ is unbiased only under neutrality and constant population growth. It is well known that the estimation of θ is biased when these assumptions are violated. However, the effect of sample size in modulating the bias has not been well appreciated. We examined this issue in detail based on large-scale exome data and robust simulations. Our investigation revealed that sample size appreciably influences θ estimation, and this effect was much higher for constrained genomic regions than for neutral regions. For instance, θ estimated for synonymous sites using 512 human exomes was 1.9 times higher than that obtained using 16 exomes. However, this difference was 2.5 times for the nonsynonymous sites of the same data. We observed a positive correlation between the rate of increase in θ estimates (with respect to the sample size) and the magnitude of selection pressure. For example, θ estimated for the nonsynonymous sites of highly constrained genes (dN/dS < 0.1) using 512 exomes was 3.6 times higher than that estimated using 16 exomes. In contrast, this difference was only 2 times for the less constrained genes (dN/dS > 0.9). The results of this study reveal the extent of underestimation owing to small sample sizes and thus emphasize the importance of sample size in estimating a number of population genomic parameters. Our results have serious implications for neutrality tests such as Tajima's D, Fu and Li's D, and those based on the McDonald-Kreitman test: the Neutrality Index and the fraction of adaptive substitutions. For instance, use of 16 exomes produced a 2.4 times higher proportion of adaptive substitutions than that obtained using 512 exomes (24% vs 10%).
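    Watterson's estimator itself is one line: θ_W = S / a_n, with a_n the (n−1)-th harmonic number. A minimal sketch, which makes visible why the estimate is stable across sample sizes only when segregating sites accumulate at the neutral, constant-size rate (E[S] ∝ a_n):

```python
def watterson_theta(num_segregating, sample_size):
    """Watterson's estimator: theta_W = S / a_n, a_n = sum_{i=1}^{n-1} 1/i."""
    a_n = sum(1.0 / i for i in range(1, sample_size))
    return num_segregating / a_n
```

    With S held fixed, a larger n gives a smaller θ_W; under selection S grows more slowly than a_n, producing the sample-size-dependent underestimation the study quantifies.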

  11. Sample size and number of outcome measures of veterinary randomised controlled trials of pharmaceutical interventions funded by different sources, a cross-sectional study.

    PubMed

    Wareham, K J; Hyde, R M; Grindlay, D; Brennan, M L; Dean, R S

    2017-10-04

    Randomised controlled trials (RCTs) are a key component of the veterinary evidence base. Sample sizes and defined outcome measures are crucial components of RCTs. To describe the sample size and number of outcome measures of veterinary RCTs either funded by the pharmaceutical industry or not, published in 2011. A structured search of PubMed identified RCTs examining the efficacy of pharmaceutical interventions. The number of outcome measures, the number of animals enrolled per trial, whether a primary outcome was identified, and the presence of a sample size calculation were extracted from the RCTs. The source of funding was identified for each trial, and groups were compared on the above parameters. Literature searches returned 972 papers; 86 papers comprising 126 individual trials were analysed. The median number of outcomes per trial was 5.0; there were no significant differences across funding groups (p = 0.133). The median number of animals enrolled per trial was 30.0; this was similar across funding groups (p = 0.302). A primary outcome was identified in 40.5% of trials and was significantly more likely to be stated in trials funded by a pharmaceutical company. A very low percentage of trials reported a sample size calculation (14.3%). Failure to report primary outcomes, failure to justify sample sizes, and the reporting of multiple outcome measures were common features in all of the clinical trials examined in this study. It is possible some of these factors may be affected by the source of funding of the studies, but the influence of funding needs to be explored with a larger number of trials. Some veterinary RCTs provide a weak evidence base, and targeted strategies are required to improve the quality of veterinary RCTs to ensure there is reliable evidence on which to base clinical decisions.

  12. The large sample size fallacy.

    PubMed

    Lantz, Björn

    2013-06-01

    Significance in the statistical sense has little to do with significance in the common practical sense. Statistical significance is a necessary but not a sufficient condition for practical significance. Hence, results that are extremely statistically significant may be highly nonsignificant in practice. The degree of practical significance is generally determined by the size of the observed effect, not the p-value. The results of studies based on large samples are often characterized by extreme statistical significance despite small or even trivial effect sizes. Interpreting such results as significant in practice without further analysis is referred to as the large sample size fallacy in this article. The aim of this article is to explore the relevance of the large sample size fallacy in contemporary nursing research. Relatively few nursing articles display explicit measures of observed effect sizes or include a qualitative discussion of observed effect sizes. Statistical significance is often treated as an end in itself. Effect sizes should generally be calculated and presented along with p-values for statistically significant results, and observed effect sizes should be discussed qualitatively through direct and explicit comparisons with the effects in related literature. © 2012 Nordic College of Caring Science.
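    The fallacy is easy to reproduce numerically: with a large enough n, a trivially small standardized effect yields a small p-value. A sketch assuming a two-sample z-test with known unit variance and hypothetical data:

```python
from math import sqrt, erf

def two_sample_z_p(mean_diff, sd, n_per_group):
    """Two-sided p-value for a two-sample z-test with known common sd."""
    z = mean_diff / (sd * sqrt(2.0 / n_per_group))
    return 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))

def cohens_d(mean_diff, sd):
    """Standardized effect size."""
    return mean_diff / sd

p = two_sample_z_p(0.02, 1.0, 50_000)  # "highly significant"
d = cohens_d(0.02, 1.0)                # yet a trivial effect
```

    Here d = 0.02 is far below any conventional threshold for practical relevance, yet p < 0.01, which is exactly the pattern the article warns against interpreting as practically significant.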

  13. Evaluation of sampling plans to detect Cry9C protein in corn flour and meal.

    PubMed

    Whitaker, Thomas B; Trucksess, Mary W; Giesbrecht, Francis G; Slate, Andrew B; Thomas, Francis S

    2004-01-01

    StarLink is a genetically modified corn that produces an insecticidal protein, Cry9C. Studies were conducted to determine the variability and Cry9C distribution among sample test results when Cry9C protein was estimated in a bulk lot of corn flour and meal. Emphasis was placed on measuring sampling and analytical variances associated with each step of the test procedure used to measure Cry9C in corn flour and meal. Two commercially available enzyme-linked immunosorbent assay kits were used: one for the determination of Cry9C protein concentration and the other for % StarLink seed. The sampling and analytical variances associated with each step of the Cry9C test procedures were determined for flour and meal. Variances were found to be functions of Cry9C concentration, and regression equations were developed to describe the relationships. Because of the larger particle size, sampling variability associated with cornmeal was about double that for corn flour. For cornmeal, the sampling variance accounted for 92.6% of the total testing variability. The observed sampling and analytical distributions were compared with the Normal distribution. In almost all comparisons, the null hypothesis that the Cry9C protein values were sampled from a Normal distribution could not be rejected at 95% confidence limits. The Normal distribution and the variance estimates were used to evaluate the performance of several Cry9C protein sampling plans for corn flour and meal. Operating characteristic curves were developed and used to demonstrate the effect of increasing sample size on reducing false positives (seller's risk) and false negatives (buyer's risk).
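    An operating characteristic curve of the kind used above can be sketched with a normal approximation: the probability of accepting a lot is the chance that the mean of n test results falls below the acceptance limit, given a concentration-dependent sampling variance. The variance function below is a hypothetical placeholder for the regression equations fitted in the study:

```python
from math import sqrt, erf

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def prob_accept(true_conc, accept_limit, sampling_var_fn, n_samples):
    """Normal-approximation OC curve point: probability the mean of n test
    results falls below the acceptance limit, where sampling_var_fn gives
    the single-test variance as a function of concentration."""
    se = sqrt(sampling_var_fn(true_conc) / n_samples)
    return phi((accept_limit - true_conc) / se)
```

    Increasing n steepens the curve on both sides of the limit: the buyer's risk (accepting a lot above the limit) and the seller's risk (rejecting one below it) both shrink, as the study's OC curves demonstrate.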

  14. Evaluation of the carcinogenic risks at the influence of POPs.

    PubMed

    Nazhmetdinova, Aiman; Kassymbayev, Adlet; Chalginbayeva, Altinay

    2017-12-20

    Kazakhstan is included in the list of environmentally vulnerable countries, and Kyzylorda oblast in particular, owing to its geographical, spatiotemporal and socioeconomic features. As part of the program "Integrated approaches in the management of public health in the Aral region", we carried out expert analyses of many samples of environmental media and food products. Samples were selected in accordance with sampling procedures set out in regulatory documents by specialists of the Pesticide Toxicology Laboratory, which is accredited by the State Standard of the Republic of Kazakhstan for compliance with ST RK ISO/IEC 17025-2007 "General requirements for the competence of test and calibration laboratories". A gas chromatograph was used for the determination of residues of organochlorine pesticides. The determination of dioxins and polychlorinated biphenyls was conducted on a gas chromatography-mass spectrometer with a quadrupole detector produced by Agilent (USA). To assess risk, we carried out mathematical calculations according to the Russian guideline on risk from chemical pollutants (No. P 2.1.10.1920-04). Calculation of the carcinogenic risk was carried out using data on the size of the exposure and the values of carcinogenic potency factors (slope factor and unit risk). The evaluation of persistent organic pollutants (POPs), based on the previous results of research concerning water, soil and food products, was carried out in five population settlements in Kyzylorda oblast: the villages of Ayteke bi, Zhalagash, Zhosaly and Shieli, and the town of Aralsk. Pollution of environmental objects with POPs, assessed by way of exposure and evaluation of the carcinogenic risk to human health, is corroborated by statistical reporting on morbidity in Kyzylorda oblast, such as diseases of the skin and subcutaneous tissue, endocrine system diseases, pregnancy complications etc.
The carcinogenic risk levels obtained for the village of Shieli, the first such estimates in the Republic of Kazakhstan, fall within the third risk range, which is not acceptable for the population; this again highlights the problem of the Aral Sea region, designated a zone of ecological disaster.
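    The carcinogenic risk arithmetic referred to above follows the standard linear low-dose model: a lifetime average daily dose multiplied by a slope factor. A sketch with hypothetical exposure inputs; the guideline-specific parameters and the study's measured concentrations are not reproduced here:

```python
def ladd_mg_per_kg_day(conc_mg_kg, intake_kg_day, ef_days_yr, ed_years,
                       body_weight_kg=70.0, lifetime_years=70.0):
    """Lifetime average daily dose (LADD) for an ingested contaminant."""
    return (conc_mg_kg * intake_kg_day * ef_days_yr * ed_years) / (
        body_weight_kg * lifetime_years * 365.0)

def cancer_risk(ladd, slope_factor):
    """Incremental lifetime cancer risk = LADD * slope factor (linear, low dose)."""
    return ladd * slope_factor

# hypothetical: 0.1 mg/kg in food, 0.5 kg/day intake, 350 days/yr for 30 yr
dose = ladd_mg_per_kg_day(0.1, 0.5, 350, 30)
risk = cancer_risk(dose, slope_factor=0.34)
```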

  15. Applying Precision Medicine to Trial Design Using Physiology. Extracorporeal CO2 Removal for Acute Respiratory Distress Syndrome.

    PubMed

    Goligher, Ewan C; Amato, Marcelo B P; Slutsky, Arthur S

    2017-09-01

    In clinical trials of therapies for acute respiratory distress syndrome (ARDS), the average treatment effect in the study population may be attenuated because individual patient responses vary widely. This inflates sample size requirements and increases the cost and difficulty of conducting successful clinical trials. One solution is to enrich the study population with patients most likely to benefit, based on predicted patient response to treatment (predictive enrichment). In this perspective, we apply the precision medicine paradigm to the emerging use of extracorporeal CO2 removal (ECCO2R) for ultraprotective ventilation in ARDS. ECCO2R enables reductions in tidal volume and driving pressure, key determinants of ventilator-induced lung injury. Using basic physiological concepts, we demonstrate that dead space and static compliance determine the effect of ECCO2R on driving pressure and mechanical power. This framework might enable prediction of individual treatment responses to ECCO2R. Enriching clinical trials by selectively enrolling patients with a significant predicted treatment response can increase treatment effect size and statistical power more efficiently than conventional enrichment strategies that restrict enrollment according to the baseline risk of death. To support this claim, we simulated the predicted effect of ECCO2R on driving pressure and mortality in a preexisting cohort of patients with ARDS. Our computations suggest that restricting enrollment to patients in whom ECCO2R allows driving pressure to be decreased by 5 cm H2O or more can reduce sample size requirement by more than 50% without increasing the total number of patients to be screened. We discuss potential implications for trial design based on this framework.
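    The physiological core of the argument is one ratio: driving pressure equals tidal volume over static respiratory-system compliance, so the benefit of an ECCO2R-enabled volume reduction can be predicted per patient. A sketch with hypothetical numbers; the actual enrichment rule in the paper selects patients whose predicted drop is at least 5 cm H2O:

```python
def driving_pressure(tidal_volume_ml, compliance_ml_per_cmh2o):
    """Static driving pressure = Vt / Crs, in cm H2O."""
    return tidal_volume_ml / compliance_ml_per_cmh2o

def predicted_reduction(vt_before_ml, vt_after_ml, crs_ml_per_cmh2o):
    """Predicted drop in driving pressure when ECCO2R permits a lower tidal
    volume, assuming compliance is unchanged."""
    return (driving_pressure(vt_before_ml, crs_ml_per_cmh2o)
            - driving_pressure(vt_after_ml, crs_ml_per_cmh2o))

# hypothetical patient: 6 -> 4 ml/kg at 70 kg, Crs = 30 ml/cmH2O
delta = predicted_reduction(420.0, 280.0, 30.0)  # ~4.7 cm H2O
```

    This hypothetical patient falls just short of a 5 cm H2O threshold; a stiffer lung (lower Crs) yields a larger predicted drop for the same volume reduction, which is what drives the enrichment.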

  16. Quantification of fetal and total circulatory DNA in maternal plasma samples before and after size fractionation by agarose gel electrophoresis.

    PubMed

    Hromadnikova, I; Zejskova, L; Doucha, J; Codl, D

    2006-11-01

    Fetal extracellular DNA is mainly derived from apoptotic bodies of trophoblast. Recent studies have shown size differences between fetal and maternal extracellular DNA. We examined the quantification of fetal (SRY gene) and total (GLO gene) extracellular DNA in maternal plasma in different fractions (100-300, 300-500, 500-700, 700-900, and >900 bp) after size fractionation by agarose gel electrophoresis. DNA was extracted from maternal plasma samples from 11 pregnant women carrying male foetuses at the 16th week of gestation. Fetal circulatory DNA was mainly detected in the 100-300 bp fraction, with a median concentration of 14.4 GE/ml. A lower median amount of 4.9 GE/ml was also found in the 300-500 bp fraction. Circulatory DNA extracted from the 100-300 bp fraction contained fetal DNA enriched 4.2-fold compared with the unseparated DNA sample; fetal DNA within the 300-500 bp fraction was enriched 2.5-fold. Circulatory fetal DNA is thus predominantly present in fractions below 500 bp, which can be used for the detection of paternally inherited alleles. However, the use of size-separated DNA is not suitable for routine clinical applications because of the risk of contamination.

  17. Model selection with multiple regression on distance matrices leads to incorrect inferences.

    PubMed

    Franckowiak, Ryan P; Panasci, Michael; Jarvis, Karl J; Acuña-Rodriguez, Ian S; Landguth, Erin L; Fortin, Marie-Josée; Wagner, Helene H

    2017-01-01

    In landscape genetics, model selection procedures based on Information Theoretic and Bayesian principles have been used with multiple regression on distance matrices (MRM) to test the relationship between multiple vectors of pairwise genetic, geographic, and environmental distance. Using Monte Carlo simulations, we examined the ability of model selection criteria based on Akaike's information criterion (AIC), its small-sample correction (AICc), and the Bayesian information criterion (BIC) to reliably rank candidate models when applied with MRM while varying the sample size. The results showed a serious problem: all three criteria exhibit a systematic bias toward selecting unnecessarily complex models containing spurious random variables and erroneously suggest a high level of support for the incorrectly ranked best model. These problems effectively increased with increasing sample size. The failure of AIC, AICc, and BIC was likely driven by the inflated sample size and different sum-of-squares partitioned by MRM, and the resulting effect on delta values. Based on these findings, we strongly discourage the continued application of AIC, AICc, and BIC for model selection with MRM.
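    The sample-size inflation at the heart of this critique can be illustrated with the Gaussian form of AIC; the residual sums of squares below are hypothetical:

```python
import math

def aic(rss, n, k):
    """Gaussian log-likelihood form of AIC for a least-squares fit with
    residual sum of squares rss, n observations and k parameters."""
    return n * math.log(rss / n) + 2 * k

# Suppose a spurious predictor shaves 1% off the RSS. With s = 50 sites,
# MRM regresses on s*(s-1)/2 = 1225 non-independent pairwise distances,
# so the criteria see an inflated n (all numbers are hypothetical).
sites = 50
pairs = sites * (sites - 1) // 2
delta_pairs = aic(990.0, pairs, 3) - aic(1000.0, pairs, 2)
delta_sites = aic(990.0, sites, 3) - aic(1000.0, sites, 2)
print(delta_pairs)  # negative: the spurious model "wins" at the inflated n
print(delta_sites)  # positive: it would lose at the true sample size
```

    The 2k complexity penalty is fixed while the n log(RSS/n) term scales with the number of pairwise distances, so the inflated n systematically favors overly complex models.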

  18. Freeze-cast alumina pore networks: Effects of freezing conditions and dispersion medium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, S. M.; Xiao, X.; Faber, K. T.

    Alumina ceramics were freeze-cast from water- and camphene-based slurries under varying freezing conditions and examined using X-ray computed tomography (XCT). Pore network characteristics, i.e., porosity, pore size, geometric surface area, and tortuosity, were measured from XCT reconstructions, and the data were used to develop a model to predict feature size from processing conditions. Classical solidification theory was used to examine relationships between pore size, temperature gradients, and freezing front velocity. Freezing front velocity was subsequently predicted from casting conditions via the two-phase Stefan problem. Resulting models for water-based samples agreed with solidification-based theories predicting lamellar spacing of binary eutectic alloys, and models for camphene-based samples concurred with those for dendritic growth. Relationships between freezing conditions and geometric surface area were also modeled by considering the inverse relationship between pore size and surface area. Tortuosity was determined to be dependent primarily on the type of dispersion medium. © 2015 Elsevier Ltd. All rights reserved.
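    The kind of processing-condition model described here can be sketched as a power-law fit of feature size against freezing-front velocity; the velocities, pore sizes, and exponent below are hypothetical, not the paper's measurements:

```python
import numpy as np

def fit_power_law(velocity, spacing):
    """Least-squares fit of spacing = a * velocity**(-b) in log-log space.
    Classical solidification theory predicts power-law scaling between
    lamellar (or dendritic) spacing and freezing-front velocity."""
    slope, intercept = np.polyfit(np.log(velocity), np.log(spacing), 1)
    return np.exp(intercept), -slope

v = np.array([1.0, 2.0, 5.0, 10.0, 20.0])   # hypothetical front velocities (um/s)
pore = 30.0 * v ** -0.5                     # hypothetical pore sizes (um)
a, b = fit_power_law(v, pore)               # recovers a ~ 30, b ~ 0.5
```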

  19. Accounting for Incomplete Species Detection in Fish Community Monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McManamay, Ryan A; Orth, Dr. Donald J; Jager, Yetta

    2013-01-01

    Riverine fish assemblages are heterogeneous and very difficult to characterize with a one-size-fits-all approach to sampling. Furthermore, detecting changes in fish assemblages over time requires accounting for variation in sampling designs. We present a modeling approach that permits heterogeneous sampling by accounting for site and sampling covariates (including method) in a model-based framework for estimation (versus a sampling-based framework). We snorkeled during three surveys and electrofished during a single survey in a suite of delineated habitats stratified by reach types. We developed single-species occupancy models to determine covariates influencing patch occupancy and species detection probabilities, whereas community occupancy models estimated species richness in light of incomplete detections. For most species, information-theoretic criteria showed higher support for models that included patch size and reach as covariates of occupancy. In addition, models including patch size and sampling method as covariates of detection probabilities also had higher support. Detection probability estimates for snorkeling surveys were higher for larger non-benthic species, whereas electrofishing was more effective at detecting smaller benthic species. The number of sites and sampling occasions required to accurately estimate occupancy varied among fish species. For rare benthic species, our results suggested that a higher number of occasions, and especially the addition of electrofishing, may be required to improve detection probabilities and obtain accurate occupancy estimates. Community models suggested that richness was 41% higher than the number of species actually observed, and the addition of an electrofishing survey increased estimated richness by 13%. These results can be useful to future fish assemblage monitoring efforts by informing sampling designs, such as site selection (e.g., stratifying based on patch size) and determining the effort required (e.g., number of sites versus occasions).
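    The trade-off between per-survey detectability and the number of occasions can be sketched under a standard independent-detections assumption (the detection probabilities below are hypothetical):

```python
from math import ceil, log

def p_detect_at_least_once(p, k):
    """Cumulative probability of detecting a species at least once over k
    survey occasions, assuming independent occasions with per-survey
    detection probability p."""
    return 1 - (1 - p) ** k

def occasions_needed(p, target=0.95):
    """Smallest number of occasions giving cumulative detection >= target."""
    return ceil(log(1 - target) / log(1 - p))

# Hypothetical per-survey detection probabilities
print(occasions_needed(0.7))   # 3  -- e.g. a large non-benthic species
print(occasions_needed(0.2))   # 14 -- e.g. a rare benthic species
```

    Low per-survey detectability drives the occasion requirement up steeply, which is why rare benthic species demand many more occasions (or a more effective method such as electrofishing).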

  20. Quantitative assessment of medical waste generation in the capital city of Bangladesh

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patwary, Masum A.; O'Hare, William Thomas; Street, Graham

    2009-08-15

    There is a concern that mismanagement of medical waste in developing countries may be a significant risk factor for disease transmission. Quantitative estimation of medical waste generation is needed to estimate the potential risk and as a basis for any waste management plan. Dhaka City, the capital of Bangladesh, is an example of a major city in a developing country where there has been no rigorous estimation of medical waste generation based upon a thorough scientific study. This study used a statistically designed sampling of waste generation in a broad range of Health Care Establishments (HCEs) to indicate that the amount of waste produced in Dhaka can be estimated to be 37 ± 5 tons per day. These estimates were obtained by stringent weighing of waste in a carefully chosen, representative sample of HCEs, including non-residential diagnostic centres. The proportion of this waste that would be classified as hazardous waste by World Health Organisation (WHO) guidelines was found to be approximately 21%. The amount of waste, and the proportion of hazardous waste, was found to vary significantly with the size and type of HCE.
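    A minimal sketch of scaling a weighed sample up to a city-wide estimate with a confidence interval, assuming simple random sampling (the daily weights and establishment count below are hypothetical, and the study's actual design was stratified):

```python
from math import sqrt
from statistics import mean, stdev

def estimate_total(sample_kg_per_day, n_establishments, z=1.96):
    """Scale a per-establishment sample mean up to a city-wide daily total,
    with a normal-approximation 95% confidence half-width. Ignores
    stratification and finite-population correction for simplicity."""
    m = mean(sample_kg_per_day)
    s = stdev(sample_kg_per_day)
    half_width = z * n_establishments * s / sqrt(len(sample_kg_per_day))
    return n_establishments * m, half_width

# Hypothetical daily waste weights (kg) from a sample of 10 HCEs,
# scaled to a hypothetical 1000 establishments
sample = [28, 35, 41, 30, 52, 38, 44, 33, 47, 36]
total, half_width = estimate_total(sample, 1000)   # total in kg/day
```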

  1. Development of a copula-based particle filter (CopPF) approach for hydrologic data assimilation under consideration of parameter interdependence

    NASA Astrophysics Data System (ADS)

    Fan, Y. R.; Huang, G. H.; Baetz, B. W.; Li, Y. P.; Huang, K.

    2017-06-01

    In this study, a copula-based particle filter (CopPF) approach was developed for sequential hydrological data assimilation by considering parameter correlation structures. In CopPF, multivariate copulas are proposed to reflect parameter interdependence before the resampling procedure with new particles then being sampled from the obtained copulas. Such a process can overcome both particle degeneration and sample impoverishment. The applicability of CopPF is illustrated with three case studies using a two-parameter simplified model and two conceptual hydrologic models. The results for the simplified model indicate that model parameters are highly correlated in the data assimilation process, suggesting a demand for full description of their dependence structure. Synthetic experiments on hydrologic data assimilation indicate that CopPF can rejuvenate particle evolution in large spaces and thus achieve good performances with low sample size scenarios. The applicability of CopPF is further illustrated through two real-case studies. It is shown that, compared with traditional particle filter (PF) and particle Markov chain Monte Carlo (PMCMC) approaches, the proposed method can provide more accurate results for both deterministic and probabilistic prediction with a sample size of 100. Furthermore, the sample size would not significantly influence the performance of CopPF. Also, the copula resampling approach dominates parameter evolution in CopPF, with more than 50% of particles sampled by copulas in most sample size scenarios.
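    The copula-resampling idea can be sketched with a Gaussian copula; this is an assumed minimal form of the approach, not the paper's exact algorithm:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(42)
_nd = NormalDist()
_ppf = np.vectorize(_nd.inv_cdf)
_cdf = np.vectorize(_nd.cdf)

def copula_resample(particles, weights, n_new):
    """Sketch of copula-based resampling: fold the weights in by multinomial
    resampling, fit a Gaussian copula to the parameter cloud, then draw
    fresh particles that preserve each marginal and the between-parameter
    correlation, instead of duplicating high-weight particles."""
    n, d = particles.shape
    base = particles[rng.choice(n, size=n, p=weights)]
    # empirical CDF ranks -> normal scores
    ranks = base.argsort(axis=0).argsort(axis=0)
    z = _ppf((ranks + 0.5) / n)
    corr = np.corrcoef(z, rowvar=False)
    z_new = rng.multivariate_normal(np.zeros(d), corr, size=n_new)
    # map back through the empirical marginal quantiles
    u_new = _cdf(z_new)
    return np.column_stack(
        [np.quantile(base[:, j], u_new[:, j]) for j in range(d)]
    )

# Toy 2-parameter particle cloud with strong correlation, uniform weights
n = 400
base_x = rng.normal(size=n)
parts = np.column_stack([base_x, 0.9 * base_x + rng.normal(0, 0.5, n)])
new = copula_resample(parts, np.full(n, 1.0 / n), 400)
```

    Because the new particles are drawn from the fitted copula rather than copied, the cloud stays diverse (countering sample impoverishment) while the parameter interdependence survives the resampling step.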

  2. Optimal number of features as a function of sample size for various classification rules.

    PubMed

    Hua, Jianping; Xiong, Zixiang; Lowey, James; Suh, Edward; Dougherty, Edward R

    2005-04-15

    Given the joint feature-label distribution, increasing the number of features always results in decreased classification error; however, this is not the case when a classifier is designed via a classification rule from sample data. Typically (but not always), for fixed sample size, the error of a designed classifier decreases and then increases as the number of features grows. The potential downside of using too many features is most critical for small samples, which are commonplace for gene-expression-based classifiers for phenotype discrimination. For fixed sample size and feature-label distribution, the issue is to find an optimal number of features. Since only in rare cases is there a known distribution of the error as a function of the number of features and sample size, this study employs simulation for various feature-label distributions and classification rules, and across a wide range of sample and feature-set sizes. To achieve the desired end, finding the optimal number of features as a function of sample size, it employs massively parallel computation. Seven classifiers are treated: 3-nearest-neighbor, Gaussian kernel, linear support vector machine, polynomial support vector machine, perceptron, regular histogram and linear discriminant analysis. Three Gaussian-based models are considered: linear, nonlinear and bimodal. In addition, real patient data from a large breast-cancer study are considered. To mitigate the combinatorial search for finding optimal feature sets, and to model the situation in which subsets of genes are co-regulated and correlation is internal to these subsets, we assume that the covariance matrix of the features is blocked, with each block corresponding to a group of correlated features. Altogether there are a large number of error surfaces for the many cases. These are provided in full on a companion website (http://public.tgen.org/tamu/ofs/; contact e-dougherty@ee.tamu.edu), which is meant to serve as a resource for those working with small-sample classification.
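    The decrease-then-increase behavior the study maps out can be reproduced in miniature with a plug-in nearest-mean classifier on a toy Gaussian model with diminishing feature information (an assumed setting, not one of the paper's feature-label distributions):

```python
import numpy as np

rng = np.random.default_rng(7)

def mean_error(d, n_train=20, n_test=1000, reps=40):
    """Test error of a plug-in nearest-mean classifier, averaged over
    random training sets, for two Gaussian classes centered at -mu and
    +mu, where feature i carries diminishing information (mu_i = 1/i)."""
    mu = 1.0 / np.arange(1, d + 1)
    errs = []
    for _ in range(reps):
        m0 = rng.normal(-mu, 1.0, (n_train, d)).mean(axis=0)  # estimated class means
        m1 = rng.normal(+mu, 1.0, (n_train, d)).mean(axis=0)
        x = rng.normal(+mu, 1.0, (n_test, d))                 # test points from class 1
        pred1 = ((x - m1) ** 2).sum(axis=1) < ((x - m0) ** 2).sum(axis=1)
        errs.append(1.0 - pred1.mean())   # classes are symmetric, so class 1 suffices
    return float(np.mean(errs))

# With a fixed training size, the error falls and then rises again as
# weakly informative features dilute the designed classifier
errors = {d: mean_error(d) for d in (1, 2, 5, 10, 20, 50, 100)}
```

    The true (Bayes) error still decreases with every added feature; it is the estimation noise in the designed classifier that eventually dominates, which is exactly why an optimal feature count exists for a fixed sample size.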

  3. Classification of spray nozzles based on droplet size distributions and wind tunnel tests.

    PubMed

    De Schamphelerie, M; Spanoghe, P; Nuyttens, D; Baetens, K; Cornelis, W; Gabriels, D; Van der Meeren, P

    2006-01-01

    Droplet size distribution of a pesticide spray is recognised as a main factor affecting spray drift. As a first approximation, nozzles can be classified based on their droplet size spectrum. However, the risk of drift for a given droplet size distribution is also a function of spray structure, droplet velocities and entrained air conditions. Wind tunnel tests to determine the actual drift potentials of different nozzles have been proposed as a method of adding an indication of the risk of spray drift to the existing classification based on droplet size distributions (Miller et al., 1995). In this research, wind tunnel tests were performed in the wind tunnel of the International Centre for Eremology (I.C.E.), Ghent University, to determine the drift potential of different types and sizes of nozzles at various spray pressures. Flat Fan (F) nozzles Hardi ISO 110 02, 110 03, 110 04, 110 06; Low-Drift (LD) nozzles Hardi ISO 110 02, 110 03, 110 04; and Injet Air Inclusion (AI) nozzles Hardi ISO 110 02, 110 03, 110 04 were tested at spray pressures of 2, 3 and 4 bar. The droplet size spectra of the F and LD nozzles were measured with a Malvern Mastersizer at spray pressures of 2, 3 and 4 bar. The Malvern spectra were used to calculate the Volume Median Diameters (VMD) of the sprays.
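    The Volume Median Diameter used to summarize the Malvern spectra can be computed from a binned droplet-size distribution as the diameter that splits the cumulative spray volume in half (the size classes and counts below are hypothetical):

```python
import numpy as np

def volume_median_diameter(diameters_um, counts):
    """Volume Median Diameter (Dv50): the diameter below which half of the
    total spray volume is contained. Each droplet's volume scales with
    diameter cubed, so small counts of large droplets dominate the volume."""
    d = np.asarray(diameters_um, dtype=float)
    vol = np.asarray(counts, dtype=float) * d ** 3   # relative volume per class
    order = np.argsort(d)
    d, vol = d[order], vol[order]
    cum = np.cumsum(vol) / vol.sum()
    return float(np.interp(0.5, cum, d))

# Hypothetical size-class midpoints (um) and droplet counts per class
vmd = volume_median_diameter([50, 100, 150, 200, 300], [500, 300, 120, 40, 8])
```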

  4. Development and Validation of a Risk Score for Age-Related Macular Degeneration: The STARS Questionnaire.

    PubMed

    Delcourt, Cécile; Souied, Eric; Sanchez, Alice; Bandello, Francesco

    2017-12-01

    To develop and validate a risk score for AMD based on a simple self-administered questionnaire. Risk factors having shown the most consistent associations with AMD were included in the STARS (Simplified Théa AMD Risk-Assessment Scale) questionnaire. Two studies were conducted, one in Italy (127 participating ophthalmologists) and one in France (80 participating ophthalmologists). During 1 week, participating ophthalmologists invited all their patients aged 55 years or older to fill in the STARS questionnaire. Based on fundus examination, early AMD was defined by the presence of soft drusen and/or pigmentary abnormalities and late AMD by the presence of geographic atrophy and/or neovascular AMD. The Italian and French samples consisted of 12,639 and 6897 patients, respectively. All 13 risk factors included in the STARS questionnaire showed significant associations with AMD in the Italian sample. The area under the receiver operating characteristic curve for the STARS risk score, derived from the multivariate logistic regression in the Italian sample, was 0.78 in the Italian sample and 0.72 in the French sample. In both samples, less than 10% of patients without AMD were classified at high risk, and less than 13% of late AMD cases were classified as low risk, with a more intermediate situation in early AMD cases. STARS is a new, simple self-assessed questionnaire showing good discrimination of risk for AMD in two large European samples. It might be used by ophthalmologists in routine clinical practice or as a self-assessment for risk of AMD in the general population.
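    The reported discrimination statistic, the area under the receiver operating characteristic curve, is equivalent to the probability that a randomly chosen case outscores a randomly chosen control, and can be computed directly:

```python
def auc(case_scores, control_scores):
    """Area under the ROC curve via the Mann-Whitney statistic: the
    probability that a randomly chosen case scores higher than a randomly
    chosen control (ties count half)."""
    wins = sum(
        (c > k) + 0.5 * (c == k) for c in case_scores for k in control_scores
    )
    return wins / (len(case_scores) * len(control_scores))
```

    An AUC of 0.78, as reported for the Italian sample, therefore means a random AMD case would receive a higher STARS score than a random control 78% of the time.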

  5. Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Validation

    NASA Astrophysics Data System (ADS)

    Sarrazin, Fanny; Pianosi, Francesca; Khorashadi Zadeh, Farkhondeh; Van Griensven, Ann; Wagener, Thorsten

    2015-04-01

    Global Sensitivity Analysis aims to characterize the impact that variations in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). In sampling-based Global Sensitivity Analysis, the sample size has to be chosen carefully in order to obtain reliable sensitivity estimates while spending computational resources efficiently. Furthermore, insensitive parameters are typically identified through the definition of a screening threshold: the theoretical value of their sensitivity index is zero, but in a sampling-based framework they regularly take non-zero values. However, little guidance is available for these two steps in environmental modelling. The objective of the present study is to support modellers in making appropriate choices, regarding both sample size and screening threshold, so that a robust sensitivity analysis can be implemented. We performed sensitivity analysis for the parameters of three hydrological models with increasing level of complexity (Hymod, HBV and SWAT), and tested three widely used sensitivity analysis methods (Elementary Effect Test or method of Morris, Regional Sensitivity Analysis, and Variance-Based Sensitivity Analysis). We defined criteria based on a bootstrap approach to assess three different types of convergence: the convergence of the value of the sensitivity indices, of the ranking (the ordering among the parameters) and of the screening (the identification of the insensitive parameters). We investigated the screening threshold through the definition of a validation procedure. The results showed that full convergence of the value of the sensitivity indices is not necessarily needed to rank or to screen the model input factors. Furthermore, typical values of the sample sizes that are reported in the literature can be well below the sample sizes that actually ensure convergence of ranking and screening.
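    The bootstrap-based convergence criterion can be sketched as follows; the sensitivity measure here is simply an absolute input-output correlation, a stand-in for the indices the study evaluates:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_ci_width(x, y, n_boot=500, alpha=0.05):
    """Width of the (1 - alpha) bootstrap confidence interval of a
    sensitivity measure, here |corr(x, y)| as a simple stand-in for the
    EET / RSA / variance-based indices tested in the study."""
    n = len(x)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        stats.append(abs(np.corrcoef(x[idx], y[idx])[0, 1]))
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return float(hi - lo)

# A convergence rule: enlarge the sample until the CI width falls below a
# chosen tolerance (the toy model and sizes below are hypothetical)
x_small = rng.uniform(size=100)
y_small = 2 * x_small + rng.normal(0, 1, size=100)
x_big = rng.uniform(size=2000)
y_big = 2 * x_big + rng.normal(0, 1, size=2000)
w_small = bootstrap_ci_width(x_small, y_small)
w_big = bootstrap_ci_width(x_big, y_big)   # narrower: estimate has converged further
```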

  6. Revision of Viable Environmental Monitoring in a Development Pilot Plant Based on Quality Risk Assessment: A Case Study.

    PubMed

    Ziegler, Ildikó; Borbély-Jakab, Judit; Sugó, Lilla; Kovács, Réka J

    2017-01-01

    In this case study, the principles of quality risk management were applied to review sampling points and monitoring frequencies in the hormonal tableting unit of a formulation development pilot plant. Premises of different functions are located in the cleanroom area; therefore, a general method based on the Hazard Analysis and Critical Control Points (HACCP) approach was established to evaluate these premises (i.e., the production area itself and ancillary clean areas) in terms of microbial load and state, in order to determine whether the existing monitoring program met current advanced monitoring practice. LAY ABSTRACT: In pharmaceutical production, cleanrooms are needed for the manufacturing of final dosage forms of drugs, intended for human or veterinary use, in order to protect the patient's weakened body from further infections. Cleanrooms are premises with a controlled level of contamination, specified by the number of particles per cubic meter at a given particle size or the number of microorganisms (i.e., microbial count) per surface area. To ensure a low microbial count over time, microorganisms are detected and counted regularly by environmental monitoring methods. Risk analysis is used to identify the locations most prone to contamination, to make sure the obtained results really represent the state of the whole room. This paper presents a risk analysis method for the optimization of environmental monitoring and verification of the suitability of the method. © PDA, Inc. 2017.

  7. [The relationship between adolescent body size and health promoting behavior and biochemical indicator factors].

    PubMed

    Chen, Hsiu-Chih; Chen, Hsing-Mei; Chen, Min-Li; Chiang, Chih-Ming; Chen, Mei-Yen

    2012-06-01

    Tainan City has the third-highest prevalence of junior high school student obesity of all administrative districts in Taiwan. School nurses play an important role in promoting student health. Understanding the factors that significantly impact student weight is critical to designing effective student health promotion programs. This study explored the relationships between health promotion behavior, serum biomarker variables, and body size. Researchers used a cross-sectional descriptive study design and stratified cluster random sampling. Subjects were 7th graders who received an in-school health checkup with a blood test at 41 public junior high schools in Tainan City between July 2010 and May 2011. Research instruments included the adolescent health promotion (AHP) scale, serum biochemical profile, and body mass index (BMI). Obtained data were analyzed using descriptive and inferential statistics. Of the 726 students who participated in this study, 22.2% were underweight and 23.8% were overweight or obese. Higher AHP scores correlated with better biomarkers and healthier body size. Multivariate analysis found that factors increasing the risk of being overweight included: being male, having a father with a relatively low level of education, playing video games frequently, and doing little or no exercise (odds ratios = 1.93, 1.75, 1.07, and 1.04, respectively). Participants with relatively healthy behaviors had better biomarkers and a lower risk of being overweight. Findings can support the development of evidence-based school programs to promote student health.

  8. Particle Size Distribution of Heavy Metals and Magnetic Susceptibility in an Industrial Site.

    PubMed

    Ayoubi, Shamsollah; Soltani, Zeynab; Khademi, Hossein

    2018-05-01

    This study was conducted to explore the relationships between magnetic susceptibility and the concentrations of selected soil heavy metals in various particle-size fractions at an industrial site in central Iran. Soils were partitioned into five fractions (< 28, 28-75, 75-150, 150-300, and 300-2000 µm). Concentrations of the heavy metals Zn, Pb, Fe, Cu, Ni and Mn, together with magnetic susceptibility, were determined in bulk soil samples and in all fractions for 60 soil samples collected from a depth of 0-5 cm. The studied heavy metals, except for Pb and Fe, displayed a substantial enrichment in the < 28 µm fraction; these two elements appeared to be independent of the selected size fractions. Magnetic minerals were especially associated with the medium-size fractions, i.e., 28-75, 75-150 and 150-300 µm. The highest correlations with heavy metals were found for the < 28 µm fraction, followed by the 150-300 µm fraction, both of which are susceptible to wind erosion risk in an arid environment.

  9. Examining the Efficacy of HIV Risk-Reduction Counseling on the Sexual Risk Behaviors of a National Sample of Drug Abuse Treatment Clients: Analysis of Subgroups.

    PubMed

    Gooden, Lauren; Metsch, Lisa R; Pereyra, Margaret R; Malotte, C Kevin; Haynes, Louise F; Douaihy, Antoine; Chally, Jack; Mandler, Raul N; Feaster, Daniel J

    2016-09-01

    HIV counseling with testing has been part of HIV prevention in the U.S. since the 1980s. Despite this long-standing history, in 2006 the CDC released HIV testing recommendations for health care settings contesting the benefits of prevention counseling with testing for reducing sexual risk behaviors among HIV-negative persons. The efficacy of brief HIV risk-reduction counseling (RRC) in decreasing sexual risk among subgroups of substance use treatment clients was examined using multi-site RCT data. Interaction tests between RRC and subgroups were performed; multivariable regression evaluated the relationship between RRC (with rapid testing) and sex risk. Subgroups were defined by demographics, risk type and level, attitudes/perceptions, and behavioral history. There was an effect (p < .0028) of counseling on the number of sex partners among some subgroups. Certain subgroups may benefit from HIV RRC; this should be examined in studies with larger sample sizes designed to assess the specific subgroup(s).

  10. A Fracture Mechanics Approach to Thermal Shock Investigation in Alumina-Based Refractory

    NASA Astrophysics Data System (ADS)

    Volkov-Husović, T.; Heinemann, R. Jančić; Mitraković, D.

    2008-02-01

    The thermal shock behavior of large grain size, alumina-based refractories was investigated experimentally using a standard water quench test. A mathematical model was employed to simulate the thermal stability behavior. Behavior of the samples under repeated thermal shock was monitored using ultrasonic measurements of dynamic Young's modulus. Image analysis was used to observe the extent of surface degradation. Analysis of the obtained results for the behavior of large grain size samples under conditions of rapid temperature changes is given.

  11. Size selective isocyanate aerosols personal air sampling using porous plastic foams

    NASA Astrophysics Data System (ADS)

    Khanh Huynh, Cong; Duc, Trinh Vu

    2009-02-01

    As part of a European project (SMT4-CT96-2137), various European institutions specialized in occupational hygiene (BGIA, HSL, IOM, INRS, IST, Ambiente e Lavoro) established a program of scientific collaboration to develop one or more prototypes of a European personal sampler for the simultaneous collection of three dust fractions: inhalable, thoracic and respirable. These samplers, based on existing sampling heads (IOM, GSP and cassettes), use polyurethane plastic foam (PUF) plugs whose porosity provides both the collection substrate and the size separation of the particles. In this study, the authors present an original application of size-selective personal air sampling using chemically impregnated PUF to capture and derivatize isocyanate aerosols in industrial spray-painting shops.

  12. Innovative Recruitment Using Online Networks: Lessons Learned From an Online Study of Alcohol and Other Drug Use Utilizing a Web-Based, Respondent-Driven Sampling (webRDS) Strategy

    PubMed Central

    Bauermeister, José A.; Zimmerman, Marc A.; Johns, Michelle M.; Glowacki, Pietreck; Stoddard, Sarah; Volz, Erik

    2012-01-01

    Objective: We used a web version of Respondent-Driven Sampling (webRDS) to recruit a sample of young adults (ages 18–24) and examined whether this strategy would result in alcohol and other drug (AOD) prevalence estimates comparable to national estimates (National Survey on Drug Use and Health [NSDUH]). Method: We recruited 22 initial participants (seeds) via Facebook to complete a web survey examining AOD risk correlates. Sequential, incentivized recruitment continued until our desired sample size was achieved. After correcting for webRDS clustering effects, we contrasted our AOD prevalence estimates (past 30 days) to NSDUH estimates by comparing the 95% confidence intervals of prevalence estimates. Results: We found comparable AOD prevalence estimates between our sample and NSDUH for the past 30 days for alcohol, marijuana, cocaine, Ecstasy (3,4-methylenedioxymethamphetamine, or MDMA), and hallucinogens. Cigarette use was lower than NSDUH estimates. Conclusions: WebRDS may be a suitable strategy to recruit young adults online. We discuss the unique strengths and challenges that may be encountered by public health researchers using webRDS methods. PMID:22846248
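    Comparing prevalence estimates by their 95% confidence intervals can be sketched as follows; the Wald interval and the sample figures below are illustrative assumptions, not the study's corrected webRDS estimates:

```python
from math import sqrt

def wald_ci(p_hat, n, z=1.96):
    """Normal-approximation (Wald) 95% confidence interval for a
    prevalence estimate p_hat from a sample of size n."""
    half = z * sqrt(p_hat * (1 - p_hat) / n)
    return max(0.0, p_hat - half), min(1.0, p_hat + half)

def overlap(ci_a, ci_b):
    """True if two confidence intervals overlap, one simple reading of the
    study's rule for calling two prevalence estimates comparable."""
    return ci_a[0] <= ci_b[1] and ci_b[0] <= ci_a[1]

# Hypothetical past-30-day prevalences: webRDS sample vs national survey
comparable = overlap(wald_ci(0.21, 450), wald_ci(0.19, 5000))
```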

  13. A genome-wide scan for common alleles affecting risk for autism

    PubMed Central

    Anney, Richard; Klei, Lambertus; Pinto, Dalila; Regan, Regina; Conroy, Judith; Magalhaes, Tiago R.; Correia, Catarina; Abrahams, Brett S.; Sykes, Nuala; Pagnamenta, Alistair T.; Almeida, Joana; Bacchelli, Elena; Bailey, Anthony J.; Baird, Gillian; Battaglia, Agatino; Berney, Tom; Bolshakova, Nadia; Bölte, Sven; Bolton, Patrick F.; Bourgeron, Thomas; Brennan, Sean; Brian, Jessica; Carson, Andrew R.; Casallo, Guillermo; Casey, Jillian; Chu, Su H.; Cochrane, Lynne; Corsello, Christina; Crawford, Emily L.; Crossett, Andrew; Dawson, Geraldine; de Jonge, Maretha; Delorme, Richard; Drmic, Irene; Duketis, Eftichia; Duque, Frederico; Estes, Annette; Farrar, Penny; Fernandez, Bridget A.; Folstein, Susan E.; Fombonne, Eric; Freitag, Christine M.; Gilbert, John; Gillberg, Christopher; Glessner, Joseph T.; Goldberg, Jeremy; Green, Jonathan; Guter, Stephen J.; Hakonarson, Hakon; Heron, Elizabeth A.; Hill, Matthew; Holt, Richard; Howe, Jennifer L.; Hughes, Gillian; Hus, Vanessa; Igliozzi, Roberta; Kim, Cecilia; Klauck, Sabine M.; Kolevzon, Alexander; Korvatska, Olena; Kustanovich, Vlad; Lajonchere, Clara M.; Lamb, Janine A.; Laskawiec, Magdalena; Leboyer, Marion; Le Couteur, Ann; Leventhal, Bennett L.; Lionel, Anath C.; Liu, Xiao-Qing; Lord, Catherine; Lotspeich, Linda; Lund, Sabata C.; Maestrini, Elena; Mahoney, William; Mantoulan, Carine; Marshall, Christian R.; McConachie, Helen; McDougle, Christopher J.; McGrath, Jane; McMahon, William M.; Melhem, Nadine M.; Merikangas, Alison; Migita, Ohsuke; Minshew, Nancy J.; Mirza, Ghazala K.; Munson, Jeff; Nelson, Stanley F.; Noakes, Carolyn; Noor, Abdul; Nygren, Gudrun; Oliveira, Guiomar; Papanikolaou, Katerina; Parr, Jeremy R.; Parrini, Barbara; Paton, Tara; Pickles, Andrew; Piven, Joseph; Posey, David J; Poustka, Annemarie; Poustka, Fritz; Prasad, Aparna; Ragoussis, Jiannis; Renshaw, Katy; Rickaby, Jessica; Roberts, Wendy; Roeder, Kathryn; Roge, Bernadette; Rutter, Michael L.; Bierut, Laura J.; Rice, John P.; Salt, Jeff; Sansom, 
Katherine; Sato, Daisuke; Segurado, Ricardo; Senman, Lili; Shah, Naisha; Sheffield, Val C.; Soorya, Latha; Sousa, Inês; Stoppioni, Vera; Strawbridge, Christina; Tancredi, Raffaella; Tansey, Katherine; Thiruvahindrapduram, Bhooma; Thompson, Ann P.; Thomson, Susanne; Tryfon, Ana; Tsiantis, John; Van Engeland, Herman; Vincent, John B.; Volkmar, Fred; Wallace, Simon; Wang, Kai; Wang, Zhouzhi; Wassink, Thomas H.; Wing, Kirsty; Wittemeyer, Kerstin; Wood, Shawn; Yaspan, Brian L.; Zurawiecki, Danielle; Zwaigenbaum, Lonnie; Betancur, Catalina; Buxbaum, Joseph D.; Cantor, Rita M.; Cook, Edwin H.; Coon, Hilary; Cuccaro, Michael L.; Gallagher, Louise; Geschwind, Daniel H.; Gill, Michael; Haines, Jonathan L.; Miller, Judith; Monaco, Anthony P.; Nurnberger, John I.; Paterson, Andrew D.; Pericak-Vance, Margaret A.; Schellenberg, Gerard D.; Scherer, Stephen W.; Sutcliffe, James S.; Szatmari, Peter; Vicente, Astrid M.; Vieland, Veronica J.; Wijsman, Ellen M.; Devlin, Bernie; Ennis, Sean; Hallmayer, Joachim

    2010-01-01

    Although autism spectrum disorders (ASDs) have a substantial genetic basis, most of the known genetic risk has been traced to rare variants, principally copy number variants (CNVs). To identify common risk variation, the Autism Genome Project (AGP) Consortium genotyped 1558 rigorously defined ASD families for 1 million single-nucleotide polymorphisms (SNPs) and analyzed these SNP genotypes for association with ASD. In one of four primary association analyses, the association signal for marker rs4141463, located within MACROD2, crossed the genome-wide association significance threshold of P < 5 × 10−8. When a smaller replication sample was analyzed, the risk allele at rs4141463 was again over-transmitted; yet, consistent with the winner's curse, its effect size in the replication sample was much smaller; and, for the combined samples, the association signal barely fell below the P < 5 × 10−8 threshold. Exploratory analyses of phenotypic subtypes yielded no significant associations after correction for multiple testing. They did, however, yield strong signals within several genes, KIAA0564, PLD5, POU6F2, ST8SIA2 and TAF1C. PMID:20663923

  14. A genome-wide scan for common alleles affecting risk for autism.

    PubMed

    Anney, Richard; Klei, Lambertus; Pinto, Dalila; Regan, Regina; Conroy, Judith; Magalhaes, Tiago R; Correia, Catarina; Abrahams, Brett S; Sykes, Nuala; Pagnamenta, Alistair T; Almeida, Joana; Bacchelli, Elena; Bailey, Anthony J; Baird, Gillian; Battaglia, Agatino; Berney, Tom; Bolshakova, Nadia; Bölte, Sven; Bolton, Patrick F; Bourgeron, Thomas; Brennan, Sean; Brian, Jessica; Carson, Andrew R; Casallo, Guillermo; Casey, Jillian; Chu, Su H; Cochrane, Lynne; Corsello, Christina; Crawford, Emily L; Crossett, Andrew; Dawson, Geraldine; de Jonge, Maretha; Delorme, Richard; Drmic, Irene; Duketis, Eftichia; Duque, Frederico; Estes, Annette; Farrar, Penny; Fernandez, Bridget A; Folstein, Susan E; Fombonne, Eric; Freitag, Christine M; Gilbert, John; Gillberg, Christopher; Glessner, Joseph T; Goldberg, Jeremy; Green, Jonathan; Guter, Stephen J; Hakonarson, Hakon; Heron, Elizabeth A; Hill, Matthew; Holt, Richard; Howe, Jennifer L; Hughes, Gillian; Hus, Vanessa; Igliozzi, Roberta; Kim, Cecilia; Klauck, Sabine M; Kolevzon, Alexander; Korvatska, Olena; Kustanovich, Vlad; Lajonchere, Clara M; Lamb, Janine A; Laskawiec, Magdalena; Leboyer, Marion; Le Couteur, Ann; Leventhal, Bennett L; Lionel, Anath C; Liu, Xiao-Qing; Lord, Catherine; Lotspeich, Linda; Lund, Sabata C; Maestrini, Elena; Mahoney, William; Mantoulan, Carine; Marshall, Christian R; McConachie, Helen; McDougle, Christopher J; McGrath, Jane; McMahon, William M; Melhem, Nadine M; Merikangas, Alison; Migita, Ohsuke; Minshew, Nancy J; Mirza, Ghazala K; Munson, Jeff; Nelson, Stanley F; Noakes, Carolyn; Noor, Abdul; Nygren, Gudrun; Oliveira, Guiomar; Papanikolaou, Katerina; Parr, Jeremy R; Parrini, Barbara; Paton, Tara; Pickles, Andrew; Piven, Joseph; Posey, David J; Poustka, Annemarie; Poustka, Fritz; Prasad, Aparna; Ragoussis, Jiannis; Renshaw, Katy; Rickaby, Jessica; Roberts, Wendy; Roeder, Kathryn; Roge, Bernadette; Rutter, Michael L; Bierut, Laura J; Rice, John P; Salt, Jeff; Sansom, Katherine; Sato, Daisuke; 
Segurado, Ricardo; Senman, Lili; Shah, Naisha; Sheffield, Val C; Soorya, Latha; Sousa, Inês; Stoppioni, Vera; Strawbridge, Christina; Tancredi, Raffaella; Tansey, Katherine; Thiruvahindrapduram, Bhooma; Thompson, Ann P; Thomson, Susanne; Tryfon, Ana; Tsiantis, John; Van Engeland, Herman; Vincent, John B; Volkmar, Fred; Wallace, Simon; Wang, Kai; Wang, Zhouzhi; Wassink, Thomas H; Wing, Kirsty; Wittemeyer, Kerstin; Wood, Shawn; Yaspan, Brian L; Zurawiecki, Danielle; Zwaigenbaum, Lonnie; Betancur, Catalina; Buxbaum, Joseph D; Cantor, Rita M; Cook, Edwin H; Coon, Hilary; Cuccaro, Michael L; Gallagher, Louise; Geschwind, Daniel H; Gill, Michael; Haines, Jonathan L; Miller, Judith; Monaco, Anthony P; Nurnberger, John I; Paterson, Andrew D; Pericak-Vance, Margaret A; Schellenberg, Gerard D; Scherer, Stephen W; Sutcliffe, James S; Szatmari, Peter; Vicente, Astrid M; Vieland, Veronica J; Wijsman, Ellen M; Devlin, Bernie; Ennis, Sean; Hallmayer, Joachim

    2010-10-15

    Although autism spectrum disorders (ASDs) have a substantial genetic basis, most of the known genetic risk has been traced to rare variants, principally copy number variants (CNVs). To identify common risk variation, the Autism Genome Project (AGP) Consortium genotyped 1558 rigorously defined ASD families for 1 million single-nucleotide polymorphisms (SNPs) and analyzed these SNP genotypes for association with ASD. In one of four primary association analyses, the association signal for marker rs4141463, located within MACROD2, crossed the genome-wide association significance threshold of P < 5 × 10⁻⁸. When a smaller replication sample was analyzed, the risk allele at rs4141463 was again over-transmitted; yet, consistent with the winner's curse, its effect size in the replication sample was much smaller; and, for the combined samples, the association signal barely fell below the P < 5 × 10⁻⁸ threshold. Exploratory analyses of phenotypic subtypes yielded no significant associations after correction for multiple testing. They did, however, yield strong signals within several genes: KIAA0564, PLD5, POU6F2, ST8SIA2, and TAF1C.
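
    The genome-wide significance threshold cited above is conventionally derived as a Bonferroni-style correction of a 5% family-wise error rate across roughly one million approximately independent common-variant tests; a minimal sketch:

```python
# Genome-wide significance as a Bonferroni-style correction: an overall
# alpha of 0.05 spread across ~1 million common SNP tests.
alpha = 0.05
n_tests = 1_000_000
threshold = alpha / n_tests
print(f"genome-wide threshold: {threshold:.1e}")  # 5.0e-08
```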

  15. Health risk evaluation of heavy metals in green land soils from urban parks in Urumqi, northwest China.

    PubMed

    Zhaoyong, Zhang; Xiaodong, Yang; Simay, Zibibula; Mohammed, Anwar

    2018-02-01

    Here, we sampled, tested, and analyzed heavy metals in soil obtained from green land in urban parks of Urumqi. The analysis covered soil nutrient contents, particle size distribution, and the health risks of heavy metal contaminants. Results showed that (1) organic matter and rapidly available phosphorus contents of all samples ranged from 6.07-58.34 and 6.52-116.15 mg/kg, with average values of 31.26 and 36.24 mg/kg, respectively; (2) silt (particle size 20-200 μm) dominated the particle distribution, accounting for 46.56-87.38% of the total, with the remainder consisting of clay (0-20 μm) and sand (200-2000 μm); (3) calculations of HQing, HQinh, and HQderm for eight heavy metals across the three exposure pathways yielded values less than 1 for both children and adults, indicating no significant non-carcinogenic risk from these heavy metals; and (4) the carcinogenic risks of nickel, chromium, and cadmium via the inhalation pathway indicated no potential carcinogenic risk for any of the three. This research showed high soil nutrient content, providing fertile ground for plant growth in the green land of these urban parks. However, measures such as sprinkler irrigation and expanded green vegetation areas have been proposed to improve soil texture. This research can serve as a reference point for soil environmental protection efforts as well as future plant growth in urban Urumqi parks.
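
    The hazard quotients mentioned above follow the standard US EPA-style model HQ = average daily dose / reference dose, computed per exposure route. A minimal sketch for the ingestion route; all parameter values below are illustrative placeholders, not those used in the cited study:

```python
# Hazard quotient sketch for one metal via soil ingestion (US EPA-style
# model: HQ = average daily dose / oral reference dose). HQ < 1 is read
# as no significant non-carcinogenic risk. All inputs are placeholders.
def hq_ingestion(conc_mg_kg, intake_rate_mg_day, ef_days_yr, ed_yr,
                 body_weight_kg, at_days, rfd_mg_kg_day):
    # Average daily dose in mg/kg/day (1e-6 converts mg soil to kg soil)
    add = (conc_mg_kg * intake_rate_mg_day * ef_days_yr * ed_yr * 1e-6) / (
        body_weight_kg * at_days)
    return add / rfd_mg_kg_day

# Example: a chromium soil concentration for a child receptor (placeholders)
hq = hq_ingestion(conc_mg_kg=55.0, intake_rate_mg_day=200.0,
                  ef_days_yr=350, ed_yr=6, body_weight_kg=15.0,
                  at_days=6 * 365, rfd_mg_kg_day=3.0e-3)
print(f"HQ_ing = {hq:.3f}")  # below 1: no significant non-carcinogenic risk
```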

  16. Prevalence of polycystic ovary syndrome and its associated complications in Iranian women: A meta-analysis.

    PubMed

    Jalilian, Anahita; Kiani, Faezeh; Sayehmiri, Fatemeh; Sayehmiri, Kourosh; Khodaee, Zahra; Akbari, Malihe

    2015-10-01

    Polycystic ovary syndrome (PCOS) is the most common endocrine disorder in women of reproductive age and the most common cause of anovulatory infertility. There is no single criterion for the diagnosis of this syndrome. The purpose of this study was to investigate the prevalence of PCOS and its associated complications in Iranian women using meta-analysis. Studies on the prevalence of PCOS were retrieved from SID, Google Scholar, PubMed, Magiran, Irandoc, and Iranmedex, and each study was weighted according to its sample size and the binomial distribution of prevalence. Data were analyzed using a random-effects meta-analysis model in R and Stata version 11.2. Thirty studies conducted between 2006 and 2011 were entered into the meta-analysis, with a total sample size of 19,226 women aged 10-45 years. The prevalence of PCOS was 6.8% (95% CI: 4.11-8.5) based on the criteria of the U.S. National Institute of Child Health and Human Development, 19.5% (95% CI: 2.24-8.14) based on the Rotterdam criteria, and 4.41% (95% CI: 5.68-4.14) based on ultrasound. The prevalence of hirsutism was estimated at 13%, acne 26%, androgenic alopecia 9%, menstrual disorders 28%, overweight 21%, obesity 19%, and infertility 8%. The prevalence of PCOS in Iran is not high. However, given the risk of complications such as cardiovascular disease and infertility, prevention of PCOS is important; we suggest that health officials develop plans for the community in this regard.
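
    The pooling approach described above (weighting each study by its binomial variance, then combining under a random-effects model) can be sketched with a DerSimonian-Laird estimator; the study counts here are made-up illustrations, not data from the cited meta-analysis:

```python
import math

# Illustrative (made-up) prevalence studies: (cases, sample size)
studies = [(34, 500), (61, 700), (25, 300), (90, 1200)]

# Per-study prevalence and binomial variance; inverse-variance weights
p = [e / n for e, n in studies]
v = [pi * (1 - pi) / n for pi, (e, n) in zip(p, studies)]
w = [1 / vi for vi in v]

# Fixed-effect pooled estimate, used to compute Cochran's Q heterogeneity
p_fe = sum(wi * pi for wi, pi in zip(w, p)) / sum(w)
Q = sum(wi * (pi - p_fe) ** 2 for wi, pi in zip(w, p))

# DerSimonian-Laird between-study variance tau^2 (truncated at zero)
k = len(studies)
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (Q - (k - 1)) / c)

# Random-effects weights, pooled prevalence, and a Wald-type 95% CI
w_re = [1 / (vi + tau2) for vi in v]
p_re = sum(wi * pi for wi, pi in zip(w_re, p)) / sum(w_re)
se_re = math.sqrt(1 / sum(w_re))
ci = (p_re - 1.96 * se_re, p_re + 1.96 * se_re)
print(f"pooled prevalence {p_re:.3f}, 95% CI {ci[0]:.3f}-{ci[1]:.3f}")
```

    In practice this is what packages such as metafor in R automate; the sketch only makes the weighting explicit.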

  17. Association analyses of vitamin D-binding protein gene with compression strength index variation in Caucasian nuclear families.

    PubMed

    Xu, X-H; Xiong, D-H; Liu, X-G; Guo, Y; Chen, Y; Zhao, J; Recker, R R; Deng, H-W

    2010-01-01

    This study was conducted to test for an association between the vitamin D-binding protein (DBP) gene and the compression strength index (CSI) phenotype. Candidate gene association analyses were conducted in the total sample and in the male and female subgroups separately. Two single-nucleotide polymorphisms (SNPs) with significant association results were found in males, suggesting the importance of DBP gene polymorphisms for variation in CSI, especially in Caucasian males. CSI of the femoral neck (FN) is a newly developed phenotype integrating information about bone size, body size, and bone mineral density. It is considered to have the potential to improve risk assessment for hip fractures because it is based on a combination of phenotypic traits influencing hip fracture rather than a single trait. CSI is under moderate genetic determination (with a heritability of approximately 44% found in this study), but relevant genetic studies are still scarce. Based on the known physiological role of DBP in bone biology and the relatively high heritability of CSI, we tested 12 SNPs of the DBP gene for association with CSI variation in 405 Caucasian nuclear families comprising 1,873 subjects from the Midwestern US. Association analyses were performed in the total sample and in the male and female subgroups. Significant associations with CSI were found for two SNPs (rs222029, P = 0.0019; rs222020, P = 0.0042) in the male subgroup. Haplotype-based association tests corroborated the single-SNP results. Our findings suggest that the DBP gene might be one of the genetic factors influencing the CSI phenotype in Caucasians, especially in males.

  18. Projecting School Psychology Staffing Needs Using a Risk-Adjusted Model.

    ERIC Educational Resources Information Center

    Stellwagen, Kurt

    A model is proposed to project optimal school psychology service ratios based upon the percentages of at risk students enrolled within a given school population. Using the standard 1:1,000 service ratio advocated by The National Association of School Psychologists (NASP) as a starting point, ratios are then adjusted based upon the size of three…

  19. Evidence and gaps in the literature on orthorexia nervosa.

    PubMed

    Varga, Márta; Dukay-Szabó, Szilvia; Túry, Ferenc; van Furth, Eric F

    2013-06-01

    To review the literature on the prevalence, risk groups and risk factors of the alleged eating disorder orthorexia nervosa. We searched MEDLINE and PubMed using several key terms relating to orthorexia nervosa (ON) and checked the reference lists of the articles that we found. Attention was given to methodological problems in these studies, such as the use of non-validated assessment instruments, small sample sizes and sample characteristics, which make generalization of the results impossible. Eleven studies were found. The average prevalence rate for orthorexia was 6.9% for the general population and 35-57.8% for high-risk groups (healthcare professionals, artists). Dieticians and other healthcare professionals are at high risk of ON. Risk factors include obsessive-compulsive features, eating-related disturbances and higher socioeconomic status. Relevant clinical experience, published literature and research data have increased in the last few years. The definition and diagnostic criteria of ON remain unclear. Further studies are needed to clarify appropriate diagnostic methods and the place of ON among psychopathological categories.

  20. Standardized mean differences cause funnel plot distortion in publication bias assessments.

    PubMed

    Zwetsloot, Peter-Paul; Van Der Naald, Mira; Sena, Emily S; Howells, David W; IntHout, Joanna; De Groot, Joris Ah; Chamuleau, Steven Aj; MacLeod, Malcolm R; Wever, Kimberley E

    2017-09-08

    Meta-analyses are increasingly used for synthesis of evidence from biomedical research, and often include an assessment of publication bias based on visual or analytical detection of asymmetry in funnel plots. We studied the influence of different normalisation approaches, sample size and intervention effects on funnel plot asymmetry, using empirical datasets and illustrative simulations. We found that funnel plots of the Standardized Mean Difference (SMD) plotted against the standard error (SE) are susceptible to distortion, leading to overestimation of the existence and extent of publication bias. Distortion was more severe when the primary studies had a small sample size and when an intervention effect was present. We show that using the Normalised Mean Difference measure as effect size (when possible), or plotting the SMD against a sample size-based precision estimate, are more reliable alternatives. We conclude that funnel plots using the SMD in combination with the SE are unsuitable for publication bias assessments and can lead to false-positive results.
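
    The distortion mechanism can be illustrated with a small simulation (a sketch, not the authors' code): with group size held fixed, the usual SE of the SMD is an increasing function of the estimated effect itself, so sampling error in the effect estimate leaks into the precision axis and the funnel bends even when no publication bias is present:

```python
import math
import random

random.seed(1)

def simulate_smd(n_per_group, true_d=0.8):
    """Draw two normal groups; return the SMD and its approximate SE."""
    a = [random.gauss(true_d, 1.0) for _ in range(n_per_group)]
    b = [random.gauss(0.0, 1.0) for _ in range(n_per_group)]
    ma, mb = sum(a) / n_per_group, sum(b) / n_per_group
    va = sum((x - ma) ** 2 for x in a) / (n_per_group - 1)
    vb = sum((x - mb) ** 2 for x in b) / (n_per_group - 1)
    sp = math.sqrt((va + vb) / 2)            # pooled SD (equal group sizes)
    d = (ma - mb) / sp                       # standardized mean difference
    # Approximate SE of the SMD -- note it depends on d itself
    se = math.sqrt(2 / n_per_group + d * d / (2 * (2 * n_per_group - 2)))
    return d, se

# Many small "studies" with the same true effect and no publication bias
sims = [simulate_smd(10) for _ in range(2000)]
ds = [d for d, _ in sims]
ses = [se for _, se in sims]
md, mse = sum(ds) / len(ds), sum(ses) / len(ses)
cov = sum((d - md) * (se - mse) for d, se in sims) / len(sims)
r = cov / (math.sqrt(sum((d - md) ** 2 for d in ds) / len(ds)) *
           math.sqrt(sum((se - mse) ** 2 for se in ses) / len(ses)))
print(f"corr(SMD, SE) = {r:.2f}")  # clearly positive: larger d, larger SE
```

    A positive correlation between effect size and SE mimics funnel-plot asymmetry; plotting against a sample size-based precision estimate instead breaks this dependence.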

Top