Sample records for true effect sizes

  1. A Meta Analytical Approach Regarding School Effectiveness: The True Size of School Effects and the Effect Size of Educational Leadership.

    ERIC Educational Resources Information Center

    Bosker, Roel J.; Witziers, Bob

    School-effectiveness research has not yet been able to identify the factors that distinguish effective from noneffective schools, the real contribution of the significant factors, the true sizes of school effects, or the generalizability of school-effectiveness results. This paper presents findings of a meta-analysis, based on the Dutch PSO programme, that was used to…

  2. Statistical power analysis in wildlife research

    USGS Publications Warehouse

    Steidl, R.J.; Hayes, J.P.

    1997-01-01

    Statistical power analysis can be used to increase the efficiency of research efforts and to clarify research results. Power analysis is most valuable in the design or planning phases of research efforts. Such prospective (a priori) power analyses can be used to guide research design and to estimate the number of samples necessary to achieve a high probability of detecting biologically significant effects. Retrospective (a posteriori) power analysis has been advocated as a method to increase information about hypothesis tests that were not rejected. However, estimating power for tests of null hypotheses that were not rejected with the effect size observed in the study is incorrect; these power estimates will always be ≤ 0.50 when bias adjusted and have no relation to true power. Therefore, retrospective power estimates based on the observed effect size for hypothesis tests that were not rejected are misleading; retrospective power estimates are only meaningful when based on effect sizes other than the observed effect size, such as those effect sizes hypothesized to be biologically significant. Retrospective power analysis can be used effectively to estimate the number of samples or effect size that would have been necessary for a completed study to have rejected a specific null hypothesis. Simply presenting confidence intervals can provide additional information about null hypotheses that were not rejected, including information about the size of the true effect and whether or not there is adequate evidence to 'accept' a null hypothesis as true. We suggest that (1) statistical power analyses be routinely incorporated into research planning efforts to increase their efficiency, (2) confidence intervals be used in lieu of retrospective power analyses for null hypotheses that were not rejected to assess the likely size of the true effect, (3) minimum biologically significant effect sizes be used for all power analyses, and (4) if retrospective power estimates are to be reported, then the α-level, effect sizes, and sample sizes used in calculations must also be reported.
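
    The central claim above is easy to verify numerically: "retrospective power" computed at the observed effect size is a monotone function of the p-value, so for a result exactly at the significance boundary it is pinned near 0.5. A minimal sketch using statsmodels, with a hypothetical per-group sample size:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.power import TTestIndPower

n = 30                              # hypothetical per-group sample size
analysis = TTestIndPower()

# An effect sitting exactly at the p = .05 boundary yields "retrospective
# power" of about 0.5, and every nonsignificant result maps below it: the
# number merely restates the p-value.
d_crit = stats.t.ppf(0.975, 2 * n - 2) * np.sqrt(2 / n)
print("observed power at p = .05:",
      round(analysis.power(effect_size=d_crit, nobs1=n, alpha=0.05), 2))

# The alternative the authors recommend: compute power at a minimum
# biologically significant effect chosen independently of the data.
print("power at d = 0.5:",
      round(analysis.power(effect_size=0.5, nobs1=n, alpha=0.05), 2))
```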

  3. Sample size requirements for indirect association studies of gene-environment interactions (G x E).

    PubMed

    Hein, Rebecca; Beckmann, Lars; Chang-Claude, Jenny

    2008-04-01

    Association studies accounting for gene-environment interactions (G x E) may be useful for detecting genetic effects. Although current technology enables very dense marker spacing in genetic association studies, the true disease variants may not be genotyped. Thus, causal genes are searched for by indirect association using genetic markers in linkage disequilibrium (LD) with the true disease variants. Sample sizes needed to detect G x E effects in indirect case-control association studies depend on the true genetic main effects, disease allele frequencies, whether marker and disease allele frequencies match, LD between loci, main effects and prevalence of environmental exposures, and the magnitude of interactions. We explored variables influencing sample sizes needed to detect G x E, compared these sample sizes with those required to detect genetic marginal effects, and provide an algorithm for power and sample size estimations. Required sample sizes may be heavily inflated if LD between marker and disease loci decreases. More than 10,000 case-control pairs may be required to detect G x E. However, given weak true genetic main effects, moderate prevalence of environmental exposures, as well as strong interactions, G x E effects may be detected with smaller sample sizes than those needed for the detection of genetic marginal effects. Moreover, in this scenario, rare disease variants may only be detectable when G x E is included in the analyses. Thus, the analysis of G x E appears to be an attractive option for the detection of weak genetic main effects of rare variants that may not be detectable in the analysis of genetic marginal effects only.
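
    Power and sample size for an interaction term can also be approximated by direct simulation. Below is a minimal sketch of that idea (a prospective-style simulation with hypothetical allele frequency, exposure prevalence, and odds ratios; it is not the authors' analytic algorithm for indirect case-control designs):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)

def gxe_power(n, maf=0.2, p_env=0.3, or_g=1.1, or_e=1.5, or_gxe=1.8,
              alpha=0.05, n_sims=200):
    """Fraction of simulated studies in which the G x E term is significant."""
    hits = 0
    for _ in range(n_sims):
        g = rng.binomial(2, maf, n)           # genotype: 0/1/2 risk alleles
        e = rng.binomial(1, p_env, n)         # binary environmental exposure
        logit = -1.0 + np.log(or_g) * g + np.log(or_e) * e + np.log(or_gxe) * g * e
        y = rng.binomial(1, 1 / (1 + np.exp(-logit)))
        X = sm.add_constant(np.column_stack([g, e, g * e]))
        fit = sm.Logit(y, X).fit(disp=0)
        hits += fit.pvalues[3] < alpha        # index 3 = interaction term
    return hits / n_sims

for n in (1000, 5000, 20000):
    print(n, gxe_power(n))
```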

  4. Do adults show a curse of knowledge in false-belief reasoning? A robust estimate of the true effect size.

    PubMed

    Ryskin, Rachel A; Brown-Schmidt, Sarah

    2014-01-01

    Seven experiments use large sample sizes to robustly estimate the effect size of a previous finding that adults are more likely to commit egocentric errors in a false-belief task when the egocentric response is plausible in light of their prior knowledge. We estimate the true effect size to be less than half of that reported in the original findings. Even though we found effects in the same direction as the original, they were substantively smaller; the original study would have had less than 33% power to detect an effect of this magnitude. The influence of plausibility on the curse of knowledge in adults appears to be small enough that its impact on real-life perspective-taking may need to be reevaluated.

  5. QTL mapping of flowering time, fruit size and number in populations involving andromonoecious true lemon cucumber

    USDA-ARS's Scientific Manuscript database

    Andromonoecious sex expression in cucumber is controlled by the m locus, which encodes the 1-aminocyclopropane-1-carboxylic acid synthase (ACS) in the ethylene biosynthesis pathway. This gene seems to have pleiotropic effects on fruit size and number, but the genetic basis is unknown. The True Lemon...

  6. Effect of limestone particle size and calcium to non-phytate phosphorus ratio on true ileal calcium digestibility of limestone for broiler chickens.

    PubMed

    Anwar, M N; Ravindran, V; Morel, P C H; Ravindran, G; Cowieson, A J

    2016-10-01

    The purpose of this study was to determine the effect of limestone particle size and calcium (Ca) to non-phytate phosphorus (P) ratio on the true ileal Ca digestibility of limestone for broiler chickens. A limestone sample was passed through a set of sieves and separated into fine (<0.5 mm) and coarse (1-2 mm) particles. The analysed Ca concentration of both particle sizes was similar (420 g/kg). Six experimental diets were developed using each particle size with Ca:non-phytate P ratios of 1.5:1, 2.0:1 and 2.5:1, with ratios being adjusted by manipulating the dietary Ca concentrations. A Ca-free diet was also developed to determine the basal ileal endogenous Ca losses. Titanium dioxide (3 g/kg) was incorporated in all diets as an indigestible marker. Each experimental diet was randomly allotted to 6 replicate cages (8 birds per cage) and fed from d 21 to 24 post hatch. Apparent ileal digestibility of Ca was calculated using the indicator method and corrected for basal endogenous losses to determine the true Ca digestibility. The basal ileal endogenous Ca losses were determined to be 127 mg/kg of dry matter intake. Increasing Ca:non-phytate P ratios reduced the true Ca digestibility of limestone. The true Ca digestibility coefficients of limestone with Ca:non-phytate P ratios of 1.5, 2.0 and 2.5 were 0.65, 0.57 and 0.49, respectively. Particle size of limestone had a marked effect on the Ca digestibility, with the digestibility being higher in coarse particles (0.71 vs. 0.43).
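
    The indicator-method arithmetic described above can be written out compactly. A minimal sketch with hypothetical analyzed values (only the 127 mg/kg endogenous-loss figure is taken from the abstract, and the small difference between diet and dry matter intake is ignored):

```python
# Apparent ileal digestibility (AID) from the titanium marker, then corrected
# for basal endogenous losses to give true ileal digestibility (TID).

def apparent_digestibility(ca_diet, ti_diet, ca_digesta, ti_digesta):
    """AID = 1 - (Ti_diet / Ti_digesta) * (Ca_digesta / Ca_diet)."""
    return 1 - (ti_diet / ti_digesta) * (ca_digesta / ca_diet)

def true_digestibility(aid, endogenous_ca_mg_per_kg_dmi, ca_diet_g_per_kg):
    """TID = AID + basal endogenous Ca losses / dietary Ca concentration."""
    return aid + (endogenous_ca_mg_per_kg_dmi / 1000) / ca_diet_g_per_kg

aid = apparent_digestibility(ca_diet=8.0, ti_diet=3.0,        # g/kg diet (hypothetical)
                             ca_digesta=6.5, ti_digesta=9.0)  # g/kg ileal digesta
tid = true_digestibility(aid, endogenous_ca_mg_per_kg_dmi=127,  # from the abstract
                         ca_diet_g_per_kg=8.0)
print(f"AID = {aid:.2f}, TID = {tid:.2f}")
```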

  7. Effects of video-game play on information processing: a meta-analytic investigation.

    PubMed

    Powers, Kasey L; Brooks, Patricia J; Aldrich, Naomi J; Palladino, Melissa A; Alfieri, Louis

    2013-12-01

    Do video games enhance cognitive functioning? We conducted two meta-analyses based on different research designs to investigate how video games impact information-processing skills (auditory processing, executive functions, motor skills, spatial imagery, and visual processing). Quasi-experimental studies (72 studies, 318 comparisons) compare habitual gamers with controls; true experiments (46 studies, 251 comparisons) use commercial video games in training. Using random-effects models, video games led to improved information processing in both the quasi-experimental studies, d = 0.61, 95% CI [0.50, 0.73], and the true experiments, d = 0.48, 95% CI [0.35, 0.60]. Whereas the quasi-experimental studies yielded small to large effect sizes across domains, the true experiments yielded negligible effects for executive functions, which contrasted with the small to medium effect sizes in other domains. The quasi-experimental studies appeared more susceptible to bias than were the true experiments, with larger effects being reported in higher-tier than in lower-tier journals, and larger effects reported by the most active research groups in comparison with other labs. The results are further discussed with respect to other moderators and limitations in the extant literature.
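
    For readers unfamiliar with random-effects pooling, here is a minimal DerSimonian-Laird sketch of the kind of model used above (my illustration with hypothetical study data; the authors' exact software and weighting are not specified here):

```python
import numpy as np

def random_effects(d, v):
    """d: per-study effect sizes; v: their within-study variances."""
    w = 1 / v                                 # fixed-effect weights
    d_fe = np.sum(w * d) / np.sum(w)
    q = np.sum(w * (d - d_fe) ** 2)           # Cochran's Q heterogeneity statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(d) - 1)) / c)   # between-study variance estimate
    w_re = 1 / (v + tau2)                     # random-effects weights
    d_re = np.sum(w_re * d) / np.sum(w_re)
    se = np.sqrt(1 / np.sum(w_re))
    return d_re, (d_re - 1.96 * se, d_re + 1.96 * se)

d = np.array([0.8, 0.4, 0.6, 0.2, 0.9])       # hypothetical study effects
v = np.array([0.05, 0.04, 0.08, 0.03, 0.10])  # hypothetical variances
print(random_effects(d, v))
```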

  8. Towards Cluster-Assembled Materials of True Monodispersity in Size and Chemical Environment: Synthesis, Dynamics and Activity

    DTIC Science & Technology

    2016-10-27

    AFRL-AFOSR-UK-TR-2016-0037, report: Towards cluster-assembled materials of true monodispersity in size and chemical environment: synthesis, dynamics and activity. Ulrich Heiz.

  9. Effects of sources of variability on sample sizes required for RCTs, applied to trials of lipid-altering therapies on carotid artery intima-media thickness.

    PubMed

    Gould, A Lawrence; Koglin, Joerg; Bain, Raymond P; Pinto, Cathy-Anne; Mitchel, Yale B; Pasternak, Richard C; Sapre, Aditi

    2009-08-01

    Studies measuring progression of carotid artery intima-media thickness (cIMT) have been used to estimate the effect of lipid-modifying therapies on cardiovascular event risk. The likelihood that future cIMT clinical trials will detect a true treatment effect is estimated by leveraging results from prior studies. The present analyses assess the impact of between- and within-study variability, based on currently published data from prior clinical studies, on the likelihood that ongoing or future cIMT trials will detect the true treatment effect of lipid-modifying therapies. Published data from six contemporary cIMT studies (ASAP, ARBITER 2, RADIANCE 1, RADIANCE 2, ENHANCE, and METEOR) including data from a total of 3563 patients were examined. Bayesian and frequentist methods were used to assess the impact of between-study variability on the likelihood of detecting true treatment effects on 1-year cIMT progression/regression and to provide a sample size estimate that would specifically compensate for the effect of between-study variability. In addition to the well-described within-study variability, there is considerable between-study variability associated with the measurement of annualized change in cIMT. Accounting for the additional between-study variability decreases the power for existing study designs. In order to account for the added between-study variability, it is likely that future cIMT studies would require a large increase in sample size to provide a substantial probability (≥90%) of having 90% power to detect a true treatment effect. Limitation: Analyses are based on study-level data. Future meta-analyses incorporating patient-level data would be useful for confirmation. Due to substantial within- and between-study variability in the measure of 1-year change of cIMT, as well as uncertainty about progression rates in contemporary populations, future study designs evaluating the effect of new lipid-modifying therapies on atherosclerotic disease progression are likely to require large sample sizes in order to demonstrate a true treatment effect.
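
    The qualitative point, that between-study variability puts a floor under the standard error which larger samples cannot remove, can be illustrated with a simple normal-approximation sketch (my formulation with hypothetical cIMT numbers, not the authors' Bayesian machinery):

```python
import numpy as np
from scipy import stats

def power(n_per_arm, delta=0.01, sd_within=0.05, sd_between=0.0, alpha=0.05):
    """Two-arm comparison of mean annualized cIMT change (mm/year)."""
    se = np.sqrt(2 * sd_within**2 / n_per_arm + sd_between**2)
    z = delta / se
    return stats.norm.sf(stats.norm.ppf(1 - alpha / 2) - z)

# As the between-study SD grows, power plateaus no matter how large n gets.
for sd_b in (0.0, 0.005, 0.01):
    print(f"sd_between={sd_b}: power(n=500)={power(500, sd_between=sd_b):.2f},",
          f"power(n=5000)={power(5000, sd_between=sd_b):.2f}")
```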

  10. Size scale effect in cavitation erosion

    NASA Technical Reports Server (NTRS)

    Rao, P. V.; Rao, B. C.; Buckley, D. H.

    1982-01-01

    An overview and data analyses pertaining to cavitation erosion size scale effects are presented. The exponents n in the power law relationship are found to vary from 1.7 to 4.9 for venturi and rotating disk devices, supporting the values reported in the literature. Suggestions are made for future studies aimed at establishing true scale effects.

  11. The Effect of Defects on the Fatigue Initiation Process in Two P/M Superalloys.

    DTIC Science & Technology

    1980-09-01

    determine the effect of defect size, shape, and population on the fatigue initiation process in two high-strength P/M superalloys, AF-115 and AF2-1DA. The...to systematically determine the effects of defect size, shape, and population on fatigue. It is true that certain trends have been established...to determine the relative effects of defect size, shape, and population on the crack initiation life of a representative engineering material

  12. Sampling Theory and Confidence Intervals for Effect Sizes: Using ESCI To Illustrate "Bouncing"; Confidence Intervals.

    ERIC Educational Resources Information Center

    Du, Yunfei

    This paper discusses the impact of sampling error on the construction of confidence intervals around effect sizes. Sampling error affects the location and precision of confidence intervals. Meta-analytic resampling demonstrates that confidence intervals can haphazardly bounce around the true population parameter. Special software with graphical…
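
    The "bouncing" can be reproduced outside ESCI with a few lines of simulation (my code, not ESCI itself): repeated samples give 95% confidence intervals for Cohen's d that wander around the true population effect, with roughly 5% missing it entirely.

```python
import numpy as np

rng = np.random.default_rng(0)
true_d, n, reps = 0.5, 40, 1000
missed = 0
for _ in range(reps):
    a = rng.normal(0, 1, n)
    b = rng.normal(true_d, 1, n)
    sp = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)  # pooled SD
    d = (b.mean() - a.mean()) / sp                     # sample Cohen's d
    se = np.sqrt(2 / n + d**2 / (4 * n))               # approximate SE of d
    lo, hi = d - 1.96 * se, d + 1.96 * se              # one "bouncing" CI
    missed += not (lo <= true_d <= hi)
print("fraction of CIs missing the true d:", missed / reps)
```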

  13. The Importance of Teaching Power in Statistical Hypothesis Testing

    ERIC Educational Resources Information Center

    Olinsky, Alan; Schumacher, Phyllis; Quinn, John

    2012-01-01

    In this paper, we discuss the importance of teaching power considerations in statistical hypothesis testing. Statistical power analysis determines the ability of a study to detect a meaningful effect size, where the effect size is the difference between the hypothesized value of the population parameter under the null hypothesis and the true value…

  14. Human Fear Chemosignaling: Evidence from a Meta-Analysis.

    PubMed

    de Groot, Jasper H B; Smeets, Monique A M

    2017-10-01

    Alarm pheromones are widely used in the animal kingdom. Notably, there are 26 published studies (N = 1652) highlighting a human capacity to communicate fear, stress, and anxiety via body odor from one person (66% males) to another (69% females). The question is whether the findings of this literature reflect a true effect, and what the average effect size is. These questions were answered by combining traditional meta-analysis with novel meta-analytical tools, p-curve analysis and p-uniform, techniques that can indicate whether findings are likely to reflect a true effect based on the distribution of p-values. A traditional random-effects meta-analysis yielded a small-to-moderate effect size (Hedges' g: 0.36, 95% CI: 0.31-0.41), p-curve analysis showed evidence diagnostic of a true effect (ps < 0.0001), and there was no evidence for publication bias. This meta-analysis did not assess the internal validity of the current studies; yet, the combined results illustrate the statistical robustness of a field in human olfaction dealing with the human capacity to communicate certain emotions (fear, stress, anxiety) via body odor.

  15. Aggregate and Individual Replication Probability within an Explicit Model of the Research Process

    ERIC Educational Resources Information Center

    Miller, Jeff; Schwarz, Wolf

    2011-01-01

    We study a model of the research process in which the true effect size, the replication jitter due to changes in experimental procedure, and the statistical error of effect size measurement are all normally distributed random variables. Within this model, we analyze the probability of successfully replicating an initial experimental result by…
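
    A minimal simulation of the three-component model described above (true effect, replication jitter, and measurement error, all normally distributed); the variances are hypothetical placeholders rather than the authors' calibrated values:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
reps, n = 100_000, 30
mu, sd_true = 0.4, 0.2      # distribution of true effects across research lines
sd_jitter = 0.15            # replication jitter from procedural changes
se = np.sqrt(2 / n)         # approximate sampling SE of d, n per group
crit = stats.t.ppf(0.975, 2 * n - 2) * se

theta = rng.normal(mu, sd_true, reps)            # true effect of each line
d1 = theta + rng.normal(0, se, reps)             # original measured effect
d2 = theta + rng.normal(0, sd_jitter, reps) + rng.normal(0, se, reps)
sig1 = d1 > crit                                 # original significant (positive)
rep = sig1 & (d2 > crit)                         # replication also significant
print("aggregate replication probability:", round(rep.sum() / sig1.sum(), 3))
```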

  16. Minimizing the Maximum Expected Sample Size in Two-Stage Phase II Clinical Trials with Continuous Outcomes

    PubMed Central

    Wason, James M. S.; Mander, Adrian P.

    2012-01-01

    Two-stage designs are commonly used for Phase II trials. Optimal two-stage designs have the lowest expected sample size for a specific treatment effect, for example, the null value, but can perform poorly if the true treatment effect differs. Here we introduce a design for continuous treatment responses that minimizes the maximum expected sample size across all possible treatment effects. The proposed design performs well for a wider range of treatment effects and so is useful for Phase II trials. We compare the design to a previously used optimal design and show it has superior expected sample size properties. PMID:22651118

  17. Methods for obtaining true particle size distributions from cross section measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lord, Kristina Alyse

    2013-01-01

    Sectioning methods are frequently used to measure grain sizes in materials. These methods do not provide accurate grain sizes for two reasons. First, the sizes of features observed on random sections are always smaller than the true sizes of solid spherical shaped objects, as noted by Wicksell [1]. This is the case because the section very rarely passes through the center of solid spherical shaped objects randomly dispersed throughout a material. The sizes of features observed on random sections are inversely related to the distance of the center of the solid object from the section [1]. Second, on a plane section through the solid material, larger sized features are more frequently observed than smaller ones due to the larger probability for a section to come into contact with the larger sized portion of the spheres than the smaller sized portion. As a result, it is necessary to find a method that takes into account these reasons for inaccurate particle size measurements, while providing a correction factor for accurately determining true particle size measurements. I present a method for deducing true grain size distributions from those determined from specimen cross sections, either by measurement of equivalent grain diameters or linear intercepts.
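
    Both biases are easy to reproduce by Monte Carlo (my illustration; the thesis's correction procedure itself is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(2)

# Bias 1: random plane sections through unit spheres give radii below 1.
h = rng.uniform(0, 1, 100_000)            # distance of plane from sphere center
r = np.sqrt(1 - h**2)                     # observed section radius
print("mean observed radius:", r.mean())  # ~pi/4 = 0.785, not 1.0

# Bias 2: a sphere is sectioned with probability proportional to its size.
R = rng.uniform(0.5, 1.5, 100_000)        # true radii in the solid
picked = rng.uniform(0, R.max(), R.size) < R   # size-proportional sampling
print("true mean radius:", R.mean(),
      "mean radius of sectioned spheres:", R[picked].mean())
```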

  18. Understanding the Role of P Values and Hypothesis Tests in Clinical Research.

    PubMed

    Mark, Daniel B; Lee, Kerry L; Harrell, Frank E

    2016-12-01

    P values and hypothesis testing methods are frequently misused in clinical research. Much of this misuse appears to be owing to the widespread, mistaken belief that they provide simple, reliable, and objective triage tools for separating the true and important from the untrue or unimportant. The primary focus in interpreting therapeutic clinical research data should be on the treatment ("oomph") effect, a metaphorical force that moves patients given an effective treatment to a different clinical state relative to their control counterparts. This effect is assessed using 2 complementary types of statistical measures calculated from the data, namely, effect magnitude or size and precision of the effect size. In a randomized trial, effect size is often summarized using constructs, such as odds ratios, hazard ratios, relative risks, or adverse event rate differences. How large a treatment effect has to be to be consequential is a matter for clinical judgment. The precision of the effect size (conceptually related to the amount of spread in the data) is usually addressed with confidence intervals. P values (significance tests) were first proposed as an informal heuristic to help assess how "unexpected" the observed effect size was if the true state of nature was no effect or no difference. Hypothesis testing was a modification of the significance test approach that envisioned controlling the false-positive rate of study results over many (hypothetical) repetitions of the experiment of interest. Both can be helpful but, by themselves, provide only a tunnel vision perspective on study results that ignores the clinical effects the study was conducted to measure.

  19. Measuring true Young's modulus of a cantilevered nanowire: effect of clamping on resonance frequency.

    PubMed

    Qin, Qingquan; Xu, Feng; Cao, Yongqing; Ro, Paul I; Zhu, Yong

    2012-08-20

    The effect of clamping on resonance frequency and thus measured Young's modulus of nanowires (NWs) is systematically investigated via a combined experimental and simulation approach. ZnO NWs are used in this work as an example. The resonance tests are performed in situ inside a scanning electron microscope and the NWs are cantilevered on a tungsten probe by electron-beam-induced deposition (EBID) of hydrocarbon. EBID is repeated several times to deposit more hydrocarbons at the same location. The resonance frequency increases with the increasing clamp size until approaching that under the "fixed" boundary condition. The critical clamp size is identified as a function of NW diameter and NW Young's modulus. This work: 1) exemplifies the importance of considering the effect of clamping in measurements of Young's modulus using the resonance method, and 2) demonstrates that the true Young's modulus can be measured if the critical clamp size is reached. Design guidelines on the critical clamp size are provided. Such design guidelines can be extended to other one-dimensional nanostructures such as carbon nanotubes.
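
    Once the clamp is stiff enough to approximate a fixed boundary, the standard Euler-Bernoulli relation for the first flexural mode of a cylindrical cantilever maps the measured resonance frequency to Young's modulus. A minimal sketch with hypothetical nanowire dimensions (not the paper's data):

```python
import numpy as np

def youngs_modulus(f1_hz, length_m, diameter_m, density_kg_m3, beta1=1.8751):
    """Invert f1 = (beta1**2 / (2*pi)) * sqrt(E*I / (rho*A*L**4)) for E."""
    area = np.pi * diameter_m**2 / 4        # cross-sectional area
    inertia = np.pi * diameter_m**4 / 64    # second moment of a circular section
    return ((2 * np.pi * f1_hz / beta1**2) ** 2
            * density_kg_m3 * area * length_m**4 / inertia)

# Hypothetical ZnO nanowire: 10 um long, 100 nm diameter, rho = 5606 kg/m^3.
print(youngs_modulus(0.7e6, 10e-6, 100e-9, 5606) / 1e9, "GPa")  # ~140 GPa
```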

  20. Influence of casein as a percentage of true protein and protein level on color and texture of milks containing 1 and 2% fat.

    PubMed

    Misawa, Noriko; Barbano, David M; Drake, MaryAnne

    2016-07-01

    Combinations of fresh liquid microfiltration retentate of skim milk, ultrafiltered retentate and permeate produced from microfiltration permeate, cream, and dried lactose monohydrate were used to produce a matrix of 20 milks. The milks contained 5 levels of casein as a percentage of true protein of about 5, 25, 50, 75, and 80% and 4 levels of true protein of 3.0, 3.76, 4.34, and 5.0% with constant lactose percentage of 5%. The experiment was replicated twice and repeated for both 1 and 2% fat content. Hunter color measurements, relative viscosity, and fat globule size distribution were measured, and a trained panel documented appearance and texture attributes on all milks. Overall, casein as a percentage of true protein had stronger effects than level of true protein on Hunter L, a, b values, relative viscosity, and fat globule size when using fresh liquid micellar casein concentrates and milk serum protein concentrates produced by a combination of microfiltration and ultrafiltration. As casein as a percentage of true protein increased, the milks became more white (higher L value), less green (lower negative a value), and less yellow (lower b value). Relative viscosity increased and d(0.9) generally decreased with increasing casein as a percentage of true protein. Panelists perceived milks with increasing casein as a percentage of true protein as more white, more opaque, and less yellow. Panelists were able to detect increased throat cling and mouthcoating with increased casein as a percentage of true protein in 2% milks, even when differences in appearance among milks were masked.

  1. The Quantity-Quality Trade-Off of Children in a Developing Country: Identification Using Chinese Twins

    PubMed Central

    LI, HONGBIN; ZHANG, JUNSEN; ZHU, YI

    2008-01-01

    Testing the trade-off between child quantity and quality within a family is complicated by the endogeneity of family size. Using data from the Chinese Population Census, we examine the effect of family size on child educational attainment in China. We find a negative correlation between family size and child outcome, even after we control for the birth order effect. We then instrument family size by the exogenous variation that is induced by a twin birth and find a negative effect of family size on children’s education. We also find that the effect of family size is more evident in rural China, where the public education system is poor. Given that our estimates of the effect of having twins on nontwins at least provide the lower bound of the true effect of family size, these findings suggest a quantity-quality trade-off for children in developing countries. PMID:18390301

  2. The quantity-quality trade-off of children in a developing country: identification using Chinese twins.

    PubMed

    Li, Hongbin; Zhang, Junsen; Zhu, Yi

    2008-02-01

    Testing the trade-off between child quantity and quality within a family is complicated by the endogeneity of family size. Using data from the Chinese Population Census, we examine the effect of family size on child educational attainment in China. We find a negative correlation between family size and child outcome, even after we control for the birth order effect. We then instrument family size by the exogenous variation that is induced by a twin birth and find a negative effect of family size on children's education. We also find that the effect of family size is more evident in rural China, where the public education system is poor. Given that our estimates of the effect of having twins on nontwins at least provide the lower bound of the true effect of family size, these findings suggest a quantity-quality trade-off for children in developing countries.

  3. On the Post Hoc Power in Testing Mean Differences

    ERIC Educational Resources Information Center

    Yuan, Ke-Hai; Maxwell, Scott

    2005-01-01

    Retrospective or post hoc power analysis is recommended by reviewers and editors of many journals. Little literature has been found that gave a serious study of the post hoc power. When the sample size is large, the power estimate based on the observed effect size is a good estimator of the true power. This article studies whether such a power estimator provides valuable…

  4. Transethnic differences in GWAS signals: A simulation study.

    PubMed

    Zanetti, Daniela; Weale, Michael E

    2018-05-07

    Genome-wide association studies (GWASs) have allowed researchers to identify thousands of single nucleotide polymorphisms (SNPs) and other variants associated with particular complex traits. Previous studies have reported differences in the strength and even the direction of GWAS signals across different populations. These differences could be due to a combination of (1) lack of power, (2) allele frequency differences, (3) linkage disequilibrium (LD) differences, and (4) true differences in causal variant effect sizes. To determine whether properties (1)-(3) on their own might be sufficient to explain the patterns previously noted in strong GWAS signals, we simulated case-control data of European, Asian and African ancestry, applying realistic allele frequencies and LD from 1000 Genomes data but enforcing equal causal effect sizes across populations. Much of the observed differences in strong GWAS signals could indeed be accounted for by allele frequency and LD differences, enhanced by the Euro-centric SNP bias and lower SNP coverage found in older GWAS panels. While we cannot rule out a role for true transethnic effect size differences, our results suggest that strong causal effects may be largely shared among human populations, motivating the use of transethnic data for fine-mapping.

  5. Publication Bias in Psychology: A Diagnosis Based on the Correlation between Effect Size and Sample Size

    PubMed Central

    Kühberger, Anton; Fritz, Astrid; Scherndl, Thomas

    2014-01-01

    Background: The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent from sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. Methods: We investigate whether effect size is independent from sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, and calculated the correlation between effect size and sample size, and investigated the distribution of p values. Results: We found a negative correlation of r = −.45 [95% CI: −.53; −.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. Conclusion: The negative correlation between effect size and sample size, and the biased distribution of p values, indicate pervasive publication bias in the entire field of psychology. PMID:25192357

  6. Publication bias in psychology: a diagnosis based on the correlation between effect size and sample size.

    PubMed

    Kühberger, Anton; Fritz, Astrid; Scherndl, Thomas

    2014-01-01

    The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent from sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. We investigate whether effect size is independent from sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, and calculated the correlation between effect size and sample size, and investigated the distribution of p values. We found a negative correlation of r = -.45 [95% CI: -.53; -.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. The negative correlation between effect size and sample size, and the biased distribution of p values, indicate pervasive publication bias in the entire field of psychology.
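
    The mechanism proposed above is easy to demonstrate by simulation (my illustration, not the authors' dataset): if only significant results are published, small studies need large observed effects to pass p < .05, which by itself induces a negative effect size-sample size correlation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
true_d, studies = 0.3, 5000
n = rng.integers(10, 200, studies)              # per-group sample sizes
d_obs = true_d + rng.normal(0, np.sqrt(2 / n))  # observed effect sizes
t = d_obs * np.sqrt(n / 2)                      # two-sample t statistics
p = 2 * stats.t.sf(np.abs(t), df=2 * n - 2)
published = p < 0.05                            # extreme publication filter
print("all studies:    r =", np.corrcoef(d_obs, n)[0, 1].round(2))
print("published only: r =",
      np.corrcoef(d_obs[published], n[published])[0, 1].round(2))
```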

  7. A Note on Cluster Effects in Latent Class Analysis

    ERIC Educational Resources Information Center

    Kaplan, David; Keller, Bryan

    2011-01-01

    This article examines the effects of clustering in latent class analysis. A comprehensive simulation study is conducted, which begins by specifying a true multilevel latent class model with varying within- and between-cluster sample sizes, varying latent class proportions, and varying intraclass correlations. These models are then estimated under…

  8. Bayesian evaluation of effect size after replicating an original study

    PubMed Central

    van Aert, Robbie C. M.; van Assen, Marcel A. L. M.

    2017-01-01

    The vast majority of published results in the literature is statistically significant, which raises concerns about their reliability. The Reproducibility Project Psychology (RPP) and Experimental Economics Replication Project (EE-RP) both replicated a large number of published studies in psychology and economics. The original study and replication were both statistically significant in 36.1% of cases in RPP and 68.8% in EE-RP, suggesting many null effects among the replicated studies. However, evidence in favor of the null hypothesis cannot be examined with null hypothesis significance testing. We developed a Bayesian meta-analysis method called snapshot hybrid that is easy to use and understand and quantifies the amount of evidence in favor of a zero, small, medium and large effect. The method computes posterior model probabilities for a zero, small, medium, and large effect and adjusts for publication bias by taking into account that the original study is statistically significant. We first analytically approximate the method's performance, and demonstrate the necessity to control for the original study's significance to enable the accumulation of evidence for a true zero effect. Then we applied the method to the data of RPP and EE-RP, showing that the underlying effect sizes of the included studies in EE-RP are generally larger than in RPP, but that the sample sizes of especially the included studies in RPP are often too small to draw definite conclusions about the true effect size. We also illustrate how snapshot hybrid can be used to determine the required sample size of the replication akin to power analysis in null hypothesis significance testing and present an easy to use web application (https://rvanaert.shinyapps.io/snapshot/) and R code for applying the method. PMID:28388646

  9. On the Importance of Cycle Minimum in Sunspot Cycle Prediction

    NASA Technical Reports Server (NTRS)

    Wilson, Robert M.; Hathaway, David H.; Reichmann, Edwin J.

    1996-01-01

    The characteristics of the minima between sunspot cycles are found to provide important information for predicting the amplitude and timing of the following cycle. For example, the time of the occurrence of sunspot minimum sets the length of the previous cycle, which is correlated by the amplitude-period effect to the amplitude of the next cycle, with cycles of shorter (longer) than average length usually being followed by cycles of larger (smaller) than average size (true for 16 of 21 sunspot cycles). Likewise, the size of the minimum at cycle onset is correlated with the size of the cycle's maximum amplitude, with cycles of larger (smaller) than average size minima usually being associated with larger (smaller) than average size maxima (true for 16 of 22 sunspot cycles). Also, it was found that the size of the previous cycle's minimum and maximum relates to the size of the following cycle's minimum and maximum with an even-odd cycle number dependency. The latter effect suggests that cycle 23 will have a minimum and maximum amplitude probably larger than average in size (in particular, minimum smoothed sunspot number Rm = 12.3 +/- 7.5 and maximum smoothed sunspot number RM = 198.8 +/- 36.5, at the 95-percent level of confidence), further suggesting (by the Waldmeier effect) that it will have a faster than average rise to maximum (fast-rising cycles have ascent durations of about 41 +/- 7 months). Thus, if, as expected, onset for cycle 23 will be December 1996 +/- 3 months, based on smoothed sunspot number, then the length of cycle 22 will be about 123 +/- 3 months, inferring that it is a short-period cycle and that cycle 23 maximum amplitude probably will be larger than average in size (from the amplitude-period effect), having an RM of about 133 +/- 39 (based on the usual +/- 30 percent spread that has been seen between observed and predicted values), with maximum amplitude occurrence likely sometime between July 1999 and October 2000.

  10. A Meta-Analysis of Writing Instruction for Students in the Elementary Grades

    ERIC Educational Resources Information Center

    Graham, Steve; McKeown, Debra; Kiuhara, Sharlene; Harris, Karen R.

    2012-01-01

    In an effort to identify effective instructional practices for teaching writing to elementary grade students, we conducted a meta-analysis of the writing intervention literature, focusing our efforts on true and quasi-experiments. We located 115 documents that included the statistics for computing an effect size (ES). We calculated an average…

  11. Catching ghosts with a coarse net: use and abuse of spatial sampling data in detecting synchronization

    PubMed Central

    2017-01-01

    Synchronization of population dynamics in different habitats is a frequently observed phenomenon. A common mathematical tool to reveal synchronization is the (cross)correlation coefficient between time courses of values of the population size of a given species where the population size is evaluated from spatial sampling data. The corresponding sampling net or grid is often coarse, i.e. it does not resolve all details of the spatial configuration, and the evaluation error—i.e. the difference between the true value of the population size and its estimated value—can be considerable. We show that this estimation error can make the value of the correlation coefficient very inaccurate or even irrelevant. We consider several population models to show that the value of the correlation coefficient calculated on a coarse sampling grid rarely exceeds 0.5, even if the true value is close to 1, so that the synchronization is effectively lost. We also observe ‘ghost synchronization’ when the correlation coefficient calculated on a coarse sampling grid is close to 1 but in reality the dynamics are not correlated. Finally, we suggest a simple test to check the sampling grid coarseness and hence to distinguish between the true and artifactual values of the correlation coefficient. PMID:28202589
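
    The attenuation effect is straightforward to reproduce with a toy model (mine, not the paper's population models): two perfectly synchronized series whose sampled estimates carry independent evaluation error show a correlation well below 1.

```python
import numpy as np

rng = np.random.default_rng(11)
t = 200
signal = np.sin(np.linspace(0, 12 * np.pi, t)) + rng.normal(0, 0.2, t)
pop_a, pop_b = signal, signal.copy()       # truly synchronized dynamics

for eval_error_sd in (0.0, 0.5, 1.5):      # coarser grid -> larger error
    est_a = pop_a + rng.normal(0, eval_error_sd, t)  # estimated population sizes
    est_b = pop_b + rng.normal(0, eval_error_sd, t)
    print(f"evaluation error sd={eval_error_sd}: "
          f"r={np.corrcoef(est_a, est_b)[0, 1]:.2f}")
```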

  12. The Effects of Population Size Histories on Estimates of Selection Coefficients from Time-Series Genetic Data

    PubMed Central

    Jewett, Ethan M.; Steinrücken, Matthias; Song, Yun S.

    2016-01-01

    Many approaches have been developed for inferring selection coefficients from time series data while accounting for genetic drift. These approaches have been motivated by the intuition that properly accounting for the population size history can significantly improve estimates of selective strengths. However, the improvement in inference accuracy that can be attained by modeling drift has not been characterized. Here, by comparing maximum likelihood estimates of selection coefficients that account for the true population size history with estimates that ignore drift by assuming allele frequencies evolve deterministically in a population of infinite size, we address the following questions: how much can modeling the population size history improve estimates of selection coefficients? How much can mis-inferred population sizes hurt inferences of selection coefficients? We conduct our analysis under the discrete Wright–Fisher model by deriving the exact probability of an allele frequency trajectory in a population of time-varying size and we replicate our results under the diffusion model. For both models, we find that ignoring drift leads to estimates of selection coefficients that are nearly as accurate as estimates that account for the true population history, even when population sizes are small and drift is high. This result is of interest because inference methods that ignore drift are widely used in evolutionary studies and can be many orders of magnitude faster than methods that account for population sizes. PMID:27550904
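
    A minimal sketch contrasting the two trajectory models compared above: a discrete Wright-Fisher simulation with genic selection and binomial drift versus the deterministic infinite-population recursion (parameters hypothetical):

```python
import numpy as np

rng = np.random.default_rng(4)

def step(p, s):
    """Expected frequency after one generation of genic selection."""
    return p * (1 + s) / (1 + s * p)

def wright_fisher(p0, s, n_gens, pop_size):
    p, traj = p0, [p0]
    for _ in range(n_gens):
        p = rng.binomial(2 * pop_size, step(p, s)) / (2 * pop_size)  # drift
        traj.append(p)
    return traj

def deterministic(p0, s, n_gens):
    p, traj = p0, [p0]
    for _ in range(n_gens):
        p = step(p, s)          # infinite population: no drift
        traj.append(p)
    return traj

print("WF (N=200):   ", [round(x, 2) for x in wright_fisher(0.1, 0.05, 50, 200)][::10])
print("deterministic:", [round(x, 2) for x in deterministic(0.1, 0.05, 50)][::10])
```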

  13. A novel measure of effect size for mediation analysis.

    PubMed

    Lachowicz, Mark J; Preacher, Kristopher J; Kelley, Ken

    2018-06-01

    Mediation analysis has become one of the most popular statistical methods in the social sciences. However, many currently available effect size measures for mediation have limitations that restrict their use to specific mediation models. In this article, we develop a measure of effect size that addresses these limitations. We show how modification of a currently existing effect size measure results in a novel effect size measure with many desirable properties. We also derive an expression for the bias of the sample estimator for the proposed effect size measure and propose an adjusted version of the estimator. We present a Monte Carlo simulation study conducted to examine the finite sampling properties of the adjusted and unadjusted estimators, which shows that the adjusted estimator is effective at recovering the true value it estimates. Finally, we demonstrate the use of the effect size measure with an empirical example. We provide freely available software so that researchers can immediately implement the methods we discuss. Our developments here extend the existing literature on effect sizes and mediation by developing a potentially useful method of communicating the magnitude of mediation.
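
    For context, the quantity whose magnitude such effect size measures try to communicate is the indirect effect a*b from the simple X -> M -> Y model. A minimal sketch estimating it with a bootstrap interval on simulated data (standard practice; this is not the article's proposed measure, which goes further):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 300
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)              # a path
y = 0.4 * m + 0.1 * x + rng.normal(size=n)    # b and c' paths

def indirect(x, m, y):
    """Product-of-coefficients estimate of the indirect effect a*b."""
    a = sm.OLS(m, sm.add_constant(x)).fit().params[1]
    b = sm.OLS(y, sm.add_constant(np.column_stack([m, x]))).fit().params[1]
    return a * b

boots = []
for _ in range(2000):
    i = rng.integers(0, n, n)                 # resample cases with replacement
    boots.append(indirect(x[i], m[i], y[i]))
print("indirect effect a*b =", round(indirect(x, m, y), 3),
      "95% CI:", np.percentile(boots, [2.5, 97.5]).round(3))
```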

  14. Experimental determination of the effect of detector size on profile measurements in narrow photon beams.

    PubMed

    Pappas, E; Maris, T G; Papadakis, A; Zacharopoulou, F; Damilakis, J; Papanikolaou, N; Gourtsoyiannis, N

    2006-10-01

    The aim of this work is to investigate experimentally the detector size effect on narrow beam profile measurements. Polymer gel and magnetic resonance imaging dosimetry was used for this purpose. Profile measurements (Pm(s)) of a 5 mm diameter 6 MV stereotactic beam were performed using polymer gels. Eight measurements of the profile of this narrow beam were performed using correspondingly eight different detector sizes. This was achieved using high spatial resolution (0.25 mm) two-dimensional measurements and eight different signal integration volumes A X A X slice thickness, simulating detectors of different size. "A" ranged from 0.25 to 7.5 mm, representing the detector size. The gel-derived profiles exhibited increased penumbra width with increasing detector size, for sizes >0.5 mm. By extrapolating the gel-derived profiles to zero detector size, the true profile (Pt) of the studied beam was derived. The same polymer gel data were also used to simulate a small-volume ion chamber profile measurement of the same beam, in terms of volume averaging. The comparison between these results and actual corresponding small-volume chamber profile measurements performed in this study reveals that the penumbra broadening caused by both volume averaging and electron transport alterations (present in actual ion chamber profile measurements) is considerably more pronounced than that resulting from volume averaging alone (present in gel-derived profiles simulating ion chamber profile measurements). Therefore, not only the detector size, but also its composition and tissue equivalency, are shown to be important factors for correct narrow beam profile measurements. Additionally, the convolution kernels related to each detector size and to the air ion chamber were calculated using the corresponding profile measurements (Pm(s)), the gel-derived true profile (Pt), and convolution theory. The response kernels of any desired detector can be derived, allowing the elimination of the errors associated with narrow beam profile measurements.
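
    The volume-averaging component of the effect can be sketched as a convolution of the true profile with a top-hat kernel of the detector's width (electron-transport perturbations, which the study shows also matter, are not modeled here; all dimensions hypothetical):

```python
import numpy as np

step_mm = 0.25
x = np.arange(-10, 10, step_mm)                          # position (mm)
true_profile = ((x > -2.5) & (x < 2.5)).astype(float)    # idealized 5 mm field

def measured(profile, detector_mm):
    width = max(1, int(round(detector_mm / step_mm)))
    kernel = np.ones(width) / width          # uniform averaging over detector
    return np.convolve(profile, kernel, mode="same")

for size in (0.25, 1.0, 3.0, 7.5):
    p = measured(true_profile, size)
    pen = np.sum((p > 0.2) & (p < 0.8)) * step_mm / 2    # 20-80% penumbra/side
    print(f"detector {size} mm: penumbra per side ~ {pen:.2f} mm")
```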

  15. Hippocampal size is related to short-term true and false memory, and right fusiform size is related to long-term true and false memory.

    PubMed

    Zhu, Bi; Chen, Chuansheng; Loftus, Elizabeth F; He, Qinghua; Lei, Xuemei; Dong, Qi; Lin, Chongde

    2016-11-01

    There is a keen interest in identifying specific brain regions that are related to individual differences in true and false memories. Previous functional neuroimaging studies showed that activities in the hippocampus, right fusiform gyrus, and parahippocampal gyrus were associated with true and false memories, but no study thus far has examined whether the structures of these brain regions are associated with short-term and long-term true and false memories. To address that question, the current study analyzed data from 205 healthy young adults, who had valid data from both structural brain imaging and a misinformation task. In the misinformation task, subjects saw the crime scenarios, received misinformation, and took memory tests about the crimes an hour later and again after 1.5 years. Results showed that bilateral hippocampal volume was associated with short-term true and false memories, whereas right fusiform gyrus volume and surface area were associated with long-term true and false memories. This study provides the first evidence for the structural neural bases of individual differences in short-term and long-term true and false memories.

  16. Design, Implementation and Evaluation of an Operating System for a Network of Transputers.

    DTIC Science & Technology

    1987-06-01

    Excerpt of occam code from the report (extraction-garbled; duplicated fragment removed): WHILE TRUE -- listen to link1 ... SEQ -- receiving the header ... BYTE.SLICE.INPUT (link1, header1, 1, header.size) -- decoding the block size ... block.size[LO] ... BYTE.SLICE.OUTPUT (screen[0], header0, 3, 1) -- I'm done

  17. Microstructure and critical strain of dynamic recrystallization of 6082 aluminum alloy in thermal deformation

    NASA Astrophysics Data System (ADS)

    Ren, W. W.; Xu, C. G.; Chen, X. L.; Qin, S. X.

    2018-05-01

    Using high-temperature compression experiments, true stress-true strain curves of 6082 aluminium alloy were obtained at temperatures of 460°C-560°C and strain rates of 0.01 s⁻¹ to 10 s⁻¹. The effects of deformation temperature and strain rate on the microstructure are investigated; (−∂lnθ/∂ε)-ε curves are plotted based on the σ-ε curves, and the critical strains for dynamic recrystallization of 6082 aluminium alloy were obtained. The results showed that lower strain rates were beneficial for increasing the volume fraction of recrystallization, although the average recrystallized grain size was coarse; high strain rates are beneficial for refining the average grain size, but the volume fraction of dynamically recrystallized grains is smaller than at low strain rates. High temperature reduced the dislocation density and provided less driving force for recrystallization, so that coarse grains remained. The dynamic recrystallization critical strain model and the thermal experiment results can effectively predict the recrystallization critical point of 6082 aluminium alloy during thermal deformation.

  18. The Area Coverage of Geophysical Fields as a Function of Sensor Field-of View

    NASA Technical Reports Server (NTRS)

    Key, Jeffrey R.

    1994-01-01

    In many remote sensing studies of geophysical fields such as clouds, land cover, or sea ice characteristics, the fractional area coverage of the field in an image is estimated as the proportion of pixels that have the characteristic of interest (i.e., are part of the field) as determined by some thresholding operation. The effect of sensor field-of-view on this estimate is examined by modeling the unknown distribution of subpixel area fraction with the beta distribution, whose two parameters depend upon the true fractional area coverage, the pixel size, and the spatial structure of the geophysical field. Since it is often not possible to relate digital number, reflectance, or temperature to subpixel area fraction, the statistical models described are used to determine the effect of pixel size and thresholding operations on the estimate of area fraction for hypothetical geophysical fields. Examples are given for simulated cumuliform clouds and linear openings in sea ice, whose spatial structures are described by an exponential autocovariance function. It is shown that the rate and direction of change in total area fraction with changing pixel size depends on the true area fraction, the spatial structure, and the thresholding operation used.
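
    A minimal sketch of the modeling idea (my stand-in beta parameters, not values fitted to real fields): subpixel fractions are drawn from a beta distribution, pixels are thresholded, and the proportion of flagged pixels is compared with the true area fraction. Increasing the beta concentration mimics larger pixels averaging more of the scene.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)

def thresholded_estimate(a, b, threshold=0.5, n_pixels=200_000):
    """True mean fraction vs. area estimate from thresholding subpixel fractions."""
    frac = stats.beta.rvs(a, b, size=n_pixels, random_state=rng)
    return frac.mean(), (frac > threshold).mean()

# All three cases share the same true mean fraction a/(a+b) = 0.25; a larger
# a+b plays the role of a larger pixel, concentrating fractions near the mean.
for a, b in [(0.2, 0.6), (1.0, 3.0), (4.0, 12.0)]:
    true, est = thresholded_estimate(a, b)
    print(f"beta({a}, {b}): true fraction={true:.2f}, thresholded estimate={est:.2f}")
```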

  19. Empirical assessment of published effect sizes and power in the recent cognitive neuroscience and psychology literature.

    PubMed

    Szucs, Denes; Ioannidis, John P A

    2017-03-01

    We have empirically assessed the distribution of published effect sizes and estimated power by analyzing 26,841 statistical records from 3,801 cognitive neuroscience and psychology papers published recently. The reported median effect size was D = 0.93 (interquartile range: 0.64-1.46) for nominally statistically significant results and D = 0.24 (0.11-0.42) for nonsignificant results. Median power to detect small, medium, and large effects was 0.12, 0.44, and 0.73, reflecting no improvement through the past half-century. This is so because sample sizes have remained small. Assuming similar true effect sizes in both disciplines, power was lower in cognitive neuroscience than in psychology. Journal impact factors negatively correlated with power. Assuming a realistic range of prior probabilities for null hypotheses, false report probability is likely to exceed 50% for the whole literature. In light of our findings, the recently reported low replication success in psychology is realistic, and worse performance may be expected for cognitive neuroscience.
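
    The false-report-probability claim follows from standard positive-predictive-value algebra; a minimal sketch with illustrative prior probabilities and the paper's median power for medium effects:

```python
def false_report_probability(prior_h1, power, alpha=0.05):
    """P(H0 true | significant result), given the prior P(real effect)."""
    true_pos = prior_h1 * power          # significant and real
    false_pos = (1 - prior_h1) * alpha   # significant but null
    return false_pos / (true_pos + false_pos)

for prior in (0.5, 0.2, 0.1):
    print(f"prior P(effect)={prior}: FRP at power 0.44 = "
          f"{false_report_probability(prior, 0.44):.2f}")
```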

  20. The Effects of Population Size Histories on Estimates of Selection Coefficients from Time-Series Genetic Data.

    PubMed

    Jewett, Ethan M; Steinrücken, Matthias; Song, Yun S

    2016-11-01

    Many approaches have been developed for inferring selection coefficients from time series data while accounting for genetic drift. These approaches have been motivated by the intuition that properly accounting for the population size history can significantly improve estimates of selective strengths. However, the improvement in inference accuracy that can be attained by modeling drift has not been characterized. Here, by comparing maximum likelihood estimates of selection coefficients that account for the true population size history with estimates that ignore drift by assuming allele frequencies evolve deterministically in a population of infinite size, we address the following questions: how much can modeling the population size history improve estimates of selection coefficients? How much can mis-inferred population sizes hurt inferences of selection coefficients? We conduct our analysis under the discrete Wright-Fisher model by deriving the exact probability of an allele frequency trajectory in a population of time-varying size and we replicate our results under the diffusion model. For both models, we find that ignoring drift leads to estimates of selection coefficients that are nearly as accurate as estimates that account for the true population history, even when population sizes are small and drift is high. This result is of interest because inference methods that ignore drift are widely used in evolutionary studies and can be many orders of magnitude faster than methods that account for population sizes.

  1. Statistical controversies in clinical research: building the bridge to phase II-efficacy estimation in dose-expansion cohorts.

    PubMed

    Boonstra, P S; Braun, T M; Taylor, J M G; Kidwell, K M; Bellile, E L; Daignault, S; Zhao, L; Griffith, K A; Lawrence, T S; Kalemkerian, G P; Schipper, M J

    2017-07-01

    Regulatory agencies and others have expressed concern about the uncritical use of dose expansion cohorts (DECs) in phase I oncology trials. Nonetheless, by several metrics (prevalence, size, and number) their popularity is increasing. Although early efficacy estimation in defined populations is a common primary endpoint of DECs, the types of designs best equipped to identify efficacy signals have not been established. We conducted a simulation study of six phase I design templates with multiple DECs: three dose-assignment/adjustment mechanisms crossed with two analytic approaches for estimating efficacy after the trial is complete. We also investigated the effect of sample size and interim futility analysis on trial performance. Identifying populations in which the treatment is efficacious (true positives) and weeding out inefficacious treatment/populations (true negatives) are competing goals in these trials. Thus, we estimated true and false positive rates for each design. Adaptively updating the MTD during the DEC improved true positive rates by 8-43% compared with fixing the dose during the DEC phase, while maintaining false positive rates. Inclusion of an interim futility analysis decreased the number of patients treated under inefficacious DECs without hurting performance. A substantial gain in efficiency is obtainable using a design template that statistically models toxicity and efficacy against dose level during expansion. Design choices for dose expansion should be motivated by and based upon expected performance. Similar to the common practice in single-arm phase II trials, cohort sample sizes should be justified with respect to their primary aim and include interim analyses to allow for early stopping.

  2. Effect of various digital processing algorithms on the measurement accuracy of endodontic file length.

    PubMed

    Kal, Betül Ilhan; Baksi, B Güniz; Dündar, Nesrin; Sen, Bilge Hakan

    2007-02-01

    The aim of this study was to compare the accuracy of endodontic file lengths after application of various image enhancement modalities. Endodontic files of three different ISO sizes were inserted in 20 single-rooted extracted permanent mandibular premolar teeth and standardized images were obtained. Original digital images were then enhanced using five processing algorithms. Six evaluators measured the length of each file on each image. The measurements from each processing algorithm and each file size were compared using repeated measures ANOVA and Bonferroni tests (P = 0.05). Paired t test was performed to compare the measurements with the true lengths of the files (P = 0.05). All of the processing algorithms provided significantly shorter measurements than the true length of each file size (P < 0.05). The threshold enhancement modality produced significantly higher mean error values (P < 0.05), while there was no significant difference among the other enhancement modalities (P > 0.05). Decrease in mean error value was observed with increasing file size (P < 0.05). Invert, contrast/brightness and edge enhancement algorithms may be recommended for accurate file length measurements when utilizing storage phosphor plates.

  3. Local extinction and recolonization, species effective population size, and modern human origins.

    PubMed

    Eller, Elise; Hawks, John; Relethford, John H

    2004-10-01

    A primary objection from a population genetics perspective to a multiregional model of modern human origins is that the model posits a large census size, whereas genetic data suggest a small effective population size. The relationship between census size and effective size is complex, but arguments based on an island model of migration show that if the effective population size reflects the number of breeding individuals and the effects of population subdivision, then an effective population size of 10,000 is inconsistent with the census size of 500,000 to 1,000,000 that has been suggested by archeological evidence. However, these models have ignored the effects of population extinction and recolonization, which increase the expected variance among demes and reduce the inbreeding effective population size. Using models developed for population extinction and recolonization, we show that a large census size consistent with the multiregional model can be reconciled with an effective population size of 10,000, but genetic variation among demes must be high, reflecting low interdeme migration rates and a colonization process that involves a small number of colonists or kin-structured colonization. Ethnographic and archeological evidence is insufficient to determine whether such demographic conditions existed among Pleistocene human populations, and further work needs to be done. More realistic models that incorporate isolation by distance and heterogeneity in extinction rates and effective deme sizes also need to be developed. However, if true, a process of population extinction and recolonization has interesting implications for human demographic history.

  4. Aerodynamic design and optimization of high altitude environment simulation system based on CFD

    NASA Astrophysics Data System (ADS)

    Ma, Pingchang; Yan, Lutao; Li, Hong

    2017-05-01

    High altitude environment simulation system (HAES) is built to provide a true flight environment for subsonic vehicles, characterized by low density, high speed, and short test times. Normally, wind tunnel experiments are based on similarity principles, such as matching Re or Ma, in order to reduce the size of the test article. However, the test articles in HAES are full size, so more attention is put on simulating the true flight environment, including the real flight pressure, density, and flight velocity, with a typical velocity of Ma = 0.8. In this paper, the aerodynamic design of HAES is introduced and its rationality is verified by CFD calculations performed in Fluent. In addition, the initial pressure of the vacuum tank in HAES is optimized, both to meet economic requirements and to decrease the effect of additional stress on the test article during establishment of the target flow field.

  5. Does an uneven sample size distribution across settings matter in cross-classified multilevel modeling? Results of a simulation study.

    PubMed

    Milliren, Carly E; Evans, Clare R; Richmond, Tracy K; Dunn, Erin C

    2018-06-06

    Recent advances in multilevel modeling allow for modeling non-hierarchical levels (e.g., youth in non-nested schools and neighborhoods) using cross-classified multilevel models (CCMM). Current practice is to cluster samples from one context (e.g., schools) and utilize the observations however they are distributed from the second context (e.g., neighborhoods). However, it is unknown whether an uneven distribution of sample size across these contexts leads to incorrect estimates of random effects in CCMMs. Using the school and neighborhood data structure in Add Health, we examined the effect of neighborhood sample size imbalance on the estimation of variance parameters in models predicting BMI. We differentially assigned students from a given school to neighborhoods within that school's catchment area using three scenarios of (im)balance. For each of five combinations of school- and neighborhood-level variance, 1000 random datasets were simulated under each of the three imbalance scenarios, for a total of 15,000 simulated datasets. For each simulation, we calculated 95% CIs for the variance parameters to determine whether the true simulated variance fell within the interval. Across all simulations, the "true" school and neighborhood variance parameters were captured by the 95% CIs 93-96% of the time. Only 5% of models failed to capture neighborhood variance; 6% failed to capture school variance. These results suggest that there is no systematic bias in the ability of CCMM to capture the true variance parameters regardless of the distribution of students across neighborhoods. Ongoing efforts to use CCMM are warranted and can proceed without concern for sample imbalance across contexts. Copyright © 2018 Elsevier Ltd. All rights reserved.

  6. Font size matters--emotion and attention in cortical responses to written words.

    PubMed

    Bayer, Mareike; Sommer, Werner; Schacht, Annekathrin

    2012-01-01

    For emotional pictures with fear-, disgust-, or sex-related contents, stimulus size has been shown to increase emotion effects in attention-related event-related potentials (ERPs), presumably reflecting the enhanced biological impact of larger emotion-inducing pictures. If this is true, size should not enhance emotion effects for written words with symbolic and acquired meaning. Here, we investigated ERP effects of font size for emotional and neutral words. While P1 and N1 amplitudes were not affected by emotion, the early posterior negativity started earlier and lasted longer for large relative to small words. These results suggest that emotion-driven facilitation of attention is not necessarily based on biological relevance, but might generalize to stimuli with arbitrary perceptual features. This finding points to the high relevance of written language in today's society as an important source of emotional meaning.

  7. Accurate decisions in an uncertain world: collective cognition increases true positives while decreasing false positives.

    PubMed

    Wolf, Max; Kurvers, Ralf H J M; Ward, Ashley J W; Krause, Stefan; Krause, Jens

    2013-04-07

    In a wide range of contexts, including predator avoidance, medical decision-making and security screening, decision accuracy is fundamentally constrained by the trade-off between true and false positives. Increased true positives are possible only at the cost of increased false positives; conversely, decreased false positives are associated with decreased true positives. We use an integrated theoretical and experimental approach to show that a group of decision-makers can overcome this basic limitation. Using a mathematical model, we show that a simple quorum decision rule enables individuals in groups to simultaneously increase true positives and decrease false positives. The results from a predator-detection experiment that we performed with humans are in line with these predictions: (i) after observing the choices of the other group members, individuals both increase true positives and decrease false positives, (ii) this effect gets stronger as group size increases, (iii) individuals use a quorum threshold set between the average true- and false-positive rates of the other group members, and (iv) individuals adjust their quorum adaptively to the performance of the group. Our results have broad implications for our understanding of the ecology and evolution of group-living animals and lend themselves for applications in the human domain such as the design of improved screening methods in medical, forensic, security and business applications.
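
    The group-level gain from a quorum rule can be illustrated with a simple binomial computation. In this hedged sketch, n independent decision-makers each flag a threat with an assumed individual true-positive rate p_tp and false-positive rate p_fp (invented values, not the experiment's); the group acts when at least k members flag:

        from scipy.stats import binom

        # Illustrative sketch (not the paper's model): n independent group
        # members each flag a predator with true-positive rate p_tp and
        # false-positive rate p_fp; the group acts if at least k members flag.
        n, p_tp, p_fp = 11, 0.6, 0.2

        def group_rate(p, n, k):
            # P(at least k of n flag) = survival function of Binomial(n, p) at k-1
            return binom.sf(k - 1, n, p)

        for k in range(1, n + 1):
            print(f"quorum k={k:2d}: group TPR={group_rate(p_tp, n, k):.3f}, "
                  f"group FPR={group_rate(p_fp, n, k):.3f}")
        # With a quorum set between the individual rates (e.g. k=5, about 0.45n,
        # which lies between p_fp=0.2 and p_tp=0.6), the group exceeds the
        # individual TPR while undercutting the individual FPR.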

  8. Accurate decisions in an uncertain world: collective cognition increases true positives while decreasing false positives

    PubMed Central

    Wolf, Max; Kurvers, Ralf H. J. M.; Ward, Ashley J. W.; Krause, Stefan; Krause, Jens

    2013-01-01

    In a wide range of contexts, including predator avoidance, medical decision-making and security screening, decision accuracy is fundamentally constrained by the trade-off between true and false positives. Increased true positives are possible only at the cost of increased false positives; conversely, decreased false positives are associated with decreased true positives. We use an integrated theoretical and experimental approach to show that a group of decision-makers can overcome this basic limitation. Using a mathematical model, we show that a simple quorum decision rule enables individuals in groups to simultaneously increase true positives and decrease false positives. The results from a predator-detection experiment that we performed with humans are in line with these predictions: (i) after observing the choices of the other group members, individuals both increase true positives and decrease false positives, (ii) this effect gets stronger as group size increases, (iii) individuals use a quorum threshold set between the average true- and false-positive rates of the other group members, and (iv) individuals adjust their quorum adaptively to the performance of the group. Our results have broad implications for our understanding of the ecology and evolution of group-living animals and lend themselves for applications in the human domain such as the design of improved screening methods in medical, forensic, security and business applications. PMID:23407830

  9. Empirical assessment of published effect sizes and power in the recent cognitive neuroscience and psychology literature

    PubMed Central

    Szucs, Denes; Ioannidis, John P. A.

    2017-01-01

    We have empirically assessed the distribution of published effect sizes and estimated power by analyzing 26,841 statistical records from 3,801 cognitive neuroscience and psychology papers published recently. The reported median effect size was D = 0.93 (interquartile range: 0.64–1.46) for nominally statistically significant results and D = 0.24 (0.11–0.42) for nonsignificant results. Median power to detect small, medium, and large effects was 0.12, 0.44, and 0.73, reflecting no improvement through the past half-century. This is so because sample sizes have remained small. Assuming similar true effect sizes in both disciplines, power was lower in cognitive neuroscience than in psychology. Journal impact factors negatively correlated with power. Assuming a realistic range of prior probabilities for null hypotheses, false report probability is likely to exceed 50% for the whole literature. In light of our findings, the recently reported low replication success in psychology is realistic, and worse performance may be expected for cognitive neuroscience. PMID:28253258
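
    For readers who want to reproduce this kind of power estimate, a small sketch using statsmodels; the per-group sample size of 20 is an assumption chosen to be representative of small studies, not a figure from the paper:

        from statsmodels.stats.power import TTestIndPower

        # Power of a two-sample t test for small/medium/large effects
        # (Cohen's d = 0.2, 0.5, 0.8) at an assumed per-group n of 20.
        analysis = TTestIndPower()
        for d in (0.2, 0.5, 0.8):
            power = analysis.power(effect_size=d, nobs1=20, alpha=0.05,
                                   ratio=1.0, alternative='two-sided')
            print(f"d = {d}: power = {power:.2f}")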

  10. Screening for postnatal depression in Chinese-speaking women using the Hong Kong translated version of the Edinburgh Postnatal Depression Scale.

    PubMed

    Chen, Helen; Bautista, Dianne; Ch'ng, Ying Chia; Li, Wenyun; Chan, Edwin; Rush, A John

    2013-06-01

    The Edinburgh Postnatal Depression Scale (EPDS) may not be a uniformly valid postnatal depression (PND) screen across populations. We evaluated the performance of a Chinese translation of 10-item (HK-EPDS) and six-item (HK-EPDS-6) versions in post-partum women in Singapore. Chinese-speaking post-partum obstetric clinic patients were recruited for this study. They completed the HK-EPDS, from which we derived the six-item HK-EPDS-6. All women were clinically assessed for PND based on Diagnostic and Statistical Manual, Fourth Edition-Text Revision criteria. Receiver operating characteristic (ROC) curve analyses and likelihood ratio computations informed scale cutoff choices. Clinical fitness was judged by thresholds for internal consistency [α ≥ 0.70] and for diagnostic performance by true-positive rate (>85%), false-positive rate (≤10%), positive likelihood ratio (>1), negative likelihood ratio (<0.2), area under the ROC curve (AUC, ≥90%) and effect size (≥0.80). Based on clinical interview, prevalence of PND was 6.2% in 487 post-partum women. HK-EPDS internal consistency was 0.84. At a cutoff of 13 or more, the true-positive rate was 86.7%, false-positive rate 3.3%, positive likelihood ratio 26.4, negative likelihood ratio 0.14, AUC 94.4% and effect size 0.81. For the HK-EPDS-6, internal consistency was 0.76. At a cutoff of 8 or more, we found a true-positive rate of 86.7%, false-positive rate 6.6%, positive likelihood ratio 13.2, negative likelihood ratio 0.14, AUC 92.9% and effect size 0.98. The HK-EPDS (cutoff ≥13) and HK-EPDS-6 (cutoff ≥8) are fit for PND screening in general-population post-partum women. The brief six-item version appears to be clinically suitable for quick screening in Chinese-speaking women. Copyright © 2013 Wiley Publishing Asia Pty Ltd.
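
    The reported likelihood ratios follow directly from the true- and false-positive rates, as this quick check shows:

        # Likelihood ratios from the reported operating characteristics of the
        # HK-EPDS at a cutoff of >= 13 (true-positive rate 86.7%,
        # false-positive rate 3.3%).
        tpr, fpr = 0.867, 0.033

        lr_pos = tpr / fpr                # positive likelihood ratio
        lr_neg = (1 - tpr) / (1 - fpr)    # negative likelihood ratio

        print(f"LR+ = {lr_pos:.1f}")   # ~26.3, consistent with the reported 26.4
        print(f"LR- = {lr_neg:.2f}")   # ~0.14, matching the reported 0.14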

  11. The ranking probability approach and its usage in design and analysis of large-scale studies.

    PubMed

    Kuo, Chia-Ling; Zaykin, Dmitri

    2013-01-01

    In experiments with many statistical tests there is a need to balance type I and type II error rates while taking multiplicity into account. In the traditional approach, the nominal α-level such as 0.05 is adjusted by the number of tests, L, i.e., as 0.05/L. Assuming that some proportion of tests represent "true signals", that is, originate from a scenario where the null hypothesis is false, power depends on the number of true signals and the respective distribution of effect sizes. One way to define power is for it to be the probability of making at least one correct rejection at the assumed α-level. We advocate an alternative way of establishing how "well-powered" a study is. In our approach, useful for studies with multiple tests, the ranking probability P(u, m) is controlled, defined as the probability of making at least u correct rejections while rejecting the hypotheses with the m smallest P-values. The two approaches are statistically related. The probability that the smallest P-value is a true signal (i.e., P(1, 1)) is equal to the power at a suitably adjusted α-level, to a very good approximation. Ranking probabilities are also related to the false discovery rate and to the Bayesian posterior probability of the null hypothesis. We study properties of our approach when the effect size distribution is replaced for convenience by a single "typical" value taken to be the mean of the underlying distribution. We conclude that its performance is often satisfactory under this simplification; however, substantial imprecision is to be expected when the variance of the effect size distribution is very large and its mean is small. Precision is largely restored when three values with the respective abundances are used instead of a single typical effect size value.
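
    A Monte Carlo sketch contrasting the two notions of power discussed above, under assumed toy settings (the number of tests, number of true signals, and effect shift are all invented, and the notation follows the reconstruction above):

        import numpy as np
        from scipy.stats import norm

        # Estimate two quantities the abstract contrasts: the traditional
        # power (at least one correct rejection at the Bonferroni level) and
        # the ranking probability P(1,1) (the smallest P-value is a true
        # signal). All settings are illustrative, not the paper's.
        rng = np.random.default_rng(1)
        L, n_true, shift = 1000, 10, 3.5
        alpha, n_sim = 0.05, 20000

        rank_hits = 0     # smallest P-value belongs to a true signal
        trad_hits = 0     # at least one correct rejection at 0.05/L
        p_bonf = alpha / L
        for _ in range(n_sim):
            z = rng.standard_normal(L)
            z[:n_true] += shift                 # true signals get a mean shift
            p = 2 * norm.sf(np.abs(z))          # two-sided P-values
            rank_hits += p.argmin() < n_true
            trad_hits += (p[:n_true] < p_bonf).any()

        print(f"P(1,1)            ~= {rank_hits / n_sim:.3f}")
        print(f"traditional power ~= {trad_hits / n_sim:.3f}")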

  12. SU-E-T-624: Portal Dosimetry Commissioning of Multiple (6) Varian TrueBeam Linacs Equipped with PortalVision DMI MV Imager

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weldon, M; DiCostanzo, D; Grzetic, S

    2015-06-15

    Purpose: To show that a single model for Portal Dosimetry (PD) can be established for beam-matched TrueBeam™ linacs that are equipped with the DMI imager (43×43 cm effective area). Methods: Our department acquired 6 new TrueBeam™s, 4 “Slim” and 2 “Edge” models. The Slims were equipped with 6 and 10 MV photons, and the Edges with 6 MV. MLCs differed between the Slims and Edges (Millennium 120 vs HD-MLC, respectively). The PD model was created from data acquired using a single linac (Slim). This includes the maximum field size profile, as well as output factors and measured fluence acquired using the DMI imager. All identical linacs were beam-matched; profiles were within 1% at maximum field size at a variety of depths. The profile correction file was generated from a 40×40 profile acquired at 5 cm depth, 95 cm SSD, and was adjusted for deviation at the field edges and corners. The PD model and profile correction were applied to all six TrueBeam™s and imagers. A variety of jaw-only and sliding window (SW) MLC test fields, as well as TG-119 and clinical SW and VMAT plans, were run on each linac to validate the model. Results: For 6X and 10X, field-by-field comparison using 3 mm/3% absolute gamma criteria passed 90% or better for all cases. This was also true for composite comparisons of TG-119 and clinical plans, matching our current department criteria. Conclusion: Using a single model per photon energy for PD for the TrueBeam™ equipped with a DMI imager can produce clinically acceptable results across multiple identical and matched linacs. It is also possible to use the same PD model despite different MLCs. This can save time during commissioning and software updates.

  13. Treatment of true precocious puberty with a potent luteinizing hormone-releasing factor agonist: effect on growth, sexual maturation, pelvic sonography, and the hypothalamic-pituitary-gonadal axis.

    PubMed

    Styne, D M; Harris, D A; Egli, C A; Conte, F A; Kaplan, S L; Rivier, J; Vale, W; Grumbach, M M

    1985-07-01

    We used the LHRH agonist D-Trp6-Pro9-N-ethylamide LHRH (LHRH-A) to treat 19 children (12 girls and 7 boys) with true precocious puberty. Fourteen patients had idiopathic true precocious puberty, 4 had a hamartoma of the tuber cinereum, and 1 had a hypothalamic astrocytoma. Basal gonadotropin secretion and responses to native LHRH decreased within 1 week of initiation of LHRH-A therapy, and sex steroid secretion decreased within 2 weeks to or within the prepubertal range. Ultrasonographic evaluation of the uterus indicated a postmenarchal size and shape in all 11 girls studied before treatment, which reverted to prepubertal size and configuration in 5 girls during LHRH-A therapy. The enlarged ovaries decreased in size and the multiple ovarian follicular cysts regressed. Sexual characteristics ceased advancing or reverted toward the prepubertal state in all patients receiving therapy for 6-36 months. All 5 girls with menarche before therapy had no further menses. Three girls had hot flashes after LHRH-A-induced reduction of the plasma estradiol concentration. Height velocity, SDs above the mean height velocity for age, and SDs above the mean height for age decreased during LHRH-A therapy; the velocity of skeletal maturation decreased after 12 months of LHRH-A therapy and was sustained during continued therapy over 18-36 months. In 4 patients, a subnormal growth rate (less than 4.5 cm/yr) occurred during LHRH-A therapy. Six patients had cutaneous reactions to LHRH-A, but no demonstrable circulating antibodies to LHRH-A. In 2 patients in whom LHRH-A therapy was discontinued because of skin reactions, precocious sexual maturation resumed at the previous rate for the ensuing 6-12 months; subsequently, they were desensitized to LHRH-A, and during a second course of therapy, their secondary sexual development and sex steroid levels again quickly decreased. LHRH-A proved an effective and safe treatment in boys as well as girls with central precocious puberty, whether of the idiopathic type or secondary to a hamartoma of the tuber cinereum or a hypothalamic neoplasm.

  14. Simultaneous sequential monitoring of efficacy and safety led to masking of effects.

    PubMed

    van Eekelen, Rik; de Hoop, Esther; van der Tweel, Ingeborg

    2016-08-01

    Usually, sequential designs for clinical trials are applied to the primary (efficacy) outcome. In practice, other outcomes (e.g., safety) will also be monitored and will influence the decision whether to stop a trial early. The implications of simultaneous monitoring for trial decision making are as yet unclear. This study examines what happens to the type I error, power, and required sample sizes when one efficacy outcome and one correlated safety outcome are monitored simultaneously using sequential designs. We conducted a simulation study in the framework of a two-arm parallel clinical trial. Interim analyses on two outcomes were performed independently and simultaneously on the same data sets using four sequential monitoring designs, including O'Brien-Fleming and Triangular Test boundaries. Simulations differed in values for correlations and true effect sizes. When an effect was present in both outcomes, competition was introduced, which decreased power (e.g., from 80% to 60%). Futility boundaries for the efficacy outcome reduced the overall type I error as well as the power for the safety outcome. Monitoring two correlated outcomes, given that both are essential for early trial termination, leads to masking of true effects. Careful consideration of scenarios must be taken into account when designing sequential trials. Simulation results can help guide trial design. Copyright © 2016 Elsevier Inc. All rights reserved.
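
    A stripped-down simulation of the masking effect can be built from a single interim look. This sketch is an illustration under assumed settings (effect sizes, correlation, and a naive "stop if the interim efficacy z is negative" futility rule), not a reconstruction of the paper's four designs:

        import numpy as np
        from scipy.stats import norm

        # Two-arm trial, two correlated standardized outcomes per subject
        # (efficacy, safety). Safety is tested at the final analysis; an
        # interim efficacy futility rule can stop the trial at half the data.
        rng = np.random.default_rng(7)
        n_per_arm, rho = 100, 0.5          # final per-arm n, outcome correlation
        d_eff, d_saf = 0.3, 0.3            # assumed true standardized effects
        n_sim, z_crit = 10000, norm.isf(0.025)

        cov = np.array([[1.0, rho], [rho, 1.0]])
        detected_plain = detected_futility = 0
        for _ in range(n_sim):
            trt = rng.multivariate_normal([d_eff, d_saf], cov, size=n_per_arm)
            ctl = rng.multivariate_normal([0.0, 0.0], cov, size=n_per_arm)
            half = n_per_arm // 2

            # final safety z-statistic (unit variances assumed known)
            z_saf = (trt[:, 1].mean() - ctl[:, 1].mean()) / np.sqrt(2.0 / n_per_arm)
            if abs(z_saf) > z_crit:
                detected_plain += 1
                # interim efficacy z on the first half of the data
                z_eff = (trt[:half, 0].mean() - ctl[:half, 0].mean()) / np.sqrt(2.0 / half)
                if z_eff >= 0:             # trial was not stopped for futility
                    detected_futility += 1

        # The gap between the two estimates is the safety power lost to
        # futility stopping on the efficacy outcome.
        print(f"power for safety, no stopping rule : {detected_plain / n_sim:.3f}")
        print(f"power for safety, efficacy futility: {detected_futility / n_sim:.3f}")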

  15. Influence of diet pellet hardness and particle size on food utilization by mice, rats and hamsters.

    PubMed

    Ford, D J

    1977-10-01

    Increasing the hardness of diet pellets reduced food wastage by each species. Also, less wastage occurred when pellets made from finely ground materials were given, an effect that was not related to hardness. The hardest diet reduced growth of the mice by reducing true food consumption, and a poorer food conversion efficiency (true food consumption/growth) was obtained. Apparent food consumption increased with the softness of the diet, and food utilization (apparent food consumption/growth) of the softest diets was less efficient than that of the others. Grinding of the raw materials prior to pelleting had no effect on food conversion, but food utilization was less efficient because of the greater wastage of pellets made from coarsely ground materials and the consequent higher apparent food consumption.

  16. SU-F-E-19: A Novel Method for TrueBeam Jaw Calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Corns, R; Zhao, Y; Huang, V

    2016-06-15

    Purpose: A simple jaw calibration method is proposed for the Varian TrueBeam using an EPID-encoder combination that gives accurate field sizes and a homogeneous junction dose. This benefits clinical applications such as mono-isocentric half-beam block breast cancer or head and neck cancer treatment with junction/field matching. Methods: We use the EPID imager with pixel size 0.392 mm × 0.392 mm to determine the radiation jaw position as measured from radio-opaque markers aligned with the crosshair. We acquire two images with different symmetric field sizes and record each individual jaw's encoder value. A linear relationship between each jaw's position and its encoder value is established, from which we predict the encoder values that produce the jaw positions required by TrueBeam's calibration procedure. During TrueBeam's jaw calibration procedure, we move the jaw with the pendant to set the jaw into position using the predicted encoder value. The overall accuracy is under 0.1 mm. Results: Our in-house software analyses images and provides sub-pixel accuracy to determine the field centre and radiation edges (50% dose of the profile). We verified that the TrueBeam encoder provides a reliable linear relationship for each individual jaw position (R^2 > 0.9999), from which the encoder values necessary to set the jaw calibration points (1 cm and 19 cm) are predicted. Junction matching dose inhomogeneities were improved from >±20% to <±6% using this new calibration protocol. However, one technical challenge exists for junction matching if the collimator walkout is large. Conclusion: Our new TrueBeam jaw calibration method can systematically calibrate the jaws to the crosshair within sub-pixel accuracy and provides both good junction doses and field sizes. This method does not compensate for a larger collimator walkout, but can be used as the underlying foundation for addressing the walkout issue.

  17. Characteristics of Specialists' Adaptation to the Conditions of a Society in Transformation

    ERIC Educational Resources Information Center

    Nizamova, A. E.

    2012-01-01

    The declining size of the Russian population means that labor resources need to be used more effectively, which is especially true of highly skilled workers. An analysis of data from the Russian Longitudinal Monitoring Survey (RLMS) shows that the social and workplace situations of specialists need to be improved significantly. (Contains 5 tables…

  18. Why Most Published Research Findings Are False

    PubMed Central

    Ioannidis, John P. A.

    2005-01-01

    Summary There is increasing concern that most current published research findings are false. The probability that a research claim is true may depend on study power and bias, the number of other studies on the same question, and, importantly, the ratio of true to no relationships among the relationships probed in each scientific field. In this framework, a research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; where there is greater flexibility in designs, definitions, outcomes, and analytical modes; when there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance. Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias. In this essay, I discuss the implications of these problems for the conduct and interpretation of research. PMID:16060722
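
    The essay's central quantity, the post-study probability that a claimed finding is true (PPV), is straightforward to compute from study power, the α-level, and the pre-study odds R; the sketch below uses the no-bias form of the formula:

        # PPV = (1 - beta) * R / (R + alpha - beta * R), with beta = 1 - power,
        # which simplifies to power * R / (power * R + alpha).
        def ppv(power, alpha, R):
            return power * R / (power * R + alpha)

        for R in (1.0, 0.1, 0.01):         # pre-study odds of a true relation
            for power in (0.8, 0.2):
                print(f"R={R:5.2f}, power={power}: PPV={ppv(power, 0.05, R):.2f}")
        # A claimed finding is more likely true than false only when
        # power * R exceeds alpha.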

  19. Another Look at the Demand for Higher Education: Measuring the Price Sensitivity of the Decision to Apply to College.

    ERIC Educational Resources Information Center

    Savoca, Elizabeth

    1990-01-01

    Using data from National Longitudinal Survey of the High School Class of 1972, this paper presents estimates of the price elasticity of the decision to apply to college. Calculations incorporating this price effect into earlier enrollment elasticity estimates suggest that true elasticity may be double the size reported in the literature. Includes…

  20. The Heuristic Value of p in Inductive Statistical Inference

    PubMed Central

    Krueger, Joachim I.; Heck, Patrick R.

    2017-01-01

    Many statistical methods yield the probability of the observed data – or data more extreme – under the assumption that a particular hypothesis is true. This probability is commonly known as ‘the’ p-value. (Null Hypothesis) Significance Testing ([NH]ST) is the most prominent of these methods. The p-value has been subjected to much speculation, analysis, and criticism. We explore how well the p-value predicts what researchers presumably seek: the probability of the hypothesis being true given the evidence, and the probability of reproducing significant results. We also explore the effect of sample size on inferential accuracy, bias, and error. In a series of simulation experiments, we find that the p-value performs quite well as a heuristic cue in inductive inference, although there are identifiable limits to its usefulness. We conclude that despite its general usefulness, the p-value cannot bear the full burden of inductive inference; it is but one of several heuristic cues available to the data analyst. Depending on the inferential challenge at hand, investigators may supplement their reports with effect size estimates, Bayes factors, or other suitable statistics, to communicate what they think the data say. PMID:28649206
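
    The flavor of these simulation experiments can be captured in a few lines: studies are generated with a prior probability that the alternative is true, and one asks how often a significant p-value corresponds to a real effect. The prior, effect size, and sample size below are illustrative assumptions:

        import numpy as np
        from scipy.stats import ttest_ind

        # In each simulated "study" the alternative is true with prior
        # probability pi_h1; we then estimate P(effect is real | p < 0.05).
        rng = np.random.default_rng(0)
        pi_h1, d, n, n_sim = 0.5, 0.5, 30, 5000

        sig_true = sig_total = 0
        for _ in range(n_sim):
            h1 = rng.random() < pi_h1
            x = rng.standard_normal(n) + (d if h1 else 0.0)
            y = rng.standard_normal(n)
            if ttest_ind(x, y).pvalue < 0.05:
                sig_total += 1
                sig_true += h1
        print(f"P(effect is real | p < 0.05) ~= {sig_true / sig_total:.2f}")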

  1. The Heuristic Value of p in Inductive Statistical Inference.

    PubMed

    Krueger, Joachim I; Heck, Patrick R

    2017-01-01

    Many statistical methods yield the probability of the observed data - or data more extreme - under the assumption that a particular hypothesis is true. This probability is commonly known as 'the' p-value. (Null Hypothesis) Significance Testing ([NH]ST) is the most prominent of these methods. The p-value has been subjected to much speculation, analysis, and criticism. We explore how well the p-value predicts what researchers presumably seek: the probability of the hypothesis being true given the evidence, and the probability of reproducing significant results. We also explore the effect of sample size on inferential accuracy, bias, and error. In a series of simulation experiments, we find that the p-value performs quite well as a heuristic cue in inductive inference, although there are identifiable limits to its usefulness. We conclude that despite its general usefulness, the p-value cannot bear the full burden of inductive inference; it is but one of several heuristic cues available to the data analyst. Depending on the inferential challenge at hand, investigators may supplement their reports with effect size estimates, Bayes factors, or other suitable statistics, to communicate what they think the data say.

  2. Sizing gaseous emboli using Doppler embolic signal intensity.

    PubMed

    Banahan, Caroline; Hague, James P; Evans, David H; Patel, Rizwan; Ramnarine, Kumar V; Chung, Emma M L

    2012-05-01

    Extension of transcranial Doppler embolus detection to estimation of bubble size has historically been hindered by difficulties in applying scattering theory to the interpretation of clinical data. This article presents a simplified approach to the sizing of air emboli based on analysis of Doppler embolic signal intensity, by using an approximation to the full scattering theory that can be solved to estimate embolus size. Tests using simulated emboli show that our algorithm is theoretically capable of sizing 90% of "emboli" to within 10% of their true radius. In vitro tests show that 69% of emboli can be sized to within 20% of their true value under ideal conditions, which reduces to 30% of emboli if the beam and vessel are severely misaligned. Our results demonstrate that estimation of bubble size during clinical monitoring could be used to distinguish benign microbubbles from potentially harmful macrobubbles during intraoperative clinical monitoring. Copyright © 2012 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.

  3. The Effects of Size and Type of Vocal Fold Polyp on Some Acoustic Voice Parameters.

    PubMed

    Akbari, Elaheh; Seifpanahi, Sadegh; Ghorbani, Ali; Izadi, Farzad; Torabinezhad, Farhad

    2018-03-01

    Vocal abuse and misuse can result in vocal fold polyps. Certain features define the extent of the effects of vocal fold polyps on acoustic voice parameters. The present study aimed to define the effects of polyp size on acoustic voice parameters, and to compare these parameters in hemorrhagic and non-hemorrhagic polyps. In this retrospective study, 28 individuals with hemorrhagic or non-hemorrhagic polyps of the true vocal folds were recruited to investigate acoustic voice parameters of the vowel /æ/ computed by the Praat software. The data were analyzed using the SPSS software, version 17.0. According to the type and size of polyps, mean acoustic differences and correlations were analyzed with the statistical t test and the Pearson correlation test, respectively, with the significance level set below 0.05. The results indicated that jitter and the harmonics-to-noise ratio had significant positive and negative correlations with polyp size (P=0.01), respectively. In addition, both of the mentioned parameters were significantly different between the two types of polyps investigated. Both the type and size of polyps have effects on acoustic voice characteristics. In the present study, a novel method to measure polyp size was introduced. Further confirmation of this method as a tool to compare polyp sizes requires additional investigation.

  4. The Effects of Size and Type of Vocal Fold Polyp on Some Acoustic Voice Parameters

    PubMed Central

    Akbari, Elaheh; Seifpanahi, Sadegh; Ghorbani, Ali; Izadi, Farzad; Torabinezhad, Farhad

    2018-01-01

    Background Vocal abuse and misuse can result in vocal fold polyps. Certain features define the extent of the effects of vocal fold polyps on acoustic voice parameters. The present study aimed to define the effects of polyp size on acoustic voice parameters, and to compare these parameters in hemorrhagic and non-hemorrhagic polyps. Methods In this retrospective study, 28 individuals with hemorrhagic or non-hemorrhagic polyps of the true vocal folds were recruited to investigate acoustic voice parameters of the vowel /æ/ computed by the Praat software. The data were analyzed using the SPSS software, version 17.0. According to the type and size of polyps, mean acoustic differences and correlations were analyzed with the statistical t test and the Pearson correlation test, respectively, with the significance level set below 0.05. Results The results indicated that jitter and the harmonics-to-noise ratio had significant positive and negative correlations with polyp size (P=0.01), respectively. In addition, both of the mentioned parameters were significantly different between the two types of polyps investigated. Conclusion Both the type and size of polyps have effects on acoustic voice characteristics. In the present study, a novel method to measure polyp size was introduced. Further confirmation of this method as a tool to compare polyp sizes requires additional investigation. PMID:29749984

  5. EXTENDING MULTIVARIATE DISTANCE MATRIX REGRESSION WITH AN EFFECT SIZE MEASURE AND THE ASYMPTOTIC NULL DISTRIBUTION OF THE TEST STATISTIC

    PubMed Central

    McArtor, Daniel B.; Lubke, Gitta H.; Bergeman, C. S.

    2017-01-01

    Person-centered methods are useful for studying individual differences in terms of (dis)similarities between response profiles on multivariate outcomes. Multivariate distance matrix regression (MDMR) tests the significance of associations of response profile (dis)similarities and a set of predictors using permutation tests. This paper extends MDMR by deriving and empirically validating the asymptotic null distribution of its test statistic, and by proposing an effect size for individual outcome variables, which is shown to recover true associations. These extensions alleviate the computational burden of permutation tests currently used in MDMR and render more informative results, thus making MDMR accessible to new research domains. PMID:27738957

  6. Extending multivariate distance matrix regression with an effect size measure and the asymptotic null distribution of the test statistic.

    PubMed

    McArtor, Daniel B; Lubke, Gitta H; Bergeman, C S

    2017-12-01

    Person-centered methods are useful for studying individual differences in terms of (dis)similarities between response profiles on multivariate outcomes. Multivariate distance matrix regression (MDMR) tests the significance of associations of response profile (dis)similarities and a set of predictors using permutation tests. This paper extends MDMR by deriving and empirically validating the asymptotic null distribution of its test statistic, and by proposing an effect size for individual outcome variables, which is shown to recover true associations. These extensions alleviate the computational burden of permutation tests currently used in MDMR and render more informative results, thus making MDMR accessible to new research domains.
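
    For orientation, a compact sketch of the core MDMR computation (a McArdle-Anderson-style pseudo-F with a permutation test on Euclidean distances); it illustrates the general method the paper builds on, not the asymptotic null distribution or effect size measure derived there:

        import numpy as np

        def mdmr_pseudo_f(D, X):
            n = D.shape[0]
            C = np.eye(n) - np.ones((n, n)) / n        # Gower centering matrix
            G = -0.5 * C @ (D ** 2) @ C                # centered inner-product matrix
            Xd = np.column_stack([np.ones(n), X])      # add intercept
            H = Xd @ np.linalg.pinv(Xd.T @ Xd) @ Xd.T  # hat matrix
            m = Xd.shape[1] - 1
            num = np.trace(H @ G @ H) / m
            den = np.trace((np.eye(n) - H) @ G @ (np.eye(n) - H)) / (n - m - 1)
            return num / den

        # Toy data: one predictor, five outcomes each related to it.
        rng = np.random.default_rng(3)
        n = 60
        X = rng.standard_normal(n)
        Y = np.column_stack([X + rng.standard_normal(n) for _ in range(5)])
        D = np.sqrt(((Y[:, None, :] - Y[None, :, :]) ** 2).sum(-1))  # Euclidean

        f_obs = mdmr_pseudo_f(D, X)
        perms = np.array([mdmr_pseudo_f(D, rng.permutation(X)) for _ in range(499)])
        p_perm = (1 + (perms >= f_obs).sum()) / 500
        print(f"pseudo-F = {f_obs:.2f}, permutation p = {p_perm:.3f}")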

  7. Influence of Co-57 and CT Transmission Measurements on the Quantification Accuracy and Partial Volume Effect of a Small Animal PET Scanner.

    PubMed

    Mannheim, Julia G; Schmid, Andreas M; Pichler, Bernd J

    2017-12-01

    Non-invasive in vivo positron emission tomography (PET) provides high detection sensitivity in the nano- to picomolar range and, in addition to other advantages, the possibility to absolutely quantify the acquired data. The present study focuses on the comparison of transmission data acquired with an X-ray computed tomography (CT) scanner or a Co-57 source for the Inveon small animal PET scanner (Siemens Healthcare, Knoxville, TN, USA), and determines their influences on the quantification accuracy and partial volume effect (PVE). A special focus was the impact of the performed calibration on the quantification accuracy. Phantom measurements were carried out to determine the quantification accuracy, the influence of the object size on the quantification, and the PVE for different sphere sizes, along the field of view and for different contrast ratios. An influence of the emission activity on the Co-57 transmission measurements was discovered (deviations of measured from true activity of up to 24.06%), whereas no influence of the emission activity on the CT attenuation correction was identified (deviations of measured from true activity <3%). The quantification accuracy was substantially influenced by the applied calibration factor and by the object size. The PVE demonstrated a dependency on the sphere size, the position within the field of view, the reconstruction and correction algorithms, and the count statistics. Depending on the reconstruction algorithm, only ∼30-40% of the true activity within a small sphere could be resolved. The iterative 3D reconstruction algorithms yielded substantially higher recovery values than the analytical and 2D iterative reconstruction algorithms (up to 70.46% and 80.82% recovery for the smallest and largest sphere using iterative 3D reconstruction algorithms). The transmission measurement (CT or Co-57 source) used to correct for attenuation did not severely influence the PVE. The analysis of the quantification accuracy and the PVE revealed an influence of the object size, the reconstruction algorithm and the applied corrections. In particular, the influence of the emission activity during a transmission measurement performed with a Co-57 source must be considered. To obtain comparable results, also among different scanner configurations, standardization of the acquisition (imaging parameters, as well as applied reconstruction and correction protocols) is necessary.

  8. Quantitative imaging biomarkers: Effect of sample size and bias on confidence interval coverage.

    PubMed

    Obuchowski, Nancy A; Bullen, Jennifer

    2017-01-01

    Introduction Quantitative imaging biomarkers (QIBs) are being increasingly used in medical practice and clinical trials. An essential first step in the adoption of a quantitative imaging biomarker is the characterization of its technical performance, i.e. precision and bias, through one or more performance studies. Then, given the technical performance, a confidence interval for a new patient's true biomarker value can be constructed. Estimating bias and precision can be problematic because rarely are both estimated in the same study, precision studies are usually quite small, and bias cannot be measured when there is no reference standard. Methods A Monte Carlo simulation study was conducted to assess factors affecting nominal coverage of confidence intervals for a new patient's quantitative imaging biomarker measurement and for change in the quantitative imaging biomarker over time. Factors considered include sample size for estimating bias and precision, effect of fixed and non-proportional bias, clustered data, and absence of a reference standard. Results Technical performance studies of a quantitative imaging biomarker should include at least 35 test-retest subjects to estimate precision and 65 cases to estimate bias. Confidence intervals for a new patient's quantitative imaging biomarker measurement constructed under the no-bias assumption provide nominal coverage as long as the fixed bias is <12%. For confidence intervals of the true change over time, linearity must hold and the slope of the regression of the measurements vs. true values should be between 0.95 and 1.05. The regression slope can be assessed adequately as long as fixed multiples of the measurand can be generated. Even small non-proportional bias greatly reduces confidence interval coverage. Multiple lesions in the same subject can be treated as independent when estimating precision. Conclusion Technical performance studies of quantitative imaging biomarkers require moderate sample sizes in order to provide robust estimates of bias and precision for constructing confidence intervals for new patients. Assumptions of linearity and non-proportional bias should be assessed thoroughly.
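
    A sketch of the confidence intervals discussed above, assuming technical performance has already been estimated (negligible bias, slope near 1, and a within-subject SD from a test-retest study); all numbers are illustrative:

        import math

        wsd = 1.8            # within-subject SD from a test-retest study
        y1, y2 = 42.0, 37.5  # a patient's measurements at two time points

        # 95% CI for the patient's true value at time 1 (no-bias assumption):
        ci_val = (y1 - 1.96 * wsd, y1 + 1.96 * wsd)

        # 95% CI for the true change over time; the difference of two
        # independent measurements has SD sqrt(2) * wSD:
        delta = y2 - y1
        half = 1.96 * math.sqrt(2) * wsd
        ci_change = (delta - half, delta + half)

        print(f"true value CI : {ci_val[0]:.1f} to {ci_val[1]:.1f}")
        print(f"true change CI: {ci_change[0]:.1f} to {ci_change[1]:.1f}")
        # If the change CI includes 0 (here it does), the observed change
        # cannot be distinguished from measurement variability.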

  9. Distinguishing between statistical significance and practical/clinical meaningfulness using statistical inference.

    PubMed

    Wilkinson, Michael

    2014-03-01

    Decisions about support for predictions of theories in light of data are made using statistical inference. The dominant approach in sport and exercise science is the Neyman-Pearson (N-P) significance-testing approach. When applied correctly it provides a reliable procedure for making dichotomous decisions for accepting or rejecting zero-effect null hypotheses with known and controlled long-run error rates. Type I and type II error rates must be specified in advance and the latter controlled by conducting an a priori sample size calculation. The N-P approach does not provide the probability of hypotheses or indicate the strength of support for hypotheses in light of data, yet many scientists believe it does. Outcomes of analyses allow conclusions only about the existence of non-zero effects, and provide no information about the likely size of true effects or their practical/clinical value. Bayesian inference can show how much support data provide for different hypotheses, and how personal convictions should be altered in light of data, but the approach is complicated by formulating probability distributions about prior subjective estimates of population effects. A pragmatic solution is magnitude-based inference, which allows scientists to estimate the true magnitude of population effects and how likely they are to exceed an effect magnitude of practical/clinical importance, thereby integrating elements of subjective Bayesian-style thinking. While this approach is gaining acceptance, progress might be hastened if scientists appreciate the shortcomings of traditional N-P null hypothesis significance testing.
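
    The magnitude-based reasoning described above can be sketched in a few lines: with a flat prior, the true effect is treated as normally distributed around the estimate, and one asks how likely it is to exceed a smallest practically important value. The numbers are illustrative:

        from scipy.stats import norm

        estimate, se, smallest_important = 1.2, 0.6, 0.5

        # Probability the true effect exceeds the smallest important value
        # in either direction, under a Normal(estimate, se) distribution.
        p_beneficial = norm.sf(smallest_important, loc=estimate, scale=se)
        p_harmful = norm.cdf(-smallest_important, loc=estimate, scale=se)
        print(f"P(effect > +{smallest_important}) = {p_beneficial:.2f}")
        print(f"P(effect < -{smallest_important}) = {p_harmful:.3f}")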

  10. An effect size filter improves the reproducibility in spectral counting-based comparative proteomics.

    PubMed

    Gregori, Josep; Villarreal, Laura; Sánchez, Alex; Baselga, José; Villanueva, Josep

    2013-12-16

    The microarray community has shown that the low reproducibility observed in gene expression-based biomarker discovery studies is partially due to relying solely on p-values to get the lists of differentially expressed genes. Their conclusions recommended complementing the p-value cutoff with the use of effect-size criteria. The aim of this work was to evaluate the influence of such an effect-size filter on spectral counting-based comparative proteomic analysis. The results proved that the filter increased the number of true positives and decreased the number of false positives and the false discovery rate of the dataset. These results were confirmed by simulation experiments where the effect size filter was used to systematically evaluate variable fractions of differentially expressed proteins. Our results suggest that relaxing the p-value cut-off followed by a post-test filter based on effect size and signal level thresholds can increase the reproducibility of statistical results obtained in comparative proteomic analysis. Based on our work, we recommend using a filter consisting of a minimum absolute log2 fold change of 0.8 and a minimum signal of 2-4 SpC on the most abundant condition for the general practice of comparative proteomics. The implementation of feature filtering approaches could improve proteomic biomarker discovery initiatives by increasing the reproducibility of the results obtained among independent laboratories and MS platforms. Quality control analysis of microarray-based gene expression studies pointed out that the low reproducibility observed in the lists of differentially expressed genes could be partially attributed to the fact that these lists are generated relying solely on p-values. Our study has established that the implementation of an effect size post-test filter improves the statistical results of spectral count-based quantitative proteomics. The results proved that the filter increased the number of true positives while decreasing the false positives and the false discovery rate of the datasets. The results presented here prove that a post-test filter applying reasonable effect size and signal level thresholds helps to increase the reproducibility of statistical results in comparative proteomic analysis. Furthermore, the implementation of feature filtering approaches could improve proteomic biomarker discovery initiatives by increasing the reproducibility of results obtained among independent laboratories and MS platforms. This article is part of a Special Issue entitled: Standardization and Quality Control in Proteomics. Copyright © 2013 Elsevier B.V. All rights reserved.
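
    The recommended filter is straightforward to apply to a table of per-protein statistics; in this sketch the arrays are placeholders for real spectral-counting results:

        import numpy as np

        # Keep proteins that pass a (possibly relaxed) p-value cutoff AND show
        # |log2 fold change| >= 0.8 AND at least 2 spectral counts in the most
        # abundant condition, as recommended above.
        p_values = np.array([0.003, 0.04, 0.02, 0.0005])
        log2_fc = np.array([1.1, 0.3, -0.9, 2.0])
        spc_max = np.array([14.0, 25.0, 1.5, 6.0])  # mean SpC, most abundant condition

        keep = (p_values < 0.05) & (np.abs(log2_fc) >= 0.8) & (spc_max >= 2)
        print(f"proteins passing the effect-size filter: {np.flatnonzero(keep)}")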

  11. An effective rate equation approach to reaction kinetics in small volumes: theory and application to biochemical reactions in nonequilibrium steady-state conditions.

    PubMed

    Grima, R

    2010-07-21

    Chemical master equations provide a mathematical description of stochastic reaction kinetics in well-mixed conditions. They are a valid description over length scales that are larger than the reactive mean free path and thus describe kinetics in compartments of mesoscopic and macroscopic dimensions. The trajectories of the stochastic chemical processes described by the master equation can be ensemble-averaged to obtain the average number density of chemical species, i.e., the true concentration, at any spatial scale of interest. For macroscopic volumes, the true concentration is very well approximated by the solution of the corresponding deterministic and macroscopic rate equations, i.e., the macroscopic concentration. However, this equivalence breaks down for mesoscopic volumes. These deviations are particularly significant for open systems and cannot be calculated via the Fokker-Planck or linear-noise approximations of the master equation. We utilize the system-size expansion, including terms of order Ω^(-1/2), to derive a set of differential equations whose solution approximates the true concentration as given by the master equation. These equations are valid in any open or closed chemical reaction network and at both the mesoscopic and macroscopic scales. In the limit of large volumes, the effective mesoscopic rate equations become precisely equal to the conventional macroscopic rate equations. We compare the three formalisms of effective mesoscopic rate equations, conventional rate equations, and chemical master equations by applying them to several biochemical reaction systems (homodimeric and heterodimeric protein-protein interactions, series of sequential enzyme reactions, and positive feedback loops) in nonequilibrium steady-state conditions. In all cases, we find that the effective mesoscopic rate equations can predict very well the true concentration of a chemical species. This provides a useful method by which one can quickly determine the regions of parameter space in which there are maximum differences between the solutions of the master equation and the corresponding rate equations. We show that these differences depend sensitively on the Fano factors and on the inherent structure and topology of the chemical network. The theory of effective mesoscopic rate equations generalizes the conventional rate equations of physical chemistry to describe kinetics in systems of mesoscopic size such as biological cells.
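
    The mesoscopic/macroscopic discrepancy described above can be seen even in a toy open network, by comparing the ensemble-averaged stochastic simulation (plain Gillespie averaging, not the paper's EMRE construction) with the macroscopic rate-equation steady state; the network and rate constants are invented:

        import numpy as np

        # Toy open network: 0 -> A (rate k0*Omega), A -> 0 (k1),
        # A + A -> 0 (propensity k2*n*(n-1)/Omega). A small volume Omega
        # makes the deviation of the true mean concentration visible.
        rng = np.random.default_rng(4)
        k0, k1, k2, omega = 1.0, 0.1, 0.2, 10.0

        def ssa_mean(t_end=50.0, n_traj=400):
            totals = 0.0
            for _ in range(n_traj):
                n, t = 0, 0.0
                while True:
                    a = np.array([k0 * omega, k1 * n, k2 * n * (n - 1) / omega])
                    a0 = a.sum()
                    t += rng.exponential(1.0 / a0)
                    if t >= t_end:
                        break
                    n += (1, -1, -2)[rng.choice(3, p=a / a0)]
                totals += n
            return totals / n_traj / omega   # mean concentration at t_end

        # Macroscopic steady state of d(phi)/dt = k0 - k1*phi - 2*k2*phi^2:
        phi = np.roots([-2 * k2, -k1, k0]).max()
        print(f"macroscopic concentration: {phi:.3f}")
        print(f"SSA ensemble average     : {ssa_mean():.3f}")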

  12. Clinical performance of the TRUE2go blood glucose system--a novel integrated system for meter and strips.

    PubMed

    Kipnes, Mark S; Joseph, Hal; Morris, Harry; Manko, Jason; Bell, Douglas E

    2009-10-01

    The complications of diabetes may be minimized by adequate glycemic control, which is aided by self-monitoring of blood glucose (SMBG) levels. A new SMBG system, TRUE2go (Home Diagnostics, Inc., Fort Lauderdale, FL), does not require calibration of test strips, thereby eliminating the potential source of error in blood glucose determination associated with mis-calibration. This study tested the performance of the TRUE2go system. The very small size and attachment of the meter to a vial of test strips make the TRUE2go system unique. The studies were carried out with adult patients with type 1 or 2 diabetes, using procedures for testing accuracy as specified in International Organization for Standardization (ISO) 15197:2003. The evaluation included patients' compliance with the TRUE2go system's written instructions, ease of understanding the supplied instructions, and ease of use of the system. The study demonstrated the accuracy and precision of the TRUE2go system, with 100% of glucose test results falling within ISO-recommended limits for glucose concentrations ranging from 24 mg/dL to 549 mg/dL. There was agreement between data obtained with TRUE2go when used by healthcare professionals and by lay users on capillary blood from both fingertip and forearm sticks. Lay users' understanding of and compliance with TRUE2go system instructions were excellent, as was their satisfaction with the system. The TRUE2go system is accurate and convenient to use, and its instructions are easily understood by lay users. TRUE2go features that contribute to convenience, and therefore could improve compliance with monitoring regimens, include its small size, attachment to the vial of strips, easy-to-read display, automatic calibration for test strips, and suitability for fingertip as well as forearm testing.

  13. Performance of the Automated Self-Administered 24-hour Recall relative to a measure of true intakes and to an interviewer-administered 24-h recall.

    PubMed

    Kirkpatrick, Sharon I; Subar, Amy F; Douglass, Deirdre; Zimmerman, Thea P; Thompson, Frances E; Kahle, Lisa L; George, Stephanie M; Dodd, Kevin W; Potischman, Nancy

    2014-07-01

    The Automated Self-Administered 24-hour Recall (ASA24), a freely available Web-based tool, was developed to enhance the feasibility of collecting high-quality dietary intake data from large samples. The purpose of this study was to assess the criterion validity of ASA24 through a feeding study in which the true intake for 3 meals was known. True intake and plate waste from 3 meals were ascertained for 81 adults by inconspicuously weighing foods and beverages offered at a buffet before and after each participant served him- or herself. Participants were randomly assigned to complete an ASA24 or an interviewer-administered Automated Multiple-Pass Method (AMPM) recall the following day. With the use of linear and Poisson regression analysis, we examined the associations between recall mode and 1) the proportions of items consumed for which a match was reported and that were excluded, 2) the number of intrusions (items reported but not consumed), and 3) differences between energy, nutrient, food group, and portion size estimates based on true and reported intakes. Respondents completing ASA24 reported 80% of items truly consumed compared with 83% in AMPM (P = 0.07). For both ASA24 and AMPM, additions to or ingredients in multicomponent foods and drinks were more frequently omitted than were main foods or drinks. The number of intrusions was higher in ASA24 (P < 0.01). Little evidence of differences by recall mode was found in the gap between true and reported energy, nutrient, and food group intakes or portion sizes. Although the interviewer-administered AMPM performed somewhat better relative to true intakes for matches, exclusions, and intrusions, ASA24 performed well. Given the substantial cost savings that ASA24 offers, it has the potential to make important contributions to research aimed at describing the diets of populations, assessing the effect of interventions on diet, and elucidating diet and health relations. This trial was registered at clinicaltrials.gov as NCT00978406. © 2014 American Society for Nutrition.

  14. 36 CFR § 1004.11 - Load, weight and size limits.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    … designate more restrictive limits when appropriate for traffic safety or protection of the road surface. The… Title 36, Parks, Forests, and Public Property; TRAFFIC SAFETY; § 1004.11 Load, weight and size limits. (a) Vehicle load, weight and size limits…

  15. Beyond statistical inference: A decision theory for science

    PubMed Central

    KILLEEN, PETER R.

    2008-01-01

    Traditional null hypothesis significance testing does not yield the probability of the null or its alternative and, therefore, cannot logically ground scientific decisions. The decision theory proposed here calculates the expected utility of an effect on the basis of (1) the probability of replicating it and (2) a utility function on its size. It takes significance tests—which place all value on the replicability of an effect and none on its magnitude—as a special case, one in which the cost of a false positive is revealed to be an order of magnitude greater than the value of a true positive. More realistic utility functions credit both replicability and effect size, integrating them for a single index of merit. The analysis incorporates opportunity cost and is consistent with alternate measures of effect size, such as r2 and information transmission, and with Bayesian model selection criteria. An alternate formulation is functionally equivalent to the formal theory, transparent, and easy to compute. PMID:17201351

  16. Beyond statistical inference: a decision theory for science.

    PubMed

    Killeen, Peter R

    2006-08-01

    Traditional null hypothesis significance testing does not yield the probability of the null or its alternative and, therefore, cannot logically ground scientific decisions. The decision theory proposed here calculates the expected utility of an effect on the basis of (1) the probability of replicating it and (2) a utility function on its size. It takes significance tests--which place all value on the replicability of an effect and none on its magnitude--as a special case, one in which the cost of a false positive is revealed to be an order of magnitude greater than the value of a true positive. More realistic utility functions credit both replicability and effect size, integrating them for a single index of merit. The analysis incorporates opportunity cost and is consistent with alternate measures of effect size, such as r2 and information transmission, and with Bayesian model selection criteria. An alternate formulation is functionally equivalent to the formal theory, transparent, and easy to compute.

  17. Understanding Crystal Populations; Looking Towards 3D Quantitative Analysis

    NASA Astrophysics Data System (ADS)

    Jerram, D. A.; Morgan, D. J.

    2010-12-01

    In order to understand volcanic systems, the potential record held within crystal populations needs to be revealed. It is becoming increasingly clear, however, that the crystal populations that arrive at the surface in volcanic eruptions are commonly mixtures of crystals, which may be representative of simple crystallization, recycling of crystals and incorporation of alien crystals. If we can quantify the true 3D population within a sample then we will be able to separate crystals with different histories and begin to interrogate the true and complex plumbing within the volcanic system. Modeling crystal populations is one area where we can investigate the best methodologies to use when dealing with sections through 3D populations. By producing known 3D shapes and sizes with virtual textures and looking at the statistics of shape and size when such populations are sectioned, we are able to gain confidence about what our 2D information is telling us about the population. We can also use this approach to test the size of population we need to analyze. 3D imaging, through serial sectioning or X-ray CT, provides a complete 3D quantification of a rock's texture. Individual phases can be identified and in principle the true 3D statistics of the population can be interrogated. In practice we need to develop strategies (as with 2D-3D transformations) that enable a true characterization of the 3D data, and an understanding of the errors and pitfalls that exist. Ultimately, the reproduction of true 3D textures and the wealth of information they hold is now within our reach.
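
    The sectioning problem described above (Wicksell's problem) is easy to reproduce numerically: random planar sections through spheres of a single true radius already yield a broad distribution of smaller apparent radii:

        import numpy as np

        # Random planar sections through spheres of true radius R yield
        # circle radii r = sqrt(R^2 - z^2), where z is the distance from
        # the sphere centre to the section plane.
        rng = np.random.default_rng(2)
        R = 1.0
        z = rng.uniform(0.0, R, 100000)      # uniform random section offsets
        r = np.sqrt(R**2 - z**2)             # apparent 2D radii

        print(f"true 3D radius          : {R:.3f}")
        print(f"mean apparent 2D radius : {r.mean():.3f}")   # pi/4 ~ 0.785
        print(f"fraction appearing <R/2 : {(r < R / 2).mean():.3f}")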

  18. MIS - The Human Connection

    PubMed Central

    Bush, Ian E.

    1980-01-01

    The lessons of the 70's with MIS were largely painful, often the same as those of the 60's, and were found in different phases on two continents. On examination this turns out to be true for many non-medical fields, true for systems programming, and thus a very general phenomenon. It is related to the functional complexity rather than to the sheer size of the software required, and above all to the relative neglect of human factors at all levels of software and hardware design. Simple hierarchical theory is a useful tool for analyzing complex systems and restoring the necessary dominance of common sense human factors. An example shows the very large effects of neglecting these factors on costs and benefits of MIS and their sub-systems.

  19. Correlation of microstructure, tensile properties and hole expansion ratio in cold rolled advanced high strength steels

    NASA Astrophysics Data System (ADS)

    Terrazas, Oscar R.

    The demand for advanced high strength steels (AHSS) with higher strengths is increasing in the automotive industry. While there have been major improvements recently in the trade-off between ductility and strength, sheared-edge formability of AHSS remains a critical issue. AHSS sheets exhibit cracking during stamping and forming operations below the predictions of forming limits. It has become important to understand the correlation between microstructure and sheared-edge formability. The present work investigates the effects of shearing conditions, microstructure, and tensile properties on sheared-edge formability. Seven commercially produced steels with tensile strengths of 1000 +/- 100 MPa were evaluated: five dual-phase (DP) steels with different compositions and varying microstructural features, one TRIP-aided bainitic ferrite (TBF) steel, and one press-hardened steel tempered to a tensile strength within the desired range. It was found that sheared-edge formability is influenced by the martensite in DP steels. Quantitative stereology measurements showed that martensite size and distribution affect the hole expansion ratio (HER). The overall trend is that HER increases with more evenly dispersed martensite throughout the microstructure. This microstructure involves a combination of martensite size, contiguity, mean free distance, and number of colonies per unit area. Additionally, shear face characterization showed that the fracture and burr region affects HER: HER decreases with increasing size of the fracture and burr region. With a larger fracture and burr region, more defects and/or micro-cracks will be present on the shear surface, and this larger fracture region on the shear face facilitates cracking during sheared-edge forming. Finally, sheared-edge formability is directly correlated to true fracture strain (TFS): the true fracture strain from tensile samples correlates with the HER values, and HER increases with increasing true fracture strain.
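
    The two quantities at the heart of the reported correlation can be computed from their standard definitions; the specimen dimensions below are illustrative, not the thesis data:

        import math

        # True fracture strain from tensile-specimen cross-sections:
        a0, af = 12.0, 4.8                   # initial / final area, mm^2
        tfs = math.log(a0 / af)              # TFS = ln(A0/Af)

        # Hole expansion ratio from a hole expansion test:
        d0, df = 10.0, 13.4                  # initial / final hole diameter, mm
        her = 100.0 * (df - d0) / d0         # HER in percent

        print(f"TFS = {tfs:.2f}, HER = {her:.1f}%")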

  20. Packet-Based Protocol Efficiency for Aeronautical and Satellite Communications

    NASA Technical Reports Server (NTRS)

    Carek, David A.

    2005-01-01

    This paper examines the relation between bit error ratios and the effective link efficiency when transporting data with a packet-based protocol. Relations are developed to quantify the impact of a protocol's packet size and header size relative to the bit error ratio of the underlying link. These relations are examined in the context of radio transmissions that exhibit variable error conditions, such as those used in satellite, aeronautical, and other wireless networks. A comparison of two packet sizing methodologies is presented. From these relations, the true ability of a link to deliver user data, or information, is determined. Relations are developed to calculate the optimal protocol packet size for given link error characteristics. These relations could be useful in future research for developing an adaptive protocol layer. They can also be used for sizing protocols in the design of static links, where bit error ratios have small variability.
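
    The abstract does not reproduce the relations, but a common way to formalize the trade-off it describes is sketched below: efficiency is the payload fraction multiplied by the probability that the whole packet survives, and setting the derivative to zero yields an optimal packet size for a given bit error ratio. The formulation and names are assumptions, not necessarily the paper's:

        import math

        def link_efficiency(total_bits, header_bits, ber):
            # user data per transmitted bit: payload fraction times the
            # probability that every bit of the packet arrives uncorrupted
            payload_fraction = (total_bits - header_bits) / total_bits
            return payload_fraction * (1.0 - ber) ** total_bits

        def optimal_packet_bits(header_bits, ber):
            # positive root of N^2 - h*N + h/ln(1-p) = 0 (d(efficiency)/dN = 0)
            h, c = header_bits, math.log(1.0 - ber)
            return 0.5 * (h + math.sqrt(h * h - 4.0 * h / c))

        n_opt = optimal_packet_bits(header_bits=384, ber=1e-5)
        print(f"optimal packet ~ {n_opt:.0f} bits, "
              f"efficiency {link_efficiency(n_opt, 384, 1e-5):.3f}")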

  1. Characterization of 176Lu background in LSO-based PET scanners

    NASA Astrophysics Data System (ADS)

    Conti, Maurizio; Eriksson, Lars; Rothfuss, Harold; Sjoeholm, Therese; Townsend, David; Rosenqvist, Göran; Carlier, Thomas

    2017-05-01

    LSO and LYSO are today the most common scintillators used in positron emission tomography. Lutetium contains traces of 176Lu, a radioactive isotope that decays by β− emission with a cascade of γ photons in coincidence. Lutetium-based scintillators are therefore characterized by a small natural radiation background. In this paper, we investigate and characterize the 176Lu radiation background via experiments performed on LSO-based PET scanners. LSO background was measured at different energy windows and different time coincidence windows, and by using shields to alter the original spectrum. The effect of the radiation background in particularly count-starved applications, such as 90Y imaging, is analysed and discussed. Depending on the size of the PET scanner, between 500 and 1000 total random counts per second and between 3 and 5 total true coincidences per second were measured in standard coincidence mode. The LSO background counts in a Siemens mCT in the standard PET energy and time windows are in general negligible in terms of trues, and are comparable to those measured in a BGO scanner of similar size.
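
    For orientation, the randoms figures quoted above can be related to detector singles rates with the standard accidental-coincidence estimate R = 2·τ·S1·S2 (a textbook formula, not one stated in the paper):

        def randoms_rate(singles_1_cps, singles_2_cps, tau_s):
            # accidental coincidences for one detector pair: R = 2 * tau * S1 * S2
            return 2.0 * tau_s * singles_1_cps * singles_2_cps

        # e.g. two detectors each seeing 1 kcps of 176Lu singles, 4.1 ns window
        print(randoms_rate(1e3, 1e3, 4.1e-9))   # ~0.008 randoms/s for the pair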

  2. Strain Gradient Solution for the Eshelby-Type Polyhedral Inclusion Problem

    DTIC Science & Technology

    2012-01-01

    Keywords: Eshelby tensor; polyhedral inclusion; size effect; eigenstrain; strain gradient. The Eshelby ... material containing an ellipsoidal inclusion prescribed with a uniform eigenstrain is a milestone in micromechanics. The solution for the dynamic Eshelby ... strain to the prescribed uniform eigenstrain, is constant inside the inclusion. However, this property is true only for ellipsoidal inclusions (and when

  3. Estimating the soil moisture profile by assimilating near-surface observations with the ensemble Kalman filter (EnKF)

    NASA Astrophysics Data System (ADS)

    Zhang, Shuwen; Li, Haorui; Zhang, Weidong; Qiu, Chongjian; Li, Xin

    2005-11-01

    The paper investigates the ability to retrieve the true soil moisture profile by assimilating near-surface soil moisture into a soil moisture model with an ensemble Kalman filter (EnKF) assimilation scheme, including the effects of ensemble size, update interval and nonlinearities on the profile retrieval, the time required for full retrieval of the soil moisture profile, and the possible influence of the depth of the soil moisture observation. These questions are addressed in a desktop study using synthetic data. The "true" soil moisture profiles are generated from the soil moisture model under a boundary condition of 0.5 cm d⁻¹ evaporation. To test the assimilation schemes, the model is initialized with a poor initial guess of the soil moisture profile, and different ensemble sizes are tested, showing that an ensemble of 40 members is enough to represent the covariance of the model forecasts. The results are also compared with those from the direct-insertion assimilation scheme, showing that the EnKF is superior: for hourly observations, the soil moisture profile is retrieved in 16 h, as compared to 12 days or more. For daily observations, the true soil moisture profile is reached in about 15 days with the EnKF, whereas direct insertion fails to approximate the true moisture within 18 days. It is also found that observation depth does not have a significant effect on profile retrieval time for the EnKF. The nonlinearities have some negative influence on the optimal estimates of the soil moisture profile, but the effect is not severe.
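
    A minimal sketch of one EnKF analysis step of the kind evaluated above, using perturbed observations of the top soil layer (a generic textbook implementation; the state layout and numbers are illustrative, not the paper's):

        import numpy as np

        def enkf_update(ensemble, obs, H, obs_var, rng):
            # ensemble: (n_members, n_layers) forecasts; H: (n_layers,) picks
            # the observed component; perturbed-observation EnKF analysis step
            n = ensemble.shape[0]
            X = ensemble - ensemble.mean(axis=0)       # forecast anomalies
            Hx = ensemble @ H                          # predicted observations
            HX = Hx - Hx.mean()
            pht = X.T @ HX / (n - 1)                   # P H^T (vector)
            hph = HX @ HX / (n - 1)                    # H P H^T (scalar)
            K = pht / (hph + obs_var)                  # Kalman gain
            perturbed = obs + rng.normal(0.0, np.sqrt(obs_var), n)
            return ensemble + np.outer(perturbed - Hx, K)

        rng = np.random.default_rng(1)
        ens = rng.normal(0.25, 0.05, size=(40, 10))    # 40 members, 10 soil layers
        analysed = enkf_update(ens, obs=0.32, H=np.eye(10)[0],
                               obs_var=0.01 ** 2, rng=rng)
        print(analysed.mean(axis=0)[0])                # top layer pulled toward 0.32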

  4. Neuromorphic Kalman filter implementation in IBM’s TrueNorth

    NASA Astrophysics Data System (ADS)

    Carney, R.; Bouchard, K.; Calafiura, P.; Clark, D.; Donofrio, D.; Garcia-Sciveres, M.; Livezey, J.

    2017-10-01

    As computing enters a post-Moore's-law era, novel architectures continue to emerge. With composite, multi-million-connection neuromorphic chips like IBM's TrueNorth, neural engineering has become a feasible technology in this novel computing paradigm. High Energy Physics experiments are continuously exploring new methods of computation and data handling, including neuromorphic approaches, to support the growing challenges of the field and to be prepared for future commodity computing trends. This work details the first instance of a Kalman filter implementation in IBM's neuromorphic architecture, TrueNorth, for both parallel and serial spike trains. The implementation is tested on multiple simulated systems and its performance is evaluated with respect to an equivalent non-spiking Kalman filter. The limits of the implementation are explored whilst varying the size of the weight and threshold registers, the number of spikes used to encode a state, the size of the neuron block used for spatial encoding, and the neuron-potential reset scheme.

  5. Classifier performance prediction for computer-aided diagnosis using a limited dataset.

    PubMed

    Sahiner, Berkman; Chan, Heang-Ping; Hadjiiski, Lubomir

    2008-04-01

    In a practical classifier design problem, the true population is generally unknown and the available sample is finite-sized. A common approach is to use a resampling technique to estimate the performance of the classifier that will be trained with the available sample. We conducted a Monte Carlo simulation study to compare the ability of different resampling techniques to train the classifier and predict its performance under the constraint of a finite-sized sample. The true populations for the two classes were assumed to be multivariate normal distributions with known covariance matrices. Finite sets of sample vectors were drawn from the population. The true performance of the classifier is defined as the area under the receiver operating characteristic curve (AUC) when the classifier designed with the specific sample is applied to the true population. We investigated methods based on the Fukunaga-Hayes and the leave-one-out techniques, as well as three different types of bootstrap methods, namely the ordinary, 0.632, and 0.632+ bootstrap. Fisher's linear discriminant analysis was used as the classifier. The dimensionality of the feature space was varied from 3 to 15. The sample size n2 from the positive class was varied between 25 and 60, while the number of cases from the negative class was either equal to n2 or 3n2. Each experiment was performed with an independent dataset randomly drawn from the true population. Using a total of 1000 experiments for each simulation condition, we compared the bias, the variance, and the root-mean-squared error (RMSE) of the AUC estimated using the different resampling techniques relative to the true AUC (obtained from training on a finite dataset and testing on the population). Our results indicated that, under the study conditions, there can be a large difference in the RMSE obtained using different resampling methods, especially when the feature space dimensionality is relatively large and the sample size is small. Under such conditions, the 0.632 and 0.632+ bootstrap methods have the lowest RMSE, indicating that the difference between the estimated and true performance obtained using the 0.632 and 0.632+ bootstrap will be statistically smaller than that obtained using the other three resampling methods. Of the three bootstrap methods, the 0.632+ bootstrap provides the lowest bias. Although this investigation was performed under some specific conditions, it reveals important trends for the problem of classifier performance prediction under the constraint of a limited dataset.
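
    A compact sketch of the 0.632 bootstrap AUC estimate discussed above, with Fisher's linear discriminant as the classifier (a generic re-implementation under assumed conventions, not the authors' code):

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.metrics import roc_auc_score

        def bootstrap_632_auc(X, y, n_boot=200, seed=0):
            # 0.632 bootstrap: 0.368 * apparent AUC + 0.632 * out-of-bag AUC
            rng = np.random.default_rng(seed)
            n = len(y)
            auc_app = roc_auc_score(
                y, LinearDiscriminantAnalysis().fit(X, y).decision_function(X))
            oob_aucs = []
            for _ in range(n_boot):
                idx = rng.integers(0, n, n)              # sample with replacement
                oob = np.setdiff1d(np.arange(n), idx)
                if len(np.unique(y[idx])) < 2 or len(np.unique(y[oob])) < 2:
                    continue                             # need both classes
                clf = LinearDiscriminantAnalysis().fit(X[idx], y[idx])
                oob_aucs.append(roc_auc_score(y[oob], clf.decision_function(X[oob])))
            return 0.368 * auc_app + 0.632 * np.mean(oob_aucs)

        rng = np.random.default_rng(42)
        X = np.vstack([rng.normal(0.0, 1.0, (25, 5)), rng.normal(0.8, 1.0, (25, 5))])
        y = np.repeat([0, 1], 25)
        print(bootstrap_632_auc(X, y))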

  6. 75 FR 81789 - Third Party Testing for Certain Children's Products; Full-Size Baby Cribs and Non-Full-Size Baby...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-28

    ... sufficient samples of the product, or samples that are identical in all material respects to the product. The... 1220, Safety Standards for Full-Size Baby Cribs and Non-Full-Size Baby Cribs. A true copy, in English... assessment bodies seeking accredited status must submit to the Commission copies, in English, of their...

  7. Electrokinetic mixing at high zeta potentials: ionic size effects on cross stream diffusion.

    PubMed

    Ahmadian Yazdi, Alireza; Sadeghi, Arman; Saidi, Mohammad Hassan

    2015-03-15

    The electrokinetic phenomena at high zeta potentials may show several unique features that are not normally observed. One of these features is the ionic size (steric) effect associated with solutions of high ionic concentration. In the present work, attention is given to the influence of finite ionic size on the cross-stream diffusion process in an electrokinetically actuated Y-shaped micromixer. The method consists of a finite-difference numerical approach on a non-uniform grid, applied to the dimensionless form of the governing equations, including the modified Poisson-Boltzmann equation. The results reveal that neglecting the ionic size at high zeta potentials leads to an overestimation of the mixing length, because the steric effects retard liquid flow and thereby enhance the mixing efficiency. The importance of steric effects is found to be more pronounced for channels of smaller width-to-height ratio. It is also observed that, in sharp contrast to the case in which ions are treated as point charges, increasing the zeta potential improves the cross-stream diffusion when the ionic size is taken into account. Moreover, increasing the EDL thickness decreases the mixing length, whereas the opposite is true for the channel aspect ratio.
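
    To illustrate why finite ionic size retards the flow, the sketch below compares the net charge density of a Bikerman-type modified Poisson-Boltzmann model with the point-charge limit; this particular steric closure and the numbers are assumptions for illustration and may differ from the paper's formulation:

        import numpy as np

        def steric_to_point_charge_ratio(psi_scaled, nu):
            # psi_scaled = z*e*psi / (k_B*T); nu = 2 * a^3 * n0 is the bulk
            # packing fraction of ions with effective diameter a
            point = np.sinh(psi_scaled)
            steric = point / (1.0 + 2.0 * nu * np.sinh(psi_scaled / 2.0) ** 2)
            return steric / point

        # at ~4 kT/e and 1% packing, the steric fluid carries ~21% less charge
        print(steric_to_point_charge_ratio(4.0, 0.01))   # ~0.79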

  8. Quantitative radiomics: impact of stochastic effects on textural feature analysis implies the need for standards

    PubMed Central

    Nyflot, Matthew J.; Yang, Fei; Byrd, Darrin; Bowen, Stephen R.; Sandison, George A.; Kinahan, Paul E.

    2015-01-01

    Image heterogeneity metrics such as textural features are an active area of research for evaluating clinical outcomes with positron emission tomography (PET) imaging and other modalities. However, the effects of stochastic image acquisition noise on these metrics are poorly understood. We performed a simulation study by generating 50 statistically independent PET images of the NEMA IQ phantom with realistic noise and resolution properties. Heterogeneity metrics based on gray-level intensity histograms, co-occurrence matrices, neighborhood difference matrices, and zone size matrices were evaluated within regions of interest surrounding the lesions. The impact of stochastic variability was evaluated with percent difference from the mean of the 50 realizations, coefficient of variation and estimated sample size for clinical trials. Additionally, sensitivity studies were performed to simulate the effects of patient size and image reconstruction method on the quantitative performance of these metrics. Complex trends in variability were revealed as a function of textural feature, lesion size, patient size, and reconstruction parameters. In conclusion, the sensitivity of PET textural features to normal stochastic image variation and imaging parameters can be large and is feature-dependent. Standards are needed to ensure that prospective studies that incorporate textural features are properly designed to measure true effects that may impact clinical outcomes. PMID:26251842

  9. Quantitative radiomics: impact of stochastic effects on textural feature analysis implies the need for standards.

    PubMed

    Nyflot, Matthew J; Yang, Fei; Byrd, Darrin; Bowen, Stephen R; Sandison, George A; Kinahan, Paul E

    2015-10-01

    Image heterogeneity metrics such as textural features are an active area of research for evaluating clinical outcomes with positron emission tomography (PET) imaging and other modalities. However, the effects of stochastic image acquisition noise on these metrics are poorly understood. We performed a simulation study by generating 50 statistically independent PET images of the NEMA IQ phantom with realistic noise and resolution properties. Heterogeneity metrics based on gray-level intensity histograms, co-occurrence matrices, neighborhood difference matrices, and zone size matrices were evaluated within regions of interest surrounding the lesions. The impact of stochastic variability was evaluated with percent difference from the mean of the 50 realizations, coefficient of variation and estimated sample size for clinical trials. Additionally, sensitivity studies were performed to simulate the effects of patient size and image reconstruction method on the quantitative performance of these metrics. Complex trends in variability were revealed as a function of textural feature, lesion size, patient size, and reconstruction parameters. In conclusion, the sensitivity of PET textural features to normal stochastic image variation and imaging parameters can be large and is feature-dependent. Standards are needed to ensure that prospective studies that incorporate textural features are properly designed to measure true effects that may impact clinical outcomes.
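
    As a concrete example of one of the metric families named above, the sketch below computes a gray-level co-occurrence matrix and its contrast feature for two noise realizations of the same region; the binning, offset direction, and feature choice are illustrative assumptions:

        import numpy as np

        def glcm_contrast(image, levels=8):
            # quantize, count horizontal-neighbour co-occurrences, normalize,
            # then return the contrast feature sum(P[i,j] * (i - j)^2)
            edges = np.linspace(image.min(), image.max(), levels + 1)[1:-1]
            img = np.digitize(image, edges)
            glcm = np.zeros((levels, levels))
            for a, b in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
                glcm[a, b] += 1
            glcm /= glcm.sum()
            i, j = np.indices(glcm.shape)
            return np.sum(glcm * (i - j) ** 2)

        # two noise realizations of the "same" lesion give different feature values
        rng = np.random.default_rng(3)
        lesion = rng.normal(10.0, 1.0, (32, 32))
        print(glcm_contrast(lesion), glcm_contrast(lesion + rng.normal(0, 1.0, (32, 32))))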

  10. Interpreting ecological diversity indices applied to terminal restriction fragment length polymorphism data: insights from simulated microbial communities.

    PubMed

    Blackwood, Christopher B; Hudleston, Deborah; Zak, Donald R; Buyer, Jeffrey S

    2007-08-01

    Ecological diversity indices are frequently applied to molecular profiling methods, such as terminal restriction fragment length polymorphism (T-RFLP), in order to compare diversity among microbial communities. We performed simulations to determine whether diversity indices calculated from T-RFLP profiles could reflect the true diversity of the underlying communities despite potential analytical artifacts. These include multiple taxa generating the same terminal restriction fragment (TRF) and rare TRFs being excluded by a relative abundance (fluorescence) threshold. True community diversity was simulated using the lognormal species abundance distribution. Simulated T-RFLP profiles were generated by assigning each species a TRF size based on an empirical or modeled TRF size distribution. With a typical threshold (1%), the only consistently useful relationship was between Smith and Wilson evenness applied to T-RFLP data (TRF-Evar) and true Shannon diversity (H'), with correlations between 0.71 and 0.81. TRF-H' and true H' were well correlated in the simulations using the lowest number of species, but this correlation declined substantially in simulations using greater numbers of species, to the point where TRF-H' cannot be considered a useful statistic. The relationships between TRF diversity indices and true indices were sensitive to the relative abundance threshold, with greatly improved correlations observed using a 0.1% threshold, which was investigated for comparative purposes but is not possible to achieve consistently with current technology. In general, the use of diversity indices on T-RFLP data provides inaccurate estimates of true diversity in microbial communities (with the possible exception of TRF-Evar). We suggest that, where significant differences in T-RFLP diversity indices were found in previous work, these should be reinterpreted as a reflection of differences in community composition rather than a true difference in community diversity.
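
    A small sketch of the two indices compared above, computed from a TRF peak profile with the typical 1% threshold; the Smith-Wilson Evar formula follows its usual arctangent form, which should be treated as an assumption rather than the authors' exact implementation:

        import numpy as np

        def trf_diversity(peak_heights, threshold=0.01):
            # normalize, apply the relative-abundance threshold, renormalize
            p = np.asarray(peak_heights, dtype=float)
            p = p / p.sum()
            p = p[p >= threshold]
            p = p / p.sum()
            shannon = -np.sum(p * np.log(p))                  # TRF-H'
            ln_p = np.log(p)
            evar = 1.0 - (2.0 / np.pi) * np.arctan(np.mean((ln_p - ln_p.mean()) ** 2))
            return shannon, evar                              # TRF-H', TRF-Evar

        print(trf_diversity([500, 300, 120, 60, 15, 4, 1]))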

  11. Replication Validity of Initial Association Studies: A Comparison between Psychiatry, Neurology and Four Somatic Diseases.

    PubMed

    Dumas-Mallet, Estelle; Button, Katherine; Boraud, Thomas; Munafo, Marcus; Gonon, François

    2016-01-01

    There are growing concerns about effect size inflation and the replication validity of association studies, but few observational investigations have explored the extent of these problems. We used meta-analyses to measure the reliability of initial studies and to explore whether this varies across biomedical domains and study types (cognitive/behavioral, brain imaging, genetic and "others"). We analyzed 663 meta-analyses describing associations between markers or risk factors and 12 pathologies within three biomedical domains (psychiatry, neurology and four somatic diseases). We collected the effect size, sample size, publication year and Impact Factor of initial studies, largest studies (i.e., those with the largest sample size) and the corresponding meta-analyses. Initial studies were considered replicated if they were in nominal agreement with the meta-analyses and if their effect size inflation was below 100%. Nominal agreement between initial studies and meta-analyses regarding the presence of a significant effect was not better than chance in psychiatry, whereas it was somewhat better in neurology and somatic diseases. Whereas the effect sizes reported by the largest studies and the meta-analyses were similar, most of those reported by initial studies were inflated. Among the 256 initial studies reporting a significant effect (p<0.05) and paired with significant meta-analyses, 97 effect sizes were inflated by more than 100%. Nominal agreement and effect size inflation varied with the biomedical domain and study type. Indeed, the replication rate of initial studies reporting a significant effect ranged from 6.3% for genetic studies in psychiatry to 86.4% for cognitive/behavioral studies. Comparison between eight subgroups shows that replication rate decreases with sample size and "true" effect size. We observed no evidence of an association between replication rate and publication year or Impact Factor. The differences in reliability between biological psychiatry, neurology and somatic diseases suggest that there is room for improvement, at least in some subdomains.

  12. The risk of stanford type-A aortic dissection with different tear size and location: a numerical study.

    PubMed

    Shi, Yue; Zhu, Minjia; Chang, Yu; Qiao, Huanyu; Liu, Yongmin

    2016-12-28

    This study investigates the influence of hemodynamics on Stanford type-A aortic dissection with different tear sizes and locations, to help relate the risks (rupture, reverse tearing and further tearing) to tear size and location for clinical treatment. Four numerical models of Stanford type-A aortic dissection were established, with different sizes and locations of the tears. The ratio of the areas of the entry and re-entry tears (RA) varies among the models, while the size and location of the re-entry tear in the distal descending aorta are fixed. In models A11 and A21, the entry tears are located near the ascending aorta, with RA of 1 and 2, respectively; in models B11 and B21, the entry tears are located near the proximal descending aorta, with RA again assigned as 1 and 2, respectively. The hemodynamics in these models was then solved numerically, and the flow patterns and loading distributions were investigated. The flow velocity in the true lumen of models A21 and B21 is lower than that in A11 and B11, respectively; the time-averaged wall shear stress (TAWSS) of the false lumen in models A21 and B21 is higher, and for the ascending-aorta false lumen, A11 and A21 are higher than B11 and B21, respectively. The false-lumen intimal wall pressure in A11 and A21 is always higher than that of the true lumen. The variation of RA can significantly affect the dynamics of blood within the aortic dissection. When the entry tear is larger than the re-entry tear, the false lumen, the proximal descending aorta and the wall near the re-entry tear are prone to cracking. Entry tear location can significantly alter the hemodynamics of aortic dissection as well. When the entry tear is located closer to the proximal ascending aorta, the false lumen continues to expand and compresses the true lumen, resulting in true-lumen reduction. For the proximal ascending aorta, high pressure in the false lumen predicts a higher risk of reverse tearing.

  13. Structure formation in grade 20 steel during equal-channel angular pressing and subsequent heating

    NASA Astrophysics Data System (ADS)

    Dobatkin, S. V.; Odesskii, P. D.; Raab, G. I.; Tyutin, M. R.; Rybalchenko, O. V.

    2016-11-01

    The structure formation and the mechanical properties of quenched and tempered grade 20 steel after equal-channel angular pressing (ECAP) at various true strains and 400°C are studied. Electron microscopy analysis after ECAP shows a partially submicrocrystalline and partially subgrain structure with a structural element size of 340-375 nm. The structural element size depends on the region in which the elements are formed (polyhedral ferrite, needle-shaped ferrite, tempered martensite, and pearlite). Heating of the steel after ECAP at 400 and 450°C increases the fraction of high-angle boundaries and the structural ferrite element size to 360-450 nm. The fragmentation and spheroidization of the cementite lamellae of pearlite, and subgrain coalescence in the regions of needle-shaped ferrite and tempered martensite, take place at a high ECAP true strain and heating temperature. Structural refinement ensures considerable strengthening, namely an ultimate tensile strength (UTS) of 742-871 MPa at an elongation (EL) of 11-15.3%. The strength slightly increases, whereas the plasticity slightly decreases, as the true strain during ECAP increases. After ECAP and heating, the strength and plastic properties of the grade 20 steel remain almost the same.

  14. A study of the effectiveness and energy efficiency of ultrasonic emulsification.

    PubMed

    Li, Wu; Leong, Thomas S H; Ashokkumar, Muthupandian; Martin, Gregory J O

    2017-12-20

    Three essential experimental parameters in the ultrasonic emulsification process, namely sonication time, acoustic amplitude and processing volume, were individually investigated, theoretically and experimentally, and correlated to the emulsion droplet sizes produced. The results showed that with a decrease in droplet size, two kinetic regions can be separately correlated prior to reaching a steady-state droplet size: a fast size-reduction region and a steady-state transition region. In the fast size-reduction region, the power input and sonication time could each be correlated to the volume-mean diameter by a power-law relationship, with power-law indices of -1.4 and -1.1, respectively. A proportional relationship was found between droplet size and processing volume. The effectiveness and energy efficiency of droplet size reduction were compared between ultrasound and high-pressure homogenisation (HPH), based on both the effective power delivered to the emulsion and the total electric power consumed. Sonication could produce emulsions across a broad range of sizes, while high-pressure homogenisation was able to produce emulsions at the smaller end of the range. For ultrasonication, the energy efficiency was higher at increased power inputs due to more effective droplet breakage at high ultrasound intensities. For HPH, the consumed-energy efficiency was improved by operating at higher pressures for fewer passes. At the laboratory scale, the ultrasound system required less electrical power than HPH to produce an emulsion of comparable droplet size. The energy efficiency of HPH is greatly improved at large scale, which may also be true for larger-scale ultrasonic reactors.
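
    A toy sketch of the fast-region power laws reported above; combining both indices in a single expression, and the prefactor, are assumptions for illustration:

        def droplet_diameter(power_w, time_s, prefactor=1.0):
            # fast size-reduction region: d ~ P^-1.4 * t^-1.1 (prefactor is
            # system-specific and must be fitted to measured droplet sizes)
            return prefactor * power_w ** -1.4 * time_s ** -1.1

        # doubling delivered power shrinks droplets by ~2^1.4 = 2.6x
        print(droplet_diameter(100, 60) / droplet_diameter(200, 60))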

  15. High resolution Talbot self-imaging applied to structural characterization of self-assembled monolayers of microspheres.

    PubMed

    Garcia-Sucerquia, J; Alvarez-Palacio, D C; Kreuzer, H J

    2008-09-10

    We report the observation of the Talbot self-imaging effect in high-resolution digital in-line holographic microscopy (DIHM) and its application to the structural characterization of periodic samples. Holograms of self-assembled monolayers of micron-sized polystyrene spheres are reconstructed at different image planes. The point-source method of DIHM and the consequent high lateral resolution allow the true image (object) plane to be identified. The Talbot effect is then exploited to improve the evaluation of the pitch of the assembly and to examine defects in its periodicity.
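
    For orientation, the classical plane-wave Talbot distance is easy to compute (point-source DIHM rescales these distances geometrically, so this is only an order-of-magnitude guide):

        def talbot_length(period_m, wavelength_m):
            # classical plane-wave Talbot distance z_T = 2 * a^2 / lambda
            return 2.0 * period_m ** 2 / wavelength_m

        # a monolayer of 1-micron spheres illuminated at 405 nm
        print(talbot_length(1e-6, 405e-9))   # ~4.9e-6 m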

  16. Photo series for quantifying forest residues in the: sierra mixed conifer type, sierra true fir type.

    Treesearch

    W.G. Maxwell; F.R. Ward

    1979-01-01

    Five series of photographs display different forest residue loading levels, by size classes, for areas of like timber type (Sierra mixed conifer and Sierra true fir) and cutting objective. Information with each photo includes measured weights, volumes and other residue data, information about the timber stand and harvest actions, and assessment of fire behavior and...

  17. Optimizing trial design in pharmacogenetics research: comparing a fixed parallel group, group sequential, and adaptive selection design on sample size requirements.

    PubMed

    Boessen, Ruud; van der Baan, Frederieke; Groenwold, Rolf; Egberts, Antoine; Klungel, Olaf; Grobbee, Diederick; Knol, Mirjam; Roes, Kit

    2013-01-01

    Two-stage clinical trial designs may be efficient in pharmacogenetics research when there is some but inconclusive evidence of effect modification by a genomic marker. Two-stage designs allow stopping early for efficacy or futility and can offer the additional opportunity to enrich the study population to a specific patient subgroup after an interim analysis. This study compared sample size requirements for fixed parallel group, group sequential, and adaptive selection designs with equal overall power and control of the family-wise type I error rate. The designs were evaluated across scenarios that defined the effect sizes in the marker-positive and marker-negative subgroups and the prevalence of marker-positive patients in the overall study population. Effect sizes were chosen to reflect realistic planning scenarios, where at least some effect is present in the marker-negative subgroup. In addition, scenarios were considered in which the assumed 'true' subgroup effects (i.e., the postulated effects) differed from those hypothesized at the planning stage. As expected, both two-stage designs generally required fewer patients than a fixed parallel group design, and the advantage increased as the difference between subgroups increased. The adaptive selection design added little further reduction in sample size, as compared with the group sequential design, when the postulated effect sizes were equal to those hypothesized at the planning stage. However, when the postulated effects deviated strongly in favor of enrichment, the comparative advantage of the adaptive selection design increased, which precisely reflects the adaptive nature of the design.
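
    As a baseline for the comparison above, the standard per-group sample size for a fixed two-arm parallel-group design can be sketched as follows (a textbook formula; the marker-prevalence example is illustrative):

        from scipy.stats import norm

        def n_per_group(effect_size, alpha=0.05, power=0.8):
            # n = 2 * (z_{1-alpha/2} + z_{power})^2 / delta^2 per group,
            # for a two-sided test comparing means at standardized effect delta
            z_a = norm.ppf(1.0 - alpha / 2.0)
            z_b = norm.ppf(power)
            return 2.0 * (z_a + z_b) ** 2 / effect_size ** 2

        # overall effect when 30% of patients are marker-positive (delta = 0.5)
        # and the marker-negative subgroup shows a smaller effect (delta = 0.2)
        d_overall = 0.3 * 0.5 + 0.7 * 0.2
        print(round(n_per_group(d_overall)))   # ~187 per group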

  18. Population and hierarchy of active species in gold iron oxide catalysts for carbon monoxide oxidation.

    PubMed

    He, Qian; Freakley, Simon J; Edwards, Jennifer K; Carley, Albert F; Borisevich, Albina Y; Mineo, Yuki; Haruta, Masatake; Hutchings, Graham J; Kiely, Christopher J

    2016-09-27

    The identity of the active species in supported gold catalysts for low-temperature carbon monoxide oxidation remains an unsettled debate. With large amounts of experimental evidence supporting either gold nanoparticles or sub-nm gold species as the active form, it was recently proposed that a size-dependent activity hierarchy should exist. Here we study the diverging catalytic behaviours after heat treatment of Au/FeOx materials prepared via co-precipitation and deposition precipitation methods. After ruling out any support effects, the gold particle size distributions in the different catalysts are quantitatively studied using aberration-corrected scanning transmission electron microscopy (STEM). A counting protocol is developed to reveal the true particle size distribution from HAADF-STEM images, reliably including all the gold species present. Correlation of the populations of the various gold species with the catalysis results demonstrates that a size-dependent activity hierarchy must exist in the Au/FeOx catalyst.

  19. Finite element analysis of true and pseudo surface acoustic waves in one-dimensional phononic crystals

    NASA Astrophysics Data System (ADS)

    Graczykowski, B.; Alzina, F.; Gomis-Bresco, J.; Sotomayor Torres, C. M.

    2016-01-01

    In this paper, we report a theoretical investigation of surface acoustic waves propagating in a one-dimensional phononic crystal. Using finite-element-method eigenfrequency and frequency-response studies, we develop two model geometries suitable for distinguishing true and pseudo (or leaky) surface acoustic waves and for determining their propagation through finite-size phononic crystals, respectively. The novelty of the first model comes from the application of a surface-like criterion and, additionally, a functional damping domain. Exemplary calculated band diagrams show sorted branches of true and pseudo surface acoustic waves and their quantified surface confinement. The second model gives a complementary study of transmission, reflection, and surface-to-bulk losses of Rayleigh surface waves in the case of a phononic crystal with a finite number of periods. Here, we demonstrate that a non-zero transmission within non-radiative band gaps can be carried via leaky modes originating from the coupling of local resonances with propagating waves in the substrate. Finally, we show that the transmission, reflection, and surface-to-bulk losses can be effectively optimised by tuning the geometrical properties of a stripe.

  20. Sample Size and Item Parameter Estimation Precision When Utilizing the One-Parameter "Rasch" Model

    ERIC Educational Resources Information Center

    Custer, Michael

    2015-01-01

    This study examines the relationship between sample size and item parameter estimation precision when utilizing the one-parameter model. Item parameter estimates are examined relative to "true" values by evaluating the decline in root mean squared deviation (RMSD) and the number of outliers as sample size increases. This occurs across…

  1. Verification of dosimetric accuracy on the TrueBeam STx: rounded leaf effect of the high definition MLC.

    PubMed

    Kielar, Kayla N; Mok, Ed; Hsu, Annie; Wang, Lei; Luxton, Gary

    2012-10-01

    The dosimetric leaf gap (DLG) in the Varian Eclipse treatment planning system is determined during commissioning and is used to model the effect of the rounded leaf end of the multileaf collimator (MLC). This parameter attempts to model the physical difference between the radiation and light fields and to account for inherent leakage between leaf tips. With the increased use of single-fraction, high-dose treatments requiring larger monitor units comes heightened concern about the accuracy of leakage calculations, as leakage accounts for much of the patient dose. This study serves to verify the dosimetric accuracy of the algorithm used to model the rounded leaf effect for the TrueBeam STx, and describes a methodology for determining best-practice parameter values, given the novel capabilities of the linear accelerator such as flattening filter free (FFF) treatments and a high definition MLC (HDMLC). During commissioning, the nominal MLC position was verified and the DLG parameter was determined using MLC-defined field sizes and moving gap tests, as is common in clinical testing. Treatment plans were created, and the DLG was optimized to achieve less than 1% difference between measured and calculated dose. The DLG value found was tested on treatment plans for all energies (6 MV, 10 MV, 15 MV, 6 MV FFF, 10 MV FFF) and modalities (3D conventional, IMRT, conformal arc, VMAT) available on the TrueBeam STx. The DLG parameter found during the initial MLC testing did not match the leaf gap modeling parameter that provided the most accurate dose delivery in clinical treatment plans. Using the physical leaf gap size as the DLG for the HDMLC can lead to 5% differences between measured and calculated doses. Separate optimization of the DLG parameter using end-to-end tests must be performed to ensure dosimetric accuracy in the modeling of the rounded leaf ends for the Eclipse treatment planning system. The difference between the leaf gap modeling parameter and the physical leaf gap dimensions is more pronounced in the more recent versions of Eclipse, for both the HDMLC and the Millennium MLC. Once properly commissioned and tested using a methodology based on treatment plan verification, Eclipse is able to accurately model the radiation dose delivered for SBRT treatments using the TrueBeam STx.

  2. True posterior communicating artery aneurysms: are they more prone to rupture? A biomorphometric analysis.

    PubMed

    He, Wenzhuan; Hauptman, Jason; Pasupuleti, Latha; Setton, Avi; Farrow, Maria G; Kasper, Lydia; Karimi, Reza; Gandhi, Chirag D; Catrambone, Jeffrey E; Prestigiacomo, Charles J

    2010-03-01

    Posterior communicating artery (PCoA) aneurysms can occur at the junction with the internal carotid artery, the posterior cerebral artery (PCA), or the proximal PCoA itself. Hemodynamic stressors contribute to aneurysm formation and may be associated with parent vessel size and aneurysm location. This study evaluates the correlation of various biomorphometric characteristics in 2 of the aforementioned types of PCoA aneurysms. Patients with PCoA aneurysms were analyzed using CT angiography. Source images and reconstructions were used to determine which aneurysms originated purely from the PCoA and which originated from the internal carotid artery/PCoA junction. Morphometric analysis was performed on the aneurysm, the precommunicating segment of the PCA (P1), the ambient segment of the PCA (P2), and both PCoA arteries, and the results were correlated to clinical presentation. Parametric and nonparametric analyses were performed to test for significance. A total of 77 PCoA aneurysms were analyzed, and 10 were found to be true PCoA aneurysms (13.0%). The ipsilateral PCoA/P1 ratio (1.77 +/- 0.44 vs 0.82 +/- 0.46, p = 0.0001) and the ipsilateral P2/P1 ratio (1.73 +/- 0.40 vs 1.22 +/- 0.41, p = 0.0003) were significantly larger in true PCoA aneurysms. Interestingly, aneurysm size was statistically larger in the junctional aneurysms (0.14 +/- 0.1 vs 0.072 +/- 0.04 cm³, p = 0.03). The prevalence of ruptured aneurysms was similar in both groups (approximately 80%, p value not significant). These data suggest that true PCoA aneurysms have a larger PCoA relative to the ipsilateral P1 segment. To the authors' knowledge, this represents the first such biomorphometric comparison of these different types of PCoA aneurysms. Although statistically smaller in size, true PCoA aneurysms have a similar prevalence of presenting as a ruptured aneurysm, suggesting that they might be more prone to rupture than junctional aneurysms of similar size. Further analysis will be required to determine the biophysical factors affecting rupture rates.

  3. SU-F-T-460: Dosimetric Matching Between Trilogy Tx and TrueBeam STx

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi, Y; Kwak, J; Jeong, C

    Purpose: To compare the commissioned beam data for one flattening filter photon mode (6 MV) and two flattening-filter-free (FFF) photon modes (6 and 10 MV-FFF) between the Trilogy Tx and the TrueBeam STx, and to evaluate the possibility of dosimetric matching. Methods: Dosimetric characteristics of the new Trilogy Tx, including percent depth doses (PDDs), profiles, and output factors, were measured for commissioning. A linear diode array detector and ion chambers were used to measure the dosimetric data. The depth of dose maximum (dmax) and the PDD at 10 cm depth (PDD10) were evaluated for field sizes of 3×3, 10×10, and 40×40 cm². The beam profiles were compared and penumbras were evaluated. As a further test of dosimetric matching, the same VMAT plans were delivered, measured with film, and compared with the TPS calculation. Results: All the measured PDDs matched well across the two units. PDD10 showed less than 0.5% variation and dmax agreed within 1.5 mm at the field sizes evaluated. Within the central 80% of the transverse axis, profile data were almost identical. TrueBeam data showed a slightly greater penumbra width (up to 1.9 mm). The greatest differences in output factors were found at 40×40 cm²: 2.40%, 2.03%, and 2.22% for 6 MV, 6 MV-FFF, and 10 MV-FFF, respectively. For smaller field sizes, differences of less than 1% were observed. The film measurements demonstrated over 97.3% of pixels passing gamma analysis (2%/2 mm). The results showed excellent agreement between measurements on the two machines. Conclusion: The differences found between the Trilogy Tx and the TrueBeam STx could affect dosimetric matching at small and also very large field sizes. These differences are mostly related to changes in the head design of the TrueBeam. Although full interchangeability of the two machines cannot be guaranteed, dosimetric matching up to a field size of 25×25 cm² might be clinically acceptable.

  4. Background field removal technique using regularization enabled sophisticated harmonic artifact reduction for phase data with varying kernel sizes.

    PubMed

    Kan, Hirohito; Kasai, Harumasa; Arai, Nobuyuki; Kunitomo, Hiroshi; Hirose, Yasujiro; Shibamoto, Yuta

    2016-09-01

    An effective background field removal technique is desired for more accurate quantitative susceptibility mapping (QSM) prior to dipole inversion. The aim of this study was to evaluate the accuracy of the regularization-enabled sophisticated harmonic artifact reduction for phase data with varying spherical kernel sizes (REV-SHARP) method using a three-dimensional head phantom and human brain data. The proposed REV-SHARP method used the spherical mean value operation and Tikhonov regularization in the deconvolution process, with kernel sizes varying from 2 to 14 mm. The kernel sizes were gradually reduced, similar to the SHARP with varying spherical kernel (VSHARP) method. We determined the relative errors and the relationships between the true local field and the estimated local field for REV-SHARP, VSHARP, projection onto dipole fields (PDF), and regularization-enabled SHARP (RESHARP). A human experiment was also conducted using REV-SHARP, VSHARP, PDF, and RESHARP. The relative errors in the numerical phantom study were 0.386, 0.448, 0.838, and 0.452 for REV-SHARP, VSHARP, PDF, and RESHARP, respectively. The REV-SHARP result exhibited the highest correlation between the true local field and the estimated local field. The linear regression slopes were 1.005, 1.124, 0.988, and 0.536 for REV-SHARP, VSHARP, PDF, and RESHARP in regions of interest on the three-dimensional head phantom. In the human experiments, no obvious artifact-related errors were present with REV-SHARP. The proposed REV-SHARP is a new method combining a variable spherical kernel size with Tikhonov regularization. This technique might enable more accurate background field removal and help achieve better QSM accuracy.

  5. Replication of genetic associations as pseudoreplication due to shared genealogy.

    PubMed

    Rosenberg, Noah A; Vanliere, Jenna M

    2009-09-01

    The genotypes of individuals in replicate genetic association studies have some level of correlation due to shared descent in the complete pedigree of all living humans. As a result of this genealogical sharing, replicate studies that search for genotype-phenotype associations using linkage disequilibrium between marker loci and disease-susceptibility loci can be considered as "pseudoreplicates" rather than true replicates. We examine the size of the pseudoreplication effect in association studies simulated from evolutionary models of the history of a population, evaluating the excess probability that both of a pair of studies detect a disease association compared to the probability expected under the assumption that the two studies are independent. Each of nine combinations of a demographic model and a penetrance model leads to a detectable pseudoreplication effect, suggesting that the degree of support that can be attributed to a replicated genetic association result is less than that which can be attributed to a replicated result in a context of true independence.

  6. Replication of genetic associations as pseudoreplication due to shared genealogy

    PubMed Central

    Rosenberg, Noah A.; VanLiere, Jenna M.

    2009-01-01

    The genotypes of individuals in replicate genetic association studies have some level of correlation due to shared descent in the complete pedigree of all living humans. As a result of this genealogical sharing, replicate studies that search for genotype-phenotype associations using linkage disequilibrium between marker loci and disease-susceptibility loci can be considered “pseudoreplicates” rather than true replicates. We examine the size of the pseudoreplication effect in association studies simulated from evolutionary models of the history of a population, evaluating the excess probability that both of a pair of studies detect a disease association compared to the probability expected under the assumption that the two studies are independent. Each of nine combinations of a demographic model and a penetrance model leads to a detectable pseudoreplication effect, suggesting that the degree of support that can be attributed to a replicated genetic association result is less than that which can be attributed to a replicated result in a context of true independence. PMID:19191270

  7. A simulation study of the strength of evidence in the recommendation of medications based on two trials with statistically significant results

    PubMed Central

    Ioannidis, John P. A.

    2017-01-01

    A typical rule that has been used for the endorsement of new medications by the Food and Drug Administration is to have two trials, each convincing on its own, demonstrating effectiveness. "Convincing" may be subjectively interpreted, but the use of p-values and the focus on statistical significance (in particular, with p < .05 labeled significant) is pervasive in clinical research. Therefore, in this paper, we calculate with simulations what it means to have exactly two trials, each with p < .05, in terms of the actual strength of evidence quantified by Bayes factors. Our results show that different cases where two trials have a p-value below .05 can have wildly differing Bayes factors. Bayes factors of at least 20 in favor of the alternative hypothesis are not necessarily achieved and fail to be reached in a large proportion of cases, in particular when the true effect size is small (0.2 standard deviations) or zero. In a non-trivial number of cases, the evidence actually points to the null hypothesis, in particular when the true effect size is zero, when the number of trials is large, and when the number of participants in both groups is low. We recommend the use of Bayes factors as a routine tool to assess the endorsement of new medications, because Bayes factors consistently quantify strength of evidence. Use of p-values may lead to paradoxical and spurious decision-making regarding the use of new medications. PMID:28273140
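
    A minimal sketch of the kind of calculation involved, using a normal-prior Bayes factor for a two-arm trial with known outcome SD (the prior scale and numbers are illustrative assumptions, not the paper's simulation settings):

        import numpy as np
        from scipy.stats import norm

        def bf10_normal(observed_diff, n_per_arm, sigma=1.0, prior_sd=0.5):
            # H0: delta = 0 versus H1: delta ~ N(0, prior_sd^2); the observed
            # mean difference has sampling SD sigma * sqrt(2 / n_per_arm)
            se = sigma * np.sqrt(2.0 / n_per_arm)
            marg_h1 = norm.pdf(observed_diff, 0.0, np.sqrt(prior_sd ** 2 + se ** 2))
            marg_h0 = norm.pdf(observed_diff, 0.0, se)
            return marg_h1 / marg_h0

        # a trial that is "just significant": observed difference = 1.96 * SE
        n = 100
        se = np.sqrt(2.0 / n)
        bf = bf10_normal(1.96 * se, n)
        print(bf, bf ** 2)   # one such trial, and two independent ones: both << 20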

  8. An update on the effects of playing violent video games.

    PubMed

    Anderson, Craig A

    2004-02-01

    This article presents a brief overview of existing research on the effects of exposure to violent video games. An updated meta-analysis reveals that exposure to violent video games is significantly linked to increases in aggressive behaviour, aggressive cognition, aggressive affect, and cardiovascular arousal, and to decreases in helping behaviour. Experimental studies reveal this linkage to be causal. Correlational studies reveal a linkage to serious, real-world types of aggression. Methodologically weaker studies yielded smaller effect sizes than methodologically stronger studies, suggesting that previous meta-analytic studies of violent video games underestimate the true magnitude of observed deleterious effects on behaviour, cognition, and affect.

  9. Monte Carlo simulation of TrueBeam flattening-filter-free beams using varian phase-space files: comparison with experimental data.

    PubMed

    Belosi, Maria F; Rodriguez, Miguel; Fogliata, Antonella; Cozzi, Luca; Sempau, Josep; Clivio, Alessandro; Nicolini, Giorgia; Vanetti, Eugenio; Krauss, Harald; Khamphan, Catherine; Fenoglietto, Pascal; Puxeu, Josep; Fedele, David; Mancosu, Pietro; Brualla, Lorenzo

    2014-05-01

    Phase-space files for Monte Carlo simulation of the Varian TrueBeam beams have been made available by Varian. The aim of this study is to evaluate the accuracy of the distributed phase-space files for flattening filter free (FFF) beams against experimental measurements from ten TrueBeam Linacs. The phase-space files were used as input in PRIMO, a recently released Monte Carlo program based on the PENELOPE code. Simulations of 6 and 10 MV FFF were computed in a virtual water phantom for field sizes of 3 × 3, 6 × 6, and 10 × 10 cm² using 1 × 1 × 1 mm³ voxels, and for 20 × 20 and 40 × 40 cm² with 2 × 2 × 2 mm³ voxels. The particles contained in the initial phase-space files were transported downstream to a plane just above the phantom surface, where a subsequent phase-space file was tallied. Particles were transported downstream from this second phase-space file to the water phantom. Experimental data consisted of depth doses and profiles at five different depths acquired at SSD = 100 cm (seven datasets) and SSD = 90 cm (three datasets). Simulations and experimental data were compared in terms of dose difference. Gamma analysis was also performed using 1%, 1 mm and 2%, 2 mm criteria of dose difference and distance to agreement, respectively. Additionally, the parameters characterizing the dose profiles of unflattened beams were evaluated for both measurements and simulations. Analysis of the depth dose curves showed that dose differences increased with increasing field size and depth; this effect might be partly explained by an underestimation of the primary beam energy used to compute the phase-space files. Average dose differences reached 1% for the largest field size. Lateral profiles presented dose differences well within 1% for fields up to 20 × 20 cm², while the discrepancy increased toward 2% in the 40 × 40 cm² cases. Gamma analysis resulted in an agreement of 100% when the 2%, 2 mm criterion was used, with the only exception of the 40 × 40 cm² field (∼95% agreement). With the more stringent criterion of 1%, 1 mm, the agreement reduced to almost 95% for field sizes up to 10 × 10 cm², and was worse for larger fields. The FFF-specific unflatness and slope parameters are consistent with a possible underestimation of the simulated beam energy relative to the experimental data. The agreement between Monte Carlo simulations and experimental data proved that the evaluated Varian phase-space files for FFF beams from the TrueBeam can be used as radiation sources for accurate Monte Carlo dose estimation, especially for field sizes up to 10 × 10 cm², which is the range of field sizes most commonly used with the FFF high-dose-rate beams.

  10. Understanding the effect of lactose particle size on the properties of DPI formulations using experimental design.

    PubMed

    Guenette, Estelle; Barrett, Andrew; Kraus, Debbie; Brody, Rachel; Harding, Ljiljana; Magee, Gavin

    2009-10-01

    Medicines for delivering therapeutic agents to the lung as dry powders primarily consist of a carrier and a micronised active pharmaceutical ingredient (API). The performance of an inhaled formulation depends on a number of factors, among which the particle size distribution (PSD) plays a key role. It is suggested that increasing the number of fine particles in the carrier can improve the aerosolisation of the API. In addition, the effect of PSD on bulk powder flow is broadly understood. However, it is not yet clear how different size fractions of the carrier contribute to its other functionalities. The purpose of this investigation is to examine the effects of different lactose size fractions on fine particle dose, formulation stability, and the ability to process and fill the material in the preferred device. In order to understand the true impact of the lactose size fractions on the performance of dry powder inhaled (DPI) products, a statistically designed study was conducted. The study comprised various DPI blend formulations prepared using lactose monohydrate carrier systems consisting of mixtures of four size fractions. Interactive mixtures were prepared containing 1% (w/w) salbutamol sulphate. The experimental design enabled the evaluation of the effect of lactose size fractions on the processing and performance attributes of the formulation. Furthermore, the results of the study demonstrate that an experimental design approach can be used successfully to support dry powder formulation development.

  11. Seasonal Influenza Forecasting in Real Time Using the Incidence Decay With Exponential Adjustment Model.

    PubMed

    Nasserie, Tahmina; Tuite, Ashleigh R; Whitmore, Lindsay; Hatchette, Todd; Drews, Steven J; Peci, Adriana; Kwong, Jeffrey C; Friedman, Dara; Garber, Gary; Gubbay, Jonathan; Fisman, David N

    2017-01-01

    Seasonal influenza epidemics occur frequently. Rapid characterization of seasonal dynamics and forecasting of epidemic peaks and final sizes could help support real-time decision-making related to vaccination and other control measures, but real-time forecasting remains challenging. We used the previously described "incidence decay with exponential adjustment" (IDEA) model, a 2-parameter phenomenological model, to evaluate the characteristics of the 2015-2016 influenza season in 4 Canadian jurisdictions: the Provinces of Alberta, Nova Scotia and Ontario, and the City of Ottawa. Model fits were updated weekly on receipt of incident virologically confirmed case counts. Best-fit models were used to project seasonal influenza peaks and epidemic final sizes. The 2015-2016 influenza season was mild and late-peaking. Parameter estimates generated through fitting were consistent in the 2 largest jurisdictions (Ontario and Alberta) and with pooled data including Nova Scotia counts (R0 approximately 1.4 for all fits). Lower R0 estimates were generated in Nova Scotia and Ottawa. Final size projections that made use of complete time series were accurate to within 6% of true final sizes, but final size projections were less accurate when based on pre-peak data. Projections of epidemic peaks stabilized before the true epidemic peak, but were persistently early (~2 weeks) relative to the true peak. A simple, 2-parameter influenza model provided reasonably accurate real-time projections of influenza seasonal dynamics in an atypically late, mild influenza season. The challenges are similar to those seen with more complex forecasting methodologies. Future work includes identification of seasonal characteristics associated with variability in model performance.
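
    A sketch of fitting the 2-parameter IDEA form, taking the incidence in generation t as I(t) = (R0/(1+d)^t)^t as in the published IDEA papers (the case counts below are invented for illustration):

        import numpy as np
        from scipy.optimize import curve_fit

        def idea_incidence(t, r0, d):
            # IDEA: incident cases in epidemic generation t are
            # I(t) = (R0 / (1 + d)^t)^t, with d a small "decay" factor
            return (r0 / (1.0 + d) ** t) ** t

        # illustrative counts per serial interval, not real surveillance data
        t = np.arange(1.0, 11.0)
        counts = np.array([2, 4, 7, 12, 18, 24, 27, 26, 21, 14], dtype=float)
        (r0_hat, d_hat), _ = curve_fit(idea_incidence, t, counts,
                                       p0=(1.5, 0.05),
                                       bounds=([1.0, 0.0], [10.0, 1.0]))
        print(f"R0 ~ {r0_hat:.2f}, d ~ {d_hat:.3f}")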

  12. Accuracies of the synthesized monochromatic CT numbers and effective atomic numbers obtained with a rapid kVp switching dual energy CT scanner

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goodsitt, Mitchell M.; Christodoulou, Emmanuel G.; Larson, Sandra C.

    2011-04-15

    Purpose: This study was performed to investigate the accuracies of the synthesized monochromatic images and effective atomic number maps obtained with the new GE Discovery CT750 HD CT scanner. Methods: A Gammex-RMI model 467 tissue characterization phantom and the CT number linearity section of a Phantom Laboratory Catphan 600 phantom were scanned using the dual energy (DE) feature on the GE CT750 HD scanner. Synthesized monochromatic images at various energies between 40 and 120 keV and effective atomic number (Zeff) maps were generated. Regions of interest were placed within these images/maps to measure the average monochromatic CT numbers and average Zeff of the materials within these phantoms. The true Zeff values were either supplied by the phantom manufacturer or computed using Mayneord's equation. The linear attenuation coefficients for the true CT numbers were computed using the NIST XCOM program with the input of manufacturer-supplied elemental compositions and densities. The effects of small variations in the assumed true densities of the materials were also investigated. Finally, the effect of body size on the accuracies of the synthesized monochromatic CT numbers was investigated using a custom lumbar section phantom with and without an external fat-mimicking ring. Results: Other than the Zeff of the simulated lung inserts in the tissue characterization phantom, which could not be measured by DECT, the Zeff values of all of the other materials in the tissue characterization and Catphan phantoms were accurate to within 15%. The accuracies of the synthesized monochromatic CT numbers of the materials in both phantoms varied with energy and material. For the 40-120 keV range, RMS errors between the measured and true CT numbers in the Catphan are 8-25 HU when the true CT numbers were computed using the nominal plastic densities. These RMS errors improve to 3-12 HU for assumed true densities within ±0.02 g/cc of the nominal density. The RMS errors between the measured and true CT numbers of the tissue-mimicking materials in the tissue characterization phantom over the 40-120 keV range varied from about 6 to 248 HU and did not improve as dramatically with small changes in assumed true density. Conclusions: Initial tests indicate that the Zeff values computed with DECT on this scanner are reasonably accurate; however, the synthesized monochromatic CT numbers can be very inaccurate, especially for dense tissue-mimicking materials at low energies. Furthermore, the synthesized monochromatic CT numbers of materials still depend on the amount of surrounding tissue, especially at low keV, demonstrating that the numbers are not truly monochromatic. Further research is needed to develop DE methods that produce more accurate synthesized monochromatic CT numbers.
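
    For reference, Mayneord's equation mentioned above can be sketched as the usual power-law effective atomic number (exponent 2.94, electron-fraction weights); the helper below is an illustration, not the authors' code:

        def z_eff_mayneord(composition, exponent=2.94):
            # Z_eff = (sum_i a_i * Z_i^m)^(1/m), with a_i the fraction of all
            # electrons contributed by element i; composition is a list of
            # (Z, A, mass_fraction) tuples
            rel_electrons = [(z, w * z / a) for z, a, w in composition]
            total = sum(e for _, e in rel_electrons)
            s = sum((e / total) * z ** exponent for z, e in rel_electrons)
            return s ** (1.0 / exponent)

        # water: H (Z=1, A=1.008, 11.19 wt%), O (Z=8, A=15.999, 88.81 wt%)
        print(z_eff_mayneord([(1, 1.008, 0.1119), (8, 15.999, 0.8881)]))  # ~7.42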

  13. Perception-action dissociation generalizes to the size-inertia illusion.

    PubMed

    Platkiewicz, Jonathan; Hayward, Vincent

    2014-04-01

    Two objects of similar visual appearance and equal mass, but of different sizes, generally do not elicit the same percept of heaviness in humans. The larger object is consistently felt to be lighter than the smaller, an effect known as the "size-weight illusion." When participants repeatedly lifted the two objects, their grip forces were observed to adapt rapidly to the true object weight while the size-weight illusion persisted, a phenomenon interpreted as a dissociation between perception and action. We investigated whether the same phenomenon can be observed if the mass of an object is available to participants through inertial rather than gravitational cues, and if the number and statistics of the stimuli are such that participants cannot remember each individual stimulus. We compared the responses of 10 participants in 2 experimental conditions, where they manipulated 33 objects having uncorrelated masses and sizes, supported by a frictionless, air-bearing slide that could be oriented vertically or horizontally. We also analyzed the participants' anticipatory motor behavior by measuring the grip force before motion onset. We found that the perceptual illusory effect was quantitatively the same in the two conditions, and observed that both visual size and haptic mass had a negligible effect on the anticipatory gripping control of the participants in the gravitational and inertial conditions, despite the enormous differences in the mechanics of the two conditions and the large set of uncorrelated stimuli.

  14. When stress predicts a shrinking gene pool, trading early reproduction for longevity can increase fitness, even with lower fecundity.

    PubMed

    Ratcliff, William C; Hawthorne, Peter; Travisano, Michael; Denison, R Ford

    2009-06-25

    Stresses like dietary restriction or various toxins increase lifespan in taxa as diverse as yeast, Caenorhabditis elegans, Drosophila and rats, by triggering physiological responses that also tend to delay reproduction. Food odors can reverse the effects of dietary restriction, showing that key mechanisms respond to information, not just resources. Such environmental cues can predict population trends, not just individual prospects for survival and reproduction. When population size is increasing, each offspring produced earlier makes a larger proportional contribution to the gene pool, but the reverse is true when population size is declining. We show mathematically that natural selection can favor facultative delay in reproduction when environmental cues predict a decrease in total population size, even if lifetime fecundity decreases with delay. We also show that increased reproduction from waiting for better conditions does not increase fitness (proportional representation) when the whole population benefits similarly. We conclude that the beneficial effects of stress on longevity (hormesis) in diverse taxa are a side-effect of delaying reproduction in response to environmental cues that population size is likely to decrease. The reversal by food odors of the effects of dietary restriction can be explained as a response to information that population size is less likely to decrease, reducing the chance that delaying reproduction will increase fitness.

  15. Aggregate and individual replication probability within an explicit model of the research process.

    PubMed

    Miller, Jeff; Schwarz, Wolf

    2011-09-01

    We study a model of the research process in which the true effect size, the replication jitter due to changes in experimental procedure, and the statistical error of effect size measurement are all normally distributed random variables. Within this model, we analyze the probability of successfully replicating an initial experimental result by obtaining either a statistically significant result in the same direction or any effect in that direction. We analyze both the probability of successfully replicating a particular experimental effect (i.e., the individual replication probability) and the average probability of successful replication across different studies within some research context (i.e., the aggregate replication probability), and we identify the conditions under which the latter can be approximated using the formulas of Killeen (2005a, 2007). We show how both of these probabilities depend on parameters of the research context that would rarely be known in practice. In addition, we show that the statistical uncertainty associated with the size of an initial observed effect would often prevent accurate estimation of the desired individual replication probability even if these research context parameters were known exactly. We conclude that accurate estimates of replication probability are generally unattainable.
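
    The structure of this model invites a short Monte Carlo check. The sketch below draws the true effect, replication jitter, and measurement error as independent normal variables and estimates the aggregate probability that a replication lands in the same direction as a positive initial result; all parameter values are illustrative, not the authors'.

        import numpy as np

        rng = np.random.default_rng(0)
        mu, sigma_delta, sigma_jitter, sigma_error = 0.3, 0.2, 0.1, 0.15
        n_sim = 100_000

        delta = rng.normal(mu, sigma_delta, n_sim)              # study-specific true effects
        obs1 = delta + rng.normal(0, sigma_error, n_sim)        # initial observed effect
        obs2 = (delta + rng.normal(0, sigma_jitter, n_sim)      # replication: jitter ...
                      + rng.normal(0, sigma_error, n_sim))      # ... plus measurement error

        p_same_sign = (obs2[obs1 > 0] > 0).mean()
        print(f"aggregate P(replication in same direction): {p_same_sign:.3f}")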

  16. Investigating Reliabilities of Intraindividual Variability Indicators

    ERIC Educational Resources Information Center

    Wang, Lijuan; Grimm, Kevin J.

    2012-01-01

    Reliabilities of the two most widely used intraindividual variability indicators, "ISD²" and "ISD", are derived analytically. Both are functions of the sizes of the first and second moments of true intraindividual variability, the size of the measurement error variance, and the number of assessments within a burst. For comparison,…

  17. Formulation and characterization of a compacted multiparticulate system for modified release of water-soluble drugs--Part II theophylline and cimetidine.

    PubMed

    Cantor, Stuart L; Hoag, Stephen W; Augsburger, Larry L

    2009-05-01

    The purpose was to investigate the effectiveness of an ethylcellulose (EC) bead matrix and different film-coating polymers in delaying drug release from compacted multiparticulate systems. Formulations containing theophylline or cimetidine granulated with Eudragit RS 30D were developed and beads were produced by extrusion-spheronization. Drug beads were coated using 15% wt/wt Surelease or Eudragit NE 30D and were evaluated for true density, particle size, and sphericity. Lipid-based placebo beads and drug beads were blended together and compacted on an instrumented Stokes B2 rotary tablet press. Although placebo beads were significantly less spherical, their true density of 1.21 g/cm³ and size of 855 μm were quite close to those of Surelease-coated drug beads. Curing improved the crushing strength and friability values for theophylline tablets containing Surelease-coated beads: 5.7 ± 1.0 kP and 0.26 ± 0.07%, respectively. Dissolution profiles showed that the EC matrix alone provided only 3 h of drug release. Although tablets containing Surelease-coated theophylline beads released drug fastest overall (t(44.2%) = 8 h), profiles showed that coating damage was still minimal. Size and density differences indicated a minimal segregation potential during tableting for blends containing Surelease-coated drug beads. Although modified release profiles >8 h were achievable in tablets for both drugs using either coating polymer, Surelease-coated theophylline beads released drug fastest overall. This is likely because of the increased solubility of theophylline and the intrinsic properties of the Surelease films. Furthermore, the lipid-based placebos served as effective cushioning agents by protecting the coating integrity of drug beads under a number of different conditions while tableting.

  18. Calibrating abundance indices with population size estimators of red back salamanders (Plethodon cinereus) in a New England forest

    PubMed Central

    Ellison, Aaron M.; Jackson, Scott

    2015-01-01

    Herpetologists and conservation biologists frequently use convenient and cost-effective, but less accurate, abundance indices (e.g., number of individuals collected under artificial cover boards or during natural objects surveys) in lieu of more accurate, but costly and destructive, population size estimators to detect and monitor size, state, and trends of amphibian populations. Although there are advantages and disadvantages to each approach, reliable use of abundance indices requires that they be calibrated with accurate population estimators. Such calibrations, however, are rare. The red back salamander, Plethodon cinereus, is an ecologically useful indicator species of forest dynamics, and accurate calibration of indices of salamander abundance could increase the reliability of abundance indices used in monitoring programs. We calibrated abundance indices derived from surveys of P. cinereus under artificial cover boards or natural objects with a more accurate estimator of their population size in a New England forest. Average densities/m² and capture probabilities of P. cinereus under natural objects or cover boards in independent, replicate sites at the Harvard Forest (Petersham, Massachusetts, USA) were similar in stands dominated by Tsuga canadensis (eastern hemlock) and deciduous hardwood species (predominantly Quercus rubra [red oak] and Acer rubrum [red maple]). The abundance index based on salamanders surveyed under natural objects was significantly associated with density estimates of P. cinereus derived from depletion (removal) surveys, but underestimated true density by 50%. In contrast, the abundance index based on cover-board surveys overestimated true density by a factor of 8 and the association between the cover-board index and the density estimates was not statistically significant. We conclude that when calibrated and used appropriately, some abundance indices may provide cost-effective and reliable measures of P. cinereus abundance that could be used in conservation assessments and long-term monitoring at Harvard Forest and other northeastern USA forests. PMID:26020008
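
    The calibration step amounts to regressing the accurate (removal-based) density estimates on the index readings so that future index values can be converted to densities. A minimal sketch with placeholder numbers, not the Harvard Forest data:

        import numpy as np

        index = np.array([2.0, 3.5, 1.2, 4.1, 2.8, 3.0])    # salamanders per index survey
        density = np.array([4.2, 6.8, 2.9, 8.5, 5.1, 6.2])  # removal estimate per m^2

        slope, intercept = np.polyfit(index, density, 1)    # least-squares calibration line
        print(f"calibrated density = {intercept:.2f} + {slope:.2f} * index")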

  19. The Effect of the Ill-posed Problem on Quantitative Error Assessment in Digital Image Correlation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lehoucq, R. B.; Reu, P. L.; Turner, D. Z.

    Here, this work explores the effect of the ill-posed problem on uncertainty quantification for motion estimation using digital image correlation (DIC) (Sutton et al. 2009). We develop a correction factor for standard uncertainty estimates based on the cosine of the angle between the true motion and the image gradients, in an integral sense over a subregion of the image. This correction factor accounts for variability in the DIC solution previously unaccounted for when considering only image noise, interpolation bias, contrast, and the software settings such as subset size and spacing.

  20. The Effect of the Ill-posed Problem on Quantitative Error Assessment in Digital Image Correlation

    DOE PAGES

    Lehoucq, R. B.; Reu, P. L.; Turner, D. Z.

    2017-11-27

    Here, this work explores the effect of the ill-posed problem on uncertainty quantification for motion estimation using digital image correlation (DIC) (Sutton et al. 2009). We develop a correction factor for standard uncertainty estimates based on the cosine of the angle between the true motion and the image gradients, in an integral sense over a subregion of the image. This correction factor accounts for variability in the DIC solution previously unaccounted for when considering only image noise, interpolation bias, contrast, and the software settings such as subset size and spacing.
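
    A sketch of the geometric idea behind such a correction factor, assuming a squared-cosine average between the image gradients and the motion direction over a subregion (the paper's integral weighting may differ in detail):

        import numpy as np

        def cosine_correction(grad_x, grad_y, motion):
            """Mean squared cosine between image gradients and a motion direction."""
            u = np.asarray(motion, dtype=float)
            u = u / np.linalg.norm(u)
            g = np.stack([grad_x.ravel(), grad_y.ravel()], axis=1)
            norms = np.linalg.norm(g, axis=1)
            cos = (g[norms > 0] @ u) / norms[norms > 0]
            # near 0: motion nearly orthogonal to the gradients, so the DIC solution
            # is poorly constrained and standard uncertainty estimates need inflating
            return np.mean(cos ** 2)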

  1. A material-sparing method for simultaneous determination of true density and powder compaction properties--aspartame as an example.

    PubMed

    Sun, Changquan Calvin

    2006-12-01

    True density results for a batch of commercial aspartame are highly variable when helium pycnometry is used. Alternatively, the true density of the problematic aspartame lot was obtained by fitting tablet density versus pressure data. The fitted true density was in excellent agreement with that predicted from single crystal structure. Tablet porosity was calculated from the true density and tablet apparent density. After making the necessary measurements for calculating tablet apparent density, the breaking force of each intact tablet was measured and tensile strength was calculated. With the knowledge of compaction pressure, tablet porosity and tensile strength, powder compaction properties were characterized using tabletability (tensile strength versus pressure), compactibility (tensile strength versus porosity), compressibility (porosity versus pressure) and Heckel analysis. Thus, a wealth of additional information on the compaction properties of the powder was obtained through little added work. A total of approximately 4 g of powder was used in this study. Depending on the size of tablet tooling, tablet thickness and true density, 2-10 g of powder would be sufficient for characterizing most pharmaceutical powders.
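
    The density-from-compaction idea can be sketched as a nonlinear fit whose asymptote estimates true density; the saturating-exponential form and the data points below are illustrative assumptions, not the paper's exact model or measurements.

        import numpy as np
        from scipy.optimize import curve_fit

        def apparent_density(P, rho_true, a, k):
            # tablet density approaches rho_true as pressure eliminates porosity
            return rho_true - a * np.exp(-k * P)

        pressure = np.array([25, 50, 100, 150, 200, 300, 400], dtype=float)  # MPa
        density = np.array([1.10, 1.18, 1.25, 1.28, 1.30, 1.32, 1.33])       # g/cc

        (rho_true, a, k), _ = curve_fit(apparent_density, pressure, density,
                                        p0=(1.35, 0.3, 0.01))
        porosity = 1 - density / rho_true   # tablet porosity from the fitted true density
        print(f"fitted true density: {rho_true:.3f} g/cc")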

  2. Observations on the brittle to ductile transition temperatures of B2 nickel aluminides with and without zirconium

    NASA Technical Reports Server (NTRS)

    Raj, S. V.; Noebe, R. D.; Bowman, R.

    1989-01-01

    The effect of a zirconium addition (0.05 at. pct) to a stoichiometric NiAl alloy on the brittle-to-ductile transition temperature (BDTT) of this alloy was investigated. Constant velocity tensile tests were conducted to fracture between 300 and 1100 K under initial strain rate 0.00014/sec, and the true stress and true strain values were determined from plots of load vs time after subtracting the elastic strain. The inelastic strain was measured under a traveling microscope. Microstructural characterization of as-extruded and fractured specimens was carried out by SEM and TEM. It was found that, while the addition of 0.05 at. pct Zr strengthened the NiAl alloy, it increased its BDTT; this shift in the BDTT could not be attributed either to variations in grain size or to impurity contents. Little or no room-temperature ductility was observed for either alloy.

  3. Light irradiation induces fragmentation of the plasmodium, a novel photomorphogenesis in the true slime mold Physarum polycephalum: action spectra and evidence for involvement of the phytochrome.

    PubMed

    Kakiuchi, Y; Takahashi, T; Murakami, A; Ueda, T

    2001-03-01

    A new photomorphogenesis was found in the plasmodium of the true slime mold Physarum polycephalum: the plasmodium broke temporarily into equal-sized spherical pieces, each containing about eight nuclei, about 5 h after irradiation with light. Action spectroscopic study showed that UVA, blue and far-red lights were effective, while red light inhibited the far-red-induced fragmentation. Difference absorption spectra of both the living plasmodium and the plasmodial homogenate after alternate irradiation with far-red and red light gave two extremes at 750 and 680 nm, which agreed with those for the induction and inhibition of the fragmentation, respectively. A kinetic model similar to that of phytochrome action explained quantitatively the fluence rate-response curves of the fragmentation. Our results indicate that one of the photoreceptors for the plasmodial fragmentation is a phytochrome.

  4. Seasonal Influenza Forecasting in Real Time Using the Incidence Decay With Exponential Adjustment Model

    PubMed Central

    Nasserie, Tahmina; Tuite, Ashleigh R; Whitmore, Lindsay; Hatchette, Todd; Drews, Steven J; Peci, Adriana; Kwong, Jeffrey C; Friedman, Dara; Garber, Gary; Gubbay, Jonathan

    2017-01-01

    Abstract Background Seasonal influenza epidemics occur frequently. Rapid characterization of seasonal dynamics and forecasting of epidemic peaks and final sizes could help support real-time decision-making related to vaccination and other control measures. Real-time forecasting remains challenging. Methods We used the previously described “incidence decay with exponential adjustment” (IDEA) model, a 2-parameter phenomenological model, to evaluate the characteristics of the 2015–2016 influenza season in 4 Canadian jurisdictions: the Provinces of Alberta, Nova Scotia and Ontario, and the City of Ottawa. Model fits were updated weekly with receipt of incident virologically confirmed case counts. Best-fit models were used to project seasonal influenza peaks and epidemic final sizes. Results The 2015–2016 influenza season was mild and late-peaking. Parameter estimates generated through fitting were consistent in the 2 largest jurisdictions (Ontario and Alberta) and with pooled data including Nova Scotia counts (R0 approximately 1.4 for all fits). Lower R0 estimates were generated in Nova Scotia and Ottawa. Final size projections that made use of complete time series were accurate to within 6% of true final sizes, but final size projections were less accurate when based on pre-peak data. Projections of epidemic peaks stabilized before the true epidemic peak, but these were persistently early (~2 weeks) relative to the true peak. Conclusions A simple, 2-parameter influenza model provided reasonably accurate real-time projections of influenza seasonal dynamics in an atypically late, mild influenza season. Challenges are similar to those seen with more complex forecasting methodologies. Future work includes identification of seasonal characteristics associated with variability in model performance. PMID:29497629
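
    For reference, the IDEA model describes incidence at serial interval t as I(t) = (R0 / (1 + d)^t)^t, with basic reproduction number R0 and discount parameter d. A minimal fitting sketch on synthetic counts, not the surveillance data:

        import numpy as np
        from scipy.optimize import curve_fit

        def idea(t, R0, d):
            return (R0 / (1.0 + d) ** t) ** t

        rng = np.random.default_rng(1)
        t = np.arange(1, 15, dtype=float)                       # serial intervals
        cases = idea(t, 1.4, 0.02) * np.exp(rng.normal(0, 0.05, t.size))

        (R0_hat, d_hat), _ = curve_fit(idea, t, cases, p0=(1.5, 0.01))
        print(f"R0 = {R0_hat:.2f}, d = {d_hat:.3f}")            # recovers ~1.4 and ~0.02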

  5. A Monte-Carlo simulation analysis for evaluating the severity distribution functions (SDFs) calibration methodology and determining the minimum sample-size requirements.

    PubMed

    Shirazi, Mohammadali; Reddy Geedipally, Srinivas; Lord, Dominique

    2017-01-01

    Severity distribution functions (SDFs) are used in highway safety to estimate the severity of crashes and conduct different types of safety evaluations and analyses. Developing a new SDF is a difficult task and demands significant time and resources. To simplify the process, the Highway Safety Manual (HSM) has started to document SDF models for different types of facilities. As such, SDF models have recently been introduced for freeways and ramps in an HSM addendum. However, since these functions or models are fitted and validated using data from a small number of selected states, they need to be calibrated to local conditions when applied to a new jurisdiction. The HSM provides a methodology to calibrate the models through a scalar calibration factor. However, the proposed methodology to calibrate SDFs was never validated through research. Furthermore, there are no concrete guidelines for selecting a reliable sample size. Using extensive simulation, this paper documents an analysis that examined the bias between the 'true' and 'estimated' calibration factors. The results indicated that as the value of the true calibration factor deviates further from '1', more bias is observed between the 'true' and 'estimated' calibration factors. In addition, simulation studies were performed to determine the calibration sample size for various conditions. It was found that, as the average coefficient of variation (CV) of the 'KAB' and 'C' crashes increases, the analyst needs to collect a larger sample size to calibrate SDF models. Taking this observation into account, sample-size guidelines are proposed based on the average CV of the crash severities used in the calibration process. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Clarification of effects of DDE on shell thickness, size, mass, and shape of avian eggs

    USGS Publications Warehouse

    Blus, Lawrence J.; Wiemeyer, Stanley N.; Bunck, Christine M.

    1997-01-01

    Moriarty et al. (1986) used field data to conclude that DDE decreased the size or altered the shape of avian eggs; therefore, they postulated that decreased eggshell thickness was a secondary effect because, as a general rule, thickness and egg size are positively correlated. To further test this relationship, the present authors analyzed data from eggs of captive American kestrels (Falco sparverius) given DDT- or DDE-contaminated or clean diets and from wild brown pelicans (Pelecanus occidentalis) collected both before (pre-1946) and after (post-1945) DDT was introduced into the environment. Pertinent data from other field and laboratory studies were also summarized. DDE was not related to and did not affect size, mass, or shape of eggs of the brown pelican or American kestrel; but the relationship of DDE to eggshell thinning held true. Size and shape of eggs of brown pelicans from the post-1945 era and those of kestrels on DDT-contaminated diets showed some significant, but inconsistent, changes compared with brown pelican data from the pre-1946 era or kestrels on clean diets. In contrast, nearly all samples of eggs of experimental kestrels given DDT-contaminated diets and those of wild brown pelicans from the post-1945 era exhibited significant eggshell thinning. Pertinent experimental studies with other sensitive avian species indicated no effects of DDE on the size or shape of eggs, even though the high dietary concentrations caused extreme eggshell thinning and mortality of some adult mallards (Anas platyrhynchos) in one study. These findings essentially controvert the argument that decreased eggshell thickness is a secondary effect resulting from the primary effect of DDE-induced changes in the size or shape of eggs.

  7. Finite element analysis of true and pseudo surface acoustic waves in one-dimensional phononic crystals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Graczykowski, B., E-mail: bartlomiej.graczykowski@icn.cat; Alzina, F.; Gomis-Bresco, J.

    In this paper, we report a theoretical investigation of surface acoustic waves propagating in a one-dimensional phononic crystal. Using finite element method eigenfrequency and frequency response studies, we develop two model geometries suitable to distinguish true and pseudo (or leaky) surface acoustic waves and to determine their propagation through finite-size phononic crystals, respectively. The novelty of the first model comes from the application of a surface-like criterion and, additionally, a functional damping domain. Exemplary calculated band diagrams show sorted branches of true and pseudo surface acoustic waves and their quantified surface confinement. The second model gives a complementary study of the transmission, reflection, and surface-to-bulk losses of Rayleigh surface waves in the case of a phononic crystal with a finite number of periods. Here, we demonstrate that a non-zero transmission within non-radiative band gaps can be carried via leaky modes originating from the coupling of local resonances with propagating waves in the substrate. Finally, we show that the transmission, reflection, and surface-to-bulk losses can be effectively optimised by tuning the geometrical properties of a stripe.

  8. Estimating Effects of Species Interactions on Populations of Endangered Species.

    PubMed

    Roth, Tobias; Bühler, Christoph; Amrhein, Valentin

    2016-04-01

    Global change causes community composition to change considerably through time, with ever-new combinations of interacting species. To study the consequences of newly established species interactions, one available source of data could be observational surveys from biodiversity monitoring. However, approaches using observational data would need to account for niche differences between species and for imperfect detection of individuals. To estimate population sizes of interacting species, we extended N-mixture models that were developed to estimate true population sizes in single species. Simulations revealed that our model is able to disentangle direct effects of dominant on subordinate species from indirect effects of dominant species on detection probability of subordinate species. For illustration, we applied our model to data from a Swiss amphibian monitoring program and showed that sizes of expanding water frog populations were negatively related to population sizes of endangered yellow-bellied toads and common midwife toads and partly of natterjack toads. Unlike other studies that analyzed presence and absence of species, our model suggests that the spread of water frogs in Central Europe is one of the reasons for the decline of endangered toad species. Thus, studying population impacts of dominant species on population sizes of endangered species using data from biodiversity monitoring programs should help to inform conservation policy and to decide whether competing species should be subject to population management.

  9. Population and hierarchy of active species in gold iron oxide catalysts for carbon monoxide oxidation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    He, Qian; Freakley, Simon J.; Edwards, Jennifer K.

    The identity of the active species in supported gold catalysts for low temperature carbon monoxide oxidation remains an unsettled debate. With large amounts of experimental evidence supporting theories of either gold nanoparticles or sub-nm gold species being active, it was recently proposed that a size-dependent activity hierarchy should exist. Here we study the diverging catalytic behaviors after heat treatment of Au/FeOx materials prepared via co-precipitation and deposition precipitation methods. After ruling out any support effects, the gold particle size distributions in the different catalysts are quantitatively studied using aberration corrected scanning transmission electron microscopy (STEM). A counting protocol is developed to reveal the true particle size distribution from HAADF-STEM images, which reliably includes all the gold species present. As a result, correlating the populations of the various gold species present with the catalysis results demonstrates that a size-dependent activity hierarchy must exist in the Au/FeOx catalyst.

  10. Population and hierarchy of active species in gold iron oxide catalysts for carbon monoxide oxidation

    DOE PAGES

    He, Qian; Freakley, Simon J.; Edwards, Jennifer K.; ...

    2016-09-27

    The identity of the active species in supported gold catalysts for low temperature carbon monoxide oxidation remains an unsettled debate. With large amounts of experimental evidence supporting theories of either gold nanoparticles or sub-nm gold species being active, it was recently proposed that a size-dependent activity hierarchy should exist. Here we study the diverging catalytic behaviors after heat treatment of Au/FeOx materials prepared via co-precipitation and deposition precipitation methods. After ruling out any support effects, the gold particle size distributions in the different catalysts are quantitatively studied using aberration corrected scanning transmission electron microscopy (STEM). A counting protocol is developed to reveal the true particle size distribution from HAADF-STEM images, which reliably includes all the gold species present. As a result, correlating the populations of the various gold species present with the catalysis results demonstrates that a size-dependent activity hierarchy must exist in the Au/FeOx catalyst.

  11. Unbiased estimates of galaxy scaling relations from photometric redshift surveys

    NASA Astrophysics Data System (ADS)

    Rossi, Graziano; Sheth, Ravi K.

    2008-06-01

    Many physical properties of galaxies correlate with one another, and these correlations are often used to constrain galaxy formation models. Such correlations include the colour-magnitude relation, the luminosity-size relation, the fundamental plane, etc. However, the transformation from observable (e.g. angular size, apparent brightness) to physical quantity (physical size, luminosity) is often distance dependent. Noise in the distance estimate will lead to biased estimates of these correlations, thus compromising the ability of photometric redshift surveys to constrain galaxy formation models. We describe two methods which can remove this bias. One is a generalization of the Vmax method, and the other is a maximum-likelihood approach. We illustrate their effectiveness by studying the size-luminosity relation in a mock catalogue, although both methods can be applied to other scaling relations as well. We show that if one simply uses photometric redshifts one obtains a biased relation; our methods correct for this bias and recover the true relation.
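
    The classic 1/Vmax weighting that the first method generalizes can be sketched as follows, assuming Euclidean geometry and a hypothetical flux limit for brevity:

        import numpy as np

        def vmax_weights(luminosity, flux_limit):
            """1/Vmax weights for a flux-limited sample (Euclidean approximation)."""
            # largest distance at which each object would still exceed the flux limit
            d_max = np.sqrt(luminosity / (4.0 * np.pi * flux_limit))
            return 1.0 / ((4.0 / 3.0) * np.pi * d_max ** 3)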

  12. Influence of jet milling and particle size on the composition, physicochemical and mechanical properties of barley and rye flours.

    PubMed

    Drakos, Antonios; Kyriakakis, Georgios; Evageliou, Vasiliki; Protonotariou, Styliani; Mandala, Ioanna; Ritzoulis, Christos

    2017-01-15

    Finer barley and rye flours were produced by jet milling at two feed rates. The effect of reduced particle size on the composition and several physicochemical and mechanical properties of the flours was evaluated. Moisture content decreased as the size of the granules decreased. Differences in ash and protein contents were observed. Jet milling increased the amount of damaged starch in both rye and barley flours. True density increased with decreasing particle size, whereas porosity increased and bulk density decreased. The solvent retention capacity profile was also affected by jet milling. Barley was richer in phenolics and had greater antioxidant activity than rye. Regarding colour, both rye and barley flours became brighter when subjected to jet milling, whereas their yellowness was not altered significantly. The minimum gelation concentration for all flours was 16% w/v. Barley flour gels were stronger, firmer and more elastic than the rye ones. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. Non-Disclosing Students with Disabilities or Learning Challenges: Characteristics and Size of a Hidden Population

    ERIC Educational Resources Information Center

    Grimes, Susan; Scevak, Jill; Southgate, Erica; Buchanan, Rachel

    2017-01-01

    Internationally, university students with disabilities (SWD) are recognised as being under-represented in higher education. They face significant problems accessing appropriate accommodations for their disability. Academic outcomes for this group are lower in terms of achievement and graduation rates. The true size of the SWD group at university…

  14. Estimating the probability that the sample mean is within a desired fraction of the standard deviation of the true mean.

    PubMed

    Schillaci, Michael A; Schillaci, Mario E

    2009-02-01

    The use of small sample sizes in human and primate evolutionary research is commonplace. Estimating how well small samples represent the underlying population, however, is not commonplace. Because the accuracy of determinations of taxonomy, phylogeny, and evolutionary process is dependent upon how well the study sample represents the population of interest, characterizing the uncertainty, or potential error, associated with analyses of small sample sizes is essential. We present a method for estimating the probability that the sample mean is within a desired fraction of the standard deviation of the true mean using small (n < 10) or very small (n ≤ 5) sample sizes. This method can be used by researchers to determine post hoc the probability that their sample is a meaningful approximation of the population parameter. We tested the method using a large craniometric data set commonly used by researchers in the field. Given our results, we suggest that sample estimates of the population mean can be reasonable and meaningful even when based on small, and perhaps even very small, sample sizes.
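
    Under normality, the result follows from the sample mean having standard deviation σ/√n: P(|x̄ − μ| ≤ kσ) = 2Φ(k√n) − 1. A one-line check (the paper's small-sample treatment may add refinements):

        from math import sqrt
        from scipy.stats import norm

        def p_within(k, n):
            """P(sample mean within k population SDs of the true mean), normal case."""
            return 2 * norm.cdf(k * sqrt(n)) - 1

        print(p_within(0.5, 5))  # ~0.74 for n = 5 and a half-SD tolerance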

  15. Growth hormone and bone health.

    PubMed

    Bex, Marie; Bouillon, Roger

    2003-01-01

    Growth hormone (GH) and insulin-like growth factor-I have major effects on growth plate chondrocytes and all bone cells. Untreated childhood-onset GH deficiency (GHD) markedly impairs linear growth as well as three-dimensional bone size. Adult peak bone mass is therefore about 50% that of adults with normal height. This is mainly an effect on bone volume, whereas true bone mineral density (BMD; g/cm³) is virtually normal, as demonstrated in a large cohort of untreated Russian adults with childhood-onset GHD. The prevalence of fractures in these untreated childhood-onset GHD adults was, however, markedly and significantly increased in comparison with normal Russian adults. This clearly indicates that bone mass and bone size matter more than true bone density. Adequate treatment with GH can largely correct bone size and in several studies also bone mass, but it usually requires more than 5 years of continuous treatment. Adult-onset GHD decreases bone turnover and results in a mild deficit, generally between -0.5 and -1.0 z-score, in bone mineral content and BMD of the lumbar spine, radius and femoral neck. Cross-sectional surveys and the KIMS data suggest an increased incidence of fractures. GH replacement therapy increases bone turnover. The three controlled studies with follow-up periods of 18 and 24 months demonstrated a modest increase in BMD of the lumbar spine and femoral neck in male adults with adult-onset GHD, whereas no significant changes in BMD were observed in women. GHD, whether childhood- or adult-onset, impairs bone mass and strength. Appropriate substitution therapy can largely correct these deficiencies if given over a prolonged period. GH therapy for other bone disorders not associated with primary GHD needs further study but may well be beneficial because of its positive effects on the bone remodelling cycle. Copyright 2003 S. Karger AG, Basel

  16. Monte Carlo simulation of TrueBeam flattening-filter-free beams using Varian phase-space files: Comparison with experimental data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Belosi, Maria F.; Fogliata, Antonella, E-mail: antonella.fogliata-cozzi@eoc.ch, E-mail: afc@iosi.ch; Cozzi, Luca

    2014-05-15

    Purpose: Phase-space files for Monte Carlo simulation of the Varian TrueBeam beams have been made available by Varian. The aim of this study is to evaluate the accuracy of the distributed phase-space files for flattening filter free (FFF) beams, against experimental measurements from ten TrueBeam Linacs. Methods: The phase-space files have been used as input in PRIMO, a recently released Monte Carlo program based on the PENELOPE code. Simulations of 6 and 10 MV FFF were computed in a virtual water phantom for field sizes 3 × 3, 6 × 6, and 10 × 10 cm² using 1 × 1 × 1 mm³ voxels and for 20 × 20 and 40 × 40 cm² with 2 × 2 × 2 mm³ voxels. The particles contained in the initial phase-space files were transported downstream to a plane just above the phantom surface, where a subsequent phase-space file was tallied. Particles were transported downstream from this second phase-space file to the water phantom. Experimental data consisted of depth doses and profiles at five different depths acquired at SSD = 100 cm (seven datasets) and SSD = 90 cm (three datasets). Simulations and experimental data were compared in terms of dose difference. Gamma analysis was also performed using 1%, 1 mm and 2%, 2 mm criteria of dose-difference and distance-to-agreement, respectively. Additionally, the parameters characterizing the dose profiles of unflattened beams were evaluated for both measurements and simulations. Results: Analysis of depth dose curves showed that dose differences increased with increasing field size and depth; this effect might be partly due to an underestimation of the primary beam energy used to compute the phase-space files. Average dose differences reached 1% for the largest field size. Lateral profiles presented dose differences well within 1% for fields up to 20 × 20 cm², while the discrepancy increased toward 2% in the 40 × 40 cm² cases. Gamma analysis resulted in an agreement of 100% when a 2%, 2 mm criterion was used, with the only exception of the 40 × 40 cm² field (∼95% agreement). With the more stringent criterion of 1%, 1 mm, the agreement reduced to almost 95% for field sizes up to 10 × 10 cm², and was worse for larger fields. Unflatness and slope FFF-specific parameters are in line with the possible energy underestimation in the simulated results relative to experimental data. Conclusions: The agreement between Monte Carlo simulations and experimental data proved that the evaluated Varian phase-space files for FFF beams from TrueBeam can be used as radiation sources for accurate Monte Carlo dose estimation, especially for field sizes up to 10 × 10 cm², that is, the range of field sizes mostly used in combination with the FFF, high dose rate beams.

  17. The Density of Mid-sized Kuiper Belt Objects from ALMA Thermal Observations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Michael E.; Butler, Bryan J.

    The densities of mid-sized Kuiper Belt objects (KBOs) are a key constraint in understanding the assembly of objects in the outer solar system. These objects are critical for understanding the currently unexplained transition from the smallest KBOs, with densities lower than that of water, to the largest objects, with significant rock content. Mapping this transition is made difficult by the uncertainties in the diameters of these objects, which map into an even larger uncertainty in volume and thus density. The substantial collecting area of the Atacama Large Millimeter Array allows significantly more precise measurements of thermal emission from outer solar system objects and could potentially greatly improve the density measurements. Here we use new thermal observations of four objects with satellites to explore the improvements possible with millimeter data. We find that effects due to effective emissivity at millimeter wavelengths make it difficult to use the millimeter data directly to find diameters and thus volumes for these bodies. In addition, we find that when including the effects of model uncertainty, the true uncertainties on the sizes of outer solar system objects measured with radiometry are likely larger than those previously published. Substantial improvement in object sizes will likely require precise occultation measurements.

  18. Effect of set size, age, and mode of stimulus presentation on information-processing speed.

    NASA Technical Reports Server (NTRS)

    Norton, J. C.

    1972-01-01

    First, second, and third grade pupils served as subjects in an experiment designed to show the effect of age, mode of stimulus presentation, and information value on recognition time. Stimuli were presented in picture and printed word form and in groups of 2, 4, and 8. The results of the study indicate that first graders are slower than second and third graders, who are nearly equal. There is a gross shift in reaction time as a function of mode of stimulus presentation with increasing age. The first graders take much longer to identify words than pictures, while the reverse is true of the older groups. With regard to set size, a slope appears in the pictures condition in the older groups, while for first graders, a large slope occurs in the words condition and only a much smaller one for pictures.

  19. Effects of Hot Rolling on Low-Cycle Fatigue Properties of Zn-22 wt.% Al Alloy at Room Temperature

    NASA Astrophysics Data System (ADS)

    Dong, X. H.; Cao, Q. D.; Ma, S. J.; Han, S. H.; Tang, W.; Zhang, X. P.

    2016-09-01

    The effects of the reduction ratio (RR) on the low-cycle fatigue (LCF) properties of the Zn-22 wt.% Al (Zn-22Al) alloy were investigated. Various grain sizes from 0.68 to 1.13 μm were obtained by controlled RRs. Tensile and LCF tests were carried out at room temperature. Superplasticity and cyclic softening were observed. Strength and ductility of the rolled Zn-22Al alloy increased with the RR, owing to the decrease in its grain size. The RR did not affect the cyclic softening behavior of the alloy. The fatigue life of the alloy decreased with increasing strain amplitude, while the fatigue life first decreased and then increased with increasing RR. The longest fatigue life was observed for the alloy rolled at a RR of 60%. A bilinear Coffin-Manson relationship was observed to hold true for this alloy.

  20. Effects of grain size distribution on the packing fraction and shear strength of frictionless disk packings.

    PubMed

    Estrada, Nicolas

    2016-12-01

    Using discrete element methods, the effects of the grain size distribution on the density and the shear strength of frictionless disk packings are analyzed. Specifically, two recent findings on the relationship between the system's grain size distribution and its rheology are revisited, and their validity is tested across a broader range of distributions than what has been used in previous studies. First, the effects of the distribution on the solid fraction are explored. It is found that the distribution that produces the densest packing is not the uniform distribution by volume fractions, as suggested in a recent publication. In fact, the maximal packing fraction is obtained when the grading curve follows a power law with an exponent close to 0.5, as suggested by Fuller and Thompson in 1907 and 1919 [Trans. Am. Soc. Civ. Eng. 59, 1 (1907) and A Treatise on Concrete, Plain and Reinforced (1919), respectively] while studying mixtures of cement and stone aggregates. Second, the effects of the distribution on the shear strength are analyzed. It is confirmed that these systems exhibit a small shear strength, even when composed of frictionless particles, as has been shown recently in several works. It is also found that this shear strength is independent of the grain size distribution. This counterintuitive result has previously been shown for the uniform distribution by volume fractions. In this paper, it is shown that this observation holds true for different shapes of the grain size distribution.
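
    The Fuller-Thompson grading curve gives the cumulative volume fraction passing size d as P(d) = (d/d_max)^q, with q near 0.5 yielding the densest packing here. Sampling particle sizes from that curve by inverse transform is a one-liner:

        import numpy as np

        rng = np.random.default_rng(0)
        d_max, q, n = 1.0, 0.5, 10_000
        u = rng.uniform(size=n)
        sizes = d_max * u ** (1.0 / q)   # inverts P(d) = (d / d_max)**q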

  1. Quantifying the potential impact of measurement error in an investigation of autism spectrum disorder (ASD).

    PubMed

    Heavner, Karyn; Newschaffer, Craig; Hertz-Picciotto, Irva; Bennett, Deborah; Burstyn, Igor

    2014-05-01

    The Early Autism Risk Longitudinal Investigation (EARLI), an ongoing study of a risk-enriched pregnancy cohort, examines genetic and environmental risk factors for autism spectrum disorders (ASDs). We simulated the potential effects of both measurement error (ME) in exposures and misclassification of ASD-related phenotype (assessed as Autism Observation Scale for Infants (AOSI) scores) on measures of association generated under this study design. We investigated the impact on the power to detect true associations with exposure and the false positive rate (FPR) for a non-causal correlate of exposure (X2, r=0.7) for continuous AOSI score (linear model) versus dichotomised AOSI (logistic regression) when the sample size (n), degree of ME in exposure, and strength of the expected (true) OR (eOR) between exposure and AOSI varied. Exposure was a continuous variable in all linear models and dichotomised at one SD above the mean in logistic models. Simulations reveal complex patterns and suggest that: (1) there was attenuation of associations that increased with eOR and ME; (2) the FPR was considerable under many scenarios; and (3) the FPR has a complex dependence on the eOR, ME and model choice, but was greater for logistic models. The findings will stimulate work examining cost-effective strategies to reduce the impact of ME in realistic sample sizes and affirm the importance for EARLI of investment in biological samples that help precisely quantify a wide range of environmental exposures.
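
    The core simulation logic, in which classical error in the true exposure attenuates its coefficient while a correlated non-causal variable picks up spurious signal, can be sketched with illustrative parameter values:

        import numpy as np

        rng = np.random.default_rng(42)
        n, beta, r, me_sd = 500, 0.5, 0.7, 1.0
        x = rng.normal(size=n)                                  # true exposure
        x2 = r * x + np.sqrt(1 - r ** 2) * rng.normal(size=n)   # non-causal correlate
        y = beta * x + rng.normal(size=n)                       # continuous outcome
        x_obs = x + rng.normal(scale=me_sd, size=n)             # error-prone exposure

        X = np.column_stack([np.ones(n), x_obs, x2])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        print(coef)  # coefficient on x_obs is attenuated; x2 absorbs part of the signal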

  2. PREDICTION OF SOLAR FLARE SIZE AND TIME-TO-FLARE USING SUPPORT VECTOR MACHINE REGRESSION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boucheron, Laura E.; Al-Ghraibah, Amani; McAteer, R. T. James

    We study the prediction of solar flare size and time-to-flare using 38 features describing the magnetic complexity of the photospheric magnetic field. This work uses support vector regression to formulate a mapping from the 38-dimensional feature space to a continuous-valued label vector representing flare size or time-to-flare. When we consider flaring regions only, we find an average error in estimating flare size of approximately half a geostationary operational environmental satellite (GOES) class. When we additionally consider non-flaring regions, we find an increased average error of approximately three-fourths a GOES class. We also consider thresholding the regressed flare size for the experiment containing both flaring and non-flaring regions and find a true positive rate of 0.69 and a true negative rate of 0.86 for flare prediction. The results for both of these size regression experiments are consistent across a wide range of predictive time windows, indicating that the magnetic complexity features may be persistent in appearance long before flare activity. This is supported by our larger error rates of some 40 hr in the time-to-flare regression problem. The 38 magnetic complexity features considered here appear to have discriminative potential for flare size, but their persistence in time makes them less discriminative for the time-to-flare problem.
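
    A minimal sketch of the regress-then-threshold setup using scikit-learn's SVR; the feature matrix below is random placeholder data standing in for the 38 magnetic-complexity features:

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVR

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 38))                        # 38 features per region
        y = 0.5 * X[:, 0] + rng.normal(scale=0.3, size=200)   # stand-in flare-size label

        model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1.0))
        model.fit(X, y)
        flare_flag = model.predict(X) > 0.0                   # threshold the regressed size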

  3. The effectiveness of foot reflexology in inducing ovulation: a sham-controlled randomized trial.

    PubMed

    Holt, Jane; Lord, Jonathan; Acharya, Umesh; White, Adrian; O'Neill, Nyree; Shaw, Steve; Barton, Andy

    2009-06-01

    To determine whether foot reflexology, a complementary therapy, has an effect greater than sham reflexology on induction of ovulation. Sham-controlled randomized trial with patients and statistician blinded. Infertility clinic in Plymouth, United Kingdom. Forty-eight women attending the clinic with anovulation. Women were randomized to receive eight sessions of either genuine foot reflexology or sham reflexology with gentle massage over 10 weeks. The primary outcome was ovulation detected by serum progesterone level of >30 nmol/L during the study period. Twenty-six patients were randomized to genuine reflexology and 22 to sham (one randomized patient was withdrawn). Patients remained blinded throughout the trial. The rate of ovulation during true reflexology was 11 out of 26 (42%), and during sham reflexology it was 10 out of 22 (46%). Pregnancy rates were 4 out of 26 in the true group and 2 out of 22 in the control group. Because of recruitment difficulties, the required sample size of 104 women was not achieved. Patient blinding of reflexology studies is feasible. Although this study was too small to reach a definitive conclusion on the specific effect of foot reflexology, the results suggest that any effect on ovulation would not be clinically relevant. Sham reflexology may have a beneficial general effect, which this study was not designed to detect.

  4. How to design the cost-effectiveness appraisal process of new healthcare technologies to maximise population health: A conceptual framework.

    PubMed

    Johannesen, Kasper M; Claxton, Karl; Sculpher, Mark J; Wailoo, Allan J

    2018-02-01

    This paper presents a conceptual framework to analyse the design of the cost-effectiveness appraisal process of new healthcare technologies. The framework characterises the appraisal processes as a diagnostic test aimed at identifying cost-effective (true positive) and non-cost-effective (true negative) technologies. Using the framework, factors that influence the value of operating an appraisal process, in terms of net gain to population health, are identified. The framework is used to gain insight into current policy questions including (a) how rigorous the process should be, (b) who should have the burden of proof, and (c) how optimal design changes when allowing for appeals, price reductions, resubmissions, and re-evaluations. The paper demonstrates that there is no one optimal appraisal process and the process should be adapted over time and to the specific technology under assessment. Optimal design depends on country-specific features of (future) technologies, for example, effect, price, and size of the patient population, which might explain the difference in appraisal processes across countries. It is shown that burden of proof should be placed on the producers and that the impact of price reductions and patient access schemes on the producer's price setting should be considered when designing the appraisal process. Copyright © 2017 John Wiley & Sons, Ltd.
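
    The diagnostic-test framing reduces to simple expected-value arithmetic; every number below is a made-up assumption intended only to show the trade-off among sensitivity, specificity, and process cost:

        p_ce, sensitivity, specificity = 0.4, 0.8, 0.9        # share of truly cost-effective
        gain_tp, loss_fp, process_cost = 1000.0, 800.0, 50.0  # health units per decision

        net_gain = (p_ce * sensitivity * gain_tp
                    - (1 - p_ce) * (1 - specificity) * loss_fp
                    - process_cost)
        print(net_gain)  # expected net population health gain of running the appraisal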

  5. Suomi NPP VIIRS solar diffuser screen transmittance model and its applications.

    PubMed

    Lei, Ning; Xiong, Xiaoxiong; Mcintire, Jeff

    2017-11-01

    The visible infrared imaging radiometer suite on the Suomi National Polar-orbiting Partnership satellite calibrates its reflective solar bands through observations of a sunlit solar diffuser (SD) panel. Sunlight passes through a perforated plate, referred to as the SD screen, before reaching the SD. It is critical to know whether the SD screen transmittance measured prelaunch is accurate. Several factors such as misalignments of the SD panel and the measurement apparatus could lead to errors in the measured transmittance and thus adversely impact on-orbit calibration quality through the SD. We develop a mathematical model to describe the transmittance as a function of the angles that incident light makes with the SD screen, and apply the model to fit the prelaunch measured transmittance. The results reveal that the model does not reproduce the measured transmittance unless the size of the apertures in the SD screen is quite different from the design value. We attribute the difference to orientation alignment errors for the SD panel and the measurement apparatus. We model the alignment errors and apply our transmittance model to fit the prelaunch transmittance and retrieve the "true" transmittance. To use this model correctly, we also examine the finite source size effect on the transmittance. Furthermore, we compare the product of the retrieved "true" transmittance and the prelaunch SD bidirectional reflectance distribution function (BRDF) value with the value derived from on-orbit data to determine whether the prelaunch SD BRDF value is relatively accurate. The model is significant in that it can evaluate whether the SD screen transmittance measured prelaunch is accurate and can help retrieve the true transmittance from measurements containing errors, yielding a correspondingly more accurate sensor data product.

  6. Blinded and unblinded internal pilot study designs for clinical trials with count data.

    PubMed

    Schneider, Simon; Schmidli, Heinz; Friede, Tim

    2013-07-01

    Internal pilot studies are a popular design feature to address uncertainties in the sample size calculations caused by vague information on nuisance parameters. Despite their popularity, only very recently blinded sample size reestimation procedures for trials with count data were proposed and their properties systematically investigated. Although blinded procedures are favored by regulatory authorities, practical application is somewhat limited by fears that blinded procedures are prone to bias if the treatment effect was misspecified in the planning. Here, we compare unblinded and blinded procedures with respect to bias, error rates, and sample size distribution. We find that both procedures maintain the desired power and that the unblinded procedure is slightly liberal whereas the actual significance level of the blinded procedure is close to the nominal level. Furthermore, we show that in situations where uncertainty about the assumed treatment effect exists, the blinded estimator of the control event rate is biased in contrast to the unblinded estimator, which results in differences in mean sample sizes in favor of the unblinded procedure. However, these differences are rather small compared to the deviations of the mean sample sizes from the sample size required to detect the true, but unknown effect. We demonstrate that the variation of the sample size resulting from the blinded procedure is in many practically relevant situations considerably smaller than the one of the unblinded procedures. The methods are extended to overdispersed counts using a quasi-likelihood approach and are illustrated by trials in relapsing multiple sclerosis. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
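
    A sketch of the kind of nuisance-parameter-dependent sample-size formula an internal pilot would re-evaluate: a Wald-type calculation on the log rate ratio for two Poisson arms (not the paper's exact procedure, and ignoring overdispersion):

        from math import ceil, log
        from scipy.stats import norm

        def n_per_arm(control_rate, rate_ratio, alpha=0.05, power=0.8):
            """Subjects per arm (one unit of follow-up each) for a two-arm count trial."""
            treated_rate = control_rate * rate_ratio
            z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
            var = 1 / control_rate + 1 / treated_rate   # variance of log rate ratio, per subject
            return ceil(z ** 2 * var / log(rate_ratio) ** 2)

        print(n_per_arm(control_rate=1.5, rate_ratio=0.8))  # sensitive to the nuisance rate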

  7. What Should Researchers Expect When They Replicate Studies? A Statistical View of Replicability in Psychological Science.

    PubMed

    Patil, Prasad; Peng, Roger D; Leek, Jeffrey T

    2016-07-01

    A recent study of the replicability of key psychological findings is a major contribution toward understanding the human side of the scientific process. Despite the careful and nuanced analysis reported, the simple narrative disseminated by the mass, social, and scientific media was that in only 36% of the studies were the original results replicated. In the current study, however, we showed that 77% of the replication effect sizes reported were within a 95% prediction interval calculated using the original effect size. Our analysis suggests two critical issues in understanding replication of psychological studies. First, researchers' intuitive expectations for what a replication should show do not always match with statistical estimates of replication. Second, when the results of original studies are very imprecise, they create wide prediction intervals-and a broad range of replication effects that are consistent with the original estimates. This may lead to effects that replicate successfully, in that replication results are consistent with statistical expectations, but do not provide much information about the size (or existence) of the true effect. In this light, the results of the Reproducibility Project: Psychology can be viewed as statistically consistent with what one might expect when performing a large-scale replication experiment. © The Author(s) 2016.
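
    The prediction-interval logic can be sketched as follows; the normal multiplier and the numbers are illustrative assumptions, and the authors' calculation may use different variance estimates:

        from math import sqrt

        def prediction_interval(original_effect, se_original, se_replication, z=1.96):
            """95% prediction interval for a replication effect estimate."""
            half_width = z * sqrt(se_original ** 2 + se_replication ** 2)
            return original_effect - half_width, original_effect + half_width

        print(prediction_interval(0.40, 0.12, 0.10))  # ~(0.09, 0.71): wide when SEs are large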

  8. Automated nodule location and size estimation using a multi-scale Laplacian of Gaussian filtering approach.

    PubMed

    Jirapatnakul, Artit C; Fotin, Sergei V; Reeves, Anthony P; Biancardi, Alberto M; Yankelevitz, David F; Henschke, Claudia I

    2009-01-01

    Estimation of nodule location and size is an important pre-processing step in some nodule segmentation algorithms to determine the size and location of the region of interest. Ideally, such estimation methods will consistently find the same nodule location regardless of where the seed point (provided either manually or by a nodule detection algorithm) is placed relative to the "true" center of the nodule, and the size should be a reasonable estimate of the true nodule size. We developed a method that estimates nodule location and size using multi-scale Laplacian of Gaussian (LoG) filtering. Nodule candidates near a given seed point are found by searching for blob-like regions with high filter response. The candidates are then pruned according to filter response and location, and the remaining candidates are sorted by size and the largest candidate selected. This method was compared to a previously published template-based method. The methods were evaluated on the basis of the stability of the estimated nodule location to changes in the initial seed point and how well the size estimates agreed with volumes determined by a semi-automated nodule segmentation method. The LoG method exhibited better stability to changes in the seed point, with 93% of nodules having the same estimated location even when the seed point was altered, compared to only 52% of nodules for the template-based method. Both methods also showed good agreement with sizes determined by a nodule segmentation method, with average relative size differences of 5% and -5% for the LoG and template-based methods, respectively.
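
    A sketch of the multi-scale LoG idea with SciPy, in 2D for brevity; the scale set, search radius, and sigma-to-radius conversion are illustrative choices rather than the paper's exact parameters:

        import numpy as np
        from scipy.ndimage import gaussian_laplace

        def log_size_near_seed(image, seed, sigmas=(2, 3, 4, 6, 8), radius=10):
            """Estimate blob radius near a seed via scale-normalized LoG responses."""
            r, c = seed
            best_scale, best_resp = None, -np.inf
            for s in sigmas:
                # negative LoG so bright blobs give peaks; s**2 normalizes across scales
                resp = -(s ** 2) * gaussian_laplace(image.astype(float), sigma=s)
                window = resp[max(r - radius, 0):r + radius, max(c - radius, 0):c + radius]
                if window.max() > best_resp:
                    best_resp, best_scale = window.max(), s
            return best_scale * np.sqrt(2.0)   # radius of a solid 2-D blob at that scale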

  9. Failure Characteristics of Granite Influenced by Sample Height-to-Width Ratios and Intermediate Principal Stress Under True-Triaxial Unloading Conditions

    NASA Astrophysics Data System (ADS)

    Li, Xibing; Feng, Fan; Li, Diyuan; Du, Kun; Ranjith, P. G.; Rostami, Jamal

    2018-05-01

    The failure modes and peak unloading strength of a typical hard rock, Miluo granite, were investigated using a true-triaxial test system, with particular attention to the sample height-to-width ratio (between 2 and 0.5) and the intermediate principal stress. The experimental results indicate that both the sample height-to-width ratio and the intermediate principal stress have an impact on the failure modes, peak strength and severity of rockburst in hard rock under true-triaxial unloading conditions. For longer rectangular specimens, the transition of failure mode from shear to slabbing requires higher intermediate principal stress. With the decrease in sample height-to-width ratio, slabbing failure is more likely to occur under lower intermediate principal stress. For the same intermediate principal stress, the peak unloading strength monotonically increases with decreasing sample height-to-width ratio. However, the peak unloading strength as a function of intermediate principal stress for all three types of rock samples (with height-to-width ratios of 2, 1 and 0.5) presents a pattern of initial increase followed by a subsequent decrease. The curves fitted to octahedral shear stress as a function of mean effective stress also validate the applicability of the Mogi-Coulomb failure criterion for all considered rock sizes under true-triaxial unloading conditions, and the corresponding cohesion C and internal friction angle φ are calculated. The severity of strainburst of granite depends on the sample height-to-width ratio and the intermediate principal stress. Therefore, different supporting strategies are recommended in deep tunneling projects and mining activities. Moreover, the comparison of test results for different σ2/σ3 ratios also reveals the limited influence of the minimum principal stress on the failure characteristics of granite during the true-triaxial unloading process.
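
    For reference, the Mogi-Coulomb criterion is the linear relation τ_oct = a + b·σ_m,2 with σ_m,2 = (σ1 + σ3)/2, and cohesion and friction angle follow from the fitted a and b. A minimal fitting sketch on placeholder stress data, not the paper's measurements:

        import numpy as np

        s1 = np.array([250.0, 310.0, 360.0, 400.0])   # major principal stress at failure, MPa
        s2 = np.array([40.0, 80.0, 120.0, 160.0])     # intermediate principal stress, MPa
        s3 = np.array([5.0, 5.0, 5.0, 5.0])           # minor principal stress, MPa

        tau_oct = np.sqrt((s1 - s2)**2 + (s2 - s3)**2 + (s3 - s1)**2) / 3.0
        sm2 = (s1 + s3) / 2.0

        b, a = np.polyfit(sm2, tau_oct, 1)            # slope b, intercept a
        phi = np.degrees(np.arcsin(3.0 * b / (2.0 * np.sqrt(2.0))))
        C = 3.0 * a / (2.0 * np.sqrt(2.0) * np.cos(np.radians(phi)))
        print(f"friction angle = {phi:.1f} deg, cohesion C = {C:.1f} MPa")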

  10. 36 CFR § 1281.16 - What standard does NARA use for measuring building size?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    Title 36 (Parks, Forests, and Public Property), 2013-07-01. National Archives and Records Administration, NARA Facilities, Presidential Library Facilities, § 1281.16: What standard does NARA use for measuring building size?

  11. 40 CFR 113.4 - Size classes and associated liability limits for fixed onshore oil storage facilities, 1,000...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 22 2014-07-01 2013-07-01 true Size classes and associated liability limits for fixed onshore oil storage facilities, 1,000 barrels or less capacity. 113.4 Section 113.4 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS LIABILITY LIMITS FOR...

  12. A qubit coupled with confined phonons: The interplay between true and fake decoherence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pouthier, Vincent

    2013-08-07

    The decoherence of a qubit coupled with the phonons of a finite-size lattice is investigated. The confined phonons no longer behave as a reservoir. They remain sensitive to the qubit so that the origin of the decoherence is twofold. First, a qubit-phonon entanglement yields an incomplete true decoherence. Second, the qubit renormalizes the phonon frequency resulting in fake decoherence when a thermal average is performed. To account for the initial thermalization of the lattice, the quantum Langevin theory is applied so that the phonons are viewed as an open system coupled with a thermal bath of harmonic oscillators. Consequently, it is shown that the finite lifetime of the phonons does not modify fake decoherence but strongly affects true decoherence. Depending on the values of the model parameters, the interplay between fake and true decoherence yields a very rich dynamics with various regimes.

  13. Accuracy for detection of simulated lesions: comparison of fluid-attenuated inversion-recovery, proton density-weighted, and T2-weighted synthetic brain MR imaging

    NASA Technical Reports Server (NTRS)

    Herskovits, E. H.; Itoh, R.; Melhem, E. R.

    2001-01-01

    OBJECTIVE: The objective of our study was to determine the effects of MR sequence (fluid-attenuated inversion-recovery [FLAIR], proton density-weighted, and T2-weighted) and of lesion location on sensitivity and specificity of lesion detection. MATERIALS AND METHODS: We generated FLAIR, proton density-weighted, and T2-weighted brain images with 3-mm lesions using published parameters for acute multiple sclerosis plaques. Each image contained from zero to five lesions that were distributed among cortical-subcortical, periventricular, and deep white matter regions; on either side; and anterior or posterior in position. We presented images of 540 lesions, distributed among 2592 image regions, to six neuroradiologists. We constructed a contingency table for image regions with lesions and another for image regions without lesions (normal). Each table included the following: the reviewer's number (1-6); the MR sequence; the side, position, and region of the lesion; and the reviewer's response (lesion present or absent [normal]). We performed chi-square and log-linear analyses. RESULTS: The FLAIR sequence yielded the highest true-positive rates (p < 0.001) and the highest true-negative rates (p < 0.001). Regions also differed in reviewers' true-positive rates (p < 0.001) and true-negative rates (p = 0.002). The true-positive rate model generated by log-linear analysis contained an additional sequence-location interaction. The true-negative rate model generated by log-linear analysis confirmed these associations, but no higher order interactions were added. CONCLUSION: We developed software with which we can generate brain images of a wide range of pulse sequences and that allows us to specify the location, size, shape, and intrinsic characteristics of simulated lesions. We found that the use of FLAIR sequences increases detection accuracy for cortical-subcortical and periventricular lesions over that associated with proton density- and T2-weighted sequences.

  14. Environmental corrections of a dual-induction logging while drilling tool in vertical wells

    NASA Astrophysics Data System (ADS)

    Kang, Zhengming; Ke, Shizhen; Jiang, Ming; Yin, Chengfang; Li, Anzong; Li, Junjian

    2018-04-01

    With the development of Logging While Drilling (LWD) technology, dual-induction LWD logging is not only widely applied in deviated and horizontal wells, but is also commonly used in vertical wells. Accordingly, it is necessary to simulate the response of LWD tools in vertical wells for logging interpretation. In this paper, the investigation characteristics, the effects of the tool structure, the skin effect and the drilling environment of a dual-induction LWD tool are simulated by the three-dimensional (3D) finite element method (FEM). In order to closely simulate the actual situation, the real structure of the tool is taken into account. The results demonstrate that the influence of the background value of the tool structure can be eliminated: after deducting the tool-structure background, the computed values agree quantitatively with the analytical solution in homogeneous formations. The effect of measurement frequency can be effectively eliminated by a skin-effect correction chart. In addition, the measurement environment (borehole size, mud resistivity, shoulder bed, layer thickness and invasion) has an effect on the determination of the true resistivity. To eliminate these effects, borehole correction charts, shoulder bed correction charts and tornado charts are computed based on the real tool structure. Based on these correction charts, well logging data can be corrected automatically by a suitable interpolation method, which is convenient and fast. Verified with actual logging data in vertical wells, this method can obtain the true resistivity of the formation.
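
    The chart-based correction workflow described above reduces, in code, to table lookup with interpolation. The sketch below uses SciPy's grid interpolator with entirely made-up chart axes and factor values, purely to show the mechanics; it is not the paper's charts or software.

```python
# Hedged sketch: applying a borehole-correction chart by grid interpolation.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical chart axes: borehole diameter (inch) and mud resistivity
# (ohm-m), tabulating a multiplicative correction factor for the deep curve.
diameters = np.array([6.0, 8.0, 10.0, 12.0])
mud_res = np.array([0.05, 0.1, 0.5, 1.0])
factors = np.array([[1.00, 1.01, 1.02, 1.02],
                    [1.01, 1.02, 1.04, 1.05],
                    [1.02, 1.04, 1.07, 1.08],
                    [1.04, 1.06, 1.10, 1.12]])  # rows: diameter; cols: mud_res

chart = RegularGridInterpolator((diameters, mud_res), factors)

def correct_resistivity(r_apparent, diameter, rm):
    """Apply the interpolated chart factor to an apparent resistivity reading."""
    return r_apparent * chart((diameter, rm))

print(correct_resistivity(20.0, 8.5, 0.2))  # corrected reading, ohm-m
```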

  15. A simple method to estimate restoration volume as a possible predictor for tooth fracture.

    PubMed

    Sturdevant, J R; Bader, J D; Shugars, D A; Steet, T C

    2003-08-01

    Many dentists cite the fracture risk posed by a large existing restoration as a primary reason for their decision to place a full-coverage restoration. However, there is poor agreement among dentists as to when restoration placement is necessary because of the inability to make objective measurements of restoration size. The purpose of this study was to compare a new method to estimate restoration volumes in posterior teeth with analytically determined volumes. True restoration volume proportion (RVP) was determined for 96 melamine typodont teeth: 24 each of maxillary second premolar, mandibular second premolar, maxillary first molar, and mandibular first molar. Each group of 24 was subdivided into 3 groups to receive an O, MO, or MOD amalgam preparation design. Each preparation design was further subdivided into 4 groups of increasingly larger size. The density of amalgam used was calculated according to ANSI/ADA Specification 1. The teeth were weighed before and after restoration with amalgam. Restoration weight was calculated, and the density of amalgam was used to calculate restoration volume. A liquid pycnometer was used to calculate coronal volume after sectioning the anatomic crown from the root horizontally at the cementoenamel junction. True RVP was calculated by dividing restoration volume by coronal volume. An occlusal photograph and a bitewing radiograph were made of each restored tooth to provide 2 perpendicular views. Each image was digitized, and software was used to measure the percentage of the anatomic crown restored with amalgam. Estimated RVP was calculated by multiplying the percentage of the anatomic crown restored from the 2 views together. Pearson correlation coefficients were used to compare estimated RVP with true RVP. The Pearson correlation coefficient of true RVP with estimated RVP was 0.97 overall (P

  16. Is there a correlation of sonographic measurements of true vocal cords with gender or body mass indices in normal healthy volunteers?

    PubMed Central

    Bright, Leah; Secko, Michael; Mehta, Ninfa; Paladino, Lorenzo; Sinert, Richard

    2014-01-01

    Background: Ultrasound is a readily available, non-invasive technique to visualize airway dimensions at the patient's bedside and possibly predict difficult airways before invasively looking; however, it has rarely been used for emergency investigation of the larynx. There is limited literature on the sonographic measurements of true vocal cords in adults, and normal parameters must be established before abnormal parameters can be accurately identified. Objectives: The primary objective of the following study is to identify the normal sonographic values of human true vocal cords in an adult population. A secondary objective is to determine if there is a difference in true vocal cord measurements in people with different body mass indices (BMIs). The third objective was to determine if there was a statistical difference in the measurements between the genders. Materials and Methods: True vocal cord measurements were obtained in healthy volunteers by ultrasound fellowship trained emergency medicine physicians using a high frequency linear transducer orientated transversely across the anterior surface of the neck at the level of the thyroid cartilage. The width of the true vocal cord was measured perpendicularly to the length of the cord at its mid-portion. This method was duplicated from a previous study to create a standard of measurement acquisition. Results: A total of 38 subjects were enrolled. The study demonstrated no correlation between vocal cord measurements and the subjects' height, weight, or BMI. When accounting for vocal cord measurements by gender, males had larger BMIs and larger vocal cord measurements than female subjects, with a statistically significant difference in right vocal cord measurements for females compared with male subjects. Conclusion: No correlation was seen between vocal cord measurements and BMI. In the study group of normal volunteers, there was a difference between male and female vocal cord size. PMID:24812456

  17. Is there a correlation of sonographic measurements of true vocal cords with gender or body mass indices in normal healthy volunteers?

    PubMed

    Bright, Leah; Secko, Michael; Mehta, Ninfa; Paladino, Lorenzo; Sinert, Richard

    2014-04-01

    Ultrasound is a readily available, non-invasive technique to visualize airway dimensions at the patient's bedside and possibly predict difficult airways before invasively looking; however, it has rarely been used for emergency investigation of the larynx. There is limited literature on the sonographic measurements of true vocal cords in adults, and normal parameters must be established before abnormal parameters can be accurately identified. The primary objective of the following study is to identify the normal sonographic values of human true vocal cords in an adult population. A secondary objective is to determine if there is a difference in true vocal cord measurements in people with different body mass indices (BMIs). The third objective was to determine if there was a statistical difference in the measurements between the genders. True vocal cord measurements were obtained in healthy volunteers by ultrasound fellowship trained emergency medicine physicians using a high frequency linear transducer orientated transversely across the anterior surface of the neck at the level of the thyroid cartilage. The width of the true vocal cord was measured perpendicularly to the length of the cord at its mid-portion. This method was duplicated from a previous study to create a standard of measurement acquisition. A total of 38 subjects were enrolled. The study demonstrated no correlation between vocal cord measurements and the subjects' height, weight, or BMI. When accounting for vocal cord measurements by gender, males had larger BMIs and larger vocal cord measurements than female subjects, with a statistically significant difference in right vocal cord measurements for females compared with male subjects. No correlation was seen between vocal cord measurements and BMI. In the study group of normal volunteers, there was a difference between male and female vocal cord size.

  18. Optimizing the maximum reported cluster size in the spatial scan statistic for ordinal data.

    PubMed

    Kim, Sehwi; Jung, Inkyung

    2017-01-01

    The spatial scan statistic is an important tool for spatial cluster detection. There have been numerous studies on scanning window shapes. However, little research has been done on the maximum scanning window size or maximum reported cluster size. Recently, Han et al. proposed to use the Gini coefficient to optimize the maximum reported cluster size. However, the method has been developed and evaluated only for the Poisson model. We adopt the Gini coefficient to be applicable to the spatial scan statistic for ordinal data to determine the optimal maximum reported cluster size. Through a simulation study and application to a real data example, we evaluate the performance of the proposed approach. With some sophisticated modification, the Gini coefficient can be effectively employed for the ordinal model. The Gini coefficient most often picked the optimal maximum reported cluster sizes that were the same as or smaller than the true cluster sizes with very high accuracy. It seems that we can obtain a more refined collection of clusters by using the Gini coefficient. The Gini coefficient developed specifically for the ordinal model can be useful for optimizing the maximum reported cluster size for ordinal data and helpful for properly and informatively discovering cluster patterns.
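
    To make the selection idea concrete, the sketch below scores each candidate maximum reported cluster size by the Gini coefficient of the significant clusters it yields and keeps the best-scoring size. `scan_clusters` is a hypothetical stand-in for running the ordinal spatial scan statistic; the scoring follows the general Han et al. idea rather than the authors' exact code.

```python
# Hedged sketch: Gini-based choice of the maximum reported cluster size.
import numpy as np

def gini(x):
    """Gini coefficient of a non-negative sample (0 if empty or all-zero)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    if n == 0 or x.sum() == 0:
        return 0.0
    cum = np.cumsum(x)
    return (n + 1 - 2 * (cum / cum[-1]).sum()) / n

def pick_max_reported_size(candidate_sizes, scan_clusters):
    """Return the candidate maximum size whose significant clusters maximize Gini.

    scan_clusters(max_size=s) is assumed to return the case counts of the
    significant clusters reported when scanning with maximum size s.
    """
    scores = {s: gini(scan_clusters(max_size=s)) for s in candidate_sizes}
    return max(scores, key=scores.get)
```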

  19. Optimizing the maximum reported cluster size in the spatial scan statistic for ordinal data

    PubMed Central

    Kim, Sehwi

    2017-01-01

    The spatial scan statistic is an important tool for spatial cluster detection. There have been numerous studies on scanning window shapes. However, little research has been done on the maximum scanning window size or maximum reported cluster size. Recently, Han et al. proposed to use the Gini coefficient to optimize the maximum reported cluster size. However, the method has been developed and evaluated only for the Poisson model. We adopt the Gini coefficient to be applicable to the spatial scan statistic for ordinal data to determine the optimal maximum reported cluster size. Through a simulation study and application to a real data example, we evaluate the performance of the proposed approach. With some sophisticated modification, the Gini coefficient can be effectively employed for the ordinal model. The Gini coefficient most often picked the optimal maximum reported cluster sizes that were the same as or smaller than the true cluster sizes with very high accuracy. It seems that we can obtain a more refined collection of clusters by using the Gini coefficient. The Gini coefficient developed specifically for the ordinal model can be useful for optimizing the maximum reported cluster size for ordinal data and helpful for properly and informatively discovering cluster patterns. PMID:28753674

  20. Discovery of the Largest Orbweaving Spider Species: The Evolution of Gigantism in Nephila

    PubMed Central

    Kuntner, Matjaž; Coddington, Jonathan A.

    2009-01-01

    Background More than 41,000 spider species are known with about 400–500 added each year, but for some well-known groups, such as the giant golden orbweavers, Nephila, the last valid described species dates from the 19th century. Nephila are renowned for being the largest web-spinning spiders, making the largest orb webs, and are model organisms for the study of extreme sexual size dimorphism (SSD) and sexual biology. Here, we report on the discovery of a new, giant Nephila species from Africa and Madagascar, and review size evolution and SSD in Nephilidae. Methodology We formally describe N. komaci sp. nov., the largest web spinning species known, and place the species in phylogenetic context to reconstruct the evolution of mean size (via squared change parsimony). We then test female and male mean size correlation using phylogenetically independent contrasts, and simulate nephilid body size evolution using Monte Carlo statistics. Conclusions Nephila females increased in size almost monotonically to establish a mostly African clade of true giants. In contrast, Nephila male size is effectively decoupled and hovers around values roughly one fifth of female size. Although N. komaci females are the largest Nephila yet discovered, the males are also large and thus their SSD is not exceptional. PMID:19844575

  1. Discovery of the largest orbweaving spider species: the evolution of gigantism in Nephila.

    PubMed

    Kuntner, Matjaz; Coddington, Jonathan A

    2009-10-21

    More than 41,000 spider species are known with about 400-500 added each year, but for some well-known groups, such as the giant golden orbweavers, Nephila, the last valid described species dates from the 19th century. Nephila are renowned for being the largest web-spinning spiders, making the largest orb webs, and are model organisms for the study of extreme sexual size dimorphism (SSD) and sexual biology. Here, we report on the discovery of a new, giant Nephila species from Africa and Madagascar, and review size evolution and SSD in Nephilidae. We formally describe N. komaci sp. nov., the largest web spinning species known, and place the species in phylogenetic context to reconstruct the evolution of mean size (via squared change parsimony). We then test female and male mean size correlation using phylogenetically independent contrasts, and simulate nephilid body size evolution using Monte Carlo statistics. Nephila females increased in size almost monotonically to establish a mostly African clade of true giants. In contrast, Nephila male size is effectively decoupled and hovers around values roughly one fifth of female size. Although N. komaci females are the largest Nephila yet discovered, the males are also large and thus their SSD is not exceptional.

  2. Why size matters: differences in brain volume account for apparent sex differences in callosal anatomy: the sexual dimorphism of the corpus callosum.

    PubMed

    Luders, Eileen; Toga, Arthur W; Thompson, Paul M

    2014-01-01

    Numerous studies have demonstrated a sexual dimorphism of the human corpus callosum. However, the question remains if sex differences in brain size, which typically is larger in men than in women, or biological sex per se account for the apparent sex differences in callosal morphology. Comparing callosal dimensions between men and women matched for overall brain size may clarify the true contribution of biological sex, as any observed group difference should indicate pure sex effects. We thus examined callosal morphology in 24 male and 24 female brains carefully matched for overall size. In addition, we selected 24 extremely large male brains and 24 extremely small female brains to explore if observed sex effects might vary depending on the degree to which male and female groups differed in brain size. Using the individual T1-weighted brain images (n=96), we delineated the corpus callosum at midline and applied a well-validated surface-based mesh-modeling approach to compare callosal thickness at 100 equidistant points between groups determined by brain size and sex. The corpus callosum was always thicker in men than in women. However, this callosal sex difference was strongly determined by the cerebral sex difference overall. That is, the larger the discrepancy in brain size between men and women, the more pronounced the sex difference in callosal thickness, with hardly any callosal differences remaining between brain-size matched men and women. Altogether, these findings suggest that individual differences in brain size account for apparent sex differences in the anatomy of the corpus callosum. © 2013.

  3. Why Size Matters: Differences in Brain Volume Account for Apparent Sex Differences in Callosal Anatomy

    PubMed Central

    Luders, Eileen; Toga, Arthur W.; Thompson, Paul M.

    2013-01-01

    Numerous studies have demonstrated a sexual dimorphism of the human corpus callosum. However, the question remains if sex differences in brain size, which typically is larger in men than in women, or biological sex per se account for the apparent sex differences in callosal morphology. Comparing callosal dimensions between men and women matched for overall brain size may clarify the true contribution of biological sex, as any observed group difference should indicate pure sex effects. We thus examined callosal morphology in 24 male and 24 female brains carefully matched for overall size. In addition, we selected 24 extremely large male brains and 24 extremely small female brains to explore if observed sex effects might vary depending on the degree to which male and female groups differed in brain size. Using the individual T1-weighted brain images (n=96), we delineated the corpus callosum at midline and applied a well-validated surface-based mesh-modeling approach to compare callosal thickness at 100 equidistant points between groups determined by brain size and sex. The corpus callosum was always thicker in men than in women. However, this callosal sex difference was strongly determined by the cerebral sex difference overall. That is, the larger the discrepancy in brain size between men and women, the more pronounced the sex difference in callosal thickness, with hardly any callosal differences remaining between brain-size matched men and women. Altogether, these findings suggest that individual differences in brain size account for apparent sex differences in the anatomy of the corpus callosum. PMID:24064068

  4. The deep web, dark matter, metabundles and the broadband elites: do you need an informaticist?

    PubMed

    Holden, Gary; Rosenberg, Gary

    2003-01-01

    The World Wide Web (WWW) is growing in size and is becoming a substantial component of life. This seems especially true for US professionals, including social workers. It will require effort by these professionals to use the WWW effectively and efficiently. One of the main issues that these professionals will encounter in these efforts is the quality of materials located on the WWW. This paper reviews some of the factors related to improving the quality of information obtained from the WWW by social workers.

  5. Precision, Reliability, and Effect Size of Slope Variance in Latent Growth Curve Models: Implications for Statistical Power Analysis

    PubMed Central

    Brandmaier, Andreas M.; von Oertzen, Timo; Ghisletta, Paolo; Lindenberger, Ulman; Hertzog, Christopher

    2018-01-01

    Latent Growth Curve Models (LGCM) have become a standard technique to model change over time. Prediction and explanation of inter-individual differences in change are major goals in lifespan research. The major determinants of statistical power to detect individual differences in change are the magnitude of true inter-individual differences in linear change (LGCM slope variance), design precision, alpha level, and sample size. Here, we show that design precision can be expressed as the inverse of effective error. Effective error is determined by instrument reliability and the temporal arrangement of measurement occasions. However, it also depends on another central LGCM component, the variance of the latent intercept and its covariance with the latent slope. We derive a new reliability index for LGCM slope variance—effective curve reliability (ECR)—by scaling slope variance against effective error. ECR is interpretable as a standardized effect size index. We demonstrate how effective error, ECR, and statistical power for a likelihood ratio test of zero slope variance formally relate to each other and how they function as indices of statistical power. We also provide a computational approach to derive ECR for arbitrary intercept-slope covariance. With practical use cases, we argue for the complementary utility of the proposed indices of a study's sensitivity to detect slope variance when making a priori longitudinal design decisions or communicating study designs. PMID:29755377
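
    In reliability form, the construction described in this abstract can be paraphrased as follows (our hedged restatement, with σ²_slope denoting the true inter-individual slope variance and σ²_eff the effective error; the paper's exact notation may differ):

```latex
% Effective curve reliability: slope variance scaled against effective error.
\mathrm{ECR} \;=\; \frac{\sigma^2_{\mathrm{slope}}}{\sigma^2_{\mathrm{slope}} + \sigma^2_{\mathrm{eff}}}
```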

  6. Intraspecific variation in egg size and egg composition in birds: effects on offspring fitness.

    PubMed

    Williams, T D

    1994-02-01

    1. There is little unequivocal evidence to date in support of a positive relationship between egg size and offspring fitness in birds. Although 40 studies (of 34 species) have considered the effect of variation in egg size on chick growth and/or survival up to fledging, only 12 studies have controlled for other characters potentially correlated both with egg size and offspring fitness. Of these, only two have reported a significant residual effect of egg size on chick growth (in the roseate tern and European blackbird) and three a residual effect on chick survival (all in seabirds: common tern, lesser black-backed gull and kittiwake). 2. More consistent evidence exists, though from fewer studies, for a positive relationship between egg size and offspring fitness early in the chick-rearing period; chick growth and chick survival being dependent on egg size in 8 of 10 studies and 4 of 5 studies, respectively. It is suggested that the most important effect of variation in egg size might be in determining the probability of offspring survival in the first few days after hatching. 3. Egg size explains on average 66% of the variation in chick mass at hatching (n = 35 studies) but only 30% of the variation in chick body size (n = 18). When effects of hatching body size are controlled for, chick mass remains significantly correlated with egg size, though the reverse is not true. This supports the hypothesis that large eggs give rise to heavier chicks at hatching, i.e., chicks with more nutrient (yolk) reserves, rather than structurally larger chicks. 4. Egg composition increased isometrically with increasing egg size in about half the studies so far reported (n ≈ 20). However, in seabirds, and some passerines, larger eggs contain disproportionately more albumen, whilst in some waterfowl percentage yolk content increases with increasing egg size. Changes in albumen content largely reflect variation in the water content of eggs, but changes in yolk content involve variation in lipid content, and therefore in egg 'quality.' The adaptive significance of variation in egg composition is considered; females may adjust egg composition facultatively to maximise the benefits to their offspring of increased reproductive investment. 5. Considerations for future research are discussed with particular emphasis on experimental studies and the application of new techniques.

  7. Biased phylodynamic inferences from analysing clusters of viral sequences

    PubMed Central

    Xiang, Fei; Frost, Simon D. W.

    2017-01-01

    Phylogenetic methods are being increasingly used to help understand the transmission dynamics of measurably evolving viruses, including HIV. Clusters of highly similar sequences are often observed, which appear to follow a ‘power law’ behaviour, with a small number of very large clusters. These clusters may help to identify subpopulations in an epidemic, and inform where intervention strategies should be implemented. However, clustering of samples does not necessarily imply the presence of a subpopulation with high transmission rates, as groups of closely related viruses can also occur due to non-epidemiological effects such as over-sampling. It is important to ensure that observed phylogenetic clustering reflects true heterogeneity in the transmitting population, and is not being driven by non-epidemiological effects. We quantify the effect of using a falsely identified ‘transmission cluster’ of sequences to estimate phylodynamic parameters, including the effective population size and exponential growth rate, under several demographic scenarios. Our simulation studies show that taking the maximum-size cluster to re-estimate parameters from trees simulated under a randomly mixing, constant population size coalescent process systematically underestimates the overall effective population size. In addition, the transmission cluster wrongly resembles an exponential or logistic growth model 99% of the time. We also illustrate the consequences of false clusters in exponentially growing coalescent and birth-death trees, where again, the growth rate is skewed upwards. This has clear implications for identifying clusters in large viral databases, where a false cluster could result in wasted intervention resources. PMID:28852573

  8. Measurement of true ileal calcium digestibility in meat and bone meal for broiler chickens using the direct method.

    PubMed

    Anwar, M N; Ravindran, V; Morel, P C H; Ravindran, G; Cowieson, A J

    2016-01-01

    The objective of the study that is presented herein was to determine the true ileal calcium (Ca) digestibility in meat and bone meal (MBM) for broiler chickens using the direct method. Four MBM samples (coded as MBM-1, MBM-2, MBM-3 and MBM-4) were obtained and analyzed for nutrient composition, particle size distribution and bone to soft tissue ratio. The Ca concentrations of MBM-1, MBM-2, MBM-3 and MBM-4 were determined to be 71, 118, 114 and 81 g/kg, respectively. The corresponding geometric mean particle diameters and bone to soft tissue ratios were 0.866, 0.622, 0.875 and 0.781 mm, and 1:1.49, 1:0.98, 1:0.92 and 1:1.35, respectively. Five experimental diets, including four diets with similar Ca concentration (8.3 g/kg) from each MBM and a Ca and phosphorus-free diet, were developed. Meat and bone meal served as the sole source of Ca in the MBM diets. Titanium dioxide (3 g/kg) was incorporated in all diets as an indigestible marker. Each experimental diet was randomly allotted to six replicate cages (eight birds per cage) and offered from d 28 to 31 post-hatch. Apparent ileal Ca digestibility was calculated by the indicator method and corrected for ileal endogenous Ca losses to determine the true ileal Ca digestibility. Ileal endogenous Ca losses were determined to be 88 mg/kg dry matter intake. True ileal Ca digestibility coefficients of MBM-1, MBM-2, MBM-3 and MBM-4 were determined to be 0.560, 0.446, 0.517 and 0.413, respectively. True Ca digestibility of MBM-1 was higher (P < 0.05) than that of MBM-2 and MBM-4 but similar (P > 0.05) to that of MBM-3. True Ca digestibility of MBM-2 was similar (P > 0.05) to that of MBM-3 and MBM-4, while that of MBM-3 was higher (P < 0.05) than that of MBM-4. These results demonstrated that the direct method can be used for the determination of true Ca digestibility in feed ingredients and that Ca in MBM is not as highly available as is often assumed. The variability in true Ca digestibility of the MBM samples could not be attributed to Ca content, percentage of bones or particle size. © 2015 Poultry Science Association Inc.
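
    For readers unfamiliar with the indicator method, the sketch below shows the standard marker-ratio calculation with the study's endogenous-loss estimate (88 mg/kg DMI) plugged in; the digesta concentrations used here are illustrative assumptions, not the study's data.

```python
# Hedged sketch of the indicator (marker) method for true Ca digestibility.
def apparent_ileal_digestibility(ca_diet, ca_digesta, ti_diet, ti_digesta):
    """AID = 1 - (Ti_diet / Ti_digesta) * (Ca_digesta / Ca_diet); all in g/kg."""
    return 1.0 - (ti_diet / ti_digesta) * (ca_digesta / ca_diet)

def true_ileal_digestibility(aid, endogenous_mg_per_kg_dmi, ca_diet_g_per_kg):
    """Correct AID upward for ileal endogenous Ca losses (per kg DM intake)."""
    return aid + (endogenous_mg_per_kg_dmi / 1000.0) / ca_diet_g_per_kg

aid = apparent_ileal_digestibility(ca_diet=8.3, ca_digesta=12.0,
                                   ti_diet=3.0, ti_digesta=9.0)
tid = true_ileal_digestibility(aid, endogenous_mg_per_kg_dmi=88.0,
                               ca_diet_g_per_kg=8.3)
print(round(aid, 3), round(tid, 3))  # e.g. 0.518 apparent, 0.529 true
```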

  9. The earth is flat (p > 0.05): significance thresholds and the crisis of unreplicable research.

    PubMed

    Amrhein, Valentin; Korner-Nievergelt, Fränzi; Roth, Tobias

    2017-01-01

    The widespread use of 'statistical significance' as a license for making a claim of a scientific finding leads to considerable distortion of the scientific process (according to the American Statistical Association). We review why degrading p-values into 'significant' and 'nonsignificant' contributes to making studies irreproducible, or to making them seem irreproducible. A major problem is that we tend to take small p-values at face value, but mistrust results with larger p-values. In either case, p-values tell little about reliability of research, because they are hardly replicable even if an alternative hypothesis is true. Also significance (p ≤ 0.05) is hardly replicable: at a good statistical power of 80%, two studies will be 'conflicting', meaning that one is significant and the other is not, in one third of the cases if there is a true effect. A replication can therefore not be interpreted as having failed only because it is nonsignificant. Many apparent replication failures may thus reflect faulty judgment based on significance thresholds rather than a crisis of unreplicable research. Reliable conclusions on replicability and practical importance of a finding can only be drawn using cumulative evidence from multiple independent studies. However, applying significance thresholds makes cumulative knowledge unreliable. One reason is that with anything but ideal statistical power, significant effect sizes will be biased upwards. Interpreting inflated significant results while ignoring nonsignificant results will thus lead to wrong conclusions. But current incentives to hunt for significance lead to selective reporting and to publication bias against nonsignificant findings. Data dredging, p-hacking, and publication bias should be addressed by removing fixed significance thresholds. Consistent with the recommendations of the late Ronald Fisher, p-values should be interpreted as graded measures of the strength of evidence against the null hypothesis. Also larger p-values offer some evidence against the null hypothesis, and they cannot be interpreted as supporting the null hypothesis, falsely concluding that 'there is no effect'. Information on possible true effect sizes that are compatible with the data must be obtained from the point estimate, e.g., from a sample average, and from the interval estimate, such as a confidence interval. We review how confusion about interpretation of larger p-values can be traced back to historical disputes among the founders of modern statistics. We further discuss potential arguments against removing significance thresholds, for example that decision rules should rather be more stringent, that sample sizes could decrease, or that p-values should better be completely abandoned. We conclude that whatever method of statistical inference we use, dichotomous threshold thinking must give way to non-automated informed judgment.
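
    The 'conflicting in one third of the cases' claim above follows directly from 2 × 0.8 × 0.2 = 0.32, which a short simulation confirms:

```python
# Two independent studies, each with true power 0.8: how often do they
# disagree about significance?
import numpy as np

rng = np.random.default_rng(1)
n_pairs = 100_000
# Each study is 'significant' with probability equal to its power (0.8),
# independently of the other.
sig = rng.random((n_pairs, 2)) < 0.8
conflicting = np.mean(sig[:, 0] != sig[:, 1])
print(conflicting)  # ~0.32, i.e. about one third of study pairs disagree
```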

  10. Multi-passes warm rolling of AZ31 magnesium alloy, effect on evaluation of texture, microstructure, grain size and hardness

    NASA Astrophysics Data System (ADS)

    Kamran, J.; Hasan, B. A.; Tariq, N. H.; Izhar, S.; Sarwar, M.

    2014-06-01

    In this study, the effect of multi-pass warm rolling of AZ31 magnesium alloy on the texture, microstructure, grain size variation and hardness of an as-cast sample (A) and two rolled samples (B & C), taken from different locations of the as-cast ingot, was investigated. The purpose was to enhance the formability of AZ31 alloy in order to aid manufacturability. It was observed that multi-pass warm rolling (250°C to 350°C) of samples B & C, with initial thicknesses of 7.76 mm and 7.73 mm, was successfully achieved up to 85% reduction without any edge or surface cracks in ten steps with a total of 26 passes. Steps 1 to 4 consisted of 5, 2, 11 and 3 passes respectively; the remaining steps 5 to 10 were single-pass rolls. In each discrete step a fixed roll gap was used, such that the true strain per pass increased very slowly from 0.0067 in the first pass to 0.7118 in the 26th pass. Both samples B & C showed very similar behavior up to the 26th pass and were successfully rolled to 85% thickness reduction. However, during the 10th step (27th pass), at a true strain of 0.772, sample B experienced very severe surface as well as edge cracks. Sample C was therefore not rolled in the 10th step and was retained after 26 passes. Both samples were studied in terms of their basal texture, microstructure, grain size and hardness. Sample C showed an equiaxed grain structure after 85% total reduction. The equiaxed grain structure of sample C may be due to the effective involvement of dynamic recrystallization (DRX), which led to the formation of these grains with relatively low misorientations with respect to the parent as-cast grains. Sample B, on the other hand, showed a microstructure in which all the grains were elongated along the rolling direction (RD) after 90% total reduction, and DRX could not effectively play its role due to heavy strain and a lack of plastic deformation systems. The microstructure of the as-cast sample showed a near-random texture (mrd 4.3), with an average grain size of 44 μm and a micro-hardness of 52 Hv. The grain sizes of samples B and C were 14 μm and 27 μm respectively, and the mrd intensity of the basal texture was 5.34 and 5.46 respectively. The hardness of samples B and C came out to be 91 and 66 Hv respectively, owing to the reduction in grain size, and followed the well-known Hall-Petch relationship.
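
    The Hall-Petch relationship referred to above takes its standard form (H₀ and k are material constants fitted to the data):

```latex
% Hall-Petch: hardness increases as grain size d decreases.
H \;=\; H_0 + k\,d^{-1/2}
```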

  11. Human allometry: adult bodies are more nearly geometrically similar than regression analysis has suggested.

    PubMed

    Burton, Richard F

    2010-01-01

    It is almost a matter of dogma that human body mass in adults tends to vary roughly in proportion to the square of height (stature), as Quetelet stated in 1835. As he realised, perfect isometry or geometric similarity requires that body mass varies with height cubed, so there seems to be a trend for tall adults to be relatively much lighter than short ones. Much evidence regarding component tissues and organs seems to accord with this idea. However, the hypothesis is presented that the proportions of the body are actually very much less size-dependent. Past evidence has mostly been obtained by least-squares regression analysis, but this cannot generally give a true picture of the allometric relationships. This is because there is considerable scatter in the data (leading to a low correlation between mass and height) and because neither variable causally determines the other. The relevant regression equations, though often formulated in logarithmic terms, effectively treat the masses as proportional to (body height)^b. Values of b estimated by regression must usually underestimate the true functional values, doing so especially when mass and height are poorly correlated. It is therefore telling support for the hypothesis that published estimates of b both for the whole body (which range between 1.0 and 2.5) and for its component tissues and organs (which vary even more) correlate with the corresponding correlation coefficients for mass and height. There is no simple statistical technique for establishing the true functional relationships, but Monte Carlo modelling has shown that the results obtained for total body mass are compatible with a true height exponent of three. Other data, on relationships between body mass and the girths of various body parts such as the thigh and chest, are also more consistent with isometry than regression analysis has suggested. This too is demonstrated by modelling. It thus seems that much of anthropometry needs to be re-evaluated. It is not suggested that all organs and tissues scale equally with whole body size.
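
    The attenuation argument can be illustrated with a small Monte Carlo experiment: if height and mass both scatter around a latent body size and the true exponent is exactly 3, an ordinary log-log regression of mass on height recovers a noticeably smaller exponent. All parameter values below are illustrative assumptions.

```python
# Regression attenuation demo: true exponent 3, OLS estimates less.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
size = rng.normal(0.0, 0.04, n)                 # latent log body size
log_h = size + rng.normal(0.0, 0.02, n)         # log height scatters around size
log_m = 3.0 * size + rng.normal(0.0, 0.10, n)   # log mass: exact cube law + scatter

slope = np.polyfit(log_h, log_m, 1)[0]
r = np.corrcoef(log_h, log_m)[0, 1]
print(f"OLS exponent {slope:.2f} (true 3.0), correlation r = {r:.2f}")
# Prints an exponent near 2.4: OLS attenuates the true functional exponent.
```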

  12. Methodological characteristics and treatment effect sizes in oral health randomised controlled trials: Is there a relationship? Protocol for a meta-epidemiological study.

    PubMed

    Saltaji, Humam; Armijo-Olivo, Susan; Cummings, Greta G; Amin, Maryam; Flores-Mir, Carlos

    2014-02-25

    It is fundamental that randomised controlled trials (RCTs) are properly conducted in order to reach well-supported conclusions. However, there is emerging evidence that RCTs are subject to biases which can overestimate or underestimate the true treatment effect, due to flaws in the study design characteristics of such trials. The extent to which this holds true in oral health RCTs, which have some unique design characteristics compared to RCTs in other health fields, is unclear. As such, we aim to examine the empirical evidence quantifying the extent of bias associated with methodological and non-methodological characteristics in oral health RCTs. We plan to perform a meta-epidemiological study, where a sample size of 60 meta-analyses (MAs) including approximately 600 RCTs will be selected. The MAs will be randomly obtained from the Oral Health Database of Systematic Reviews using a random number table; and will be considered for inclusion if they include a minimum of five RCTs, and examine a therapeutic intervention related to one of the recognised dental specialties. RCTs identified in selected MAs will be subsequently included if their study design includes a comparison between an intervention group and a placebo group or another intervention group. Data will be extracted from selected trials included in MAs based on a number of methodological and non-methodological characteristics. Moreover, the risk of bias will be assessed using the Cochrane Risk of Bias tool. Effect size estimates and measures of variability for the main outcome will be extracted from each RCT included in selected MAs, and a two-level analysis will be conducted using a meta-meta-analytic approach with a random effects model to allow for intra-MA and inter-MA heterogeneity. The intended audiences of the findings will include dental clinicians, oral health researchers, policymakers and graduate students. The aforementioned will be introduced to the findings through workshops, seminars, round table discussions and targeted individual meetings. Other opportunities for knowledge transfer will be pursued such as key dental conferences. Finally, the results will be published as a scientific report in a dental peer-reviewed journal.

  13. Micromechanical properties of single crystals and polycrystals of pure α-titanium: anisotropy of microhardness, size effect, effect of the temperature (77-300 K)

    NASA Astrophysics Data System (ADS)

    Lubenets, S. V.; Rusakova, A. V.; Fomenko, L. S.; Moskalenko, V. A.

    2018-01-01

    The anisotropy of microhardness of pure α-Ti single crystals, the indentation size effect in single-crystal, coarse-grained (CG) pure and nanocrystalline (NC) VT1-0 titanium, and the temperature dependence of the microhardness of single-crystal and CG Ti in the temperature range 77-300 K were studied. The minimum value of hardness was obtained when indenting into the basal plane (0001). The indentation size effect (ISE) was clearly observed in the indentation of soft high-purity single-crystal iodide titanium, while it was least pronounced in a sample of nanocrystalline VT1-0 titanium. It has been demonstrated that the ISE can be described within the model of geometrically necessary dislocations (GND), which follows from the theory of strain gradient plasticity. The true hardness and other parameters of the GND model were determined for all materials. The temperature dependence of the microhardness is in agreement with the idea of the governing role of the Peierls relief in the thermally activated plastic deformation of pure titanium, as has been established and justified earlier in macroscopic tensile investigations at low temperatures. The activation energy and activation volume of dislocation motion in the strained region under the indenter were estimated.

  14. Reference interval computation: which method (not) to choose?

    PubMed

    Pavlov, Igor Y; Wilson, Andrew R; Delgado, Julio C

    2012-07-11

    When different methods are applied to reference interval (RI) calculation, the results can sometimes be substantially different, especially for small reference groups. If no reliable RI data are available, there is no way to confirm which method generates results closest to the true RI. We randomly drew samples from a public database for 33 markers. For each sample, RIs were calculated by bootstrapping, parametric, and Box-Cox transformed parametric methods. Results were compared to the values of the population RI. For approximately half of the 33 markers, the results of all 3 methods were within 3% of the true reference value. For the other markers, parametric results were either unavailable or deviated considerably from the true values. The transformed parametric method was more accurate than bootstrapping for a sample size of 60, very close to bootstrapping for a sample size of 120, but in some cases unavailable. We recommend against using parametric calculations to determine RIs. The transformed parametric method utilizing the Box-Cox transformation would be the preferable way of calculating RIs, provided it satisfies a normality test. If not, bootstrapping is always available, and is almost as accurate and precise as the transformed parametric method. Copyright © 2012 Elsevier B.V. All rights reserved.
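
    A minimal version of the recommended bootstrap calculation might look like the sketch below (a 2.5th to 97.5th percentile interval with bootstrap-averaged endpoints; illustrative only, not the paper's exact procedure):

```python
# Hedged sketch: bootstrapped reference interval from a reference sample.
import numpy as np

def bootstrap_ri(values, n_boot=2000, seed=0):
    """Average the 2.5th/97.5th percentiles over bootstrap resamples."""
    rng = np.random.default_rng(seed)
    values = np.asarray(values, dtype=float)
    lows, highs = [], []
    for _ in range(n_boot):
        resample = rng.choice(values, size=values.size, replace=True)
        lo, hi = np.percentile(resample, [2.5, 97.5])
        lows.append(lo)
        highs.append(hi)
    return float(np.mean(lows)), float(np.mean(highs))

# Example on synthetic data with 120 reference subjects.
print(bootstrap_ri(np.random.default_rng(1).normal(100.0, 10.0, 120)))
```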

  15. Comparison of structural and least-squares lines for estimating geologic relations

    USGS Publications Warehouse

    Williams, G.P.; Troutman, B.M.

    1990-01-01

    Two different goals in fitting straight lines to data are to estimate a "true" linear relation (physical law) and to predict values of the dependent variable with the smallest possible error. Regarding the first goal, a Monte Carlo study indicated that the structural-analysis (SA) method of fitting straight lines to data is superior to the ordinary least-squares (OLS) method for estimating "true" straight-line relations. The number of data points, the slope and intercept of the true relation, and the variances of the errors associated with the independent (X) and dependent (Y) variables influence the degree of agreement. For example, differences between the two line-fitting methods decrease as the error in X becomes small relative to the error in Y. Regarding the second goal (predicting the dependent variable), OLS is better than SA. Again, the difference diminishes as X takes on less error relative to Y. With respect to estimation of slope and intercept and prediction of Y, agreement between Monte Carlo results and large-sample theory was very good for sample sizes of 100, and fair to good for sample sizes of 20. The procedures and error measures are illustrated with two geologic examples. © 1990 International Association for Mathematical Geology.
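
    For concreteness, a common structural-analysis estimator is the Deming regression slope, which assumes a known ratio δ of error variances in Y and X. The sketch below implements that textbook form, which may differ in detail from the SA method used in the paper.

```python
# Hedged sketch: errors-in-variables line fit via the Deming estimator.
import numpy as np

def deming_fit(x, y, delta=1.0):
    """Return (intercept, slope) of a structural-analysis style line fit.

    delta is the assumed ratio var(error in Y) / var(error in X);
    delta = 1 gives the orthogonal-regression special case.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxx = np.var(x, ddof=1)
    syy = np.var(y, ddof=1)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    term = syy - delta * sxx
    slope = (term + np.sqrt(term**2 + 4.0 * delta * sxy**2)) / (2.0 * sxy)
    intercept = np.mean(y) - slope * np.mean(x)
    return intercept, slope
```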

  16. Comparison of True and Smoothed Puff Profile Replication on Smoking Behavior and Mainstream Smoke Emissions

    PubMed Central

    2015-01-01

    To estimate exposures to smokers from cigarettes, smoking topography is typically measured and programmed into a smoking machine to mimic human smoking, and the resulting smoke emissions are tested for relative levels of harmful constituents. However, using only the summary puff data—with a fixed puff frequency, volume, and duration—may underestimate or overestimate actual exposure to smoke toxins. In this laboratory study, we used a topography-driven smoking machine that faithfully reproduces a human smoking session and individual human topography data (n = 24) collected during previous clinical research to investigate if replicating the true puff profile (TP) versus the mathematically derived smoothed puff profile (SM) resulted in differences in particle size distributions and selected toxic/carcinogenic organic compounds from mainstream smoke emissions. Particle size distributions were measured using an electrical low pressure impactor, the masses of the size-fractionated fine and ultrafine particles were determined gravimetrically, and the collected particulate was analyzed for selected particle-bound, semivolatile compounds. Volatile compounds were measured in real time using a proton transfer reaction-mass spectrometer. By and large, TP levels for the fine and ultrafine particulate masses as well as particle-bound organic compounds were slightly lower than the SM concentrations. The volatile compounds, by contrast, showed no clear trend. Differences in emissions due to the use of the TP and SM profiles are generally not large enough to warrant abandoning the procedures used to generate the simpler smoothed profile in favor of the true profile. PMID:25536227

  17. Conspicuity of renal calculi at unenhanced CT: effects of calculus composition and size and CT technique.

    PubMed

    Tublin, Mitchell E; Murphy, Michael E; Delong, David M; Tessler, Franklin N; Kliewer, Mark A

    2002-10-01

    To determine the effects of calculus size, composition, and technique (kilovolt and milliampere settings) on the conspicuity of renal calculi at unenhanced helical computed tomography (CT). The authors performed unenhanced CT of a phantom containing 188 renal calculi of varying size and chemical composition (brushite, cystine, struvite, weddellite, whewellite, and uric acid) at 24 combinations of four kilovolt (80-140 kV) and six milliampere (200-300 mA) levels. Two radiologists, who were unaware of the location and number of calculi, reviewed the CT images and recorded where stones were detected. These observations were compared with the known positions of calculi to generate true-positive and false-positive rates. Logistic regression analysis was performed to investigate the effects of stone size, composition, and technique and to generate probability estimates of detection. Interobserver agreement was estimated with kappa statistics. Interobserver agreement was high: the mean kappa value for the two observers was 0.86. The conspicuity of stone fragments increased with increasing kilovolt and milliampere levels for all stone types. At the highest settings (140 kV and 300 mA), the detection threshold size (ie, the size of calculus that had a 50% probability of being detected) ranged from 0.81 mm ± 0.03 (weddellite) to 1.3 mm ± 0.1 (uric acid). Detection threshold size for each type of calculus increased up to 1.17-fold at lower kilovolt settings and up to 1.08-fold at lower milliampere settings. The conspicuity of small renal calculi at CT increases with higher kilovolt and milliampere settings, with higher kilovolts being particularly important.

  18. The measurement of the size distribution of artificial fogs

    NASA Technical Reports Server (NTRS)

    Deepak, A.; Cliff, W. C.; Mcdonald, J. R.; Ozarski, R.; Thomson, J. A. L.; Huffaker, R. M.

    1974-01-01

    The size distribution of the fog droplets at various fog particle concentrations in a fog chamber was determined by two methods: (1) the Stokes' velocity photographic method and (2) the active scattering particle spectrometer. It is shown that the two techniques are accurate in two different ranges of particle size: the former in the radii range 0.1 micrometers to 10.0 micrometers, and the latter for radii greater than 10.0 micrometers. This was particularly true for high particle concentration, low visibility fogs.

  19. Assessing and monitoring semi-arid shrublands using object-based image analysis and multiple endmember spectral mixture analysis.

    PubMed

    Hamada, Yuki; Stow, Douglas A; Roberts, Dar A; Franklin, Janet; Kyriakidis, Phaedon C

    2013-04-01

    Arid and semi-arid shrublands have significant biological and economical values and have been experiencing dramatic changes due to human activities. In California, California sage scrub (CSS) is one of the most endangered plant communities in the US and requires close monitoring in order to conserve this important biological resource. We investigate the utility of remote-sensing approaches--object-based image analysis applied to pansharpened QuickBird imagery (QBPS/OBIA) and multiple endmember spectral mixture analysis (MESMA) applied to SPOT imagery (SPOT/MESMA)--for estimating fractional cover of true shrub, subshrub, herb, and bare ground within CSS communities of southern California. We also explore the effectiveness of life-form cover maps for assessing CSS conditions. Overall and combined shrub cover (i.e., true shrub and subshrub) were estimated more accurately using QBPS/OBIA (mean absolute error or MAE, 8.9%) than SPOT/MESMA (MAE, 11.4%). Life-form cover from QBPS/OBIA at a 25 × 25 m grid cell size seems most desirable for assessing CSS because of its higher accuracy and spatial detail in cover estimates and its amenability to extracting other vegetation information (e.g., size, shape, and density of shrub patches). Maps derived from SPOT/MESMA at a 50 × 50 m scale are effective for retrospective analysis of life-form cover change because of their accuracies, comparable to those of QBPS/OBIA, and the availability of SPOT archive data dating back to the mid-1980s. The framework in this study can be applied to other physiognomically comparable shrubland communities.

  20. MR diffusion-weighted imaging-based subcutaneous tumour volumetry in a xenografted nude mouse model using 3D Slicer: an accurate and repeatable method

    PubMed Central

    Ma, Zelan; Chen, Xin; Huang, Yanqi; He, Lan; Liang, Cuishan; Liang, Changhong; Liu, Zaiyi

    2015-01-01

    Accurate and repeatable measurement of the gross tumour volume (GTV) of subcutaneous xenografts is crucial in the evaluation of anti-tumour therapy. Formula and image-based manual segmentation methods are commonly used for GTV measurement but are hindered by low accuracy and reproducibility. 3D Slicer is open-source software that provides semiautomatic segmentation for GTV measurements. In our study, subcutaneous GTVs from nude mouse xenografts were measured by semiautomatic segmentation with 3D Slicer based on morphological magnetic resonance imaging (mMRI) or diffusion-weighted imaging (DWI) (b = 0, 20, 800 s/mm²). These GTVs were then compared with those obtained via the formula and image-based manual segmentation methods with ITK software, using the true tumour volume as the standard reference. The effects of tumour size and shape on GTV measurements were also investigated. Our results showed that, when compared with the true tumour volume, segmentation for DWI (P = 0.060–0.671) resulted in better accuracy than that for mMRI (P < 0.001) and the formula method (P < 0.001). Furthermore, semiautomatic segmentation for DWI (intraclass correlation coefficient, ICC = 0.9999) resulted in higher reliability than manual segmentation (ICC = 0.9996–0.9998). Tumour size and shape had no effects on GTV measurement across all methods. Therefore, DWI-based semiautomatic segmentation, which is accurate and reproducible and also provides biological information, is the optimal GTV measurement method in the assessment of anti-tumour treatments. PMID:26489359

  1. The effect of nanowire length and diameter on the properties of transparent, conducting nanowire films

    NASA Astrophysics Data System (ADS)

    Bergin, Stephen M.; Chen, Yu-Hui; Rathmell, Aaron R.; Charbonneau, Patrick; Li, Zhi-Yuan; Wiley, Benjamin J.

    2012-03-01

    This article describes how the dimensions of nanowires affect the transmittance and sheet resistance of a random nanowire network. Silver nanowires with independently controlled lengths and diameters were synthesized with a gram-scale polyol synthesis by controlling the reaction temperature and time. Characterization of films composed of nanowires of different lengths but the same diameter enabled the quantification of the effect of length on the conductance and transmittance of silver nanowire films. Finite-difference time-domain calculations were used to determine the effect of nanowire diameter, overlap, and hole size on the transmittance of a nanowire network. For individual nanowires with diameters greater than 50 nm, increasing diameter increases the electrical conductance to optical extinction ratio, but the opposite is true for nanowires with diameters less than this size. Calculations and experimental data show that for a random network of nanowires, decreasing nanowire diameter increases the number density of nanowires at a given transmittance, leading to improved connectivity and conductivity at high transmittance (>90%). This information will facilitate the design of transparent, conducting nanowire films for flexible displays, organic light emitting diodes and thin-film solar cells. Electronic supplementary information (ESI) available: includes methods and transmission spectra of nanowire films. See DOI: 10.1039/c2nr30126a

  2. Evaluation of evaporation coefficient for micro-droplets exposed to low pressure: A semi-analytical approach

    NASA Astrophysics Data System (ADS)

    Chakraborty, Prodyut R.; Hiremath, Kirankumar R.; Sharma, Manvendra

    2017-02-01

    The evaporation rate of water is strongly influenced by the energy barrier due to molecular collisions and by heat transfer limitations. The evaporation coefficient, defined as the ratio of the experimentally measured evaporation rate to the maximum possible theoretical limit, has reported values that conflict over three orders of magnitude. In the present work, a semi-analytical transient heat diffusion model of droplet evaporation is developed that accounts for the change in droplet size due to evaporation from its surface when the droplet is injected into vacuum. The effect of evaporative size reduction on the cooling rate is found to be negligible. However, contrary to reported theoretical predictions, the evaporation coefficient is found to approach its theoretical limit of unity when the droplet radius is smaller than the mean free path of vapour molecules at the droplet surface. The evaporation coefficient reduces rapidly when the droplet radius is larger than the mean free path of the evaporating molecules, confirming that molecular collisions are the barrier limiting the evaporation rate. The trend in the evaporation coefficient with increasing droplet size predicted by the proposed model will facilitate obtaining a functional relation between evaporation coefficient and droplet size, and can be used for benchmarking the interaction between multiple droplets during evaporation in vacuum.
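
    The theoretical maximum rate against which such a coefficient is normalized is conventionally the Hertz–Knudsen flux; using that expression here is an assumption, since the abstract does not name it. A minimal sketch:

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
N_A = 6.02214076e23  # Avogadro constant, 1/mol

def hertz_knudsen_mass_flux(p_sat_pa: float, molar_mass_kg: float,
                            temp_k: float) -> float:
    """Theoretical maximum evaporative mass flux (kg m^-2 s^-1)."""
    m = molar_mass_kg / N_A  # mass of one molecule, kg
    return p_sat_pa * math.sqrt(m / (2.0 * math.pi * K_B * temp_k))

def evaporation_coefficient(measured_flux: float, p_sat_pa: float,
                            molar_mass_kg: float, temp_k: float) -> float:
    """Ratio of a measured flux to the Hertz-Knudsen limit."""
    return measured_flux / hertz_knudsen_mass_flux(p_sat_pa, molar_mass_kg, temp_k)

# Water at 283 K (p_sat ~ 1228 Pa); the measured flux value is hypothetical.
print(f"{evaporation_coefficient(0.05, 1228.0, 0.018, 283.0):.3f}")
```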

  3. Image Reconstruction for Hybrid True-Color Micro-CT

    PubMed Central

    Xu, Qiong; Yu, Hengyong; Bennett, James; He, Peng; Zainon, Rafidah; Doesburg, Robert; Opie, Alex; Walsh, Mike; Shen, Haiou; Butler, Anthony; Butler, Phillip; Mou, Xuanqin; Wang, Ge

    2013-01-01

    X-ray micro-CT is an important imaging tool for biomedical researchers. Our group has recently proposed a hybrid “true-color” micro-CT system to improve contrast resolution with lower system cost and radiation dose. The system incorporates an energy-resolved photon-counting true-color detector into a conventional micro-CT configuration, and can be used for material decomposition. In this paper, we demonstrate an interior color-CT image reconstruction algorithm developed for this hybrid true-color micro-CT system. A compressive sensing-based statistical interior tomography method is employed to reconstruct each channel in the local spectral imaging chain, where the reconstructed global gray-scale image from the conventional imaging chain serves as the initial guess. Principal component analysis was used to map the spectral reconstructions into the color space. The proposed algorithm was evaluated by numerical simulations, physical phantom experiments, and animal studies. The results confirm the merits of the proposed algorithm, and demonstrate the feasibility of the hybrid true-color micro-CT system. Additionally, a “color diffusion” phenomenon was observed whereby high-quality true-color images are produced not only inside the region of interest, but also in neighboring regions. It appears that harnessing this phenomenon could reduce the color detector size for a given ROI, further reducing system cost and radiation dose. PMID:22481806
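
    The PCA step maps the per-pixel spectra of the reconstructed channels into a displayable colour space. The sketch below shows one plausible way to do this with a plain SVD; it is not the authors' implementation, and the channel stack is a hypothetical input.

```python
import numpy as np

def spectral_to_color(channels: np.ndarray) -> np.ndarray:
    """Map a stack of spectral reconstructions (n_ch, H, W) to an RGB image
    via PCA on the per-pixel spectra. Sketch only, not the authors' code."""
    n_ch, h, w = channels.shape
    x = channels.reshape(n_ch, -1).T         # (n_pixels, n_ch) spectra
    x = x - x.mean(axis=0)                   # center each channel
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    scores = x @ vt[:3].T                    # scores on the top-3 components
    lo, hi = scores.min(axis=0), scores.max(axis=0)
    rgb = (scores - lo) / (hi - lo + 1e-12)  # rescale each component to [0, 1]
    return rgb.reshape(h, w, 3)

# Usage with four hypothetical energy-channel reconstructions:
# rgb = spectral_to_color(np.stack([ch1, ch2, ch3, ch4]))
```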

  4. Effects of Isometric Scaling on Vertical Jumping Performance

    PubMed Central

    Bobbert, Maarten F.

    2013-01-01

    Jump height, defined as vertical displacement in the airborne phase, depends on vertical takeoff velocity. For centuries, researchers have speculated on how jump height is affected by body size and many have adhered to what has come to be known as Borelli’s law, which states that jump height does not depend on body size per se. The underlying assumption is that the amount of work produced per kg body mass during the push-off is independent of size. However, if a big body is isometrically downscaled to a small body, the latter requires higher joint angular velocities to achieve a given takeoff velocity and work production will be more impaired by the force-velocity relationship of muscle. In the present study, the effects of pure isometric scaling on vertical jumping performance were investigated using a biologically realistic model of the human musculoskeletal system. The input of the model, muscle stimulation over time, was optimized using jump height as criterion. It was found that when the human model was miniaturized to the size of a mouse lemur, with a mass of about one-thousandth that of a human, jump height dropped from 40 cm to only 6 cm, mainly because of the force-velocity relationship. In reality, mouse lemurs achieve jump heights of about 33 cm. By implication, the unfavourable effects of the small body size of mouse lemurs on jumping performance must be counteracted by favourable effects of morphological and physiological adaptations. The same holds true for other small jumping animals. The simulations for the first time expose and explain the sheer magnitude of the isolated effects of isometric downscaling on jumping performance, to be counteracted by morphological and physiological adaptations. PMID:23936494
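
    The core mechanism, that a downscaled jumper needs higher shortening velocities and therefore loses force along the muscle force-velocity curve, can be illustrated with a toy point-mass push-off model. The Hill-type parameters and scaling exponents below are illustrative assumptions, not the authors' musculoskeletal model.

```python
def jump_height(scale: float) -> float:
    """Toy isometric-scaling model: a point mass driven through a push-off
    distance by a Hill-type force-velocity muscle. All parameters are
    hypothetical; geometric scaling sets mass ~ L^3, force ~ L^2, vmax ~ L."""
    g = 9.81
    mass = 70.0 * scale ** 3      # body mass scales with volume
    f0 = 3000.0 * scale ** 2      # max push-off force ~ muscle cross-section
    vmax = 10.0 * scale           # max shortening speed ~ muscle length
    push = 0.4 * scale            # push-off distance ~ leg length
    x, v, dt = 0.0, 0.0, 1e-5
    while x < push:
        # Hill-type force-velocity relation (curvature parameter = 3).
        force = f0 * (vmax - v) / (vmax + 3.0 * v) if v < vmax else 0.0
        v += (force / mass - g) * dt
        x += v * dt
        if v <= 0.0:              # too weak to leave the ground
            return 0.0
    return v ** 2 / (2.0 * g)     # airborne height from takeoff velocity

print(jump_height(1.0), jump_height(0.1))  # jump height collapses at small scale
```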

  5. Measles

    MedlinePlus

    Measles was once a common disease among preschool and ... of growing up. This is no longer true. Measles has not been completely eliminated as a childhood ...

  6. 5-Fluorouracil:carnauba wax microspheres for chemoembolization: an in vitro evaluation.

    PubMed

    Benita, S; Zouai, O; Benoit, J P

    1986-09-01

    5-Fluorouracil:carnauba wax microspheres were prepared using a meltable dispersion process with the aid of a surfactant as a wetting agent. It was noted that only hydrophilic surfactants were able to wet the 5-fluorouracil and substantially increase its content in the microspheres. No marked effect of the nature of the surfactant was observed on the particle size distribution of the solid microspheres. Increasing the stirring rate in the preparation process decreased first the mean droplet size of the emulsified melted dispersion in the vehicle during the heating process and, consequently, the mean particle size of the solidified microspheres during the cooling process. 5-Fluorouracil cumulative release from the microspheres followed first-order kinetics, as shown by nonlinear regression analysis. Although the kinetic results were not indicative of the true release mechanism from a single microsphere, it was believed that 5-fluorouracil release from the microspheres was probably governed by a dissolution process rather than by a leaching process through the carnauba wax microspheres.
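
    First-order cumulative release means M(t) = M∞·(1 − e^(−kt)); fitting it by nonlinear regression, as the abstract describes, can be sketched as follows. The release data points here are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def first_order_release(t, m_inf, k):
    """Cumulative release M(t) = M_inf * (1 - exp(-k * t))."""
    return m_inf * (1.0 - np.exp(-k * t))

# Hypothetical data: time (h) and cumulative 5-FU released (% of load).
t = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 12.0, 24.0])
released = np.array([18.0, 31.0, 52.0, 74.0, 91.0, 96.0, 99.0])

(m_inf, k), _ = curve_fit(first_order_release, t, released, p0=(100.0, 0.3))
print(f"M_inf = {m_inf:.1f}%, k = {k:.2f} 1/h")
```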

  7. Size effects influence on conducting properties of Cu-Nb alloy microcomposites at cryogenic temperature

    NASA Astrophysics Data System (ADS)

    Guryev, Valentin V.; Polikarpova, Maria V.; Lukyanov, Pavel A.; Khlebova, Natalya E.; Pantsyrny, Viktor I.

    2018-03-01

    A comprehensive study has been carried out of the conductivity of heavily deformed Cu-16wt%Nb nanostructured wires at room and cryogenic temperatures. When the true strain exceeds 5, the growth rate of the resistivity changes qualitatively at all temperatures. It is shown that this behavior is governed mostly by interface scattering. At 10 K a stepwise increase of resistivity has been found, which is speculated to be a signature of amorphous region formation at the Cu/Nb interfaces. Simultaneously, the superconducting transition temperature (Tcs) falls due to the proximity effect. The deviation of the experimental Tcs values from those predicted by the classical model is discussed.

  8. Astrophysics in 2001

    NASA Astrophysics Data System (ADS)

    Trimble, Virginia; Aschwanden, Markus J.

    2002-05-01

    During the year, astronomers provided explanations for solar topics ranging from the multiple personality disorder of neutrinos to cannibalism of CMEs (coronal mass ejections) and extra-solar topics including quivering stars, out-of-phase gaseous media, black holes of all sizes (too large, too small, and too medium), and the existence of the universe. Some of these explanations are probably, or at least possibly, true, though the authors are not betting large sums on any one. The data ought to remain true forever, though this requires a careful definition of "data" (think of the Martian canals).

  9. Numerical Large Deviation Analysis of the Eigenstate Thermalization Hypothesis

    NASA Astrophysics Data System (ADS)

    Yoshizawa, Toru; Iyoda, Eiki; Sagawa, Takahiro

    2018-05-01

    A plausible mechanism of thermalization in isolated quantum systems is based on the strong version of the eigenstate thermalization hypothesis (ETH), which states that all the energy eigenstates in the microcanonical energy shell have thermal properties. We numerically investigate the ETH by focusing on the large deviation property, which directly evaluates the ratio of athermal energy eigenstates in the energy shell. As a consequence, we have systematically confirmed that the strong ETH is indeed true even for near-integrable systems. Furthermore, we found that the finite-size scaling of the ratio of athermal eigenstates is a double exponential for nonintegrable systems. Our result illuminates the universal behavior of quantum chaos, and suggests that a large deviation analysis would serve as a powerful method to investigate thermalization in the presence of the large finite-size effect.

  10. Non-parametric methods for cost-effectiveness analysis: the central limit theorem and the bootstrap compared.

    PubMed

    Nixon, Richard M; Wonderling, David; Grieve, Richard D

    2010-03-01

    Cost-effectiveness analyses (CEA) alongside randomised controlled trials commonly estimate incremental net benefits (INB), with 95% confidence intervals, and compute cost-effectiveness acceptability curves and confidence ellipses. Two alternative non-parametric methods for estimating INB are to apply the central limit theorem (CLT) or to use the non-parametric bootstrap method, although it is unclear which method is preferable. This paper describes the statistical rationale underlying each of these methods and illustrates their application with a trial-based CEA. It compares the sampling uncertainty from using either technique in a Monte Carlo simulation. The experiments are repeated varying the sample size and the skewness of costs in the population. The results showed that, even when data were highly skewed, both methods accurately estimated the true standard errors (SEs) when sample sizes were moderate to large (n>50), and also gave good estimates for small data sets with low skewness. However, when sample sizes were relatively small and the data highly skewed, using the CLT rather than the bootstrap led to slightly more accurate SEs. We conclude that while in general using either method is appropriate, the CLT is easier to implement, and provides SEs that are at least as accurate as the bootstrap. (c) 2009 John Wiley & Sons, Ltd.
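
    The two non-parametric approaches compared in this study can be sketched directly: the CLT route computes the SE of the incremental net benefit from the per-patient net-benefit variances, while the bootstrap resamples each arm. The data, the willingness-to-pay value, and the sample sizes below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
lam = 20000.0  # willingness to pay per unit of effect (assumed)

# Hypothetical trial arms: skewed (log-normal) costs, normal effects.
cost_t, cost_c = rng.lognormal(8.0, 1.2, 200), rng.lognormal(8.0, 1.0, 200)
eff_t, eff_c = rng.normal(0.7, 0.2, 200), rng.normal(0.6, 0.2, 200)

nb_t, nb_c = lam * eff_t - cost_t, lam * eff_c - cost_c  # per-patient net benefit
inb = nb_t.mean() - nb_c.mean()                          # incremental net benefit

# CLT-based SE from the arm-wise variances of net benefit.
se_clt = np.sqrt(nb_t.var(ddof=1) / nb_t.size + nb_c.var(ddof=1) / nb_c.size)

# Non-parametric bootstrap SE: resample each arm with replacement.
boot = [rng.choice(nb_t, nb_t.size).mean() - rng.choice(nb_c, nb_c.size).mean()
        for _ in range(2000)]
se_boot = np.std(boot, ddof=1)

print(f"INB = {inb:.0f}, SE(CLT) = {se_clt:.0f}, SE(bootstrap) = {se_boot:.0f}")
```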

  11. The size of a pilot study for a clinical trial should be calculated in relation to considerations of precision and efficiency.

    PubMed

    Sim, Julius; Lewis, Martyn

    2012-03-01

    To investigate methods to determine the size of a pilot study to inform a power calculation for a randomized controlled trial (RCT) using an interval/ratio outcome measure. Calculations based on confidence intervals (CIs) for the sample standard deviation (SD). Based on CIs for the sample SD, methods are demonstrated whereby (1) the observed SD can be adjusted to secure the desired level of statistical power in the main study with a specified level of confidence; (2) the sample for the main study, if calculated using the observed SD, can be adjusted, again to obtain the desired level of statistical power in the main study; (3) the power of the main study can be calculated for the situation in which the SD in the pilot study proves to be an underestimate of the true SD; and (4) an "efficient" pilot size can be determined to minimize the combined size of the pilot and main RCT. Trialists should calculate the appropriate size of a pilot study, just as they should the size of the main RCT, taking into account the twin needs to demonstrate efficiency in terms of recruitment and to produce precise estimates of treatment effect. Copyright © 2012 Elsevier Inc. All rights reserved.
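
    Method (1) above, inflating the observed pilot SD to an upper confidence limit before the power calculation, follows from (n−1)s²/σ² having a chi-square distribution. The sketch below applies that adjustment and feeds it into the standard two-arm sample-size formula; the pilot values and target difference are hypothetical.

```python
import math
from scipy.stats import chi2, norm

def adjusted_sd(sd_pilot: float, n_pilot: int, confidence: float = 0.8) -> float:
    """One-sided upper confidence limit for the true SD, from the
    chi-square distribution of (n - 1) * s^2 / sigma^2."""
    df = n_pilot - 1
    return sd_pilot * (df / chi2.ppf(1.0 - confidence, df)) ** 0.5

def n_per_arm(sd: float, delta: float, alpha: float = 0.05,
              power: float = 0.9) -> int:
    """Standard sample size per arm for a two-sample comparison of means."""
    z = norm.ppf(1.0 - alpha / 2.0) + norm.ppf(power)
    return math.ceil(2.0 * (z * sd / delta) ** 2)

sd_adj = adjusted_sd(12.0, 20)  # SD of 12 observed in a 20-subject pilot
print(f"adjusted SD = {sd_adj:.1f}, n/arm = {n_per_arm(sd_adj, delta=5.0)}")
```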

  12. Role of entrapped vapor bubbles during microdroplet evaporation

    NASA Astrophysics Data System (ADS)

    Putnam, Shawn A.; Byrd, Larry W.; Briones, Alejandro M.; Hanchak, Michael S.; Ervin, Jamie S.; Jones, John G.

    2012-08-01

    On superheated surfaces, the air bubble trapped during impingement grows into a larger vapor bubble and oscillates at the frequency predicted for thermally induced capillary waves. In some cases, the entrapped vapor bubble penetrates the droplet interface, leaving a micron-sized coffee-ring pattern of pure fluid. Vapor bubble entrapment, however, does not influence the evaporation rate. This is also true on laser-heated surfaces, where a laser can thermally excite capillary waves and induce bubble oscillations over a broad range of frequencies, suggesting that exciting perturbations in a pinned droplet's interface is not an effective avenue for enhancing evaporative heat transfer.

  13. Effects of eccentricity on color contrast.

    PubMed

    Vanston, John E; Crognale, Michael A

    2018-04-01

    Using near-threshold stimuli, human color sensitivity has been shown to decrease across the visual field, likely due in part to physiological differences between the fovea and periphery. It remains unclear to what extent this holds true for suprathreshold stimuli. The current study used suprathreshold contrast matching to examine how perceived contrast varies with eccentricity along the cardinal axes in a cone-opponent space. Our data show that, despite increasing stimulus size in the periphery, the LM axis stimuli were still perceived as reduced in contrast, whereas the S axis perceived contrast was observed to increase with eccentricity.

  14. Effect of gage size on the measurement of local heat flux. [formulas for determining gage averaging errors

    NASA Technical Reports Server (NTRS)

    Baumeister, K. J.; Papell, S. S.

    1973-01-01

    General formulas are derived for determining gage averaging errors of strip-type heat flux meters used in the measurement of one-dimensional heat flux distributions. In addition, a correction procedure is presented which allows a better estimate for the true value of the local heat flux. As an example of the technique, the formulas are applied to the cases of heat transfer to air slot jets impinging on flat and concave surfaces. It is shown that for many practical problems, the use of very small heat flux gages is often unnecessary.

  15. Counting (green) jobs in Queensland's waste and recycling sector.

    PubMed

    Davis, Georgina

    2013-09-01

    The waste and recycling sector has been identified as a green industry and, as such, jobs within this sector may be classed as 'green jobs'. Many governments have seen green jobs as a way of increasing employment, particularly during the global financial crisis. However, the methods used to define and quantify green jobs directly affect the resulting counts. In December 2010, Queensland introduced a waste strategy that stated an intent to increase green jobs within the waste sector. This article discusses the context and existing issues associated with quantifying green jobs within Queensland's waste and recycling sector, and reviews the survey that has sought to quantify the true size of the Queensland industry sector. This research identified nearly 5500 jobs in Queensland's private waste management and recycling sector, which indicates that official data do not accurately reflect the true size of the sector.

  16. The earth is flat (p > 0.05): significance thresholds and the crisis of unreplicable research

    PubMed Central

    Amrhein, Valentin; Korner-Nievergelt, Fränzi; Roth, Tobias

    2017-01-01

    The widespread use of ‘statistical significance’ as a license for making a claim of a scientific finding leads to considerable distortion of the scientific process (according to the American Statistical Association). We review why degrading p-values into ‘significant’ and ‘nonsignificant’ contributes to making studies irreproducible, or to making them seem irreproducible. A major problem is that we tend to take small p-values at face value, but mistrust results with larger p-values. In either case, p-values tell little about the reliability of research, because they are hardly replicable even if an alternative hypothesis is true. Significance (p ≤ 0.05) is itself hardly replicable: at a good statistical power of 80%, two studies will be ‘conflicting’, meaning that one is significant and the other is not, in one third of the cases if there is a true effect. A replication can therefore not be interpreted as having failed only because it is nonsignificant. Many apparent replication failures may thus reflect faulty judgment based on significance thresholds rather than a crisis of unreplicable research. Reliable conclusions on the replicability and practical importance of a finding can only be drawn using cumulative evidence from multiple independent studies. However, applying significance thresholds makes cumulative knowledge unreliable. One reason is that with anything but ideal statistical power, significant effect sizes will be biased upwards. Interpreting inflated significant results while ignoring nonsignificant results will thus lead to wrong conclusions. But current incentives to hunt for significance lead to selective reporting and to publication bias against nonsignificant findings. Data dredging, p-hacking, and publication bias should be addressed by removing fixed significance thresholds. Consistent with the recommendations of the late Ronald Fisher, p-values should be interpreted as graded measures of the strength of evidence against the null hypothesis. Larger p-values also offer some evidence against the null hypothesis; they cannot be interpreted as supporting the null hypothesis or as grounds for the false conclusion that ‘there is no effect’. Information on the possible true effect sizes that are compatible with the data must be obtained from the point estimate, e.g., from a sample average, and from the interval estimate, such as a confidence interval. We review how confusion about the interpretation of larger p-values can be traced back to historical disputes among the founders of modern statistics. We further discuss potential arguments against removing significance thresholds, for example that decision rules should rather be more stringent, that sample sizes could decrease, or that p-values should better be completely abandoned. We conclude that whatever method of statistical inference we use, dichotomous threshold thinking must give way to non-automated informed judgment. PMID:28698825
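
    The one-third figure follows directly: with power 0.8, two independent studies of a true effect disagree on significance with probability 2 × 0.8 × 0.2 = 0.32. A small simulation makes this concrete; the effect size and group size below are chosen only to give roughly 80% power.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n, d, reps = 64, 0.5, 10_000  # n = 64 per group gives ~80% power at d = 0.5

def significant() -> bool:
    """One two-sample t-test of a true effect of size d."""
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(d, 1.0, n)
    return stats.ttest_ind(a, b).pvalue <= 0.05

pairs = [(significant(), significant()) for _ in range(reps)]
conflict = np.mean([p[0] != p[1] for p in pairs])
print(f"conflicting significance in {conflict:.0%} of study pairs")  # ~32%
```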

  17. Oracle estimation of parametric models under boundary constraints.

    PubMed

    Wong, Kin Yau; Goldberg, Yair; Fine, Jason P

    2016-12-01

    In many classical estimation problems, the parameter space has a boundary. In most cases, the standard asymptotic properties of the estimator do not hold when some of the underlying true parameters lie on the boundary. However, without knowledge of the true parameter values, confidence intervals constructed assuming that the parameters lie in the interior are generally over-conservative. A penalized estimation method is proposed in this article to address this issue. An adaptive lasso procedure is employed to shrink the parameters to the boundary, yielding oracle inference which adapt to whether or not the true parameters are on the boundary. When the true parameters are on the boundary, the inference is equivalent to that which would be achieved with a priori knowledge of the boundary, while if the converse is true, the inference is equivalent to that which is obtained in the interior of the parameter space. The method is demonstrated under two practical scenarios, namely the frailty survival model and linear regression with order-restricted parameters. Simulation studies and real data analyses show that the method performs well with realistic sample sizes and exhibits certain advantages over standard methods. © 2016, The International Biometric Society.

  18. Coarsening and pattern formation during true morphological phase separation in unstable thin films under gravity

    NASA Astrophysics Data System (ADS)

    Kumar, Avanish; Narayanam, Chaitanya; Khanna, Rajesh; Puri, Sanjay

    2017-12-01

    We address in detail the problem of true morphological phase separation (MPS) in three-dimensional or (2+1)-dimensional unstable thin liquid films (>100 nm) under the influence of gravity. The free-energy functionals of these films are asymmetric and show two points of common tangency, which facilitates the formation of two equilibrium phases. Three distinct patterns formed by the relative preponderance of these phases are clearly identified in "true MPS". The asymmetry induces two different pathways of pattern formation for true MPS, viz., a defect pathway and a direct pathway. The pattern formation and phase-ordering dynamics have been studied using statistical measures such as the structure factor, the correlation function, and growth laws. In the late stage of coarsening, the system reaches a scaling regime for both pathways, and the characteristic domain size follows the Lifshitz-Slyozov growth law, L(t) ~ t^(1/3). However, for the defect pathway, there is a crossover of domain growth behavior from L(t) ~ t^(1/4) to t^(1/3) in the dynamical scaling regime. We also underline the analogies and differences behind the mechanisms of MPS and true MPS in thin liquid films and generic spinodal phase separation in binary mixtures.

  19. Proactive and Retroactive Effects of Negative Suggestion

    ERIC Educational Resources Information Center

    Brown, Alan S.; Brown, Christine M.; Mosbacher, Joy L.; Dryden, W. Erich

    2006-01-01

    The negative effects of false information presented either prior to (proactive interference; PI) or following (retroactive interference; RI) true information were examined with word definitions (Experiment 1) and trivia facts (Experiment 2). Participants were explicitly aware of which information was true and false when shown, and true-false…

  20. Navigating aerial transects with a laptop computer

    USGS Publications Warehouse

    Anthony, R. Michael; Stehn, R.A.

    1994-01-01


  1. A comparison of certain methods of measuring ranges of small mammals

    USGS Publications Warehouse

    Stickel, L.F.

    1954-01-01

    SUMMARY: A comparison is made of different methods of determining size of home range from grid trapping data. Studies of artificial populations show that a boundary strip method of measuring area and an adjusted range length give sizes closer to the true range than do minimum area or observed range length methods. In simulated trapping of artificial populations, the known range size increases with successive captures until a level is reached that approximates the true range. The same general pattern is followed whether traps are visited at random or traps nearer the center of the range are favored; but when central traps are favored the curve levels more slowly. Range size is revealed with fewer captures when traps are far apart than when they are close together. The curve levels more slowly for oblong ranges than for circular ranges of the same area. Fewer captures are required to determine range length than to determine range area. Other examples of simulated trapping in artificial populations are used to provide measurements of distances from the center of activity and distances between successive captures. These are compared with similar measurements taken from Peromyscus trapping data. The similarity of range sizes found in certain field comparisons of area trapping, colored scat collections, and trailing is cited. A comparison of home range data obtained by area trapping and nest box studies is discussed. It is shown that when traps are set too far apart to include two or more in the range of each animal, calculation of average range size gives biased results. The smaller ranges are not expressed and cannot be included in the averages. The result is that range estimates are smaller at closer spacings and greater at wider spacings, purely as a result of these erroneous calculations and not reflecting any varying behavior of the animals. The problem of variation in apparent home range with variation in trap spacing is considered further by trapping in an artificial population. It is found that trap spacing can alter the apparent size of range even when biological factors are excluded and trap visiting is random. The desirability of excluding travels outside the normal range from home range calculations is discussed. Effects of varying the trapping plan by setting alternate rows of traps, or setting two traps per site, are discussed briefly.

  2. BODY SIZE-SPECIFIC EFFECTIVE DOSE CONVERSION COEFFICIENTS FOR CT SCANS.

    PubMed

    Romanyukha, Anna; Folio, Les; Lamart, Stephanie; Simon, Steven L; Lee, Choonsik

    2016-12-01

    Effective dose from computed tomography (CT) examinations is usually estimated using the scanner-provided dose-length product and using conversion factors, also known as k-factors, which correspond to scan regions and differ by age according to five categories: 0, 1, 5, 10 y and adult. However, patients often deviate from the standard body size on which the conversion factor is based. In this study, a method for deriving body size-specific k-factors is presented, which can be determined from a simple regression curve based on patient diameter at the centre of the scan range. Using the International Commission on Radiological Protection reference paediatric and adult computational phantoms paired with Monte Carlo simulation of CT X-ray beams, the authors derived a regression-based k-factor model for the following CT scan types: head-neck, head, neck, chest, abdomen, pelvis, abdomen-pelvis (AP) and chest-abdomen-pelvis (CAP). The resulting regression functions were applied to a total of 105 paediatric and 279 adult CT scans randomly sampled from patients who underwent chest, AP and CAP scans at the National Institutes of Health Clinical Center. The authors have calculated and compared the effective doses derived from the conventional age-specific k-factors with the values computed using their body size-specific k-factor. They found that by using the age-specific k-factor, paediatric patients tend to have underestimates (up to 3-fold) of effective dose, while underweight and overweight adult patients tend to have underestimates (up to 2.6-fold) and overestimates (up to 4.6-fold) of effective dose, respectively, compared with the effective dose determined from their body size-dependent factors. The authors present these size-specific k-factors as an alternative to the existing age-specific factors. The body size-specific k-factor will assess effective dose more precisely and on a more individual level than the conventional age-specific k-factors and, hence, improve awareness of the true exposure, which is important for the clinical community to understand. Published by Oxford University Press 2016. This work is written by (a) US Government employee(s) and is in the public domain in the US.
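
    In DLP-based dosimetry the effective dose is E = k × DLP; the paper's contribution is making k a regression function of patient diameter rather than an age-category constant. The sketch below shows the shape of such a calculation with an assumed exponential regression; the coefficients are hypothetical placeholders, not the fitted values from this study.

```python
import math

def k_factor_chest(diameter_cm: float) -> float:
    """Body size-specific conversion coefficient (mSv per mGy*cm) modeled
    as an exponential function of patient diameter at the scan centre.
    The coefficients are hypothetical, not the paper's fitted values."""
    a, b = 0.045, 0.042  # assumed regression coefficients
    return a * math.exp(-b * diameter_cm)

def effective_dose_msv(dlp_mgy_cm: float, diameter_cm: float) -> float:
    """Effective dose E = k(diameter) * DLP."""
    return k_factor_chest(diameter_cm) * dlp_mgy_cm

# Hypothetical adult chest scan: 28 cm diameter, DLP = 400 mGy*cm.
print(f"{effective_dose_msv(400.0, 28.0):.1f} mSv")
```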

  3. Can Effective Synthetic Vision System Displays be Implemented on Limited Size Display Spaces?

    NASA Technical Reports Server (NTRS)

    Comstock, J. Raymond, Jr.; Glaab, Lou J.; Prinzel, Lance J.; Elliott, Dawn M.

    2004-01-01

    The Synthetic Vision Systems (SVS) element of the NASA Aviation Safety Program is striving to eliminate poor visibility as a causal factor in aircraft accidents, and to enhance operational capabilities of all types of aircraft. To accomplish these safety and situation awareness improvements, the SVS concepts are designed to provide a clear view of the world ahead through the display of computer-generated imagery derived from an onboard database of terrain, obstacle and airport information. An important issue for the SVS concept is whether useful and effective SVS displays can be implemented on limited-size display spaces, as would be required to implement this technology on older aircraft with physically smaller instrument spaces. In this study, prototype SVS displays were implemented on the following display sizes: (a) size "A" (e.g., 757 EADI), (b) form factor "D" (e.g., 777 PFD), and (c) new size "X" (rectangular flat panel, approximately 20 x 25 cm). Testing was conducted in a high-resolution graphics simulation facility at NASA Langley Research Center. Specific issues under test included the display size as noted above and the field-of-view (FOV) to be shown on the display; directly related to FOV is the degree of minification of the displayed image. In simulated approaches with display size and FOV conditions held constant, no significant differences by these factors were found. Preferred FOV, based on performance, was determined using approaches during which pilots could select the FOV. Mean preference ratings for FOV were in the following order: (1) 30 deg., (2) unity, (3) 60 deg., and (4) 90 deg.; this ordering held true for all display sizes tested. Limitations of the present study and future research directions are discussed.

  4. A validation of 11 body-condition indices in a giant snake species that exhibits positive allometry.

    PubMed

    Falk, Bryan G; Snow, Ray W; Reed, Robert N

    2017-01-01

    Body condition is a gauge of the energy stores of an animal, and though it has important implications for fitness, survival, competition, and disease, it is difficult to measure directly. Instead, body condition is frequently estimated as a body condition index (BCI) using length and mass measurements. A desirable BCI should accurately reflect true body condition and be unbiased with respect to size (i.e., mean BCI estimates should not change across different length or mass ranges), and choosing the most-appropriate BCI is not straightforward. We evaluated 11 different BCIs in 248 Burmese pythons (Python bivittatus), organisms that, like other snakes, exhibit simple body plans well characterized by length and mass. We found that the length-mass relationship in Burmese pythons is positively allometric, where mass increases rapidly with respect to length, and this allowed us to explore the effects of allometry on BCI verification. We employed three alternative measures of 'true' body condition: percent fat, scaled fat, and residual fat. The latter two measures mostly accommodated allometry in true body condition, but percent fat did not. Our inferences of the best-performing BCIs depended heavily on our measure of true body condition, with most BCIs falling into one of two groups. The first group contained most BCIs based on ratios, and these were associated with percent fat and body length (i.e., were biased). The second group contained the scaled mass index and most of the BCIs based on linear regressions, and these were associated with both scaled and residual fat but not body length (i.e., were unbiased). Our results show that potential differences in measures of true body condition should be explored in BCI verification studies, particularly in organisms undergoing allometric growth. Furthermore, the caveats of each BCI and similarities to other BCIs are important to consider when determining which BCI is appropriate for any particular taxon.
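
    Of the BCIs discussed above, the regression-based residual index and the scaled mass index can be sketched compactly on log-transformed length-mass data; the snake measurements below are simulated with the positive allometry the abstract describes (exponent > 3), purely for illustration.

```python
import numpy as np

def condition_indices(length: np.ndarray, mass: np.ndarray):
    """Residual BCI from an OLS fit of log(mass) on log(length), and the
    scaled mass index (SMA slope = OLS slope / correlation coefficient)."""
    x, y = np.log(length), np.log(mass)
    b_ols, a_ols = np.polyfit(x, y, 1)
    residual_index = y - (a_ols + b_ols * x)
    b_sma = b_ols / np.corrcoef(x, y)[0, 1]
    l0 = np.exp(x.mean())                        # reference length
    scaled_mass = mass * (l0 / length) ** b_sma  # scaled mass index
    return residual_index, scaled_mass

# Hypothetical python-like data: SVL (cm) and mass (g), allometric exponent 3.2.
rng = np.random.default_rng(0)
svl = rng.uniform(80.0, 400.0, 100)
mass = 2e-4 * svl ** 3.2 * np.exp(rng.normal(0.0, 0.1, 100))
res_bci, smi = condition_indices(svl, mass)
```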

  5. Can we estimate molluscan abundance and biomass on the continental shelf?

    NASA Astrophysics Data System (ADS)

    Powell, Eric N.; Mann, Roger; Ashton-Alcox, Kathryn A.; Kuykendall, Kelsey M.; Chase Long, M.

    2017-11-01

    Few empirical studies have focused on the effect of sample density on the estimate of abundance of the dominant carbonate-producing fauna of the continental shelf. Here, we present such a study and consider the implications of suboptimal sampling design on estimates of abundance and size-frequency distribution. We focus on a principal carbonate producer of the U.S. Atlantic continental shelf, the Atlantic surfclam, Spisula solidissima. To evaluate the degree to which the results are typical, we analyze a dataset for the principal carbonate producer of Mid-Atlantic estuaries, the Eastern oyster Crassostrea virginica, obtained from Delaware Bay. These two species occupy different habitats and display different lifestyles, yet demonstrate similar challenges to survey design and similar trends with sampling density. The median of a series of simulated survey mean abundances, the central tendency obtained over a large number of surveys of the same area, always underestimated true abundance at low sample densities. More dramatic were the trends in the probability of a biased outcome. As sample density declined, the probability of a survey availability event, defined as a survey yielding indices >125% or <75% of the true population abundance, increased and that increase was disproportionately biased towards underestimates. For these cases where a single sample accessed about 0.001-0.004% of the domain, 8-15 random samples were required to reduce the probability of a survey availability event below 40%. The problem of differential bias, in which the probabilities of a biased-high and a biased-low survey index were distinctly unequal, was resolved with fewer samples than the problem of overall bias. These trends suggest that the influence of sampling density on survey design comes with a series of incremental challenges. At woefully inadequate sampling density, the probability of a biased-low survey index will substantially exceed the probability of a biased-high index. The survey time series on the average will return an estimate of the stock that underestimates true stock abundance. If sampling intensity is increased, the frequency of biased indices balances between high and low values. Incrementing sample number from this point steadily reduces the likelihood of a biased survey; however, the number of samples necessary to drive the probability of survey availability events to a preferred level of infrequency may be daunting. Moreover, certain size classes will be disproportionately susceptible to such events and the impact on size frequency will be species specific, depending on the relative dispersion of the size classes.
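
    The survey availability events described above are easy to reproduce in a Monte Carlo sketch: draw random stations from a patchy density surface and count how often the survey mean falls outside 75–125% of the true mean. The patchiness model below is hypothetical, not the surfclam or oyster data.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical patchy benthic density field: rare dense patches, sparse background.
density = np.where(rng.random(100_000) < 0.05,
                   rng.lognormal(3.0, 1.0, 100_000), 0.01)
true_mean = density.mean()

def availability_event_rate(n_samples: int, n_surveys: int = 5000) -> float:
    """Fraction of simulated surveys with a mean <75% or >125% of the truth."""
    idx = rng.integers(0, density.size, (n_surveys, n_samples))
    means = density[idx].mean(axis=1)
    return float(np.mean((means < 0.75 * true_mean) | (means > 1.25 * true_mean)))

for n in (4, 8, 15, 30):
    print(f"{n:>2} samples: availability events in "
          f"{availability_event_rate(n):.0%} of surveys")
```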

  6. Précis of statistical significance: rationale, validity, and utility.

    PubMed

    Chow, S L

    1998-04-01

    The null-hypothesis significance-test procedure (NHSTP) is defended in the context of the theory-corroboration experiment, as well as the following contrasts: (a) substantive hypotheses versus statistical hypotheses, (b) theory corroboration versus statistical hypothesis testing, (c) theoretical inference versus statistical decision, (d) experiments versus nonexperimental studies, and (e) theory corroboration versus treatment assessment. The null hypothesis can be true because it is the hypothesis that errors are randomly distributed in data. Moreover, the null hypothesis is never used as a categorical proposition. Statistical significance means only that chance influences can be excluded as an explanation of data; it does not identify the nonchance factor responsible. The experimental conclusion is drawn with the inductive principle underlying the experimental design. A chain of deductive arguments gives rise to the theoretical conclusion via the experimental conclusion. The anomalous relationship between statistical significance and the effect size often used to criticize NHSTP is more apparent than real. The absolute size of the effect is not an index of evidential support for the substantive hypothesis. Nor is the effect size, by itself, informative as to the practical importance of the research result. Being a conditional probability, statistical power cannot be the a priori probability of statistical significance. The validity of statistical power is debatable because statistical significance is determined with a single sampling distribution of the test statistic based on H0, whereas it takes two distributions to represent statistical power or effect size. Sample size should not be determined in the mechanical manner envisaged in power analysis. It is inappropriate to criticize NHSTP for nonstatistical reasons. At the same time, neither effect size, nor confidence interval estimate, nor posterior probability can be used to exclude chance as an explanation of data. Neither can any of them fulfill the nonstatistical functions expected of them by critics.

  7. Determination of Flaw Size from Thermographic Data

    NASA Technical Reports Server (NTRS)

    Winfree, William P.; Howell, Patricia A.; Zalameda, Joseph N.

    2014-01-01

    Conventional methods for reducing the pulsed thermographic responses of delaminations tend to overestimate the size of the flaw. Since the heat diffuses in the plane parallel to the surface, the resulting temperature profile over the flaw is larger than the flaw. A variational method is presented for reducing the thermographic data to produce an estimated size for the flaw that is much closer to the true size of the flaw. The size is determined from the spatial thermal response of the exterior surface above the flaw and a constraint on the length of the contour surrounding the flaw. The technique is applied to experimental data acquired on a flat bottom hole composite specimen.

  8. Occupancy in continuous habitat

    USGS Publications Warehouse

    Efford, Murray G.; Dawson, Deanna K.

    2012-01-01

    The probability that a site has at least one individual of a species ('occupancy') has come to be widely used as a state variable for animal population monitoring. The available statistical theory for estimation when detection is imperfect applies particularly to habitat patches or islands, although it is also used for arbitrary plots in continuous habitat. The probability that such a plot is occupied depends on plot size and home-range characteristics (size, shape and dispersion) as well as population density. Plot size is critical to the definition of occupancy as a state variable, but clear advice on plot size is missing from the literature on the design of occupancy studies. We describe models for the effects of varying plot size and home-range size on expected occupancy. Temporal, spatial, and species variation in average home-range size is to be expected, but information on home ranges is difficult to retrieve from species presence/absence data collected in occupancy studies. The effect of variable home-range size is negligible when plots are very large (>100 x area of home range), but large plots pose practical problems. At the other extreme, sampling of 'point' plots with cameras or other passive detectors allows the true 'proportion of area occupied' to be estimated. However, this measure equally reflects home-range size and density, and is of doubtful value for population monitoring or cross-species comparisons. Plot size is ill-defined and variable in occupancy studies that detect animals at unknown distances, the commonest example being unlimited-radius point counts of song birds. We also find that plot size is ill-defined in recent treatments of "multi-scale" occupancy; the respective scales are better interpreted as temporal (instantaneous and asymptotic) rather than spatial. Occupancy is an inadequate metric for population monitoring when it is confounded with home-range size or detection distance.
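
    The dependence of occupancy on plot size and home-range size described above can be written down for the simplest case: Poisson-distributed home-range centres and circular ranges, where a square plot is 'occupied' if any centre falls within the plot dilated by the home-range radius. This is a minimal sketch of that kind of model, not the authors' formulation.

```python
import math

def expected_occupancy(density_per_ha: float, plot_side_m: float,
                       home_range_radius_m: float) -> float:
    """P(occupied) = 1 - exp(-D * A_eff), with A_eff the square plot
    dilated by the home-range radius (Poisson centres, circular ranges)."""
    s, r = plot_side_m, home_range_radius_m
    a_eff_ha = (s * s + 4.0 * s * r + math.pi * r * r) / 10_000.0
    return 1.0 - math.exp(-density_per_ha * a_eff_ha)

# Same density and home range, two plot sizes: occupancy is design-dependent.
print(expected_occupancy(0.5, 100.0, 50.0))  # 1-ha plot -> ~0.85
print(expected_occupancy(0.5, 300.0, 50.0))  # 9-ha plot -> ~1.00
```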

  9. A nine-country study of the protein content and amino acid composition of mature human milk

    PubMed Central

    Feng, Ping; Gao, Ming; Burgher, Anita; Zhou, Tian Hui; Pramuk, Kathryn

    2016-01-01

    Background Numerous studies have evaluated protein and amino acid levels in human milk. However, research in this area has been limited by small sample sizes and study populations with little ethnic or racial diversity. Objective Evaluate the protein and amino acid composition of mature (≥30 days) human milk samples collected from a large, multinational study using highly standardized methods for sample collection, storage, and analysis. Design Using a single, centralized laboratory, human milk samples from 220 women (30–188 days postpartum) from nine countries were analyzed for amino acid composition using Waters AccQ-Tag high-performance liquid chromatography and total nitrogen content using the LECO FP-528 nitrogen analyzer. Total protein was calculated as total nitrogen × 6.25. True protein, which includes protein, free amino acids, and peptides, was calculated from the total amino acids. Results Mean total protein from individual countries (standard deviation [SD]) ranged from 1,133 (125.5) to 1,366 (341.4) mg/dL; the mean across all countries (SD) was 1,192 (200.9) mg/dL. Total protein, true protein, and amino acid composition were not significantly different across countries except Chile, which had higher total and true protein. Amino acid profiles (percent of total amino acids) did not differ across countries. Total and true protein concentrations and 16 of 18 amino acid concentrations declined with the stage of lactation. Conclusions Total protein, true protein, and individual amino acid concentrations in human milk steadily decline from 30 to 151 days of lactation, and are significantly higher in the second month of lactation compared with the following 4 months. There is a high level of consistency in the protein content and amino acid composition of human milk across geographic locations. The size and diversity of the study population and highly standardized procedures for the collection, storage, and analysis of human milk support the validity and broad application of these findings. PMID:27569428
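
    The conversions used in the study are simple enough to state exactly: total protein is total nitrogen multiplied by 6.25. The nitrogen value in the example below is hypothetical.

```python
def total_protein_mg_dl(total_nitrogen_mg_dl: float) -> float:
    """Total protein = total nitrogen x 6.25 (conversion used in the study)."""
    return total_nitrogen_mg_dl * 6.25

# Hypothetical measurement: 190 mg/dL total nitrogen -> 1187.5 mg/dL protein,
# in the range of the country means reported above.
print(total_protein_mg_dl(190.0))
```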

  10. Do citations and impact factors relate to the real numbers in publications? A case study of citation rates, impact, and effect sizes in ecology and evolutionary biology.

    PubMed

    Lortie, Christopher J; Aarssen, Lonnie W; Budden, Amber E; Leimu, Roosa

    2013-02-01

    Metrics of success or impact in academia may do more harm than good. To explore the value of citations, the reported efficacy of treatments in ecology and evolution from close to 1,500 publications was examined. If citation behavior is rational, i.e., studies that successfully applied a treatment and detected greater biological effects are cited more frequently, then we predict that larger effect sizes increase relative citation rates. This prediction was not supported. Citations are thus likely a poor proxy for the quantitative merit of a given treatment in ecology and evolutionary biology, unlike evidence-based medicine, wherein the success of a drug or treatment on human health is one of the critical attributes. The impact factor of the journal is a broader metric, as one would expect, but it is also unrelated to the mean effect sizes for the respective populations of publications. The interpretation by the authors of the treatment effects within each study differed depending on whether the hypothesis was supported or rejected. Significantly larger effect sizes were associated with rejection of a hypothesis. This suggests that only the most rigorous studies reporting negative results are published or that authors set a higher burden of proof in rejecting a hypothesis. The former is likely true to a major extent since only 29% of the studies rejected the hypotheses tested. These findings indicate that the use of citations to identify important papers in this specific discipline, at least in terms of designing a new experiment or contrasting treatments, is of limited value.

  11. Feeling the future: A meta-analysis of 90 experiments on the anomalous anticipation of random future events.

    PubMed

    Bem, Daryl; Tressoldi, Patrizio; Rabeyron, Thomas; Duggan, Michael

    2015-01-01

    In 2011, one of the authors (DJB) published a report of nine experiments in the Journal of Personality and Social Psychology purporting to demonstrate that an individual's cognitive and affective responses can be influenced by randomly selected stimulus events that do not occur until after his or her responses have already been made and recorded, a generalized variant of the phenomenon traditionally denoted by the term precognition. To encourage replications, all materials needed to conduct them were made available on request. We here report a meta-analysis of 90 experiments from 33 laboratories in 14 countries which yielded an overall effect greater than 6 sigma, z = 6.40, p = 1.2 × 10⁻¹⁰, with an effect size (Hedges' g) of 0.09. A Bayesian analysis yielded a Bayes Factor of 5.1 × 10⁹, greatly exceeding the criterion value of 100 for "decisive evidence" in support of the experimental hypothesis. When DJB's original experiments are excluded, the combined effect size for replications by independent investigators is 0.06, z = 4.16, p = 1.1 × 10⁻⁵, and the BF value is 3,853, again exceeding the criterion for "decisive evidence." The number of potentially unretrieved experiments required to reduce the overall effect size of the complete database to a trivial value of 0.01 is 544, and seven of eight additional statistical tests support the conclusion that the database is not significantly compromised by either selection bias or by intense "p-hacking", the selective suppression of findings or analyses that failed to yield statistical significance. P-curve analysis, a recently introduced statistical technique, estimates the true effect size of the experiments to be 0.20 for the complete database and 0.24 for the independent replications, virtually identical to the effect size of DJB's original experiments (0.22) and the closely related "presentiment" experiments (0.21). We discuss the controversial status of precognition and other anomalous effects collectively known as psi.
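
    The combined effect sizes quoted above come from standard inverse-variance meta-analysis; the sketch below shows that fixed-effect combination on hypothetical per-experiment Hedges' g values and standard errors, not the paper's actual data.

```python
import numpy as np
from scipy.stats import norm

def fixed_effect_meta(g: np.ndarray, se: np.ndarray):
    """Inverse-variance fixed-effect combination of Hedges' g values."""
    w = 1.0 / se ** 2
    g_bar = float(np.sum(w * g) / np.sum(w))  # weighted mean effect
    se_bar = float(np.sqrt(1.0 / np.sum(w)))  # SE of the combined effect
    z = g_bar / se_bar
    p = 2.0 * norm.sf(abs(z))                 # two-sided p-value
    return g_bar, z, p

# Hypothetical per-experiment effect sizes and standard errors.
g = np.array([0.25, 0.05, 0.12, -0.02, 0.18])
se = np.array([0.10, 0.08, 0.12, 0.09, 0.11])
print(fixed_effect_meta(g, se))
```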

  12. Multiscale registration algorithm for alignment of meshes

    NASA Astrophysics Data System (ADS)

    Vadde, Srikanth; Kamarthi, Sagar V.; Gupta, Surendra M.

    2004-03-01

    Taking a multi-resolution approach, this research work proposes an effective algorithm for aligning a pair of scans obtained by scanning an object's surface from two adjacent views. The algorithm first encases each scan in the pair with an array of cubes of equal and fixed size. For each scan a surrogate scan is created from the centroids of the cubes that encase it. The Gaussian curvatures of points across the surrogate scan pair are compared to find surrogate corresponding points: if the difference between the Gaussian curvatures of two points on the surrogate scan pair is less than a predetermined threshold, those two points are accepted as a pair of surrogate corresponding points. The rotation and translation between the surrogate scan pair are determined from a set of surrogate corresponding points, and the same rotation and translation values are used to align the original scan pair. The resulting registration (or alignment) error is computed to check the accuracy of the alignment. When the registration error becomes acceptably small, the algorithm terminates; otherwise the process is repeated with cubes of smaller and smaller sizes. At each finer resolution, the search space for surrogate corresponding points is restricted to the neighborhoods of the surrogate points found at the preceding coarser level. As the resolution becomes finer, the surrogate corresponding points converge to the true corresponding points on the original scans. This approach offers three main benefits: it improves the chances of finding the true corresponding points on the scans, minimizes the adverse effects of noise in the scans, and reduces the computational load for finding the corresponding points.
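
    Two building blocks of such an algorithm can be sketched independently of the curvature matching: forming the surrogate scan as cube centroids, and recovering the rigid transform from matched point pairs. The SVD-based Kabsch solution used below is a standard choice and an assumption here; the correspondences are taken as given.

```python
import numpy as np

def voxel_centroids(points: np.ndarray, cube_size: float) -> np.ndarray:
    """Surrogate scan: centroids of the occupied cubes of a given size."""
    cells, inv = np.unique(np.floor(points / cube_size).astype(int),
                           axis=0, return_inverse=True)
    return np.array([points[inv == i].mean(axis=0) for i in range(len(cells))])

def rigid_transform(p: np.ndarray, q: np.ndarray):
    """Rotation R and translation t minimizing ||R p + t - q|| over matched
    (n, 3) point pairs, via the SVD-based Kabsch algorithm."""
    pc, qc = p.mean(axis=0), q.mean(axis=0)
    h = (p - pc).T @ (q - qc)               # cross-covariance matrix
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))  # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    return r, qc - r @ pc

# Usage: surrogate pairs found by curvature matching would feed rigid_transform,
# and the resulting (R, t) is then applied to the full-resolution scan.
```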

  13. Effects of true density, compacted mass, compression speed, and punch deformation on the mean yield pressure.

    PubMed

    Gabaude, C M; Guillot, M; Gautier, J C; Saudemon, P; Chulia, D

    1999-07-01

    Compressibility properties of pharmaceutical materials are widely characterized by measuring the volume reduction of a powder column under pressure. Experimental data are commonly analyzed using the Heckel model from which powder deformation mechanisms are determined using mean yield pressure (Py). Several studies from the literature have shown the effects of operating conditions on the determination of Py and have pointed out the limitations of this model. The Heckel model requires true density and compacted mass values to determine Py from force-displacement data. It is likely that experimental errors will be introduced when measuring the true density and compacted mass. This study investigates the effects of true density and compacted mass on Py. Materials having different particle deformation mechanisms are studied. Punch displacement and applied pressure are measured for each material at two compression speeds. For each material, three different true density and compacted mass values are utilized to evaluate their effect on Py. The calculated variation of Py reaches 20%. This study demonstrates that the errors in measuring true density and compacted mass have a greater effect on Py than the errors incurred from not correcting the displacement measurements due to punch elasticity.
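
    The Heckel analysis underlying Py is a straight-line fit of ln(1/(1−D)) against pressure, with the relative density D computed from the compacted mass, the compact volume, and the true density, which is exactly where the measurement errors studied here enter. A minimal sketch, with an assumed linear-fit pressure window:

```python
import numpy as np

def relative_density(mass_g: float, volume_cm3: float,
                     true_density_g_cm3: float) -> float:
    """D = apparent density / true density; errors in the compacted mass or
    the true density propagate directly into D and hence into Py."""
    return (mass_g / volume_cm3) / true_density_g_cm3

def mean_yield_pressure(pressure_mpa: np.ndarray, d: np.ndarray,
                        fit_range=(50.0, 150.0)) -> float:
    """Fit the Heckel equation ln(1/(1-D)) = K*P + A over an assumed
    linear region and return Py = 1/K."""
    y = np.log(1.0 / (1.0 - d))
    sel = (pressure_mpa >= fit_range[0]) & (pressure_mpa <= fit_range[1])
    k, _ = np.polyfit(pressure_mpa[sel], y[sel], 1)
    return 1.0 / k
```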

  14. Energetic constraints, size gradients, and size limits in benthic marine invertebrates.

    PubMed

    Sebens, Kenneth P

    2002-08-01

    Populations of marine benthic organisms occupy habitats with a range of physical and biological characteristics. In the intertidal zone, energetic costs increase with temperature and aerial exposure, and prey intake increases with immersion time, generating size gradients with small individuals often found at upper limits of distribution. Wave action can have similar effects, limiting feeding time or success, although certain species benefit from wave dislodgment of their prey; this also results in gradients of size and morphology. The difference between energy intake and metabolic (and/or behavioral) costs can be used to determine an energetic optimal size for individuals in such populations. Comparisons of the energetic optimal size to the maximum predicted size based on mechanical constraints, and the ensuing mortality schedule, provides a mechanism to study and explain organism size gradients in intertidal and subtidal habitats. For species where the energetic optimal size is well below the maximum size that could persist under a certain set of wave/flow conditions, it is probable that energetic constraints dominate. When the opposite is true, populations of small individuals can dominate habitats with strong dislodgment or damage probability. When the maximum size of individuals is far below either energetic optima or mechanical limits, other sources of mortality (e.g., predation) may favor energy allocation to early reproduction rather than to continued growth. Predictions based on optimal size models have been tested for a variety of intertidal and subtidal invertebrates including sea anemones, corals, and octocorals. This paper provides a review of the optimal size concept, and employs a combination of the optimal energetic size model and life history modeling approach to explore energy allocation to growth or reproduction as the optimal size is approached.

  15. Analysis of the influence of source data resolution on the quality of photogrammetric products of an architectural object (Polish title: Analiza wpływu rozdzielczości danych źródłowych na jakość produktów fotogrametrycznych obiektu architektury)

    NASA Astrophysics Data System (ADS)

    Markiewicz, J. S.; Kowalczyk, M.; Podlasiak, P.; Bakuła, K.; Zawieska, D.; Bujakiewicz, A.; Andrzejewska, E.

    2013-12-01

    Due to the considerable development of non-invasive, range-based measurement technologies, the possibilities for data acquisition have increased while measurement time has been reduced. The combination of close-range laser scanning data and images has widened the effective use of photogrammetric methods in the registration and analysis of cultural heritage objects. This integration allows the acquisition of three-dimensional models of objects as well as digital image maps, true-orthoimages and vector products. The quality of photogrammetric products is defined by their accuracy and content, i.e., the number and minuteness of details, which always depends on the geometric resolution of the initial data. The research results presented in this paper concern the quality evaluation of two products, true-orthoimages and vector data, created for selected parts of an architectural object. The source data are point clouds acquired by close-range laser scanning together with photographic images, both acquired at several resolutions. The exterior orientation of the images and several versions of the true-ortho are based on numerical models of the object generated at specified resolutions. Comparison of these products allows the influence of source-data resolution on their quality (accuracy, information content) to be rated. Additional analysis is performed by comparing vector products obtained from monoplotting and from true-orthoimages. The experiment proved that geometric resolution has a significant impact on the feasibility and accuracy of the relative orientation of TLS scans. If high-resolution products are required, a scanning resolution of about 2 mm should be applied, and 1 mm in the case of architectural details. It was also noted that the scanning angle and the object structure significantly influence the accuracy and completeness of the data. For the creation of true-orthoimages for architectural purposes, high-resolution ground-based images in near-normal-case geometry are recommended to improve quality. The use of grayscale true-orthoimages with values taken from scanner intensity is not advised. The research also showed that the accuracy of manual and automated vectorisation depends significantly on the resolution of the generated orthoimages (scan and image resolution), mainly through blur effects and the achievable pixel size.

  16. Learned control over spinal nociception in patients with chronic back pain.

    PubMed

    Krafft, S; Göhmann, H-D; Sommer, J; Straube, A; Ruscheweyh, R

    2017-10-01

    Descending pain inhibition suppresses spinal nociception, reducing nociceptive input to the brain. It is modulated by cognitive and emotional processes. In subjects with chronic pain it is impaired, possibly contributing to pain persistence. A previously developed feedback method trains subjects to activate their descending inhibition: participants are trained to use cognitive-emotional strategies to reduce their spinal nociception, as quantified by the nociceptive flexor reflex (RIII reflex), under visual feedback about their RIII reflex size. The aim of the present study was to test whether subjects with chronic back pain can also achieve a modulation of their descending pain inhibition under RIII feedback. In total, 33 subjects with chronic back pain received either true (n = 18) or sham RIII feedback (n = 15), and 15 healthy control subjects received true RIII feedback. All three groups achieved significant RIII suppression, largest in controls (to 76 ± 26% of baseline), intermediate in chronic back pain subjects receiving true feedback (to 82 ± 13%) and smallest in chronic back pain subjects receiving sham feedback (to 89 ± 14%, all p < 0.05). However, only chronic pain subjects receiving true feedback significantly improved their descending inhibition over the course of the feedback training, as quantified by the conditioned pain modulation effect (test pain reduction relative to baseline, before training: to 98 ± 26%; after training: to 80 ± 21%; p < 0.01). Our results show that subjects with chronic back pain can achieve a reduction of their spinal nociception and improve their descending pain inhibition under RIII feedback training. Subjects with chronic back pain can learn to control their spinal nociception, quantified by the RIII reflex, when they receive feedback about the RIII reflex. © 2017 European Pain Federation - EFIC®.

  17. The Effect of Surface Electrical Stimulation on Vocal Fold Position

    PubMed Central

    Humbert, Ianessa A.; Poletto, Christopher J.; Saxon, Keith G.; Kearney, Pamela R.; Ludlow, Christy L.

    2008-01-01

    Objectives/Hypothesis Closure of the true and false vocal folds is a normal part of airway protection during swallowing. Individuals with reduced or delayed true vocal fold closure can be at risk for aspiration and may benefit from intervention to ameliorate the problem. Surface electrical stimulation is currently used during therapy for dysphagia, despite limited knowledge of its physiological effects. Design Prospective single effects study. Methods The immediate physiological effect of surface stimulation on true vocal fold angle was examined at rest in 27 healthy adults using ten different electrode placements on the submental and neck regions. Fiberoptic nasolaryngoscopic recordings during passive inspiration were used to measure the change in true vocal fold angle with stimulation. Results Vocal fold angles changed only to a small extent with two of the electrode placements (p ≤ 0.05). When two sets of electrodes were placed vertically on the neck, the mean true vocal fold abduction was 2.4 degrees, while horizontal placement of electrodes in the submental region produced a mean adduction of 2.8 degrees (p = 0.03). Conclusions Surface electrical stimulation to the submental and neck regions does not produce immediate true vocal fold adduction adequate for airway protection during swallowing, and one placement may produce a slight increase in true vocal fold opening. PMID:18043496

  18. True Local Recurrences after Breast Conserving Surgery have Poor Prognosis in Patients with Early Breast Cancer

    PubMed Central

    Sarsenov, Dauren; Ilgun, Serkan; Ordu, Cetin; Alco, Gul; Bozdogan, Atilla; Elbuken, Filiz; Nur Pilanci, Kezban; Agacayak, Filiz; Erdogan, Zeynep; Eralp, Yesim; Dincer, Maktav

    2016-01-01

    Background: This study was aimed at investigating the clinical and histopathologic features of ipsilateral breast tumor recurrences (IBTR) and their effects on survival after breast conservation therapy. Methods: 1,400 patients who were treated between 1998 and 2007 and had breast-conserving surgery (BCS) for early breast cancer (cT1-2/N0-1/M0) were evaluated. Demographic and pathologic parameters, radiologic data, treatment, and follow-up related features of the patients were recorded. Results: 53 patients (3.8%) had IBTR after BCS within a median follow-up of 70 months. The mean age was 45.7 years (range, 27-87 years), and 22 patients (41.5%) were younger than 40 years. 33 patients (62.3%) had true recurrence (TR) and 20 were classified as new primary (NP). The median time to recurrence was shorter in the TR group than in the NP group (37.0 (range, 6-216) vs. 47.5 (range, 11-192) months, respectively; p = 0.338). Progesterone receptor positivity was significantly higher in the NP group (p = 0.005). The overall 5-year survival rate in the NP group (95.0%) was significantly higher than that of the TR group (74.7%; p < 0.033). Multivariate analysis showed that younger age (<40 years), large tumor size (>20 mm), high tumor grade and triple-negative molecular phenotype, along with developing TR, negatively affected overall survival (hazard ratios were 4.2 (CI 0.98-22.76), 4.6 (CI 1.07-13.03), 4.0 (CI 0.68-46.10), 6.5 (CI 0.03-0.68), and 6.5 (CI 0.02-0.80), respectively; p < 0.05). Conclusions: Most of the local recurrences after BCS in our study were true recurrences, which resulted in a poorer outcome as compared to new primary tumors. Moreover, younger age (<40), large tumor size (>2 cm), high grade, triple-negative phenotype, and having a true recurrence were identified as independent prognostic factors with a negative impact on overall survival in this dataset of patients with recurrent breast cancer. In conjunction with a more intensive follow-up program, the role of adjuvant therapy strategies should be explored further in young patients with large and high-risk tumors to reduce the risk of TR. PMID:27158571

  19. Integrative approach for inference of gene regulatory networks using lasso-based random featuring and application to psychiatric disorders.

    PubMed

    Kim, Dongchul; Kang, Mingon; Biswas, Ashis; Liu, Chunyu; Gao, Jean

    2016-08-10

    Inferring gene regulatory networks is one of the most interesting research areas in systems biology. Many inference methods have been developed using a variety of computational models and approaches. However, there are two issues to solve. First, depending on the structural or computational model of an inference method, the results tend to be inconsistent due to the innately different advantages and limitations of the methods. Therefore, the combination of dissimilar approaches is demanded as an alternative way to overcome the limitations of standalone methods through complementary integration. Second, sparse linear regression penalized by a regularization parameter (the lasso) and bootstrapping-based sparse linear regression methods have been suggested in state-of-the-art network inference methods, but they are not effective for small sample sizes, and a true regulator can be missed if the target gene is strongly affected by an indirect regulator with high correlation or by another true regulator. We present two novel network inference methods based on the integration of three different criteria: (i) the z-score, to measure the variation of gene expression from knockout data, (ii) mutual information, for the dependency between two genes, and (iii) linear regression-based feature selection. Based on these criteria, we propose a lasso-based random feature selection algorithm (LARF) to achieve better performance, overcoming the limitations of bootstrapping mentioned above. This work makes three main contributions. First, our z-score-based method to measure gene expression variations from knockout data is more effective than similar criteria in related work. Second, we confirmed that true regulator selection can be effectively improved by LARF. Lastly, we verified that an integrative approach can clearly outperform a single method when two different methods are effectively joined. In the experiments, our methods were validated by outperforming the state-of-the-art methods on DREAM challenge data, and LARF was then applied to the inference of gene regulatory networks associated with psychiatric disorders.
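
    The abstract does not spell out LARF in detail, so the following is only a minimal sketch of the general idea of lasso-based random featuring: fit lasso regressions on many random subsets of candidate regulators and score each candidate by how often it is selected when sampled (the function name, parameters and toy data are hypothetical):

      import numpy as np
      from sklearn.linear_model import Lasso

      rng = np.random.default_rng(0)

      def larf_scores(X, y, n_rounds=200, subset_frac=0.5, alpha=0.05):
          """Score candidate regulators of one target gene by the fraction of
          random feature subsets in which the lasso selects them (sketch only)."""
          n_genes = X.shape[1]
          selected = np.zeros(n_genes)
          sampled = np.zeros(n_genes)
          k = max(2, int(subset_frac * n_genes))
          for _ in range(n_rounds):
              idx = rng.choice(n_genes, size=k, replace=False)
              coef = Lasso(alpha=alpha, max_iter=5000).fit(X[:, idx], y).coef_
              sampled[idx] += 1
              selected[idx[np.abs(coef) > 1e-8]] += 1
          return selected / np.maximum(sampled, 1)

      # Toy data: 30 samples, 10 candidate regulators; genes 0 and 3 truly
      # regulate the target.
      X = rng.normal(size=(30, 10))
      y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(scale=0.5, size=30)
      print(np.round(larf_scores(X, y), 2))

    Because a strongly correlated indirect regulator is absent from many random subsets, a true regulator gets repeated chances to be selected on its own merits, which is the limitation of plain bootstrapping that the authors aim to overcome.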

  20. Role of Computer Aided Diagnosis (CAD) in the detection of pulmonary nodules on 64 row multi detector computed tomography.

    PubMed

    Prakashini, K; Babu, Satish; Rajgopal, K V; Kokila, K Raja

    2016-01-01

    To determine the overall performance of an existing CAD algorithm with thin-section computed tomography (CT) in the detection of pulmonary nodules, and to evaluate detection sensitivity across a range of nodule density, size, and location. A cross-sectional prospective study was conducted on 20 patients with 322 suspected nodules who underwent diagnostic chest imaging using 64-row multi-detector CT. The examinations were evaluated on reconstructed images of 1.4 mm thickness and 0.7 mm interval. Detection of pulmonary nodules, initially by a radiologist with 2 years of experience (RAD) and later by CAD lung nodule software, was assessed. CAD nodule candidates were then accepted or rejected accordingly. Detected nodules were classified based on their size, density, and location. The performance of the RAD and the CAD system was compared against the gold standard, that is, true nodules confirmed by consensus of the senior RAD and CAD together. The overall sensitivity and false-positive (FP) rate of the CAD software were calculated. Of the 322 suspected nodules, 221 were classified as true nodules by the consensus of the senior RAD and CAD together. Of the true nodules, 206 (93.2%) were detected by the RAD and 202 (91.4%) by the CAD. CAD and RAD together picked up more nodules than either alone. Overall sensitivity for nodule detection with the CAD program was 91.4%, and FP detection per patient was 5.5%. The CAD showed comparatively higher sensitivity for nodules of size 4-10 mm (93.4%) and for nodules in hilar (100%) and central (96.5%) locations when compared to the RAD's performance. CAD performance was high in detecting pulmonary nodules, including small and low-density nodules. CAD, even with its relatively high FP rate, assists and improves the RAD's performance as a second reader, especially for nodules located in the central and hilar regions and for small nodules, while saving the RAD's time.
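
    For reference, the reported sensitivities follow directly from the counts in the abstract, with the 221 consensus-confirmed nodules as the reference standard:

      \[
      \text{Sensitivity}_{\text{CAD}} = \frac{TP}{TP + FN} = \frac{202}{221} \approx 91.4\%,
      \qquad
      \text{Sensitivity}_{\text{RAD}} = \frac{206}{221} \approx 93.2\%
      \]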

  1. The effects of inter-cavity separation on optical coupling in dielectric bispheres.

    PubMed

    Ashili, Shashanka P; Astratov, Vasily N; Sykes, E Charles H

    2006-10-02

    The optical coupling between two size-mismatched spheres was studied by using one sphere as a local source of light with whispering gallery modes (WGMs) and detecting the intensity of the light scattered by a second sphere playing the part of a receiver of electromagnetic energy. We developed techniques to control inter-cavity gap sizes between microspheres with ~30 nm accuracy. We demonstrate high efficiencies (up to 0.2-0.3) of coupling between two separated cavities with strongly detuned eigenstates. At small separations (<1 µm) between the spheres, the mechanism of coupling is interpreted in terms of the Fano resonance between a discrete level (true WGMs excited in the source sphere) and a continuum of "quasi"-WGMs with distorted shapes that can be induced in the receiving sphere. At larger separations the spectra detected from the receiving sphere originate from scattering of the radiative modes.
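
    For reference, the Fano lineshape invoked here has the standard form (not reproduced in the abstract):

      \[
      \sigma(\epsilon) \propto \frac{(\epsilon + q)^{2}}{1 + \epsilon^{2}},
      \qquad
      \epsilon = \frac{2\,(E - E_{0})}{\Gamma},
      \]

    where E_0 and Γ are the energy and width of the discrete WGM and the asymmetry parameter q measures the relative strength of resonant versus direct (continuum) coupling.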

  2. Classical electromagnetic fields from quantum sources in heavy-ion collisions

    NASA Astrophysics Data System (ADS)

    Holliday, Robert; McCarty, Ryan; Peroutka, Balthazar; Tuchin, Kirill

    2017-01-01

    Electromagnetic fields are generated in high energy nuclear collisions by spectator valence protons. These fields are traditionally computed by integrating the Maxwell equations with point sources. One might expect that such an approach is valid at distances much larger than the proton size and thus such a classical approach should work well for almost the entire interaction region in the case of heavy nuclei. We argue that, in fact, the contrary is true: due to the quantum diffusion of the proton wave function, the classical approximation breaks down at distances of the order of the system size. We compute the electromagnetic field created by a charged particle described initially as a Gaussian wave packet of width 1 fm and evolving in vacuum according to the Klein-Gordon equation. We completely neglect the medium effects. We show that the dynamics, magnitude and even sign of the electromagnetic field created by classical and quantum sources are different.
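
    The quantum diffusion invoked here can be illustrated with the textbook nonrelativistic result for a free Gaussian wave packet (the paper itself evolves the packet with the Klein-Gordon equation, so this is only an order-of-magnitude analogue):

      \[
      \sigma(t) = \sigma_{0}\sqrt{1 + \left(\frac{\hbar t}{2 m \sigma_{0}^{2}}\right)^{2}}
      \]

    For a proton (mc² ≈ 938 MeV) with σ₀ = 1 fm, the spreading time 2mσ₀²/ħ corresponds to roughly 10 fm/c, so the packet broadens appreciably on precisely the length and time scales of a heavy-ion collision, consistent with the claimed breakdown of the classical point-source picture.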

  3. Magnetization reversal mechanism for Co nanoparticles revealed by a magnetic hysteresis scaling technique

    NASA Astrophysics Data System (ADS)

    Kobayashi, Satoru; Sato, Takuma; Li, Zhang; Dong, Xing-Long; Murakami, Takeshi

    2018-05-01

    We report results of magnetic hysteresis scaling of minor loops for cobalt nanoparticles with variable mean particle sizes of 53 and 95 nm. A power-law scaling with an exponent of 1.40 ± 0.05 was found to hold true between minor-loop remanence and hysteresis loss in the wide temperature range of 10-300 K, irrespective of particle size and cooling field. A coefficient deduced from the scaling law steeply increases with decreasing temperature and exhibits a cooling-field dependence below T ≈ 150 K. The value obtained after field cooling at 5 T was lower than that after zero-field cooling, opposite to the behavior of the major-loop coercivity. These observations were explained from the viewpoint of the exchange coupling between the ferromagnetic Co core and the antiferromagnetic CoO shell, which becomes effective below T ≈ 150 K.

  4. Methodological characteristics and treatment effect sizes in oral health randomised controlled trials: Is there a relationship? Protocol for a meta-epidemiological study

    PubMed Central

    Saltaji, Humam; Armijo-Olivo, Susan; Cummings, Greta G; Amin, Maryam; Flores-Mir, Carlos

    2014-01-01

    Introduction It is fundamental that randomised controlled trials (RCTs) are properly conducted in order to reach well-supported conclusions. However, there is emerging evidence that RCTs are subject to biases which can overestimate or underestimate the true treatment effect, due to flaws in the study design characteristics of such trials. The extent to which this holds true in oral health RCTs, which have some unique design characteristics compared to RCTs in other health fields, is unclear. As such, we aim to examine the empirical evidence quantifying the extent of bias associated with methodological and non-methodological characteristics in oral health RCTs. Methods and analysis We plan to perform a meta-epidemiological study, where a sample size of 60 meta-analyses (MAs) including approximately 600 RCTs will be selected. The MAs will be randomly obtained from the Oral Health Database of Systematic Reviews using a random number table; and will be considered for inclusion if they include a minimum of five RCTs, and examine a therapeutic intervention related to one of the recognised dental specialties. RCTs identified in selected MAs will be subsequently included if their study design includes a comparison between an intervention group and a placebo group or another intervention group. Data will be extracted from selected trials included in MAs based on a number of methodological and non-methodological characteristics. Moreover, the risk of bias will be assessed using the Cochrane Risk of Bias tool. Effect size estimates and measures of variability for the main outcome will be extracted from each RCT included in selected MAs, and a two-level analysis will be conducted using a meta-meta-analytic approach with a random effects model to allow for intra-MA and inter-MA heterogeneity. Ethics and dissemination The intended audiences of the findings will include dental clinicians, oral health researchers, policymakers and graduate students. The aforementioned will be introduced to the findings through workshops, seminars, round table discussions and targeted individual meetings. Other opportunities for knowledge transfer will be pursued such as key dental conferences. Finally, the results will be published as a scientific report in a dental peer-reviewed journal. PMID:24568962

  5. Not my "type": larval dispersal dimorphisms and bet-hedging in opisthobranch life histories.

    PubMed

    Krug, Patrick J

    2009-06-01

    When conditions fluctuate unpredictably, selection may favor bet-hedging strategies that vary offspring characteristics to avoid reproductive wipe-outs in bad seasons. For many marine gastropods, the dispersal potential of offspring reflects both maternal effects (egg size, egg mass properties) and larval traits (development rate, habitat choice). I present data for eight sea slugs in the genus Elysia (Opisthobranchia: Sacoglossa), highlighting potentially adaptive variation in traits like offspring size, timing of metamorphosis, hatching behavior, and settlement response. Elysia zuleicae produced both planktotrophic and lecithotrophic larvae, a true case of poecilogony. Both intracapsular and post-hatching metamorphosis occurred among clutches of "Boselia" marcusi, E. cornigera, and E. crispata, a dispersal dimorphism often misinterpreted as poecilogony. Egg masses of E. tuca hatched for up to 16 days but larvae settled only on the adult host alga Halimeda, whereas most larvae of E. papillosa spontaneously metamorphosed 5-7 days after hatching. Investment in extra-capsular yolk may allow mothers to increase larval size relative to egg size and vary offspring size within and among clutches. Flexible strategies of larval dispersal and offspring provisioning in Elysia spp. may represent adaptations to the patchy habitat of these specialized herbivores, highlighting the evolutionary importance of variation in a range of life-history traits.

  6. Impact of particle concentration and out-of-range sizes on the measurements of the LISST

    NASA Astrophysics Data System (ADS)

    Zhao, Lin; Boufadel, Michel C.; King, Thomas; Robinson, Brian; Conmy, Robyn; Lee, Kenneth

    2018-05-01

    The instrument LISST (laser in situ scattering and transmissometry) has been widely used for measuring the size of oil droplets in relation to oil spills and of sediment particles. Major concerns associated with using the instrument include the impact of high concentrations and/or out-of-range particle (droplet) sizes on the LISST reading. These were evaluated experimentally in this study using monosized microsphere particles. The key findings include: (1) When high particle concentration reduced the optical transmission (OT) to below 30%, the measured peak value tended to underestimate the true peak value, and the accuracy of the LISST decreased by ~8% to ~28%. The maximum concentration at which the 30% OT was reached was about 50% of the theoretical value, suggesting that a lower concentration level should be considered during instrument deployment. (2) Out-of-range particle sizes affected the LISST measurements when the sizes were close to the LISST measurement range. Fine below-range sizes primarily affected the data in the lowest two bins of the LISST, with >75% of the volume in the smallest bin. Large out-of-range particles affected the sizes of the largest 8–10 bins only when very high concentrations were present. The out-of-range particles slightly changed the size distribution of the in-range particles, but their concentration was conserved. An approach to interpret and quantify the effects of the out-of-range particles on the LISST measurement was proposed.

  7. Estimation of treatment effect in a subpopulation: An empirical Bayes approach.

    PubMed

    Shen, Changyu; Li, Xiaochun; Jeong, Jaesik

    2016-01-01

    It is well recognized that the benefit of a medical intervention may not be distributed evenly in the target population due to patient heterogeneity, and conclusions based on conventional randomized clinical trials may not apply to every person. Given the increasing cost of randomized trials and difficulties in recruiting patients, there is a strong need to develop analytical approaches to estimate treatment effect in subpopulations. In particular, due to limited sample size for subpopulations and the need for multiple comparisons, standard analysis tends to yield wide confidence intervals of the treatment effect that are often noninformative. We propose an empirical Bayes approach to combine both information embedded in a target subpopulation and information from other subjects to construct confidence intervals of the treatment effect. The method is appealing in its simplicity and tangibility in characterizing the uncertainty about the true treatment effect. Simulation studies and a real data analysis are presented.
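
    As a minimal sketch of the empirical Bayes idea (a normal-normal model with a known between-subgroup variance; not the authors' exact procedure, and all numbers are illustrative), the subgroup estimate is shrunk toward the overall estimate in proportion to its own noise:

      import numpy as np

      def eb_shrink(theta_sub, se_sub, theta_all, tau2):
          """Empirical Bayes point estimate of a subgroup treatment effect:
          a precision-weighted compromise between the subgroup's own estimate
          and the overall estimate (normal-normal sketch; tau2 would be
          estimated from all subgroups in a real empirical Bayes analysis)."""
          w = tau2 / (tau2 + se_sub**2)   # weight on the subgroup's own data
          return w * theta_sub + (1 - w) * theta_all

      # A noisy subgroup estimate is pulled toward the overall effect.
      print(eb_shrink(theta_sub=1.8, se_sub=0.9, theta_all=0.6, tau2=0.25))

    The weight w approaches 1 as the subgroup data become precise and 0 as they become uninformative, which is how the method borrows strength from the rest of the trial while keeping the resulting intervals informative.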

  8. 40 CFR 113.1 - Purpose.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 22 2014-07-01 2013-07-01 true Purpose. 113.1 Section 113.1 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS LIABILITY LIMITS FOR SMALL ONSHORE STORAGE FACILITIES Oil Storage Facilities § 113.1 Purpose. This subpart establishes size...

  9. 320-row CT renal perfusion imaging in patients with aortic dissection: A preliminary study.

    PubMed

    Liu, Dongting; Liu, Jiayi; Wen, Zhaoying; Li, Yu; Sun, Zhonghua; Xu, Qin; Fan, Zhanming

    2017-01-01

    To investigate the clinical value of renal perfusion imaging in patients with aortic dissection (AD) using 320-row computed tomography (CT), and to determine the relationship between renal CT perfusion imaging and various features of aortic dissection. Forty-three patients with AD who underwent 320-row CT renal perfusion before operation were prospectively enrolled in this study. Diagnosis of AD was confirmed by transthoracic echocardiography. Blood flow (BF) of bilateral renal perfusion was measured and analyzed. CT perfusion imaging signs of AD in relation to the type of AD, the number of entry tears and false lumen thrombus were observed and compared. The BF values of patients with type A AD were significantly lower than those of patients with type B AD (P = 0.004). No significant difference was found in the BF between different numbers of intimal tears (P = 0.288), but BF values were significantly higher in cases with a false lumen without thrombus and renal arteries arising from the true lumen than in those with thrombus (P = 0.036). The BF values measured in the true lumen, false lumen and overriding groups were different (P = 0.02), with the true lumen group having the highest values. Also, the difference in BF values between the true lumen and false lumen groups was statistically significant (P = 0.016), while no statistical significance was found for the other two comparisons (P > 0.05). The larger the size of the intimal entry tears, the greater the BF values (P = 0.044). This study shows a direct correlation between renal CT perfusion changes and AD, with the size and number of intimal tears, the type of AD, the renal artery origin and false lumen thrombosis significantly affecting the perfusion values.

  10. Video-games do not negatively impact adolescent academic performance in science, mathematics or reading.

    PubMed

    Drummond, Aaron; Sauer, James D

    2014-01-01

    Video-gaming is a common pastime among adolescents, particularly adolescent males in industrialized nations. Despite widespread suggestions that video-gaming negatively affects academic achievement, the evidence is inconclusive. We reanalyzed data from over 192,000 students in 22 countries involved in the 2009 Programme for International Student Assessment (PISA) to estimate the true effect size of frequency of videogame use on adolescent academic achievement in science, mathematics and reading. Contrary to claims that increased video-gaming can impair academic performance, differences in academic performance were negligible across the relative frequencies of videogame use. Videogame use had little impact on adolescent academic achievement.

  11. Growth and Electrical and Far-Infrared Properties of Wide Electron Wells in Semiconductors

    DTIC Science & Technology

    1994-04-15

    (The abstract text in this record is fragmentary OCR. The recoverable fragments refer to barrier doping of about 5 × 10^16 cm^-3, 300 K true electron profiles calculated for four different well depths, mobility-versus-temperature characteristics of bulk-doped n-GaAs, size-effect scattering at low doping (~10^14 cm^-3), and experimental Hall data for sample PBW31.)

  12. Nano-optomechanical transducer

    DOEpatents

    Rakich, Peter T; El-Kady, Ihab F; Olsson, Roy H; Su, Mehmet Fatih; Reinke, Charles; Camacho, Ryan; Wang, Zheng; Davids, Paul

    2013-12-03

    A nano-optomechanical transducer provides ultrabroadband coherent optomechanical transduction based on Mach-wave emission that uses enhanced photon-phonon coupling efficiencies by low impedance effective phononic medium, both electrostriction and radiation pressure to boost and tailor optomechanical forces, and highly dispersive electromagnetic modes that amplify both electrostriction and radiation pressure. The optomechanical transducer provides a large operating bandwidth and high efficiency while simultaneously having a small size and minimal power consumption, enabling a host of transformative phonon and signal processing capabilities. These capabilities include optomechanical transduction via pulsed phonon emission and up-conversion, broadband stimulated phonon emission and amplification, picosecond pulsed phonon lasers, broadband phononic modulators, and ultrahigh bandwidth true time delay and signal processing technologies.

  13. Video-Games Do Not Negatively Impact Adolescent Academic Performance in Science, Mathematics or Reading

    PubMed Central

    Drummond, Aaron; Sauer, James D.

    2014-01-01

    Video-gaming is a common pastime among adolescents, particularly adolescent males in industrialized nations. Despite widespread suggestions that video-gaming negatively affects academic achievement, the evidence is inconclusive. We reanalyzed data from over 192,000 students in 22 countries involved in the 2009 Programme for International Student Assessment (PISA) to estimate the true effect size of frequency of videogame use on adolescent academic achievement in science, mathematics and reading. Contrary to claims that increased video-gaming can impair academic performance, differences in academic performance were negligible across the relative frequencies of videogame use. Videogame use had little impact on adolescent academic achievement. PMID:24699536

  14. Influence of the type of electric discharge on the properties of the produced aluminium nanoparticles

    NASA Astrophysics Data System (ADS)

    Shiyan, L. N.; Yavorovskii, N. A.; Pustovalov, A. V.; Gryaznova, E. N.

    2015-04-01

    The effect of the production method of aluminum nanopowder on the products of its reaction with water is described. It has been established that when aluminum nanopowder prepared by electric wire explosion reacts with water, the phase composition of the reaction products consists mainly of boehmite (AlOOH) with a fibrous structure. This boehmite (AlOOH) can therefore be used for the modification of polymer membranes. The modified membranes can be used in water treatment to remove dissolved impurities (true solutions) by an adsorption mechanism, and colloidal nanometer- and micron-sized particles by mechanical separation, depending on particle size.

  15. The recombination mechanisms leading to amplified spontaneous emission at the true-green wavelength in CH3NH3PbBr3 perovskites

    NASA Astrophysics Data System (ADS)

    Priante, D.; Dursun, I.; Alias, M. S.; Shi, D.; Melnikov, V. A.; Ng, T. K.; Mohammed, O. F.; Bakr, O. M.; Ooi, B. S.

    2015-02-01

    We investigated the mechanisms of radiative recombination in a CH3NH3PbBr3 hybrid perovskite material using low-temperature power-dependent (77 K) and temperature-dependent photoluminescence (PL) measurements. Two bound-excitonic radiative transitions related to grain-size inhomogeneity were identified. Both transitions led to PL spectral broadening as a result of concurrent blue and red shifts of these excitonic peaks. The red-shifted bound-excitonic peak, which dominated at high PL excitation, led to a true-green wavelength of 553 nm for CH3NH3PbBr3 powders encapsulated in polydimethylsiloxane. Amplified spontaneous emission was eventually achieved at an excitation threshold energy of approximately 350 μJ/cm2. Our results provide a platform for potential extension towards a true-green light-emitting device for solid-state lighting and display applications.

  16. Measuring the size of an earthquake

    USGS Publications Warehouse

    Spence, W.

    1977-01-01

    Earthquakes occur in a broad range of sizes. A rock burst in an Idaho silver mine may involve the fracture of 1 meter of rock; the 1965 Rat Islands earthquake in the Aleutian arc involved a 650-kilometer length of Earth's crust. Earthquakes can be even smaller and even larger. If an earthquake is felt or causes perceptible surface damage, then its intensity of shaking can be subjectively estimated. But many large earthquakes occur in oceanic areas or at great focal depths. These are either simply not felt or their felt pattern does not really indicate their true size.
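
    For reference (not part of this abstract), the instrumental measure that captures the "true size" of such earthquakes is the moment magnitude, computed from the seismic moment M₀ rather than from felt intensity:

      \[
      M_{W} = \tfrac{2}{3}\log_{10} M_{0} - 10.7,
      \qquad M_{0}\ \text{in dyn·cm}
      \]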

  17. Current modeling practice may lead to falsely high benchmark dose estimates.

    PubMed

    Ringblom, Joakim; Johanson, Gunnar; Öberg, Mattias

    2014-07-01

    Benchmark dose (BMD) modeling is increasingly used as the preferred approach to define the point of departure for health risk assessment of chemicals. As data are inherently variable, there is always a risk of selecting a model that defines a lower confidence bound of the BMD (BMDL) that, contrary to expectation, exceeds the true BMD. The aim of this study was to investigate how often and under what circumstances such anomalies occur under current modeling practice. Continuous data were generated from a realistic dose-effect curve by Monte Carlo simulations using four dose groups and a set of five different dose placement scenarios, group sizes between 5 and 50 animals, and coefficients of variation of 5-15%. The BMD calculations were conducted using nested exponential models, as most BMD software uses nested approaches. "Non-protective" BMDLs (higher than the true BMD) were frequently observed, in some scenarios reaching 80%. The phenomenon was mainly related to the selection of the non-sigmoidal exponential model (Effect = a·e^(b·dose)). In conclusion, non-sigmoid models should be used with caution as they may underestimate the risk, illustrating that awareness of the model selection process and sound identification of the point of departure are vital for health risk assessment. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
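
    A minimal sketch of this kind of simulation (the sigmoidal "truth", group size and benchmark response below are hypothetical stand-ins for the paper's scenarios): generate continuous responses from a sigmoidal curve, fit the non-sigmoid exponential model, and read off the BMD for a 5% change in mean response:

      import numpy as np
      from scipy.optimize import curve_fit

      rng = np.random.default_rng(1)

      def true_curve(d):
          """Hypothetical sigmoidal (Hill-type) dose-effect truth."""
          return 10.0 * (1 + 0.5 * d**4 / (2.0**4 + d**4))

      def expo(d, a, b):
          """Non-sigmoid exponential model: Effect = a * exp(b * dose)."""
          return a * np.exp(b * d)

      doses = np.array([0.0, 0.5, 1.5, 3.0])  # four dose groups
      n, cv = 10, 0.10                        # animals per group, coeff. of variation

      d = np.repeat(doses, n)
      y = rng.normal(true_curve(d), cv * true_curve(d))

      (a_hat, b_hat), _ = curve_fit(expo, d, y, p0=(10.0, 0.1))

      # BMD for a 5% change relative to background:
      # a*exp(b*BMD) = 1.05*a  =>  BMD = ln(1.05) / b
      print(f"fitted BMD: {np.log(1.05) / b_hat:.2f}  (true BMD here is ~1.15)")

    Repeating this over many simulated studies, and adding a lower confidence bound for each fit, reproduces the paper's comparison of the BMDL against the true BMD.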

  18. Missing heritability in the tails of quantitative traits? A simulation study on the impact of slightly altered true genetic models.

    PubMed

    Pütter, Carolin; Pechlivanis, Sonali; Nöthen, Markus M; Jöckel, Karl-Heinz; Wichmann, Heinz-Erich; Scherag, André

    2011-01-01

    Genome-wide association studies have identified robust associations between single nucleotide polymorphisms and complex traits. As the proportion of phenotypic variance explained is still limited for most of the traits, larger and larger meta-analyses are being conducted to detect additional associations. Here we investigate the impact of the study design and the underlying assumption about the true genetic effect in a bimodal mixture situation on the power to detect associations. We performed simulations of quantitative phenotypes analysed by standard linear regression and dichotomized case-control data sets from the extremes of the quantitative trait analysed by standard logistic regression. Using linear regression, markers with an effect in the extremes of the traits were almost undetectable, whereas analysing extremes by case-control design had superior power even for much smaller sample sizes. Two real data examples are provided to support our theoretical findings and to explore our mixture and parameter assumption. Our findings support the idea to re-analyse the available meta-analysis data sets to detect new loci in the extremes. Moreover, our investigation offers an explanation for discrepant findings when analysing quantitative traits in the general population and in the extremes. Copyright © 2011 S. Karger AG, Basel.
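
    A minimal sketch of the simulation logic (the mixture, effect size and thresholds are illustrative, not the paper's exact models): a marker that shifts only the upper mixture component tends to be much harder to detect by linear regression on the full trait than by contrasting allele counts in the trait extremes:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(2)

      def simulate_pvals(n=2000, maf=0.3, beta=0.5, mix=0.1):
          """One replicate: the SNP affects only the upper mixture component.
          Returns p-values for (a) linear regression on the full trait and
          (b) a chi-square test on allele counts in the trait extremes."""
          g = rng.binomial(2, maf, size=n)
          upper = rng.random(n) < mix
          y = np.where(upper, rng.normal(2 + beta * g, 1), rng.normal(0.0, 1.0, size=n))

          p_lin = stats.linregress(g, y).pvalue

          lo, hi = np.quantile(y, [0.1, 0.9])
          cases, controls = g[y >= hi], g[y <= lo]
          table = np.array(
              [[cases.sum(), 2 * len(cases) - cases.sum()],
               [controls.sum(), 2 * len(controls) - controls.sum()]])
          p_ext = stats.chi2_contingency(table)[1]
          return p_lin, p_ext

      pvals = np.array([simulate_pvals() for _ in range(200)])
      print("power, linear regression on full trait:", np.mean(pvals[:, 0] < 0.05))
      print("power, case-control on the extremes:   ", np.mean(pvals[:, 1] < 0.05))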

  19. Willingness to pay and cost of illness for changes in health capital depreciation.

    PubMed

    Ried, W

    1996-01-01

    The paper investigates the relationship between the willingness to pay and the cost of illness approach with respect to the evaluation of economic burden due to adverse health effects. The basic intertemporal framework is provided by Grossman's pure investment model, while effects on individual morbidity are taken to be generated by marginal changes in the rate of health capital depreciation. More specifically, both the simple example of purely temporary changes and the more general case of persistent variations in health capital depreciation are discussed. The analysis generates two principal findings. First, for a class of identical individuals cost as measured by the cost of illness approach is demonstrated to provide a lower bound on the true welfare cost to the individual, i.e. cost as given by the willingness to pay approach. Moreover, the cost of illness is increasing in the size of the welfare loss. Second, if one takes into account the possible heterogeneity of individuals, a clear relationship between the cost values supplied by the two approaches no longer exists. As an example, the impact of variations in either financial wealth or health capital endowment is discussed. Thus, diversity in individual type turns out to blur the link between cost of illness and the true economic cost.

  20. Psychotherapies for depression in low‐ and middle‐income countries: a meta‐analysis

    PubMed Central

    Cuijpers, Pim; Karyotaki, Eirini; Reijnders, Mirjam; Purgato, Marianna; Barbui, Corrado

    2018-01-01

    Most psychotherapies for depression have been developed in high‐income Western countries of North America, Europe and Australia. A growing number of randomized trials have examined the effects of these treatments in non‐Western countries. We conducted a meta‐analysis of these studies to examine whether these psychotherapies are effective and to compare their effects between studies from Western and non‐Western countries. We conducted systematic searches in bibliographical databases and included 253 randomized controlled trials, of which 32 were conducted in non‐Western countries. The effects of psychotherapies in non‐Western countries were large (g=1.10; 95% CI: 0.91‐1.30), with high heterogeneity (I2=90; 95% CI: 87‐92). After adjustment for publication bias, the effect size dropped to g=0.73 (95% CI: 0.51‐0.96). Subgroup analyses did not indicate that adaptation to the local situation was associated with the effect size. Comparisons with the studies in Western countries showed that the effects of the therapies were significantly larger in non‐Western countries, also after adjusting for characteristics of the participants, the treatments and the studies. These larger effect sizes in non‐Western countries may reflect true differences indicating that therapies are indeed more effective; or may be explained by the care‐as‐usual control conditions in non‐Western countries, often indicating that no care was available; or may be the result of the relative low quality of many trials in the field. This study suggests that psychotherapies that were developed in Western countries may or may not be more effective in non‐Western countries, but they are probably no less effective and can therefore also be used in these latter countries. PMID:29352530
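
    For reference, the two statistics quoted are standard meta-analytic quantities (their definitions are not given in the abstract): Hedges' g is the bias-corrected standardized mean difference, and I² is the share of total variability across studies attributable to heterogeneity rather than chance:

      \[
      g = \left(1 - \frac{3}{4(n_{1}+n_{2})-9}\right)\frac{\bar{x}_{1}-\bar{x}_{2}}{s_{\text{pooled}}},
      \qquad
      I^{2} = 100\% \times \frac{Q - df}{Q}
      \]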

  1. The impact of obesity on skeletal muscle strength and structure through adolescence to old age.

    PubMed

    Tomlinson, D J; Erskine, R M; Morse, C I; Winwood, K; Onambélé-Pearson, Gladys

    2016-06-01

    Obesity is associated with functional limitations in muscle performance and an increased likelihood of developing functional disability, such as limitations in mobility, strength, posture and dynamic balance. The consensus is that obese individuals, regardless of age, have a greater absolute maximum muscle strength compared to non-obese persons, suggesting that increased adiposity acts as a chronic overload stimulus on the antigravity muscles (e.g., quadriceps and calf), thus increasing muscle size and strength. However, when maximum muscular strength is normalised to body mass, obese individuals appear weaker. This relative weakness may be caused by reduced mobility, neural adaptations and changes in muscle morphology. Discrepancies remain in the literature for maximal strength normalised to muscle mass (muscle quality); these can potentially be explained by aspects of the measurement protocol that contribute to muscle strength capacity and need to be explored in more depth, such as antagonist muscle co-activation, muscle architecture, a criterion-valid measurement of muscle size and an accurate measurement of physical activity levels. Current evidence demonstrating the effect of obesity on muscle quality is limited. Because these factors are not recorded in some of the existing literature, muscle force may be underestimated, either in terms of absolute force production or relative to muscle mass; thus the true effect of obesity upon skeletal muscle size, structure and function, including any interactions with ageing effects, remains to be elucidated.

  2. Plastic strain and grain size effects in the surface roughening of a model aluminum alloy

    NASA Astrophysics Data System (ADS)

    Moore, Eric Joseph

    To address issues surrounding improved automotive fuel economy, an experiment was designed to study the effect of uniaxial plastic tensile deformation on surface roughness and on slip and grain rotation. Electron backscatter diffraction (EBSD) and scanning laser confocal microscopy (SLCM) were used to track grain size, crystallographic texture, and surface topography as a function of incremental true strain for a coarse-grained binary alloy that is a model for AA5xxx series aluminum alloys. One-millimeter thick sheets were heat treated at 425°C to remove previous rolling texture and to grow grains to sizes in the range ~10-8000 µm. At five different strain levels, 13 sample regions, containing 43 grains, were identified in both EBSD and SLCM micrographs, and crystallographic texture and surface roughness were measured. After heat treatment, a strong cube texture matrix emerged, with bands of generally non-cube grains embedded parallel to the rolling direction (RD). To characterize roughness, height profiles from SLCM micrographs were extracted and a filtered Fourier transform approach was used to separate the profiles into intergranular (long wavelength) and intragranular (short wavelength) signatures. The commonly used rms roughness parameter (Rq) characterized intragranular results. Two important parameters assess intergranular results in two grain size regimes: the surface tilt angle (Δθ) and the surface height discontinuity (Δz_H) between neighboring grains at a boundary. In general, the magnitudes of Rq and Δθ increase monotonically with strain and indicate that intergranular roughness is the major contributor to overall surface roughness for true strains up to ε = 0.12. The surface height discontinuity Δz_H is defined due to exceptions in the surface tilt angle analyses. The range of observed Δθ = 1-10° is consistent with the observed 3-12° rotation of individual grains as measured with EBSD. For some grain boundaries with Δθ < 4°, the surface height discontinuity Δz_H characterizes the response of adjacent grains in which one or more are large (~1000-2000 µm), making a 3-12° rotation of the grain highly unlikely. This can be understood by postulating that the energy associated with rotating large grains would exceed the energy to shear along the boundary. Slip and grain boundary shearing are the active mechanisms in these instances.

  3. Early Breakdown of Area-Law Entanglement at the Many-Body Delocalization Transition

    NASA Astrophysics Data System (ADS)

    Devakul, Trithep; Singh, Rajiv R. P.

    2015-10-01

    We introduce the numerical linked cluster expansion as a controlled numerical tool for the study of the many-body localization transition in a disordered system with continuous nonperturbative disorder. Our approach works directly in the thermodynamic limit, in any spatial dimension, and does not rely on any finite size scaling procedure. We study the onset of many-body delocalization through the breakdown of area-law entanglement in a generic many-body eigenstate. By looking for initial signs of an instability of the localized phase, we obtain a value for the critical disorder, which we believe should be a lower bound for the true value, that is higher than current best estimates from finite size studies. This implies that most current methods tend to overestimate the extent of the localized phase due to finite size effects making the localized phase appear stable at small length scales. We also study the mobility edge in these systems as a function of energy density, and we find that our conclusion is the same at all examined energies.

  4. Population Demographic History Can Cause the Appearance of Recombination Hotspots

    PubMed Central

    Johnston, Henry R.; Cutler, David J.

    2012-01-01

    Although the prevailing view among geneticists suggests that recombination hotspots exist ubiquitously across the human genome, there is only limited experimental evidence from a few genomic regions to support the generality of this claim. A small number of true recombination hotspots are well supported experimentally, but the vast majority of hotspots have been identified on the basis of population genetic inferences from the patterns of linkage disequilibrium (LD) seen in the human population. These inferences are made assuming a particular model of human history, and one of the assumptions of that model is that the effective population size of humans has remained constant throughout our history. Our results show that relaxation of the constant population size assumption can create LD and variation patterns that are qualitatively and quantitatively similar to human populations without any need to invoke localized hotspots of recombination. In other words, apparent recombination hotspots could be an artifact of variable population size over time. Several lines of evidence suggest that the vast majority of hotspots identified on the basis of LD information are unlikely to have elevated recombination rates. PMID:22560089

  5. Structures observed on the SPOT radiance fields during the FIRE experiment

    NASA Technical Reports Server (NTRS)

    Seze, Genevieve; Smith, Leonard; Desbois, Michel

    1990-01-01

    Three SPOT images taken during the FIRE experiment on stratocumulus are analyzed. From these high-resolution data, detailed observations of the true cloud radiance field may be made. The structure and inhomogeneity of these radiance fields hold important implications for the radiation budget, while the fine-scale structure in the radiance field provides information on cloud dynamics. Wielicki and Welch, and Parker et al., have quantified the inhomogeneities of cumulus clouds through a careful examination of the distribution of cloud (and hole) sizes as functions of an effective cloud diameter and radiance threshold. Cahalan (1988) has compared, for different cloud types (stratocumulus, fair weather cumulus, convective clouds in the ITCZ), the distributions of cloud (and hole) sizes and the relation between the size and the perimeter of these clouds (and holes), examining the possibility of scale invariance. These results are extended from the LANDSAT resolutions (57 m and 30 m) to the SPOT resolution (10 m) in the case of boundary layer clouds. Particular emphasis is placed on the statistics of zones of high and low reflectivity as a function of a threshold reflectivity.

  6. Biodiversity simultaneously enhances the production and stability of community biomass, but the effects are independent.

    PubMed

    Cardinale, Bradley J; Gross, Kevin; Fritschie, Keith; Flombaum, Pedro; Fox, Jeremy W; Rixen, Christian; van Ruijven, Jasper; Reich, Peter B; Scherer-Lorenzen, Michael; Wilsey, Brian J

    2013-08-01

    To predict the ecological consequences of biodiversity loss, researchers have spent much time and effort quantifying how biological variation affects the magnitude and stability of ecological processes that underlie the functioning of ecosystems. Here we add to this work by looking at how biodiversity jointly impacts two aspects of ecosystem functioning at once: (1) the production of biomass at any single point in time (biomass/area or biomass/volume), and (2) the stability of biomass production through time (the CV of changes in total community biomass through time). While it is often assumed that biodiversity simultaneously enhances both of these aspects of ecosystem functioning, the joint distribution of data describing how species richness regulates productivity and stability has yet to be quantified. Furthermore, analyses have yet to examine how diversity effects on production covary with diversity effects on stability. To overcome these two gaps, we reanalyzed the data from 34 experiments that have manipulated the richness of terrestrial plants or aquatic algae and measured how this aspect of biodiversity affects community biomass at multiple time points. Our reanalysis confirms that biodiversity does indeed simultaneously enhance both the production and stability of biomass in experimental systems, and this is broadly true for terrestrial and aquatic primary producers. However, the strength of diversity effects on biomass production is independent of diversity effects on temporal stability. The independence of effect sizes leads to two important conclusions. First, while it may be generally true that biodiversity enhances both productivity and stability, it is also true that the highest levels of productivity in a diverse community are not associated with the highest levels of stability. Thus, on average, diversity does not maximize the various aspects of ecosystem functioning we might wish to achieve in conservation and management. Second, knowing how biodiversity affects productivity gives no information about how diversity affects stability (or vice versa). Therefore, to predict the ecological changes that occur in ecosystems after extinction, we will need to develop separate mechanistic models for each independent aspect of ecosystem functioning.

  7. A note on power and sample size calculations for the Kruskal-Wallis test for ordered categorical data.

    PubMed

    Fan, Chunpeng; Zhang, Donghui

    2012-01-01

    Although the Kruskal-Wallis test has been widely used to analyze ordered categorical data, power and sample size methods for this test have been investigated to a much lesser extent when the underlying multinomial distributions are unknown. This article generalizes the power and sample size procedures proposed by Fan et al. (2011) for continuous data to ordered categorical data, where estimates from a pilot study are used in place of knowledge of the true underlying distribution. Simulations show that the proposed power and sample size formulas perform well. A myelin oligodendrocyte glycoprotein (MOG) induced experimental autoimmune encephalomyelitis (EAE) mouse study is used to demonstrate the application of the methods.
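
    A minimal Monte Carlo sketch of the task the paper addresses (the pilot probabilities below are hypothetical, and the paper's procedures are formula-based rather than purely simulation-based):

      import numpy as np
      from scipy.stats import kruskal

      rng = np.random.default_rng(3)

      # Pilot-study estimates of the multinomial distribution over 4 ordered
      # categories for each of 3 groups (illustrative numbers).
      pilot = np.array([[0.40, 0.30, 0.20, 0.10],
                        [0.30, 0.30, 0.25, 0.15],
                        [0.20, 0.25, 0.30, 0.25]])

      def kw_power(probs, n_per_group, n_sims=2000, alpha=0.05):
          """Monte Carlo power of the Kruskal-Wallis test for ordered
          categorical data drawn from the pilot estimates."""
          cats = np.arange(probs.shape[1])
          hits = 0
          for _ in range(n_sims):
              samples = [rng.choice(cats, size=n_per_group, p=p) for p in probs]
              if kruskal(*samples).pvalue < alpha:
                  hits += 1
          return hits / n_sims

      for n in (20, 40, 80):
          print(f"n per group = {n:3d}: power ~ {kw_power(pilot, n):.2f}")

    The smallest n whose simulated power exceeds the target (e.g., 80%) is the required sample size per group under the pilot distribution.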

  8. Dawn First Glimpse of Vesta -- Processed

    NASA Image and Video Library

    2011-05-11

    This image, processed to show the true size of the giant asteroid Vesta, shows Vesta in front of a spectacular background of stars. It was obtained by the framing camera aboard NASA's Dawn spacecraft on May 3, 2011, from a distance of about 750,000 miles.

  9. Effect of Medialization Thyroplasty on Glottic Airway Anatomy: Cadaver Model.

    PubMed

    Shinghal, Tulika; Anderson, Jennifer; Chung, Janet; Hong, Aaron; Bharatha, Aditya

    2016-11-01

    The purpose of this study was to investigate the change in airway dimensions after medialization thyroplasty (MT) using a cadaveric model. Helical computerized tomography (CT) was performed before and after placement of a silastic block in human larynges to investigate the effect on airway anatomy at the level of the glottis. Tissue density (TD) of the medialized vocal fold (VF) was documented to understand the effect on tissue displacement. This is a cadaveric study. Thirteen human cadaveric larynges underwent fine-cut CT scanning before and after MT was performed using carved blocks in two sizes (small block and large block [LB]). Clientstream software was used to measure laryngeal dimensions: intraglottic volume (IGV), cross-sectional area (CSA), posterior glottic diameter (PGD), VF density (in Hounsfield units [HU]), and anterior-posterior diameter (APD). Eight sequential axial sections (0.625 mm cuts) at the level of the true VFs were analyzed. There was a significant decrease across the three conditions for IGV (P < 0.0001) and CSA (P < 0.0001). TD of the VF was increased after MT, as indicated by the HU increase (P = 0.0003). APD was not significantly changed. PGD was significantly different between no block and LB placement (P = 0.0012). MT significantly changes the IGV and CSA at the level of the glottis. Density in the true VF was significantly increased. These findings have important implications for understanding the volumetric effects of MT. Copyright © 2015 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  10. Implications of Weak Link Effects on Thermal Characteristics of Transition-Edge Sensors

    NASA Technical Reports Server (NTRS)

    Bailey, Catherine

    2011-01-01

    Weak link behavior in transition-edge sensor (TES) devices creates the need for a more careful characterization of a device's thermal characteristics through its transition. This is particularly true for small TESs where a small change in the measurement current results in large changes in temperature. A highly current-dependent transition shape makes accurate thermal characterization of the TES parameters through the transition challenging. To accurately interpret measurements, especially complex impedance, it is crucial to know the temperature-dependent thermal conductance, G(T), and heat capacity, C(T), at each point through the transition. We will present data illustrating these effects and discuss how we overcome the challenges that are present in accurately determining G and T from IV curves. We will also show how these weak link effects vary with TES size.

  11. Do less populous countries receive more development assistance for health per capita? Longitudinal evidence for 143 countries, 1990-2014.

    PubMed

    Martinsen, Lene; Ottersen, Trygve; Dieleman, Joseph L; Hessel, Philipp; Kinge, Jonas Minet; Skirbekk, Vegard

    2018-01-01

    Per capita allocation of overall development assistance has been shown to be biased towards countries with lower population size, meaning funders tend to provide proportionally less development assistance to countries with large populations. Individuals who happen to be part of large populations therefore tend to receive less assistance. However, no study has investigated whether this is also true of development assistance for health. We examined whether this so-called 'small-country bias' exists in the health aid sector. We analysed the effect of a country's population size on the receipt of development assistance for health per capita (in 2015 US$) among 143 countries over the period 1990-2014. Explanatory variables shown to be associated with receipt of development assistance for health were included: gross domestic product per capita, burden of disease, under-5 mortality rate, maternal mortality ratio, vaccination coverage (diphtheria, tetanus and pertussis) and fertility rate. We used the within-between regression analysis popularised by Mundlak, as well as a number of robustness tests, including ordinary least squares, random-effects and fixed-effects regressions. Our results suggest a significant negative effect of population size on the amount of development assistance for health per capita that countries received. According to the within-between estimator, a 1% larger population size is associated with 0.4% lower per capita development assistance for health between countries (-0.37, 95% CI -0.45 to -0.28), and 2.3% lower per capita development assistance for health within countries (-2.29, 95% CI -3.86 to -0.72). Our findings support the hypothesis that small-country bias exists within international health aid, as has been previously documented for aid in general. In a rapidly changing landscape of global health and development, the inclusion of population size in allocation decisions should be challenged on the basis of equitable access to healthcare and health aid effectiveness.
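
    For reference, a within-between (Mundlak-type) specification decomposes the population term into a country mean and deviations from it, so that both elasticities quoted above come from a single model (the notation here is illustrative, not the authors'):

      \[
      \ln(\mathrm{DAH}_{it}) = \beta_{B}\,\overline{\ln P_{i}}
        + \beta_{W}\left(\ln P_{it} - \overline{\ln P_{i}}\right)
        + \gamma^{\prime} x_{it} + u_{i} + \varepsilon_{it},
      \]

    with β_B ≈ -0.37 the between-country and β_W ≈ -2.29 the within-country elasticity reported in the abstract.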

  12. Critical appraisal of arguments for the delayed-start design proposed as alternative to the parallel-group randomized clinical trial design in the field of rare disease.

    PubMed

    Spineli, Loukia M; Jenz, Eva; Großhennig, Anika; Koch, Armin

    2017-08-17

    A number of papers have proposed or evaluated the delayed-start design as an alternative to the standard two-arm parallel-group randomized clinical trial (RCT) design in the field of rare disease. However, the discussion is felt to lack a sufficient degree of consideration of the true virtues of the delayed-start design and of its implications in terms of required sample size, overall information, and interpretation of the estimate in the context of small populations. To evaluate whether there are real advantages of the delayed-start design, particularly in terms of overall efficacy and sample size requirements, as a proposed alternative to the standard parallel-group RCT in the field of rare disease. We used a real-life example to compare the delayed-start design with the standard RCT in terms of sample size requirements. Then, based on three scenarios regarding the development of the treatment effect over time, the advantages, limitations and potential costs of the delayed-start design are discussed. We clarify that the delayed-start design is not suitable for drugs that establish an immediate treatment effect, but rather for drugs with effects that develop over time. In addition, the sample size will always increase as a consequence of the reduced time on placebo, which results in a decreased estimated treatment effect. A number of papers have repeated well-known arguments to justify the delayed-start design as an appropriate alternative to the standard parallel-group RCT in the field of rare disease and do not discuss the specific needs of research methodology in this field. The main point is that a limited time on placebo will result in an underestimated treatment effect and, in consequence, in larger sample size requirements compared to those expected under a standard parallel-group design. This also impacts benefit-risk assessment.

  13. Low statistical power in biomedical science: a review of three human research domains.

    PubMed

    Dumas-Mallet, Estelle; Button, Katherine S; Boraud, Thomas; Gonon, Francois; Munafò, Marcus R

    2017-02-01

    Studies with low statistical power increase the likelihood that a statistically significant finding represents a false positive result. We conducted a review of meta-analyses of studies investigating the association of biological, environmental or cognitive parameters with neurological, psychiatric and somatic diseases, excluding treatment studies, in order to estimate the average statistical power across these domains. Taking the effect size indicated by a meta-analysis as the best estimate of the likely true effect size, and assuming a threshold for declaring statistical significance of 5%, we found that approximately 50% of studies have statistical power in the 0-10% or 11-20% range, well below the minimum of 80% that is often considered conventional. Studies with low statistical power appear to be common in the biomedical sciences, at least in the specific subject areas captured by our search strategy. However, we also observe evidence that this depends in part on research methodology, with candidate gene studies showing very low average power and studies using cognitive/behavioural measures showing high average power. This warrants further investigation.
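
    A minimal sketch of the power computation underlying such reviews, treating the meta-analytic effect size as the true one and using the normal approximation for a two-sided two-sample comparison (the numbers are illustrative):

      import numpy as np
      from scipy.stats import norm

      def power_two_sample(d, n_per_group, alpha=0.05):
          """Approximate power of a two-sided two-sample test to detect a
          standardized effect size d (normal approximation)."""
          se = np.sqrt(2.0 / n_per_group)   # approximate SE of the estimated d
          z_crit = norm.ppf(1 - alpha / 2)
          z = d / se
          return norm.cdf(z - z_crit) + norm.cdf(-z - z_crit)

      # Taking a meta-analytic estimate of d = 0.3 as the "true" effect:
      for n in (20, 50, 100, 175):
          print(f"n per group = {n:3d}: power ~ {power_two_sample(0.3, n):.2f}")

    With d = 0.3, roughly 175 subjects per group are needed for the conventional 80% power, which makes the 0-20% power of many studies in this review plausible.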

  14. Low statistical power in biomedical science: a review of three human research domains

    PubMed Central

    Dumas-Mallet, Estelle; Button, Katherine S.; Boraud, Thomas; Gonon, Francois

    2017-01-01

    Studies with low statistical power increase the likelihood that a statistically significant finding represents a false positive result. We conducted a review of meta-analyses of studies investigating the association of biological, environmental or cognitive parameters with neurological, psychiatric and somatic diseases, excluding treatment studies, in order to estimate the average statistical power across these domains. Taking the effect size indicated by a meta-analysis as the best estimate of the likely true effect size, and assuming a threshold for declaring statistical significance of 5%, we found that approximately 50% of studies have statistical power in the 0–10% or 11–20% range, well below the minimum of 80% that is often considered conventional. Studies with low statistical power appear to be common in the biomedical sciences, at least in the specific subject areas captured by our search strategy. However, we also observe evidence that this depends in part on research methodology, with candidate gene studies showing very low average power and studies using cognitive/behavioural measures showing high average power. This warrants further investigation. PMID:28386409

  15. Application of linear multifrequency-grey acceleration to preconditioned Krylov iterations for thermal radiation transport

    DOE PAGES

    Till, Andrew T.; Warsa, James S.; Morel, Jim E.

    2018-06-15

    The thermal radiative transfer (TRT) equations comprise a radiation equation coupled to the material internal energy equation. Linearization of these equations produces effective, thermally-redistributed scattering through absorption-reemission. In this paper, we investigate the effectiveness and efficiency of Linear-Multi-Frequency-Grey (LMFG) acceleration that has been reformulated for use as a preconditioner to Krylov iterative solution methods. We introduce two general frameworks, the scalar flux formulation (SFF) and the absorption rate formulation (ARF), and investigate their iterative properties in the absence and presence of true scattering. SFF has a group-dependent state size but may be formulated without inner iterations in the presence of scattering, while ARF has a group-independent state size but requires inner iterations when scattering is present. We compare and evaluate the computational cost and efficiency of LMFG applied to these two formulations using a direct solver for the preconditioners. Finally, this work is novel because the use of LMFG for the radiation transport equation, in conjunction with Krylov methods, involves special considerations not required for radiation diffusion.

  16. The morphology of human hyoid bone in relation to sex, age and body proportions.

    PubMed

    Urbanová, P; Hejna, P; Zátopková, L; Šafr, M

    2013-06-01

    Morphological aspects of the human hyoid bone are, like many other skeletal elements in the human body, greatly affected by an individual's sex, age and body proportions. Still, the known sex-dependent bimodality of a number of body size characteristics overshadows the true within-group patterns. Given the ambiguity of the causal effects of age, sex and body size upon hyoid morphology, the present study puts the relationship between the shape of the human hyoid bone and body proportions (height and weight) under the scrutiny of a morphological study. Using 211 hyoid bones and landmark-based methods of geometric morphometrics, it was shown that the size of hyoid bones correlated positively with measured body dimensions but showed no correlation if the individual's sex was controlled for. For shape variables, our results revealed that hyoid morphology is clearly related to body size as expressed in terms of height and weight. Yet, the hyoid shape was shown to result primarily from the sex-related bimodal distribution of the studied body size descriptors which, in the case of the height-dependent model, exhibited opposite trends for males and females. Apart from the global hyoid shape given by the spatial arrangement of the greater horns, body size dependency was translated into the size and position of the hyoid body. None of the body size characters had any impact on hyoid asymmetry. Ultimately, sexually dimorphic variation was revealed for age-dependent changes in both size and shape of hyoid bones, as male hyoids tend to be more susceptible to modifications with age than female bones.

  17. Stabilizing Selection, Purifying Selection, and Mutational Bias in Finite Populations

    PubMed Central

    Charlesworth, Brian

    2013-01-01

    Genomic traits such as codon usage and the lengths of noncoding sequences may be subject to stabilizing selection rather than purifying selection. Mutations affecting these traits are often biased in one direction. To investigate the potential role of stabilizing selection on genomic traits, the effects of mutational bias on the equilibrium value of a trait under stabilizing selection in a finite population were investigated, using two different mutational models. Numerical results were generated using a matrix method for calculating the probability distribution of variant frequencies at sites affecting the trait, as well as by Monte Carlo simulations. Analytical approximations were also derived, which provided useful insights into the numerical results. A novel conclusion is that the scaled intensity of selection acting on individual variants is nearly independent of the effective population size over a wide range of parameter space and is strongly determined by the logarithm of the mutational bias parameter. This is true even when there is a very small departure of the mean from the optimum, as is usually the case. This implies that studies of the frequency spectra of DNA sequence variants may be unable to distinguish between stabilizing and purifying selection. A similar investigation of purifying selection against deleterious mutations was also carried out. Contrary to previous suggestions, the scaled intensity of purifying selection with synergistic fitness effects is sensitive to population size, which is inconsistent with the general lack of sensitivity of codon usage to effective population size. PMID:23709636

  18. Detecting and describing preventive intervention effects in a universal school-based randomized trial targeting delinquent and violent behavior.

    PubMed

    Stoolmiller, M; Eddy, J M; Reid, J B

    2000-04-01

    This study examined theoretical, methodological, and statistical problems involved in evaluating the outcome of aggression on the playground for a universal preventive intervention for conduct disorder. Moderately aggressive children were hypothesized most likely to benefit. Aggression was measured on the playground using observers blind to the group status of the children. Behavior was micro-coded in real time to minimize potential expectancy biases. The effectiveness of the intervention was strongly related to initial levels of aggressiveness. The most aggressive children improved the most. Models that incorporated corrections for low reliability (the ratio of variance due to true time-stable individual differences to total variance) and censoring (a floor effect in the rate data due to short periods of observation) obtained effect sizes 5 times larger than models without such corrections with respect to children who were initially 2 SDs above the mean on aggressiveness.

  19. Why publishing everything is more effective than selective publishing of statistically significant results.

    PubMed

    van Assen, Marcel A L M; van Aert, Robbie C M; Nuijten, Michèle B; Wicherts, Jelte M

    2014-01-01

    De Winter and Happee examined whether science based on selective publishing of significant results may be effective in accurate estimation of population effects, and whether this is even more effective than a science in which all results are published (i.e., a science without publication bias). Based on their simulation study they concluded that "selective publishing yields a more accurate meta-analytic estimation of the true effect than publishing everything, (and that) publishing nonreplicable results while placing null results in the file drawer can be beneficial for the scientific collective" (p.4). Using their scenario with a small to medium population effect size, we show that publishing everything is more effective for the scientific collective than selective publishing of significant results. Additionally, we examined a scenario with a null effect, which provides a more dramatic illustration of the superiority of publishing everything over selective publishing. Publishing everything is more effective than only reporting significant outcomes.
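
    A minimal simulation in the spirit of this reanalysis illustrates the point under a null true effect, assuming (for illustration) that only positive significant results get published: the mean of all estimates stays near zero, while the mean of the selectively published ones is badly inflated. All parameters below are illustrative assumptions.

      import numpy as np
      from scipy.stats import ttest_ind

      rng = np.random.default_rng(1)
      delta, n, n_studies = 0.0, 25, 5000   # null true effect; 25 per group

      all_estimates, selected = [], []
      for _ in range(n_studies):
          control = rng.normal(0.0, 1.0, n)
          treated = rng.normal(delta, 1.0, n)
          d_hat = treated.mean() - control.mean()   # unit SD, ~ Cohen's d
          p = ttest_ind(treated, control).pvalue
          all_estimates.append(d_hat)
          if p < 0.05 and d_hat > 0:   # only positive significant results "published"
              selected.append(d_hat)

      print("publish everything :", round(float(np.mean(all_estimates)), 3))  # ~0.00
      print("significant only   :", round(float(np.mean(selected)), 3))       # inflated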

  20. Why Publishing Everything Is More Effective than Selective Publishing of Statistically Significant Results

    PubMed Central

    van Assen, Marcel A. L. M.; van Aert, Robbie C. M.; Nuijten, Michèle B.; Wicherts, Jelte M.

    2014-01-01

    Background De Winter and Happee [1] examined whether science based on selective publishing of significant results may be effective in accurate estimation of population effects, and whether this is even more effective than a science in which all results are published (i.e., a science without publication bias). Based on their simulation study they concluded that “selective publishing yields a more accurate meta-analytic estimation of the true effect than publishing everything, (and that) publishing nonreplicable results while placing null results in the file drawer can be beneficial for the scientific collective” (p.4). Methods and Findings Using their scenario with a small to medium population effect size, we show that publishing everything is more effective for the scientific collective than selective publishing of significant results. Additionally, we examined a scenario with a null effect, which provides a more dramatic illustration of the superiority of publishing everything over selective publishing. Conclusion Publishing everything is more effective than only reporting significant outcomes. PMID:24465448

  1. SU-E-J-188: Theoretical Estimation of Margin Necessary for Markerless Motion Tracking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patel, R; Block, A; Harkenrider, M

    2015-06-15

    Purpose: To estimate the margin necessary to adequately cover the target using markerless motion tracking (MMT) of lung lesions given the uncertainty in tracking and the size of the target. Methods: Simulations were developed in Matlab to determine the effect of tumor size and tracking uncertainty on the margin necessary to achieve adequate coverage of the target. For simplicity, the lung tumor was approximated by a circle on a 2D radiograph. The tumor was varied in size from a diameter of 0.1-30 mm in increments of 0.1 mm. From our previous studies using dual energy markerless motion tracking, we estimated tracking uncertainties in x and y to have a standard deviation of 2 mm. A Gaussian was used to simulate the deviation between the tracked location and the true target location. For each tumor size, 100,000 deviations were randomly generated, and the margin necessary to achieve at least 95% coverage 95% of the time was recorded. Additional simulations were run for varying uncertainties to demonstrate the effect of the tracking accuracy on the margin size. Results: The simulations showed an inverse relationship between tumor size and the margin necessary to achieve 95% coverage 95% of the time using the MMT technique. The margin decreased exponentially with target size. An increase in tracking accuracy expectedly showed a decrease in margin size as well. Conclusion: In our clinic a 5 mm expansion of the internal target volume (ITV) is used to define the planning target volume (PTV). These simulations show that for tracking accuracies in x and y better than 2 mm, the margin required is less than 5 mm. This simple simulation can provide physicians with a guideline estimation for the margin necessary for clinical use of MMT, based on the accuracy of their tracking and the size of the tumor.
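
    The simulation is easy to reproduce in outline: draw 2D Gaussian tracking errors, compute the fraction of a circular target covered by the enlarged (target plus margin) aperture via the circle-circle intersection area, and grow the margin until at least 95% coverage is achieved in at least 95% of trials. The sketch below assumes the 2 mm uncertainty quoted above; it is an independent reconstruction, not the authors' Matlab code.

      import numpy as np

      def lens_area(r1, r2, d):
          # Intersection area of circles with radii r1, r2 and center distance d.
          if d >= r1 + r2:
              return 0.0
          if d <= abs(r1 - r2):
              return np.pi * min(r1, r2) ** 2
          a1 = np.arccos((d**2 + r1**2 - r2**2) / (2 * d * r1))
          a2 = np.arccos((d**2 + r2**2 - r1**2) / (2 * d * r2))
          tri = 0.5 * np.sqrt((-d + r1 + r2) * (d + r1 - r2)
                              * (d - r1 + r2) * (d + r1 + r2))
          return r1**2 * a1 + r2**2 * a2 - tri

      def required_margin(radius, sigma=2.0, n=5_000, step=0.1):
          rng = np.random.default_rng(0)
          d = np.hypot(rng.normal(0, sigma, n), rng.normal(0, sigma, n))
          margin = 0.0
          while True:
              covered = [lens_area(radius, radius + margin, di) for di in d]
              frac_ok = np.mean(np.array(covered) / (np.pi * radius**2) >= 0.95)
              if frac_ok >= 0.95:
                  return margin   # margin (mm) meeting the 95%/95% criterion
              margin += step

      print(required_margin(radius=5.0))   # e.g. a 10 mm diameter tumor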

  2. The importance of grain size to mantle dynamics and seismological observations

    NASA Astrophysics Data System (ADS)

    Gassmoeller, R.; Dannberg, J.; Eilon, Z.; Faul, U.; Moulik, P.; Myhill, R.

    2017-12-01

    Grain size plays a key role in controlling the mechanical properties of the Earth's mantle, affecting both long-timescale flow patterns and anelasticity on the timescales of seismic wave propagation. However, dynamic models of Earth's convecting mantle usually implement flow laws with constant grain size, stress-independent viscosity, and a limited treatment of changes in mineral assemblage. We study grain size evolution, its interplay with stress and strain rate in the convecting mantle, and its influence on seismic velocities and attenuation. Our geodynamic models include the simultaneous and competing effects of dynamic recrystallization resulting from dislocation creep, grain growth in multiphase assemblages, and recrystallization at phase transitions. They show that grain size evolution drastically affects the dynamics of mantle convection and the rheology of the mantle, leading to lateral viscosity variations of six orders of magnitude due to grain size alone, and controlling the shape of upwellings and downwellings. Using laboratory-derived scaling relationships, we convert model output to seismologically-observable parameters (velocity, attenuation) facilitating comparison to Earth structure. Reproducing the fundamental features of the Earth's attenuation profile requires reduced activation volume and relaxed shear moduli in the lower mantle compared to the upper mantle, in agreement with geodynamic constraints. Faster lower mantle grain growth yields best fit to seismic observations, consistent with our re-examination of high pressure grain growth parameters. We also show that ignoring grain size in interpretations of seismic anomalies may underestimate the Earth's true temperature variations.

  3. Correction of the post-necking true stress-strain data using instrumented nanoindentation

    NASA Astrophysics Data System (ADS)

    Romero Fonseca, Ivan Dario

    The study of large plastic deformations has been the focus of numerous studies, particularly in the metal forming and fracture mechanics fields. A good understanding of the plastic flow properties of metallic alloys and of the true stresses and true strains induced during plastic deformation is crucial to optimize the aforementioned processes and to predict ductile failure in fracture mechanics analyses. Knowledge of stresses and strains is extracted from the true stress-strain curve of the material from the uniaxial tensile test. In addition, stress triaxiality is manifested by the neck developed during the last stage of a tensile test performed on a ductile material. This necking phenomenon is responsible for deviating from a uniaxial stress state into a triaxial one, thus providing an inaccurate description of the material's behavior after the onset of necking. The research of this dissertation is aimed at the development of a correction method for the nonuniform plastic deformation (post-necking) portion of the true stress-strain curve. The correction proposed is based on the well-known relationship between hardness and flow (yield) stress, except that instrumented nanoindentation hardness is utilized rather than conventional macro- or micro-hardness. Three metals with different combinations of strain hardening behavior and crystal structure were subjected to quasi-static tensile tests: power-law strain hardening low carbon G10180 steel (BCC), electrolytic tough pitch copper C11000 (FCC), and linear strain hardening austenitic stainless steel S30400 (FCC). Nanoindentation hardness values, measured on the broken tensile specimen, were converted into flow stress values by means of the constraint factor C from Tabor's equation, the representative plastic strain εr, and the measured post-test true plastic strains. Micro Vickers hardness testing was carried out on the sample as well. The constraint factors were 5.5, 4.5 and 4.5, and the representative plastic strains were 0.028, 0.062 and 0.061 for G10180, C11000 and S30400, respectively. The established corrected curves relating post-necking flow stress to true plastic strain turned out to be well represented by a power-law function. Experimental results dictated that a unique single value for C and for εr is not appropriate to describe materials with different plastic behaviors. Therefore, Tabor's equation, along with the representative plastic strain concept, has been misused in the past. The studied materials exhibited different nanohardness and plastic strain distributions due to their inherently distinct elasto-plastic response. The proposed post-necking correction separates out the effect of triaxiality on the uniaxial true stress-strain curve provided that the nanohardness-flow stress relationship is based on uniaxial values of stress. Some type of size effect, due to the microvoids at the tip of the neck, influenced nanohardness measurements. The instrumented nanoindentation technique proved to be a very suitable method to probe elasto-plastic properties of materials such as nanohardness, elastic modulus, and quasi-static strain rate sensitivity, among others. Care should be taken when converting nanohardness to Vickers hardness and vice versa due to the different area definitions used. The nanohardness-to-Vickers ratio ranged between 1.01 and 1.17.
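
    The core of the correction is the Tabor-type relation flow stress = hardness / C, evaluated at a representative plastic strain. The sketch below applies it with the constraint factors and representative strains reported above; the nanohardness input is an illustrative placeholder, not measured data.

      # Tabor-type hardness-to-flow-stress conversion, using the constraint
      # factors and representative strains reported above.
      def flow_stress_mpa(H_gpa, C):
          return H_gpa * 1000.0 / C   # flow stress in MPa

      H = 2.0   # assumed nanohardness in GPa (placeholder, not measured data)
      for material, C, eps_r in [("G10180", 5.5, 0.028),
                                 ("C11000", 4.5, 0.062),
                                 ("S30400", 4.5, 0.061)]:
          print(material, "flow stress ~", round(flow_stress_mpa(H, C)),
                "MPa at representative strain", eps_r)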

  4. Pseudo progression identification of glioblastoma with dictionary learning.

    PubMed

    Zhang, Jian; Yu, Hengyong; Qian, Xiaohua; Liu, Keqin; Tan, Hua; Yang, Tielin; Wang, Maode; Li, King Chuen; Chan, Michael D; Debinski, Waldemar; Paulsson, Anna; Wang, Ge; Zhou, Xiaobo

    2016-06-01

    Although the use of temozolomide in chemoradiotherapy is effective, the challenging clinical problem of pseudo progression has been raised in brain tumor treatment. This study aims to distinguish pseudo progression from true progression. Between 2000 and 2012, a total of 161 patients with glioblastoma multiforme (GBM) were treated with chemoradiotherapy at our hospital. Among the patients, 79 had their diffusion tensor imaging (DTI) data acquired at the earliest diagnosed date of pseudo progression or true progression, and 23 had both DTI data and genomic data. Clinical records of all patients were kept in good condition. Volumetric fractional anisotropy (FA) images obtained from the DTI data were decomposed into a sequence of sparse representations. Then, a feature selection algorithm was applied to extract the critical features from the feature matrix, to reduce the size of the feature matrix and to improve the classification accuracy. The proposed approach was validated using the 79 samples with clinical DTI data. Satisfactory results were obtained under different experimental conditions. The area under the receiver operating characteristic (ROC) curve (AUC) was 0.87 for a given dictionary with 1024 atoms. For the subgroup of 23 samples, genomic data analysis was also performed. Results suggested further perspectives for pseudo progression classification. The proposed method can determine pseudo progression and true progression with improved accuracy. Laborious segmentation is no longer necessary because the method is not sensitive to tumor location.

  5. Making Do with Less: Calibrating a True Travel Demand Model Without Traditional Survey Data

    DOT National Transportation Integrated Search

    1997-01-01

    For many small and medium-sized cities, funding a full Home-Interview survey, with costs as high as $100 per household, is not feasible. Traditionally, such a survey has provided the basic foundation for developing a truly useful travel demand model....

  6. A blind human expert echolocator shows size constancy for objects perceived by echoes.

    PubMed

    Milne, Jennifer L; Anello, Mimma; Goodale, Melvyn A; Thaler, Lore

    2015-01-01

    Some blind humans make clicking noises with their mouth and use the reflected echoes to perceive objects and surfaces. This technique can operate as a crude substitute for vision, allowing human echolocators to perceive silent, distal objects. Here, we tested if echolocation would, like vision, show size constancy. To investigate this, we asked a blind expert echolocator (EE) to echolocate objects of different physical sizes presented at different distances. The EE consistently identified the true physical size of the objects independent of distance. In contrast, blind and blindfolded sighted controls did not show size constancy, even when encouraged to use mouth clicks, claps, or other signals. These findings suggest that size constancy is not a purely visual phenomenon, but that it can operate via an auditory-based substitute for vision, such as human echolocation.

  7. Nonlethal Effects of Nematode Infection on Sirex noctilio and Sirex nigricornis (Hymenoptera: Siricidae).

    PubMed

    Haavik, Laurel J; Allison, Jeremy D; MacQuarrie, Chris J K; Nott, Reginald W; Ryan, Kathleen; de Groot, Peter; Turgeon, Jean J

    2016-04-01

    A nonnative woodwasp, Sirex noctilio F., has established in pine forests in eastern North America. To facilitate prediction of the full range of impacts S. noctilio could have as it continues to spread in North American forest ecosystems, we studied the effects of infection by a nonsterilizing parasitic nematode on S. noctilio size, fecundity, and flight capacity and on the native woodwasp, S. nigricornis, size and fecundity. We also developed predictive models relating size to fecundity for both species. On average, S. noctilio (3.18 ± 0.05 mm) were larger than S. nigricornis (2.19 ± 0.04 mm). For wasps of similar size, S. nigricornis was more fecund. Nematode infection negatively affected potential fecundity by a mean difference of 36 and 49 eggs in S. noctilio and S. nigricornis, respectively. Nematode-infected males of S. noctilio, however, were larger than uninfected individuals. Nematode infection had inconsistent effects on mean speed and total distance flown by S. noctilio males and females. Nematode infection did not affect total distance flown by females, and so is unlikely to have a direct or strong influence on S. noctilio flight capacity. Models developed to predict fecundity of Sirex spp. from body size, based on the close relationship between pronotum width and potential fecundity for both species (R² ≥ 0.69), had low measures of error when compared with true values of fecundity (± 25-26 eggs).

  8. Properties of hypothesis testing techniques and (Bayesian) model selection for exploration-based and theory-based (order-restricted) hypotheses.

    PubMed

    Kuiper, Rebecca M; Nederhoff, Tim; Klugkist, Irene

    2015-05-01

    In this paper, the performance of six types of techniques for comparisons of means is examined. These six emerge from the distinction between the method employed (hypothesis testing, model selection using information criteria, or Bayesian model selection) and the set of hypotheses that is investigated (a classical, exploration-based set of hypotheses containing equality constraints on the means, or a theory-based limited set of hypotheses with equality and/or order restrictions). A simulation study is conducted to examine the performance of these techniques. We demonstrate that, if one has specific, a priori specified hypotheses, confirmation (i.e., investigating theory-based hypotheses) has advantages over exploration (i.e., examining all possible equality-constrained hypotheses). Furthermore, examining reasonable order-restricted hypotheses has more power to detect the true effect/non-null hypothesis than evaluating only equality restrictions. Additionally, when investigating more than one theory-based hypothesis, model selection is preferred over hypothesis testing. Because of the first two results, we further examine the techniques that are able to evaluate order restrictions in a confirmatory fashion by examining their performance when the homogeneity of variance assumption is violated. Results show that the techniques are robust to heterogeneity when the sample sizes are equal. When the sample sizes are unequal, the performance is affected by heterogeneity. The size and direction of the deviations from the baseline, where there is no heterogeneity, depend on the effect size (of the means) and on the trend in the group variances with respect to the ordering of the group sizes. Importantly, the deviations are less pronounced when the group variances and sizes exhibit the same trend (e.g., are both increasing with group number).

  9. Cost effectiveness analysis of screening for sight threatening diabetic eye disease

    PubMed Central

    James, Marilyn; Turner, David A; Broadbent, Deborah M; Vora, Jiten; Harding, Simon P

    2000-01-01

    Objective To measure the cost effectiveness of systematic photographic screening for sight threatening diabetic eye disease compared with existing practice. Design Cost effectiveness analysis Setting Liverpool. Subjects A target population of 5000 diabetic patients invited for screening. Main outcome measures Cost effectiveness (cost per true positive) of systematic and opportunistic programmes; incremental cost effectiveness of replacing opportunistic with systematic screening. Results Baseline prevalence of sight threatening eye disease was 14.1%. The cost effectiveness of the systematic programme was £209 (sensitivity 89%, specificity 86%, compliance 80%, annual cost £104 996) and of the opportunistic programme was £289 (combined sensitivity 63%, specificity 92%, compliance 78%, annual cost £99 981). The incremental cost effectiveness of completely replacing the opportunistic programme was £32. Absolute values of cost effectiveness were highly sensitive to varying prevalence, sensitivity and specificity, compliance, and programme size. Conclusion Replacing existing programmes with systematic screening for diabetic eye disease is justified. PMID:10856062
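
    The headline figures follow from the inputs given above under the simple accounting identity cost per true positive = annual cost / (invited x compliance x prevalence x sensitivity); the sketch below is a worked check of that arithmetic, not the paper's model.

      # Worked check of the cost-per-true-positive figures reported above,
      # assuming true positives = invited * compliance * prevalence * sensitivity.
      def cost_per_true_positive(annual_cost, invited, compliance,
                                 prevalence, sensitivity):
          return annual_cost / (invited * compliance * prevalence * sensitivity)

      systematic = cost_per_true_positive(104_996, 5000, 0.80, 0.141, 0.89)
      opportunistic = cost_per_true_positive(99_981, 5000, 0.78, 0.141, 0.63)
      print(round(systematic), round(opportunistic))   # ~209 and ~289 (GBP)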

  10. Particle size distribution: A key factor in estimating powder dustiness.

    PubMed

    López Lilao, Ana; Sanfélix Forner, Vicenta; Mallol Gasch, Gustavo; Monfort Gimeno, Eliseo

    2017-12-01

    A wide variety of raw materials, involving more than 20 samples of quartzes, feldspars, nephelines, carbonates, dolomites, sands, zircons, and alumina, were selected and characterised. These raw materials were selected to encompass a wide range of particle sizes (1.6-294 µm) and true densities (2650-4680 kg/m³). The dustiness of the raw materials, i.e., their tendency to generate dust on handling, was determined using the continuous drop method. The influence of some key material parameters (particle size distribution, flowability, and specific surface area) on dustiness was assessed. In this regard, dustiness was found to be significantly affected by particle size distribution. Data analysis enabled development of a model for predicting the dustiness of the studied materials, assuming that dustiness depended on the particle fraction susceptible to emission and on the bulk material's susceptibility to release these particles. On the one hand, the developed model allows the dustiness mechanisms to be better understood. In this regard, it may be noted that relative emission increased with mean particle size. However, this did not necessarily imply that dustiness did, because dustiness also depended on the fraction of particles susceptible to being emitted. On the other hand, the developed model enables dustiness to be estimated using just the particle size distribution data. The quality of the fits was quite good, and the fact that only particle size distribution data are needed facilitates industrial application, since these data are usually known by raw materials managers, thus making additional tests unnecessary. This model may therefore be deemed a key tool in drawing up efficient preventive and/or corrective measures to reduce dust emissions during bulk powder processing, both inside and outside industrial facilities. It is recommended, however, to use the developed model only if particle size, true density, moisture content, and shape lie within the studied ranges.

  11. [Study of the reliability in one dimensional size measurement with digital slit lamp microscope].

    PubMed

    Wang, Tao; Qi, Chaoxiu; Li, Qigen; Dong, Lijie; Yang, Jiezheng

    2010-11-01

    To study the reliability of the digital slit lamp microscope as a tool for quantitative analysis in one-dimensional size measurement. Three single-blinded observers acquired and repeatedly measured images of 4.00 mm and 10.00 mm targets on a vernier caliper, which simulated the diameters of the human pupil and cornea, under a China-made digital slit lamp microscope at objective magnifications of 4x, 10x, 16x, 25x and 40x, and of 4x, 10x and 16x, respectively. The correctness and precision of the measurements were compared. For the 4.00 mm images, the average values measured by the three investigators lay between 3.98 and 4.06; for the 10.00 mm images, the average values lay between 10.00 and 10.04. For the 4.00 mm images, except for A4, B25, C16 and C25, significant differences were noted between the measured value and the true value. For the 10.00 mm images, except for A10, significant differences were found between the measured value and the true value. Comparing the results of the same size measured at different magnifications by the same investigator, except for investigator A's measurements of the 10.00 mm dimension, the measurements of all the remaining investigators differed significantly across magnifications. Comparing measurements of the same size across investigators, measurements of 4.00 mm at 4x magnification showed no significant difference among the investigators; the remaining results were statistically significant. The coefficient of variation of all measurement results was less than 5%, and as magnification increased, the coefficient of variation decreased. Measurement with the digital slit lamp microscope in one dimension has good reliability, but a reliability analysis should be performed before it is used for quantitative analysis, to reduce systematic errors.

  12. Shape and Size of Microfine Aggregates: X-ray Microcomputed Tomography vs. Laser Diffraction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erdogan, S.; Garboczi, E.; Fowler, D.

    Microfine rock aggregates, formed naturally or in a crushing process, pass a No. 200 ASTM sieve, so have at least two orthogonal principal dimensions less than 75 µm, the sieve opening size. In this paper, for the first time, we capture true 3-D shape and size data of several different types of microfine aggregates, using X-ray microcomputed tomography (µCT) with a voxel size of 2 µm. This information is used to generate shape analyses of various kinds. Particle size distributions are also generated from the µCT data and quantitatively compared to the results of laser diffraction, which is the leading method for measuring particle size distributions of sub-millimeter size particles. By taking into account the actual particle shape, the differences between µCT and laser diffraction can be qualitatively explained.

  13. Scaling laws and technology development strategies for biorefineries and bioenergy plants.

    PubMed

    Jack, Michael W

    2009-12-01

    The economies of scale of larger biorefineries or bioenergy plants compete with the diseconomies of scale of transporting geographically distributed biomass to a central location. This results in an optimum plant size that depends on the scaling parameters of the two contributions. This is a fundamental aspect of biorefineries and bioenergy plants and has important consequences for technology development as "bigger is better" is not necessarily true. In this paper we explore the consequences of these scaling effects via a simplified model of biomass transportation and plant costs. Analysis of this model suggests that there is a need for much more sophisticated technology development strategies to exploit the consequences of these scaling effects. We suggest three potential strategies in terms of the scaling parameters of the system.
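
    The trade-off described here can be made concrete with a toy unit-cost model: processing cost per unit falls as S**(a-1) with a < 1 (economies of scale), while transport cost per unit rises roughly as S**0.5 when biomass is collected from an area around the plant. All constants below are illustrative assumptions, not values from the paper.

      # Toy optimum-plant-size model: unit cost = processing + transport.
      from scipy.optimize import minimize_scalar

      a, k_plant, k_transport = 0.7, 100.0, 1.0   # illustrative assumptions

      def unit_cost(size):
          # Economies of scale in the plant, diseconomies in biomass haulage.
          return k_plant * size ** (a - 1) + k_transport * size ** 0.5

      opt = minimize_scalar(unit_cost, bounds=(1.0, 1e6), method="bounded")
      print("optimum plant size:", round(opt.x, 1), "unit cost:", round(opt.fun, 2))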

  14. Remediation of metal-contaminated urban soil using flotation technique.

    PubMed

    Dermont, G; Bergeron, M; Richer-Laflèche, M; Mercier, G

    2010-02-01

    A soil washing process using froth flotation technique was evaluated for the removal of arsenic, cadmium, copper, lead, and zinc from a highly contaminated urban soil (brownfield) after crushing of the particle-size fractions >250 µm. The metal contaminants were in particulate forms and distributed in all the particle-size fractions. The particle-by-particle study with SEM-EDS showed that Zn was mainly present as sphalerite (ZnS), whereas Cu and Pb were mainly speciated as various oxide/carbonate compounds. The influence of surfactant collector type (non-ionic and anionic), collector dosage, pulp pH, a chemical activation step (sulfidization), particle size, and process time on metal removal efficiency and flotation selectivity was studied. Satisfactory results in metal recovery (42-52%), flotation selectivity (concentration factor >2.5), and volume reduction (>80%) were obtained with the anionic collector (potassium amyl xanthate). The transportation mechanisms involved in the separation process (i.e., true flotation and mechanical entrainment) were evaluated through the pulp chemistry, the metal speciation, the metal distribution in the particle-size fractions, and the separation selectivity indices of Zn/Ca and Zn/Fe. The investigations showed that a great proportion of metal-containing particles were recovered in the froth layer by the entrainment mechanism rather than by true flotation. The non-selective entrainment of the fine particles (<20 µm) caused a drop in flotation selectivity, especially with a long flotation time (>5 min) and when a high collector dose was used. The intermediate particle-size fraction (20-125 µm) showed the best flotation selectivity.

  15. 3D-Printed Visceral Aneurysm Models Based on CT Data for Simulations of Endovascular Embolization: Evaluation of Size and Shape Accuracy.

    PubMed

    Shibata, Eisuke; Takao, Hidemasa; Amemiya, Shiori; Ohtomo, Kuni

    2017-08-01

    The objective of this study is to verify the accuracy of 3D-printed hollow models of visceral aneurysms created from CT angiography (CTA) data, by evaluating the sizes and shapes of aneurysms and related arteries. From March 2006 to August 2015, 19 true visceral aneurysms were embolized via interventional radiologic treatment provided by the radiology department at our institution; aneurysms with bleeding (n = 3) or without thin-slice (< 1 mm) preembolization CT data (n = 1) were excluded. A total of 15 consecutive true visceral aneurysms from 11 patients (eight women and three men; mean age, 61 years; range, 53-72 years) whose aneurysms were embolized via endovascular procedures were included in this study. Three-dimensional-printed hollow models of aneurysms and related arteries were fabricated from CTA data. The accuracies of the sizes and shapes of the 3D-printed hollow models were evaluated using the nonparametric Wilcoxon signed rank test and the Dice coefficient index. Aneurysm sizes ranged from 138 to 18,691 mm³ (diameter, 6.1-35.7 mm), and no statistically significant difference was noted between patient data and 3D-printed models (p = 0.56). Shape analysis of whole aneurysms and related arteries indicated a high level of accuracy (Dice coefficient index value, 84.2-95.8%; mean [± SD], 91.1 ± 4.1%). The sizes and shapes of 3D-printed hollow visceral aneurysm models created from CTA data were accurate. These models can be used for simulations of endovascular treatment and to provide precise anatomic information.
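
    The Dice coefficient index used above is simple to compute for two segmentations of the same grid: twice the overlap divided by the sum of the two sizes. A generic sketch with toy boolean masks, not the study's evaluation code.

      import numpy as np

      def dice(a, b):
          # Dice coefficient: 2*|A intersect B| / (|A| + |B|).
          a, b = np.asarray(a, bool), np.asarray(b, bool)
          return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

      model = np.array([[1, 1, 0], [1, 1, 0]])     # toy printed-model mask
      patient = np.array([[1, 1, 0], [0, 1, 1]])   # toy patient-data mask
      print(round(dice(model, patient), 3))        # 0.75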

  16. False memory and level of processing effect: an event-related potential study.

    PubMed

    Beato, Maria Soledad; Boldini, Angela; Cadavid, Sara

    2012-09-12

    Event-related potentials (ERPs) were used to determine the effects of level of processing on true and false memory, using the Deese-Roediger-McDermott (DRM) paradigm. In the DRM paradigm, lists of words highly associated to a single nonpresented word (the 'critical lure') are studied and, in a subsequent memory test, critical lures are often falsely remembered. Lists with three critical lures per list were auditorily presented here to participants who studied them with either a shallow (saying whether the word contained the letter 'o') or a deep (creating a mental image of the word) processing task. Visual presentation modality was used on a final recognition test. True recognition of studied words was significantly higher after deep encoding, whereas false recognition of nonpresented critical lures was similar in both experimental groups. At the ERP level, true and false recognition showed similar patterns: no FN400 effect was found, whereas comparable left parietal and late right frontal old/new effects were found for true and false recognition in both experimental conditions. Items studied under shallow encoding conditions elicited more positive ERP than items studied under deep encoding conditions at a 1000-1500 ms interval. These ERP results suggest that true and false recognition share some common underlying processes. Differential effects of level of processing on true and false memory were found only at the behavioral level but not at the ERP level.

  17. Grid-Independent Large-Eddy Simulation in Turbulent Channel Flow using Three-Dimensional Explicit Filtering

    NASA Technical Reports Server (NTRS)

    Gullbrand, Jessica

    2003-01-01

    In this paper, turbulence-closure models are evaluated using the 'true' LES approach in turbulent channel flow. The study is an extension of the work presented by Gullbrand (2001), where fourth-order commutative filter functions are applied in three dimensions in a fourth-order finite-difference code. The true LES solution is the grid-independent solution to the filtered governing equations. The solution is obtained by keeping the filter width constant while the computational grid is refined. As the grid is refined, the solution converges towards the true LES solution. The true LES solution will depend on the filter width used, but will be independent of the grid resolution. In traditional LES, because the filter is implicit and directly connected to the grid spacing, the solution converges towards a direct numerical simulation (DNS) as the grid is refined, and not towards the solution of the filtered Navier-Stokes equations. The effect of turbulence-closure models is therefore difficult to determine in traditional LES because, as the grid is refined, more turbulence length scales are resolved and less influence from the models is expected. In contrast, in the true LES formulation, the explicit filter eliminates all scales that are smaller than the filter cutoff, regardless of the grid resolution. This ensures that the resolved length-scales do not vary as the grid resolution is changed. In true LES, the cell size must be smaller than or equal to the cutoff length scale of the filter function. The turbulence-closure models investigated are the dynamic Smagorinsky model (DSM), the dynamic mixed model (DMM), and the dynamic reconstruction model (DRM). These turbulence models were previously studied using two-dimensional explicit filtering in turbulent channel flow by Gullbrand & Chow (2002). The DSM by Germano et al. (1991) is used as the USFS model in all the simulations. This enables evaluation of different reconstruction models for the RSFS stresses. The DMM consists of the scale-similarity model (SSM) by Bardina et al. (1983), which is an RSFS model, in linear combination with the DSM. In the DRM, the RSFS stresses are modeled by using an estimate of the unfiltered velocity in the unclosed term, while the USFS stresses are modeled by the DSM. The DSM and the DMM are two commonly used turbulence-closure models, while the DRM is a more recent model.

  18. Analysis of Measurement Error and Estimator Shape in Three-Point Hydraulic Gradient Estimators

    NASA Astrophysics Data System (ADS)

    McKenna, S. A.; Wahi, A. K.

    2003-12-01

    Three spatially separated measurements of head provide a means of estimating the magnitude and orientation of the hydraulic gradient. Previous work with three-point estimators has focused on the effect of the size (area) of the three-point estimator and measurement error on the final estimates of the gradient magnitude and orientation in laboratory and field studies (Mizell, 1980; Silliman and Frost, 1995; Silliman and Mantz, 2000; Ruskauff and Rumbaugh, 1996). However, a systematic analysis of the combined effects of measurement error, estimator shape and estimator orientation relative to the gradient orientation has not previously been conducted. Monte Carlo simulation with an underlying assumption of a homogeneous transmissivity field is used to examine the effects of uncorrelated measurement error on a series of eleven different three-point estimators having the same size but different shapes as a function of the orientation of the true gradient. Results show that the variance in the estimates of both the magnitude and the orientation increases linearly with the increase in measurement error, in agreement with the results of stochastic theory for estimators that are small relative to the correlation length of transmissivity (Mizell, 1980). Three-point estimator shapes with base to height ratios between 0.5 and 5.0 provide accurate estimates of magnitude and orientation across all orientations of the true gradient. As an example, these results are applied to data collected from a monitoring network of 25 wells at the WIPP site during two different time periods. The simulation results are used to reduce the set of all possible combinations of three wells to those combinations with acceptable measurement errors relative to the amount of head drop across the estimator and base to height ratios between 0.5 and 5.0. These limitations reduce the set of all possible well combinations by 98 percent and show that size alone, as defined by triangle area, is not a valid discriminator of whether or not the estimator provides accurate estimates of the gradient magnitude and orientation. This research was funded by WIPP programs administered by the U.S. Department of Energy. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
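
    For reference, the three-point estimator itself is a small linear solve: fit the plane h = a*x + b*y + c through the three head measurements and read the gradient magnitude and direction from (a, b). The sketch below is a generic version of the estimator analyzed above, with invented coordinates and heads.

      import numpy as np

      def three_point_gradient(xy, heads):
          # Fit h = a*x + b*y + c through three (x, y, head) measurements.
          x, y = np.asarray(xy, float).T
          A = np.column_stack([x, y, np.ones(3)])
          a, b, _ = np.linalg.solve(A, np.asarray(heads, float))
          magnitude = np.hypot(a, b)
          # Direction of flow, i.e. of decreasing head (degrees from +x axis).
          azimuth = np.degrees(np.arctan2(-b, -a))
          return magnitude, azimuth

      mag, az = three_point_gradient([(0, 0), (100, 0), (0, 100)],
                                     [10.0, 9.8, 9.9])   # invented values
      print(mag, az)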

  19. Scaling rates of true polar wander in convecting planets and moons

    NASA Astrophysics Data System (ADS)

    Rose, Ian; Buffett, Bruce

    2017-12-01

    Mass redistribution in the convecting mantle of a planet causes perturbations in its moment of inertia tensor. Conservation of angular momentum dictates that these perturbations change the direction of the rotation vector of the planet, a process known as true polar wander (TPW). Although the existence of TPW on Earth is firmly established, its rate and magnitude over geologic time scales remain controversial. Here we present scaling analyses and numerical simulations of TPW due to mantle convection over a range of parameter space relevant to planetary interiors. For simple rotating convection, we identify a set of dimensionless parameters that fully characterize true polar wander. We use these parameters to define timescales for the growth of moment of inertia perturbations due to convection and for their relaxation due to true polar wander. These timescales, as well as the relative sizes of convective anomalies, control the rate and magnitude of TPW. This analysis also clarifies the nature of so-called "inertial interchange" TPW events, and relates them to a broader class of events that enable large and often rapid TPW. We expect these events to have been more frequent in Earth's past.

  20. Only pick the right grains: Modelling the bias due to subjective grain-size interval selection for chronometric and fingerprinting approaches.

    NASA Astrophysics Data System (ADS)

    Dietze, Michael; Fuchs, Margret; Kreutzer, Sebastian

    2016-04-01

    Many modern approaches to radiometric dating or geochemical fingerprinting rely on sampling sedimentary deposits. A key assumption of most concepts is that the extracted grain-size fraction of the sampled sediment adequately represents the actual process to be dated or the source area to be fingerprinted. However, these assumptions are not always well constrained. Rather, they have to align with arbitrary, method-determined size intervals, such as "coarse grain" or "fine grain", sometimes with differing definitions. Such arbitrary intervals violate basic process-based concepts of sediment transport and can thus introduce significant bias into the analysis outcome (i.e., a deviation of the measured value from the true value). We present a flexible numerical framework (numOlum) for the statistical programming language R that allows quantifying the bias due to any given analysis size interval for different types of sediment deposits. This framework is applied to synthetic samples from the realms of luminescence dating and geochemical fingerprinting, i.e., a virtual reworked loess section. We show independent validation data from artificially dosed and subsequently mixed grain-size proportions, and we present a statistical approach (end-member modelling analysis, EMMA) that allows accounting for the effect of measuring the compound dosimetric history or geochemical composition of a sample. EMMA separates polymodal grain-size distributions into the underlying transport process-related distributions and their contribution to each sample. These underlying distributions can then be used to adjust grain-size preparation intervals to minimise the incorporation of "undesired" grain-size fractions.

  1. [The relationship between the placebo effect and spontaneous improvement in research on antidepressants. Are placebos powerless?].

    PubMed

    Hougaard, Esben

    2005-08-08

    Clinical trials of antidepressant medications have generally found large changes in groups given a placebo, which may be due to either spontaneous remission or a true placebo effect. This paper reviews the evidence for a true placebo effect in the treatment of unipolar depressed outpatients. Although there is no evidence from experimental studies, a rather substantial amount of circumstantial evidence indicates a true placebo effect. This article raises the question of whether it is meaningful to require experimental evidence for a loose and unspecified concept involving varying components such as placebo.

  2. 32 CFR 644.488 - Soliciting applications for purchase of chapels.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 32 National Defense 4 2014-07-01 2013-07-01 true Soliciting applications for purchase of chapels. 644.488 Section 644.488 National Defense Department of Defense (Continued) DEPARTMENT OF THE ARMY... applying. (c) Membership size of the church/organization. (d) History of the church/organization and when...

  3. 32 CFR 644.488 - Soliciting applications for purchase of chapels.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 32 National Defense 4 2012-07-01 2011-07-01 true Soliciting applications for purchase of chapels. 644.488 Section 644.488 National Defense Department of Defense (Continued) DEPARTMENT OF THE ARMY... applying. (c) Membership size of the church/organization. (d) History of the church/organization and when...

  4. 32 CFR 644.488 - Soliciting applications for purchase of chapels.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 32 National Defense 4 2010-07-01 2010-07-01 true Soliciting applications for purchase of chapels. 644.488 Section 644.488 National Defense Department of Defense (Continued) DEPARTMENT OF THE ARMY... applying. (c) Membership size of the church/organization. (d) History of the church/organization and when...

  5. Portrait of Pluto and Charon

    NASA Image and Video Library

    2015-07-17

    These two images of Pluto and Charon were collected separately by NASA's New Horizons spacecraft during approach on July 13 and July 14, 2015. The relative reflectivity, sizes, separation, and orientations are approximated in this composite image, and the bodies are shown in approximate true color. http://photojournal.jpl.nasa.gov/catalog/PIA19717

  6. Sample sizes needed for specified margins of relative error in the estimates of the repeatability and reproducibility standard deviations.

    PubMed

    McClure, Foster D; Lee, Jung K

    2005-01-01

    Sample size formulas are developed to estimate the repeatability and reproducibility standard deviations (Sr and SR) such that the actual errors in Sr and SR relative to their respective true values, σr and σR, are at predefined levels. The statistical consequences associated with the AOAC INTERNATIONAL required sample size to validate an analytical method are discussed. In addition, formulas to estimate the uncertainties of Sr and SR were derived and are provided as supporting documentation.
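
    In the large-sample limit the idea reduces to a one-line rule: for normal data, the relative standard error of an estimated standard deviation is roughly 1/sqrt(2(n-1)), so the number of replicates needed for a relative-error margin E at confidence 1-alpha follows directly. The sketch below uses this normal approximation; it is not the paper's exact formula.

      # Replicates needed so the SD estimate is within a relative margin E of
      # its true value at confidence 1-alpha, via the normal approximation
      # rel. SE(s) ~ 1/sqrt(2*(n-1)).
      import math
      from scipy.stats import norm

      def n_for_relative_sd_error(E, alpha=0.05):
          z = norm.ppf(1 - alpha / 2)
          return math.ceil(1 + 0.5 * (z / E) ** 2)

      print(n_for_relative_sd_error(0.20))   # ~20% margin -> about 50 replicates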

  7. Determination of the rCBF in the Amygdala and Rhinal Cortex Using a FAIR-TrueFISP Sequence

    PubMed Central

    Martirosian, Petros; Klose, Uwe; Nägele, Thomas; Schick, Fritz; Ernemann, Ulrike

    2011-01-01

    Objective Brain perfusion can be assessed non-invasively by modern arterial spin labeling MRI. The FAIR (flow-sensitive alternating inversion recovery)-TrueFISP (true fast imaging in steady precession) technique was applied for regional assessment of cerebral blood flow in brain areas close to the skull base, since this approach provides low sensitivity to magnetic susceptibility effects. Investigation of the rhinal cortex and the amygdala is potentially important for the diagnosis of and research on dementia in its early stages. Materials and Methods Twenty-three subjects with no structural or psychological impairment were investigated. FAIR-TrueFISP quantitative perfusion data were evaluated in the amygdala on both sides and in the pons. A radiofrequency FOCI (frequency offset corrected inversion) pulse was used for slice-selective inversion. After a time delay of 1.2 sec, data acquisition began. Imaging slice thickness was 5 mm and the inversion slab thickness for slice-selective inversion was 12.5 mm. The image matrix size for perfusion images was 64 × 64 with a field of view of 256 × 256 mm, resulting in a spatial resolution of 4 × 4 × 5 mm. Repetition time was 4.8 ms; echo time was 2.4 ms. Acquisition time for the 50 sets of FAIR images was 6:56 min. Data were compared with perfusion data from the literature. Results Perfusion values in the right amygdala, left amygdala and pons were 65.2 (± 18.2) mL/100 g/minute, 64.6 (± 21.0) mL/100 g/minute, and 74.4 (± 19.3) mL/100 g/minute, respectively. These values were higher than formerly published data using continuous arterial spin labeling but similar to 15O-PET (oxygen-15 positron emission tomography) data. Conclusion The FAIR-TrueFISP approach is feasible for the quantitative assessment of perfusion in the amygdala. Data are comparable with formerly published data from the literature. The applied technique provided excellent image quality, even for brain regions located at the skull base in the vicinity of marked susceptibility steps. PMID:21927556

  8. Spatial Statistical Data Fusion (SSDF)

    NASA Technical Reports Server (NTRS)

    Braverman, Amy J.; Nguyen, Hai M.; Cressie, Noel

    2013-01-01

    As remote sensing for scientific purposes has transitioned from an experimental technology to an operational one, the selection of instruments has become more coordinated, so that the scientific community can exploit complementary measurements. However, technological and scientific heterogeneity across devices means that the statistical characteristics of the data they collect are different. The challenge addressed here is how to combine heterogeneous remote sensing data sets in a way that yields optimal statistical estimates of the underlying geophysical field, and provides rigorous uncertainty measures for those estimates. Different remote sensing data sets may have different spatial resolutions, different measurement error biases and variances, and other disparate characteristics. A state-of-the-art spatial statistical model was used to relate the true, but not directly observed, geophysical field to noisy, spatial aggregates observed by remote sensing instruments. The spatial covariances of the true field and the covariances of the true field with the observations were modeled. The observations are spatial averages of the true field values, over pixels, with different measurement noise superimposed. A kriging framework is used to infer optimal (minimum mean squared error and unbiased) estimates of the true field at point locations from pixel-level, noisy observations. A key feature of the spatial statistical model is the spatial mixed effects model that underlies it. The approach models the spatial covariance function of the underlying field using linear combinations of basis functions of fixed size. Approaches based on kriging require the inversion of very large spatial covariance matrices, and this is usually done by making simplifying assumptions about spatial covariance structure that simply do not hold for geophysical variables. In contrast, this method does not require these assumptions, and is also computationally much faster. This method is fundamentally different from other approaches to data fusion for remote sensing data because it is inferential rather than merely descriptive. All approaches combine data in a way that minimizes some specified loss function. Most of these are more or less ad hoc criteria based on what looks good to the eye, or some criteria that relate only to the data at hand.

  9. Estimated abundance of wild burros surveyed on Bureau of Land Management Lands in 2014

    USGS Publications Warehouse

    Griffin, Paul C.

    2015-01-01

    The Bureau of Land Management (BLM) requires accurate estimates of the numbers of wild horses (Equus ferus caballus) and burros (Equus asinus) living on the lands it manages. For over ten years, BLM in Arizona has used the simultaneous double-observer method of recording wild burros during aerial surveys and has reported population estimates for those surveys that come from two formulations of a Lincoln-Petersen type of analysis (Graham and Bell, 1989). In this report, I provide those same two types of burro population analysis for 2014 aerial survey data from six herd management areas (HMAs) in Arizona, California, Nevada, and Utah. I also provide burro population estimates based on a different form of simultaneous double-observer analysis, now in widespread use for wild horse surveys that takes into account the potential effects on detection probability of sighting covariates including group size, distance, vegetative cover, and other factors (Huggins, 1989, 1991). The true number of burros present in the six areas surveyed was not known, so population estimates made with these three types of analyses cannot be directly tested for accuracy in this report. I discuss theoretical reasons why the Huggins (1989, 1991) type of analysis should provide less biased estimates of population size than the Lincoln-Petersen analyses and why estimates from all forms of double-observer analyses are likely to be lower than the true number of animals present in the surveyed areas. I note reasons why I suggest using burro observations made at all available distances in analyses, not only those within 200 meters of the flight path. For all analytical methods, small sample sizes of observed groups can be problematic, but that sample size can be increased over time for Huggins (1989, 1991) analyses by pooling observations. I note ways by which burro population estimates could be tested for accuracy when there are radio-collared animals in the population or when there are simultaneous double-observer surveys before and after a burro gather and removal.
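
    The Lincoln-Petersen type of analysis mentioned above has a compact closed form for simultaneous double-observer counts; the sketch below is the textbook estimator (Graham and Bell, 1989), not BLM's analysis code, and the counts are invented.

      # Lincoln-Petersen-type estimate from simultaneous double-observer
      # counts: groups seen by observer 1 (n1), by observer 2 (n2), by both (b).
      def lincoln_petersen(n1, n2, b):
          p1 = b / n2          # detection probability of observer 1
          p2 = b / n1          # detection probability of observer 2
          n_hat = n1 * n2 / b  # estimated number of groups present
          return n_hat, p1, p2

      # e.g. 60 and 55 groups seen, 45 by both -> ~73 groups estimated
      print(lincoln_petersen(60, 55, 45))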

  10. Imputation of a true endpoint from a surrogate: application to a cluster randomized controlled trial with partial information on the true endpoint.

    PubMed

    Nixon, Richard M; Duffy, Stephen W; Fender, Guy R K

    2003-09-24

    The Anglia Menorrhagia Education Study (AMES) is a randomized controlled trial testing the effectiveness of an education package applied to general practices. Binary data are available from two sources: general practitioner reported referrals to hospital, and referrals to hospital determined by independent audit of the general practices. The former may be regarded as a surrogate for the latter, which is regarded as the true endpoint. Data are available for the true endpoint on only a subset of the practices, but there are surrogate data for almost all of the audited practices and for most of the remaining practices. The aim of this paper was to estimate the treatment effect using data from every practice in the study. Where the true endpoint was not available, it was estimated by three approaches: a regression method, multiple imputation, and a full likelihood model. Including the surrogate data in the analysis yielded an estimate of the treatment effect that was more precise than an estimate gained from using the true endpoint data alone. The full likelihood method provides a new imputation tool at the disposal of trials with surrogate data.
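    A minimal sketch of the regression-imputation idea for a binary true endpoint, assuming a simple logistic model and invented data; the trial's actual regression, multiple-imputation, and full-likelihood implementations are not specified in the abstract:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Invented data: a surrogate (GP-reported referral) for all practices, and a
# true endpoint (audited referral) observed only on a subset.
n = 200
surrogate = rng.binomial(1, 0.3, n)
true_end = np.where(surrogate == 1,
                    rng.binomial(1, 0.8, n),    # surrogate mostly tracks truth
                    rng.binomial(1, 0.1, n)).astype(float)
observed = rng.random(n) < 0.4                  # audit covers ~40% of practices
true_end[~observed] = np.nan

# Fit P(true endpoint | surrogate) on the audited subset.
model = LogisticRegression().fit(surrogate[observed].reshape(-1, 1),
                                 true_end[observed])
p_missing = model.predict_proba(surrogate[~observed].reshape(-1, 1))[:, 1]

# Single stochastic imputation; multiple imputation would repeat this step
# and pool estimates across several completed data sets.
imputed = true_end.copy()
imputed[~observed] = rng.binomial(1, p_missing)
print(imputed.mean())
```

    A full likelihood model would presumably treat the unobserved endpoints as latent rather than as fixed imputed values, propagating the imputation uncertainty into the treatment-effect estimate.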

  11. Recovering 3D particle size distributions from 2D sections

    NASA Astrophysics Data System (ADS)

    Cuzzi, Jeffrey N.; Olson, Daniel M.

    2017-03-01

    We discuss different ways to convert observed, apparent particle size distributions from 2D sections (thin sections, SEM maps on planar surfaces, etc.) into true 3D particle size distributions. We give a simple, flexible, and practical method to do this; show which of these techniques gives the most faithful conversions; and provide (online) short computer codes to calculate both 2D-3D recoveries and simulations of 2D observations by random sectioning. The most important systematic bias of 2D sectioning, from the standpoint of most chondrite studies, is an overestimate of the abundance of the larger particles. We show that fairly good recoveries can be achieved from observed size distributions containing 100-300 individual measurements of apparent particle diameter.
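    The cut-section effect the authors describe can be reproduced with a few lines of Monte Carlo, assuming spheres sectioned by random planes (an assumption of this sketch, not necessarily the authors' model): a sphere of true diameter D cut at a random offset shows an apparent diameter d = D·sqrt(1 - u²) with u uniform on [0, 1):

```python
import numpy as np

rng = np.random.default_rng(2)

# True 3-D diameters: a lognormal population (illustrative choice).
D = rng.lognormal(mean=1.0, sigma=0.4, size=100_000)

# A random plane that hits a sphere of diameter D cuts it at an offset
# uniform in [0, D/2]; the apparent (2-D) diameter is then:
u = rng.random(D.size)
d = D * np.sqrt(1.0 - u**2)

# Larger spheres are also more likely to be hit by the section plane at all;
# weighting the hit probability by D reproduces the oversampling of large
# particles noted in the abstract.
hit = rng.random(D.size) < D / D.max()
print("true mean D:", D.mean())
print("apparent mean d (size-biased):", d[hit].mean())
```

    Recovery methods must undo both effects at once: the downward bias of cut diameters and the upward bias of size-dependent intersection probability.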

  12. A measurement error model for physical activity level as measured by a questionnaire with application to the 1999-2006 NHANES questionnaire.

    PubMed

    Tooze, Janet A; Troiano, Richard P; Carroll, Raymond J; Moshfegh, Alanna J; Freedman, Laurence S

    2013-06-01

    Systematic investigations into the structure of measurement error of physical activity questionnaires are lacking. We propose a measurement error model for a physical activity questionnaire that uses physical activity level (the ratio of total energy expenditure to basal energy expenditure) to relate questionnaire-based reports of physical activity level to true physical activity levels. The 1999-2006 National Health and Nutrition Examination Survey physical activity questionnaire was administered to 433 participants aged 40-69 years in the Observing Protein and Energy Nutrition (OPEN) Study (Maryland, 1999-2000). Valid estimates of participants' total energy expenditure were also available from doubly labeled water, and basal energy expenditure was estimated from an equation; the ratio of those measures estimated true physical activity level ("truth"). We present a measurement error model that accommodates the mixture of errors that arise from assuming a classical measurement error model for doubly labeled water and a Berkson error model for the equation used to estimate basal energy expenditure. The method was then applied to the OPEN Study. Correlations between the questionnaire-based physical activity level and truth were modest (r = 0.32-0.41); attenuation factors (0.43-0.73) indicate that the use of questionnaire-based physical activity level would lead to attenuated estimates of effect size. Results suggest that sample sizes for estimating relationships between physical activity level and disease should be inflated, and that regression calibration can be used to provide measurement error-adjusted estimates of relationships between physical activity and disease.
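    The attenuation and regression-calibration logic in this abstract can be made concrete with a toy simulation. The sketch below assumes purely classical error with invented variances, a simplification of the mixed classical/Berkson structure the authors actually model:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5_000

# "True" physical activity level, and a questionnaire measure with
# classical (independent, additive) error.
T = rng.normal(1.7, 0.2, n)
Q = T + rng.normal(0.0, 0.25, n)

# Outcome truly related to T with slope 1.0 (illustrative).
y = 1.0 * T + rng.normal(0.0, 0.5, n)

# Naive regression of y on Q attenuates the slope by
# lambda = var(T) / (var(T) + var(error)).
beta_naive = np.cov(Q, y)[0, 1] / np.var(Q)
lam = np.var(T) / np.var(Q)              # estimable in a validation study
beta_calibrated = beta_naive / lam       # regression calibration
print(beta_naive, lam, beta_calibrated)  # ~0.39, ~0.39, ~1.0
```

    The reported attenuation factors (0.43-0.73) play the role of lam above, which is why sample sizes for detecting activity-disease relationships must be inflated and why regression calibration can de-attenuate the estimates.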

  13. Significance of the model considering mixed grain-size for inverse analysis of turbidites

    NASA Astrophysics Data System (ADS)

    Nakao, K.; Naruse, H.; Tokuhashi, S., Sr.

    2016-12-01

    A method for inverse analysis of turbidity currents is proposed for application to field observations. Estimating the initial conditions of catastrophic events from field observations has been important in sedimentological research. For instance, inverse analyses have been used to estimate hydraulic conditions from topographic observations of pyroclastic flows (Rossano et al., 1996), real-time monitored debris-flow events (Fraccarollo and Papa, 2000), tsunami deposits (Jaffe and Gelfenbaum, 2007), and ancient turbidites (Falcini et al., 2009). These inverse analyses need forward models, and most turbidity current models employ uniform grain-size particles. Turbidity currents, however, are best characterized by the variation of their grain-size distribution. Although numerical models with mixed grain-size particles exist, they are difficult to apply to natural examples because of their computational cost (Lesshaft et al., 2011). Here we extend a turbidity current model based on the non-steady 1D shallow-water equation to mixed grain-size particles at low computational cost and apply the model to inverse analysis. In this study, we compared two forward models, considering uniform and mixed grain-size particles respectively. We adopted an inverse analysis based on the Simplex method, which optimizes the initial conditions (thickness, depth-averaged velocity, and depth-averaged volumetric concentration of a turbidity current) from multiple starting points, and employed the result of the forward model [h: 2.0 m, U: 5.0 m/s, C: 0.01%] as reference data. The result shows that the inverse analysis using the mixed grain-size model recovers the known initial condition of the reference data even when the optimization starts far from the true solution, whereas the inverse analysis using the uniform grain-size model requires starting parameters in a quite narrow range near the solution. The uniform grain-size model often converges to a local optimum that differs significantly from the true solution. In conclusion, we propose an optimization method based on the model considering mixed grain-size particles, and show its application to examples of turbidites in the Kiyosumi Formation, Boso Peninsula, Japan.
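    The multi-start Simplex strategy described here is generic and easy to sketch. The following uses scipy's Nelder-Mead simplex with a toy quadratic-misfit surrogate in place of the shallow-water forward model; the surrogate, starting points, and parameter values are all invented:

```python
import numpy as np
from scipy.optimize import minimize

# Toy surrogate for the 1-D shallow-water forward model: it maps an initial
# condition (h, U, C) to a synthetic "deposit profile". The real model
# integrates non-steady shallow-water equations; this stand-in only
# illustrates the multi-start Simplex (Nelder-Mead) inversion strategy.
def forward(params):
    h, U, C = params
    x = np.linspace(0.0, 1.0, 50)
    return h * np.exp(-x / max(abs(U), 1e-9)) + 1000.0 * C * x

reference = forward([2.0, 5.0, 0.0001])    # "observed" data, known answer

def misfit(params):
    return np.sum((forward(params) - reference) ** 2)

# Multi-point start: run the simplex from several initial guesses and keep
# the best optimum, reducing the risk of accepting a local minimum.
starts = [[0.5, 1.0, 0.001], [5.0, 10.0, 0.00001], [1.0, 8.0, 0.0005]]
best = min((minimize(misfit, s, method="Nelder-Mead") for s in starts),
           key=lambda r: r.fun)
print(best.x)    # should land near (2.0, 5.0, 0.0001) for this surrogate
```

    The abstract's finding maps onto this picture directly: a forward model whose misfit surface has fewer spurious minima (the mixed grain-size model) lets the simplex succeed from distant starting points.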

  14. Feeling the future: A meta-analysis of 90 experiments on the anomalous anticipation of random future events

    PubMed Central

    Bem, Daryl; Tressoldi, Patrizio; Rabeyron, Thomas; Duggan, Michael

    2016-01-01

    In 2011, one of the authors (DJB) published a report of nine experiments in the Journal of Personality and Social Psychology purporting to demonstrate that an individual’s cognitive and affective responses can be influenced by randomly selected stimulus events that do not occur until after his or her responses have already been made and recorded, a generalized variant of the phenomenon traditionally denoted by the term precognition. To encourage replications, all materials needed to conduct them were made available on request. We here report a meta-analysis of 90 experiments from 33 laboratories in 14 countries which yielded an overall effect greater than 6 sigma, z = 6.40, p = 1.2 × 10⁻¹⁰, with an effect size (Hedges’ g) of 0.09. A Bayesian analysis yielded a Bayes Factor of 5.1 × 10⁹, greatly exceeding the criterion value of 100 for “decisive evidence” in support of the experimental hypothesis. When DJB’s original experiments are excluded, the combined effect size for replications by independent investigators is 0.06, z = 4.16, p = 1.1 × 10⁻⁵, and the BF value is 3,853, again exceeding the criterion for “decisive evidence.” The number of potentially unretrieved experiments required to reduce the overall effect size of the complete database to a trivial value of 0.01 is 544, and seven of eight additional statistical tests support the conclusion that the database is not significantly compromised by either selection bias or by intense “p-hacking”—the selective suppression of findings or analyses that failed to yield statistical significance. P-curve analysis, a recently introduced statistical technique, estimates the true effect size of the experiments to be 0.20 for the complete database and 0.24 for the independent replications, virtually identical to the effect size of DJB’s original experiments (0.22) and the closely related “presentiment” experiments (0.21). We discuss the controversial status of precognition and other anomalous effects collectively known as psi. PMID:26834996
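    For readers unfamiliar with the pooling step behind numbers like z = 6.40, a generic inverse-variance fixed-effect combination looks like the sketch below; the per-experiment values are invented, and the published analysis details may differ:

```python
import numpy as np
from scipy.stats import norm

# Illustrative per-experiment effect sizes (Hedges' g) and their variances;
# in a real meta-analysis these come from each study's summary statistics.
g = np.array([0.12, 0.05, 0.18, -0.02, 0.09, 0.11])
v = np.array([0.004, 0.006, 0.009, 0.005, 0.003, 0.007])

w = 1.0 / v                          # inverse-variance weights
g_pooled = np.sum(w * g) / np.sum(w)
se_pooled = np.sqrt(1.0 / np.sum(w))
z = g_pooled / se_pooled
p = norm.sf(z)                       # one-sided p-value, as in the abstract
print(f"g = {g_pooled:.3f}, z = {z:.2f}, p = {p:.2g}")
```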

  15. The effect of PeakForce tapping mode AFM imaging on the apparent shape of surface nanobubbles.

    PubMed

    Walczyk, Wiktoria; Schön, Peter M; Schönherr, Holger

    2013-05-08

    Until now, TM AFM (tapping mode or intermittent contact mode atomic force microscopy) has been the most frequently applied direct imaging technique for analyzing surface nanobubbles at the solid-aqueous interface. While the presence and number density of nanobubbles can be unequivocally detected and estimated, it remains unclear how much the a priori invasive nature of AFM affects the apparent shapes and dimensions of the nanobubbles. To successfully address the unsolved questions in this field, accurate knowledge of the nanobubbles' dimensions, radii of curvature, etc., is necessary. In this contribution we present a comparative study of surface nanobubbles on HOPG (highly oriented pyrolytic graphite) in water acquired with (i) TM AFM and (ii) the recently introduced PFT (PeakForce tapping) mode, in which the force exerted on the nanobubbles, rather than the amplitude of the resonating cantilever, is used as the AFM feedback parameter during imaging. In particular, we analyzed how the apparent size and shape of nanobubbles depend on the maximum applied force in PFT AFM. Even for forces as small as 73 pN, the nanobubbles appeared smaller than their true size, which was estimated from an extrapolation of the bubble height to zero applied force. In addition, the size underestimation was found to be more pronounced for larger bubbles. The extrapolated true nanoscopic contact angles for nanobubbles on HOPG, measured in PFT AFM, ranged from 145° to 175° and were only slightly underestimated by scanning with non-zero forces. This result was comparable to the nanoscopic contact angles of 160°-175° measured using TM AFM in the same set of experiments. Both values disagree, in accordance with the literature, with the macroscopic contact angle of water on HOPG, measured here to be 63° ± 2°.
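    The zero-force extrapolation mentioned above amounts to a linear fit; a minimal sketch with invented force-height pairs:

```python
import numpy as np

# Invented (force, apparent height) pairs for one nanobubble; the true height
# is estimated by extrapolating a linear fit to zero applied force.
force_pN = np.array([73, 150, 300, 500, 750])
height_nm = np.array([18.2, 17.1, 14.9, 12.0, 8.4])

slope, intercept = np.polyfit(force_pN, height_nm, 1)
print(f"extrapolated true height at F = 0: {intercept:.1f} nm")
```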

  16. Feeding, Swimming and Navigation of Colonial Microorganisms

    NASA Astrophysics Data System (ADS)

    Kirkegaard, Julius; Bouillant, Ambre; Marron, Alan; Leptos, Kyriacos; Goldstein, Raymond

    2016-11-01

    Animals are multicellular in nature, but evolved from unicellular organisms. Among the closest relatives of animals, the choanoflagellates, the unicellular species Salpingoeca rosetta has the ability to form colonies, resembling true multicellularity. In this work we use a combination of experiments, theory, and simulations to understand the physical differences that arise from feeding, swimming, and navigating as colonies instead of as single cells. We show that the feeding efficiency decreases with colony size, for distinct reasons in the small and large Péclet number limits, and we find that swimming as a colony changes the conventional active random walks of microorganisms to stochastic helices, but that this does not hinder effective navigation towards chemoattractants.

  17. FROZEN RAW FOODS AS SKIN-TESTING MATERIALS—Further Studies of Use in Cases of Allergic Disorders

    PubMed Central

    Ancona, Giacomo R.; Schumacher, Irwin C.

    1954-01-01

    In further studies on the use of frozen raw food as skin-testing material in patients with allergic disorders, the results of previous work were confirmed in a greater number of subjects using a larger number of foods: Tests with frozen raw foods by the scratch method induce true positive reactions of a larger size and in greater frequency than the corresponding commercial extracts by either the scratch or the intracutaneous method. Storage in the frozen state for several years does not affect the antigenic potency of the materials. The frozen preparations have caused no harmful effects in the subjects, are free from irritant properties, and are not urticariogenic. PMID:13126823

  18. LD Score Regression Distinguishes Confounding from Polygenicity in Genome-Wide Association Studies

    PubMed Central

    Bulik-Sullivan, Brendan K.; Loh, Po-Ru; Finucane, Hilary; Ripke, Stephan; Yang, Jian; Patterson, Nick; Daly, Mark J.; Price, Alkes L.; Neale, Benjamin M.

    2015-01-01

    Both polygenicity (i.e., many small genetic effects) and confounding biases, such as cryptic relatedness and population stratification, can yield an inflated distribution of test statistics in genome-wide association studies (GWAS). However, current methods cannot distinguish between inflation from true polygenic signal and bias. We have developed an approach, LD Score regression, that quantifies the contribution of each by examining the relationship between test statistics and linkage disequilibrium (LD). The LD Score regression intercept can be used to estimate a more powerful and accurate correction factor than genomic control. We find strong evidence that polygenicity accounts for the majority of test statistic inflation in many GWAS of large sample size. PMID:25642630
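    A bare-bones sketch of the regression at the heart of LD Score regression, with simulated data; the real method uses heteroskedasticity-aware weights and genome-wide LD scores, which this toy version omits:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy GWAS: the expected chi-square of SNP j rises linearly with its LD score
# under polygenicity: E[chi2_j] = intercept + (N * h2 / M) * ld_j.
M, N, h2 = 10_000, 50_000, 0.3
ld = rng.gamma(shape=5.0, scale=20.0, size=M)           # invented LD scores
chi2 = 1.0 + (N * h2 / M) * ld + rng.normal(0, 1.0, M)  # noisy statistics

# LD Score regression: the slope reflects true polygenic signal, while an
# intercept above 1 would indicate confounding (e.g., stratification).
slope, intercept = np.polyfit(ld, chi2, 1)
print(f"intercept = {intercept:.2f} (close to 1: little confounding)")
print(f"h2 estimate = {slope * M / N:.2f}")
```

    The key intuition is that confounding inflates test statistics uniformly (raising the intercept), whereas true polygenic signal inflates them in proportion to LD (raising the slope).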

  19. Defensive behaviors of the Oriental armyworm Mythimna separata in response to different parasitoid species (Hymenoptera: Braconidae).

    PubMed

    Zhou, Jincheng; Meng, Ling; Li, Baoping

    2017-01-01

    This study examined defensive behaviors of Mythimna separata (Lepidoptera: Noctuidae) larvae varying in body size in response to two parasitoids varying in oviposition behavior; Microplitis mediator females sting the host with the ovipositor after climbing onto it while Meteorus pulchricornis females make the sting by standing at a close distance from the host. Mythimna separata larvae exhibited evasive (escaping and dropping) and aggressive (thrashing) behaviors to defend themselves against parasitoids M. mediator and M. pulchricornis. Escaping and dropping did not change in probability with host body size or parasitoid species. Thrashing did not vary in frequency with host body size, yet performed more frequently in response to M. mediator than to M. pulchricornis. Parasitoid handling time and stinging likelihood varied depending not only on host body size but also on parasitoid species. Parasitoid handling time increased with host thrashing frequency, similar in slope for both parasitoids yet on a higher intercept for M. mediator than for M. pulchricornis. Handling time decreased with host size for M. pulchricornis but not for M. mediator. The likelihood of realizing an ovipositor sting decreased with thrashing frequency of both small and large hosts for M. pulchricornis, while this was true only for large hosts for M. mediator. Our results suggest that the thrashing behavior of M. separata larvae has a defensive effect on parasitism, depending on host body size and parasitoid species with different oviposition behaviors.

  20. Defensive behaviors of the Oriental armyworm Mythimna separata in response to different parasitoid species (Hymenoptera: Braconidae)

    PubMed Central

    Zhou, Jincheng; Meng, Ling

    2017-01-01

    This study examined defensive behaviors of Mythimna separata (Lepidoptera: Noctuidae) larvae varying in body size in response to two parasitoids varying in oviposition behavior; Microplitis mediator females sting the host with the ovipositor after climbing onto it while Meteorus pulchricornis females make the sting by standing at a close distance from the host. Mythimna separata larvae exhibited evasive (escaping and dropping) and aggressive (thrashing) behaviors to defend themselves against parasitoids M. mediator and M. pulchricornis. Escaping and dropping did not change in probability with host body size or parasitoid species. Thrashing did not vary in frequency with host body size, yet performed more frequently in response to M. mediator than to M. pulchricornis. Parasitoid handling time and stinging likelihood varied depending not only on host body size but also on parasitoid species. Parasitoid handling time increased with host thrashing frequency, similar in slope for both parasitoids yet on a higher intercept for M. mediator than for M. pulchricornis. Handling time decreased with host size for M. pulchricornis but not for M. mediator. The likelihood of realizing an ovipositor sting decreased with thrashing frequency of both small and large hosts for M. pulchricornis, while this was true only for large hosts for M. mediator. Our results suggest that the thrashing behavior of M. separata larvae has a defensive effect on parasitism, depending on host body size and parasitoid species with different oviposition behaviors. PMID:28852593

  1. SOME ENGINEERING PROPERTIES OF SHELLED AND KERNEL TEA (Camellia sinensis) SEEDS.

    PubMed

    Altuntas, Ebubekir; Yildiz, Merve

    2017-01-01

    Camellia sinensis is the source of tea leaves and is now an economic crop grown around the world. Tea seed oil has been used for cooking in China and other Asian countries for more than a thousand years. Tea is the most widely consumed beverage in the world after water. It is mainly produced in Asia and central Africa and exported throughout the world. Some engineering properties (size dimensions, sphericity, volume, bulk and true densities, friction coefficient, colour characteristics, and mechanical behaviour as rupture force) of shelled and kernel tea (Camellia sinensis) seeds were determined in this study. The shelled tea seeds used in this study were obtained from the East Black Sea Tea Cooperative Institution in Rize, Turkey. Shelled and kernel tea seeds were characterized as large and small sizes. The average geometric mean diameters of the shelled tea seeds were 15.8 mm (large size) and 10.7 mm (small size), with seed masses of 1.47 g and 0.49 g, respectively; the average geometric mean diameters of the kernel tea seeds were 11.8 mm (large size) and 8 mm (small size), with seed masses of 0.97 g and 0.31 g, respectively. The sphericity, surface area, and volume values were found to be higher for the large size than for the small size in both shelled and kernel tea samples. The colour intensity (chroma) of the shelled tea seeds ranged from 59.31 to 64.22 for the large size, while the chroma values of the kernel tea seeds ranged from 56.04 to 68.34. The rupture force values of kernel tea seeds were higher than those of shelled tea seeds for the large size along the X axis, whereas for large shelled tea seeds the rupture force values along the X axis were higher than those along the Y axis. The static coefficients of friction of shelled and kernel tea seeds, for both large and small sizes, were higher for rubber than for the other friction surfaces. These engineering properties, such as geometric mean diameter, sphericity, volume, bulk and true densities, coefficient of friction, L*, a*, b* colour characteristics, and rupture force of shelled and kernel tea (Camellia sinensis) seeds, will serve in the design of equipment used in postharvest treatments.
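    Assuming the standard postharvest-engineering definitions (the paper does not restate its formulas here), geometric mean diameter and sphericity follow from the three principal axis dimensions; the sample values below are invented:

```python
# Standard postharvest-engineering definitions (an assumption here; the
# paper does not restate its formulas): geometric mean diameter and
# sphericity from the three principal axis dimensions of a seed.
def geometric_mean_diameter(L, W, T):
    """Dg = (L * W * T) ** (1/3), all dimensions in the same unit."""
    return (L * W * T) ** (1.0 / 3.0)

def sphericity(L, W, T):
    """Dg / L: 1.0 for a perfect sphere, smaller for elongated seeds."""
    return geometric_mean_diameter(L, W, T) / L

# Invented axis dimensions (mm) for a large shelled seed:
L, W, T = 19.0, 15.5, 13.2
print(f"Dg = {geometric_mean_diameter(L, W, T):.1f} mm, "
      f"sphericity = {sphericity(L, W, T):.2f}")
```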

  2. Study samples are too small to produce sufficiently precise reliability coefficients.

    PubMed

    Charter, Richard A

    2003-04-01

    In a survey of journal articles, test manuals, and test critique books, the author found that a mean sample size (N) of 260 participants had been used for reliability studies on 742 tests. The distribution was skewed because the median sample size for the total sample was only 90. The median sample sizes for the internal consistency, retest, and interjudge reliabilities were 182, 64, and 36, respectively. The author presented sample size statistics for the various internal consistency methods and types of tests. In general, the author found that the sample sizes that were used in the internal consistency studies were too small to produce sufficiently precise reliability coefficients, which in turn could cause imprecise estimates of examinee true-score confidence intervals. The results also suggest that larger sample sizes have been used in the last decade compared with those that were used in earlier decades.

  3. Measurement of Flaw Size From Thermographic Data

    NASA Technical Reports Server (NTRS)

    Winfree, William P.; Zalameda, Joseph N.; Howell, Patricia A.

    2015-01-01

    Simple methods for reducing the pulsed thermographic responses of delaminations tend to overestimate the size of the delamination, since the heat diffuses in the plane parallel to the surface. The result is a temperature profile over the delamination which is larger than the delamination size. A variational approach is presented for reducing the thermographic data to produce an estimated size for a flaw that is much closer to the true size of the delamination. The method is based on an estimate for the thermal response that is a convolution of a Gaussian kernel with the shape of the flaw. The size is determined from both the temporal and spatial thermal response of the exterior surface above the delamination and constraints on the length of the contour surrounding the delamination. Examples of the application of the technique to simulation and experimental data are presented to investigate the limitations of the technique.
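    The forward model described (surface response approximately equal to the flaw shape convolved with a Gaussian kernel) is easy to illustrate, and it shows why simple thresholding oversizes the flaw; the kernel width, flaw size, and threshold below are invented:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Forward model from the abstract: the surface temperature footprint is
# approximately the flaw shape convolved with a Gaussian kernel (lateral
# heat diffusion). Kernel width and flaw geometry here are invented.
flaw = np.zeros((128, 128))
flaw[54:74, 48:80] = 1.0                 # true delamination: 20 x 32 pixels

response = gaussian_filter(flaw, sigma=6.0)

# A simple fixed-threshold contour drawn well down the blurred profile
# overestimates the flaw area, which is the bias the variational approach
# is designed to remove.
apparent = response > 0.2 * response.max()
print("true area:", int(flaw.sum()), "apparent area:", int(apparent.sum()))
```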

  4. Role of Computer Aided Diagnosis (CAD) in the detection of pulmonary nodules on 64 row multi detector computed tomography

    PubMed Central

    Prakashini, K; Babu, Satish; Rajgopal, KV; Kokila, K Raja

    2016-01-01

    Aims and Objectives: To determine the overall performance of an existing CAD algorithm with thin-section computed tomography (CT) in the detection of pulmonary nodules, and to evaluate detection sensitivity over a range of nodule densities, sizes, and locations. Materials and Methods: A cross-sectional prospective study was conducted on 20 patients with 322 suspected nodules who underwent diagnostic chest imaging using 64-row multi-detector CT. The examinations were evaluated on reconstructed images of 1.4 mm thickness and 0.7 mm interval. Detection of pulmonary nodules was assessed initially by a radiologist with 2 years of experience (RAD) and later by CAD lung-nodule software. CAD nodule candidates were then accepted or rejected accordingly. Detected nodules were classified based on their size, density, and location. The performance of the RAD and the CAD system was compared against the gold standard, namely true nodules confirmed by consensus of the senior RAD and CAD together. The overall sensitivity and false-positive (FP) rate of the CAD software were calculated. Observations and Results: Of the 322 suspected nodules, 221 were classified as true nodules by consensus of the senior RAD and CAD together. Of the true nodules, 206 (93.2%) were detected by the RAD and 202 (91.4%) by the CAD. CAD and RAD together picked up more nodules than either alone. Overall sensitivity for nodule detection with the CAD program was 91.4%, and FP detection per patient was 5.5%. The CAD showed comparatively higher sensitivity than the RAD for nodules of size 4-10 mm (93.4%) and for nodules in hilar (100%) and central (96.5%) locations. Conclusion: CAD performance was high in detecting pulmonary nodules, including small and low-density nodules. Even with a relatively high FP rate, CAD assists and improves the RAD's performance as a second reader, especially for nodules located in the central and hilar regions and for small nodules, while saving the RAD time. PMID:27578931

  5. A universal approximation to grain size from images of non-cohesive sediment

    USGS Publications Warehouse

    Buscombe, D.; Rubin, D.M.; Warrick, J.A.

    2010-01-01

    The two-dimensional spectral decomposition of an image of sediment provides a direct statistical estimate, grid-by-number style, of the mean of all intermediate axes of all single particles within the image. We develop and test this new method which, unlike existing techniques, requires neither image processing algorithms for detection and measurement of individual grains, nor calibration. The only information required of the operator is the spatial resolution of the image. The method is tested with images of bed sediment from nine different sedimentary environments (five beaches, three rivers, and one continental shelf), across the range 0.1 mm to 150 mm, taken in air and underwater. Each population was photographed using a different camera and lighting conditions. We term it a “universal approximation” because it has produced accurate estimates for all populations we have tested it with, without calibration. We use three approaches (theory, computational experiments, and physical experiments) to both understand and explore the sensitivities and limits of this new method. Based on 443 samples, the root-mean-squared (RMS) error between size estimates from the new method and known mean grain size (obtained from point counts on the image) was found to be ±≈16%, with a 95% probability of estimates within ±31% of the true mean grain size (measured in a linear scale). The RMS error reduces to ≈11%, with a 95% probability of estimates within ±20% of the true mean grain size if point counts from a few images are used to correct bias for a specific population of sediment images. It thus appears it is transferable between sedimentary populations with different grain size, but factors such as particle shape and packing may introduce bias which may need to be calibrated for. For the first time, an attempt has been made to mathematically relate the spatial distribution of pixel intensity within the image of sediment to the grain size.

  6. A universal approximation of grain size from images of noncohesive sediment

    NASA Astrophysics Data System (ADS)

    Buscombe, D.; Rubin, D. M.; Warrick, J. A.

    2010-06-01

    The two-dimensional spectral decomposition of an image of sediment provides a direct statistical estimate, grid-by-number style, of the mean of all intermediate axes of all single particles within the image. We develop and test this new method which, unlike existing techniques, requires neither image processing algorithms for detection and measurement of individual grains, nor calibration. The only information required of the operator is the spatial resolution of the image. The method is tested with images of bed sediment from nine different sedimentary environments (five beaches, three rivers, and one continental shelf), across the range 0.1 mm to 150 mm, taken in air and underwater. Each population was photographed using a different camera and lighting conditions. We term it a "universal approximation" because it has produced accurate estimates for all populations we have tested it with, without calibration. We use three approaches (theory, computational experiments, and physical experiments) to both understand and explore the sensitivities and limits of this new method. Based on 443 samples, the root-mean-squared (RMS) error between size estimates from the new method and known mean grain size (obtained from point counts on the image) was found to be ±≈16%, with a 95% probability of estimates within ±31% of the true mean grain size (measured in a linear scale). The RMS error reduces to ≈11%, with a 95% probability of estimates within ±20% of the true mean grain size if point counts from a few images are used to correct bias for a specific population of sediment images. It thus appears it is transferable between sedimentary populations with different grain size, but factors such as particle shape and packing may introduce bias which may need to be calibrated for. For the first time, an attempt has been made to mathematically relate the spatial distribution of pixel intensity within the image of sediment to the grain size.
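    The following is only a gesture at why a 2-D spectral decomposition carries grain-size information; it is emphatically not the published algorithm. It computes a power-weighted characteristic wavelength from an image's Fourier spectrum, using the image resolution as the sole operator input, as in the paper:

```python
import numpy as np

def characteristic_scale(image, px_per_mm):
    """Crude spectral proxy for mean grain size: the power-weighted mean
    wavelength of the image's 2-D Fourier spectrum. This is NOT the
    published method, only an illustration of the underlying idea."""
    f = np.fft.fftshift(np.fft.fft2(image - image.mean()))
    power = np.abs(f) ** 2
    ny, nx = image.shape
    ky, kx = np.meshgrid(np.fft.fftshift(np.fft.fftfreq(ny)),
                         np.fft.fftshift(np.fft.fftfreq(nx)), indexing="ij")
    k = np.hypot(kx, ky)
    mask = k > 0
    mean_freq = np.average(k[mask], weights=power[mask])  # cycles per pixel
    return (1.0 / mean_freq) / px_per_mm                  # mm per cycle

# Usage (hypothetical): size_mm = characteristic_scale(img, px_per_mm=50)
```

    Coarse sediment concentrates spectral power at low spatial frequencies and fine sediment at high frequencies, which is the relationship between pixel-intensity structure and grain size that the paper formalizes.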

  7. Joint correction of respiratory motion artifact and partial volume effect in lung/thoracic PET/CT imaging.

    PubMed

    Chang, Guoping; Chang, Tingting; Pan, Tinsu; Clark, John W; Mawlawi, Osama R

    2010-12-01

    Respiratory motion artifacts and partial volume effects (PVEs) are two degrading factors that affect the accuracy of image quantification in PET/CT imaging. In this article, the authors propose a joint motion and PVE correction approach (JMPC) to improve PET quantification by simultaneously correcting for respiratory motion artifacts and PVE in patients with lung/thoracic cancer. The objective of this article is to describe this approach and evaluate its performance using phantom and patient studies. The proposed joint correction approach incorporates a model of motion blurring, PVE, and object size/shape. A motion blurring kernel (MBK) is then estimated from the deconvolution of the joint model, while the activity concentration (AC) of the tumor is estimated from the normalization of the derived MBK. To evaluate the performance of this approach, two phantom studies and eight patient studies were performed. In the phantom studies, two motion waveforms, a linear sinusoidal and a circular motion, were used to control the motion of a sphere, while in the patient studies, all participants were instructed to breathe regularly. For the phantom studies, the resultant MBK was compared to the true MBK by measuring a correlation coefficient between the two kernels. The measured sphere AC derived from the proposed method was compared to the true AC as well as the ACs in images exhibiting PVE only and images exhibiting both PVE and motion blurring. For the patient studies, the resultant MBK was compared to the motion extent derived from a 4D-CT study, while the measured tumor AC was compared to the AC in images exhibiting both PVE and motion blurring. For the phantom studies, the estimated MBK approximated the true MBK with an average correlation coefficient of 0.91. The tumor ACs following the joint correction technique were similar to the true AC with an average difference of 2%. Furthermore, the tumor ACs on the PVE only images and images with both motion blur and PVE effects were, on average, 75% and 47.5% (10%) of the true AC, respectively, for the linear (circular) motion phantom study. For the patient studies, the maximum and mean AC/SUV on the PET images following the joint correction are, on average, increased by 125.9% and 371.6%, respectively, when compared to the PET images with both PVE and motion. The motion extents measured from the derived MBK and 4D-CT exhibited an average difference of 1.9 mm. The proposed joint correction approach can improve the accuracy of PET quantification by simultaneously compensating for the respiratory motion artifacts and PVE in lung/thoracic PET/CT imaging.

  8. General herpetological collecting is size-based for five Pacific lizards

    USGS Publications Warehouse

    Rodda, Gordon H.; Yackel Adams, Amy A.; Campbell, Earl W.; Fritts, Thomas H.

    2015-01-01

    Accurate estimation of a species’ size distribution is a key component of characterizing its ecology, evolution, physiology, and demography. We compared the body size distributions of five Pacific lizards (Carlia ailanpalai, Emoia caeruleocauda, Gehyra mutilata, Hemidactylus frenatus, and Lepidodactylus lugubris) from general herpetological collecting (including visual surveys and glue boards) with those from complete censuses obtained by total removal. All species exhibited the same pattern: general herpetological collecting undersampled juveniles and oversampled mid-sized adults. The bias was greatest for the smallest juveniles and was not statistically evident for newly maturing and very large adults. All of the true size distributions of these continuously breeding species were skewed heavily toward juveniles, more so than the detections obtained from general collecting. A strongly skewed size distribution is not well characterized by the mean or maximum, though those are the statistics routinely reported for species’ sizes. We found body mass to be distributed more symmetrically than was snout–vent length, providing an additional rationale for collecting and reporting that size measure.

  9. Development of a simplified optical technique for the simultaneous measurement of particle size distribution and velocity

    NASA Technical Reports Server (NTRS)

    Smith, J. L.

    1983-01-01

    Existing techniques were surveyed, an experimental procedure was developed, a laboratory test model was fabricated, limited data were recovered for proof of principle, and the relationship between particle size distribution and amplitude measurements was illustrated in an effort to develop a low-cost, simplified optical technique for measuring particle size distributions and velocities in fluidized bed combustors and gasifiers. A He-Ne laser illuminated Ronchi rulings (range 10 to 500 lines per inch). Various samples of known particle size distributions were passed through the fringe pattern produced by the rulings. A photomultiplier tube converted light from the fringe volume to an electrical signal, which was recorded using an oscilloscope and camera. The signal amplitudes were correlated against the known particle size distributions. The correlation holds true for various samples.

  10. Estimating the duration of geologic intervals from a small number of age determinations: A challenge common to petrology and paleobiology

    NASA Astrophysics Data System (ADS)

    Glazner, Allen F.; Sadler, Peter M.

    2016-12-01

    The duration of a geologic interval, such as the time over which a given volume of magma accumulated to form a pluton, or the lifespan of a large igneous province, is commonly determined from a relatively small number of geochronologic determinations (e.g., 4-10) within that interval. Such sample sets can underestimate the true length of the interval by a significant amount. For example, the average interval determined from a sample of size n = 5, drawn from a uniform random distribution, will underestimate the true interval by 50%. Even for n = 10, the average sample only captures ~80% of the interval. If the underlying distribution is known then a correction factor can be determined from theory or Monte Carlo analysis; for a uniform random distribution, this factor is (n+1)/(n-1). Systematic undersampling of interval lengths can have a large effect on calculated magma fluxes in plutonic systems. The problem is analogous to determining the duration of an extinct species from its fossil occurrences. Confidence interval statistics developed for species origination and extinction times are applicable to the onset and cessation of magmatic events.
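    The correction factor quoted above is easy to verify by Monte Carlo; the sketch below draws ages uniformly on an interval of unit length and compares the raw sample range with the (n + 1)/(n - 1)-corrected estimate:

```python
import numpy as np

rng = np.random.default_rng(5)

# Monte Carlo check: n ages drawn uniformly from an interval of true length
# 1.0; the sample range underestimates it (E[range] = (n - 1)/(n + 1)), and
# multiplying by (n + 1)/(n - 1) removes the bias on average.
for n in (5, 10):
    ages = rng.random((100_000, n))
    sample_range = ages.max(axis=1) - ages.min(axis=1)
    corrected = sample_range * (n + 1) / (n - 1)
    print(f"n={n}: raw {sample_range.mean():.3f}, "
          f"corrected {corrected.mean():.3f}")
# n=5: raw ~0.667, corrected ~1.000 ; n=10: raw ~0.818, corrected ~1.000
```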

  11. Adults' Memories of Childhood: True and False Reports

    ERIC Educational Resources Information Center

    Qin, Jianjian; Ogle, Christin M.; Goodman, Gail S.

    2008-01-01

    In 3 experiments, the authors examined factors that, according to the source-monitoring framework, might influence false memory formation and true/false memory discernment. In Experiment 1, combined effects of warning and visualization on false childhood memory formation were examined, as were individual differences in true and false childhood…

  12. A Mixed Effects Randomized Item Response Model

    ERIC Educational Resources Information Center

    Fox, J.-P.; Wyrick, Cheryl

    2008-01-01

    The randomized response technique ensures that individual item responses, denoted as true item responses, are randomized before observing them and so-called randomized item responses are observed. A relationship is specified between randomized item response data and true item response data. True item response data are modeled with a (non)linear…

  13. The thresholds for statistical and clinical significance – a five-step procedure for evaluation of intervention effects in randomised clinical trials

    PubMed Central

    2014-01-01

    Background Thresholds for statistical significance are insufficiently demonstrated by 95% confidence intervals or P-values when assessing results from randomised clinical trials. First, a P-value only shows the probability of getting a result assuming that the null hypothesis is true and does not reflect the probability of getting a result assuming an alternative hypothesis to the null hypothesis is true. Second, a confidence interval or a P-value showing significance may be caused by multiplicity. Third, statistical significance does not necessarily result in clinical significance. Therefore, assessment of intervention effects in randomised clinical trials deserves more rigour in order to become more valid. Methods Several methodologies for assessing the statistical and clinical significance of intervention effects in randomised clinical trials were considered. Balancing simplicity and comprehensiveness, a simple five-step procedure was developed. Results For a more valid assessment of results from a randomised clinical trial we propose the following five-steps: (1) report the confidence intervals and the exact P-values; (2) report Bayes factor for the primary outcome, being the ratio of the probability that a given trial result is compatible with a ‘null’ effect (corresponding to the P-value) divided by the probability that the trial result is compatible with the intervention effect hypothesised in the sample size calculation; (3) adjust the confidence intervals and the statistical significance threshold if the trial is stopped early or if interim analyses have been conducted; (4) adjust the confidence intervals and the P-values for multiplicity due to number of outcome comparisons; and (5) assess clinical significance of the trial results. Conclusions If the proposed five-step procedure is followed, this may increase the validity of assessments of intervention effects in randomised clinical trials. PMID:24588900

  14. Annual survival estimation of migratory songbirds confounded by incomplete breeding site-fidelity: Study designs that may help

    USGS Publications Warehouse

    Marshall, M.R.; Diefenbach, D.R.; Wood, L.A.; Cooper, R.J.

    2004-01-01

    Many species of bird exhibit varying degrees of site-fidelity to the previous year's territory or breeding area, a phenomenon we refer to as incomplete breeding site-fidelity. If the territory they occupy is located beyond the bounds of the study area or search area (i.e., they have emigrated from the study area), the bird will go undetected and is therefore indistinguishable from dead individuals in capture-mark-recapture studies. Differential emigration rates confound inferences regarding differences in survival between sexes and among species if apparent survival rates are used as estimates of true survival. Moreover, the bias introduced by using apparent survival rates for true survival rates can have profound effects on the predictions of population persistence through time, source/sink dynamics, and other aspects of life-history theory. We investigated four study design and analysis approaches that result in apparent survival estimates that are closer to true survival estimates. Our motivation for this research stemmed from a multi-year capture-recapture study of Prothonotary Warblers (Protonotaria citrea) on multiple study plots within a larger landscape of suitable breeding habitat where substantial inter-annual movements of marked individuals among neighboring study plots was documented. We wished to quantify the effects of this type of movement on annual survival estimation. The first two study designs we investigated involved marking birds in a core area and resighting them in the core as well as an area surrounding the core. For the first of these two designs, we demonstrated that as the resighting area surrounding the core gets progressively larger, and more "emigrants" are resighted, apparent survival estimates begin to approximate true survival rates (bias < 0.01). However, given observed inter-annual movements of birds, it is likely to be logistically impractical to resight birds on sufficiently large surrounding areas to minimize bias. Therefore, as an alternative protocol, we analyzed the data with subsets of three progressively larger areas surrounding the core. The data subsets provided four estimates of apparent survival that asymptotically approached true survival. This study design and analytical approach is likely to be logistically feasible in field settings and yields estimates of true survival unbiased (bias < 0.03) by incomplete breeding site-fidelity over a range of inter-annual territory movement patterns. The third approach we investigated used a robust design data collection and analysis approach. This approach resulted in estimates of survival that were unbiased (bias < 0.02), but were very imprecise and likely would not yield reliable estimates in field situations. The fourth approach utilized a fixed study area size, but modeled detection probability as a function of bird proximity to the study plot boundary (e.g., those birds closest to the edge are more likely to emigrate). This approach also resulted in estimates of survival that were unbiased (bias < 0.02), but because the individual covariates were normalized, the average capture probability was 0.50, and thus did not provide an accurate estimate of the true capture probability. Our results show that the core-area with surrounding resight-only can provide estimates of survival that are not biased by the effects of incomplete breeding site-fidelity. © 2004 Museu de Ciències Naturals.

  15. The effect of mood on false memory for emotional DRM word lists.

    PubMed

    Zhang, Weiwei; Gross, Julien; Hayne, Harlene

    2017-04-01

    In the present study, we investigated the effect of participants' mood on true and false memories of emotional word lists in the Deese-Roediger-McDermott (DRM) paradigm. In Experiment 1, we constructed DRM word lists in which all the studied words and corresponding critical lures reflected a specified emotional valence. In Experiment 2, we used these lists to assess mood-congruent true and false memory. Participants were randomly assigned to one of three induced-mood conditions (positive, negative, or neutral) and were presented with word lists comprised of positive, negative, or neutral words. For both true and false memory, there was a mood-congruent effect in the negative mood condition; this effect was due to a decrease in true and false recognition of the positive and neutral words. These findings are consistent with both spreading-activation and fuzzy-trace theories of DRM performance and have practical implications for our understanding of the effect of mood on memory.

  16. Evaluating the Effectiveness of Website Content Features Using Retrospective Pretest Methodology: An Experimental Test.

    PubMed

    Mueller, Christoph Emanuel

    2015-06-01

    In order to assess website content effectiveness (WCE), investigations have to be made into whether the reception of website contents leads to a change in the characteristics of website visitors or not. Because randomized controlled trials (RCTs) are not always the method of choice, researchers may have to follow other strategies such as using retrospective pretest methodology (RPM), a straightforward and easy-to-implement tool for estimating intervention effects. This article aims to introduce RPM in the context of website evaluation and test its viability under experimental conditions. Building on the idea that RCTs deliver unbiased estimates of the true causal effects of website content reception, I compared the performance of RPM with that of an RCT within the same study. Hence, if RPM provides effect estimates similar to those of the RCT, it can be considered a viable tool for assessing the effectiveness of the website content features under study. RPM was capable of delivering comparatively resilient estimates of the effects of a YouTube video and a text feature on knowledge and attitudes. With regard to all of the outcome variables considered, the differences between the sizes of the effects estimated by the RCT and RPM were not significant. Additionally, RPM delivered relatively accurate effect size estimates in most of the cases. Therefore, I conclude that RPM could be a viable alternative for assessing WCE in cases where RCTs are not the preferred method. © The Author(s) 2015.

  17. A validation of 11 body-condition indices in a giant snake species that exhibits positive allometry

    USGS Publications Warehouse

    Falk, Bryan; Snow, Ray W.; Reed, Robert N.

    2017-01-01

    Body condition is a gauge of the energy stores of an animal, and though it has important implications for fitness, survival, competition, and disease, it is difficult to measure directly. Instead, body condition is frequently estimated as a body condition index (BCI) using length and mass measurements. A desirable BCI should accurately reflect true body condition and be unbiased with respect to size (i.e., mean BCI estimates should not change across different length or mass ranges), and choosing the most-appropriate BCI is not straightforward. We evaluated 11 different BCIs in 248 Burmese pythons (Python bivittatus), organisms that, like other snakes, exhibit simple body plans well characterized by length and mass. We found that the length-mass relationship in Burmese pythons is positively allometric, where mass increases rapidly with respect to length, and this allowed us to explore the effects of allometry on BCI verification. We employed three alternative measures of ‘true’ body condition: percent fat, scaled fat, and residual fat. The latter two measures mostly accommodated allometry in true body condition, but percent fat did not. Our inferences of the best-performing BCIs depended heavily on our measure of true body condition, with most BCIs falling into one of two groups. The first group contained most BCIs based on ratios, and these were associated with percent fat and body length (i.e., were biased). The second group contained the scaled mass index and most of the BCIs based on linear regressions, and these were associated with both scaled and residual fat but not body length (i.e., were unbiased). Our results show that potential differences in measures of true body condition should be explored in BCI verification studies, particularly in organisms undergoing allometric growth. Furthermore, the caveats of each BCI and similarities to other BCIs are important to consider when determining which BCI is appropriate for any particular taxon.
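    A minimal sketch of one of the regression-based BCIs the study finds unbiased: the residual from an ordinary least-squares fit of log mass on log length. The data are simulated with a positively allometric exponent, loosely mimicking the length-mass relationship reported for Burmese pythons; no values are taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(6)

# Residual-based body condition index: regress log mass on log length (SVL)
# and take each animal's residual as its condition score. Invented data,
# with positively allometric mass ~ length^3.4.
svl_cm = rng.uniform(80, 400, 300)
mass_g = 9e-5 * svl_cm ** 3.4 * np.exp(rng.normal(0, 0.15, 300))

x, y = np.log(svl_cm), np.log(mass_g)
slope, intercept = np.polyfit(x, y, 1)
bci = y - (intercept + slope * x)        # residual BCI

print(f"fitted allometric exponent: {slope:.2f}")   # ~3.4, not 3.0
# Essentially uncorrelated with length, i.e., unbiased with respect to size:
print(f"corr(length, BCI) = {np.corrcoef(svl_cm, bci)[0, 1]:.3f}")
```

    Ratio-type indices such as mass/length implicitly assume isometry (exponent 3), which is why the abstract finds them biased with respect to body length in a positively allometric species.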

  18. Skyrme density functional description of the double magic 78Ni nucleus

    NASA Astrophysics Data System (ADS)

    Brink, D. M.; Stancu, Fl.

    2018-06-01

    We calculate the single-particle spectrum of the double magic nucleus 78Ni in a Hartree-Fock approach using the Skyrme density-dependent effective interaction containing central, spin-orbit, and tensor parts. We show that the tensor part has an important effect on the spin-orbit splitting of the proton 1f orbit that may explain the survival of magicity so far from the stability valley. We confirm the inversion of the 1f5/2 and 2p3/2 levels at neutron number 48 in the Ni isotopic chain, expected from previous Monte Carlo shell-model calculations and supported by experimental observation.

  19. 46 CFR 160.041-4 - Contents.

    Code of Federal Regulations, 2010 CFR

    2005-10-01

    ... 46 Shipping 6 2005-10-01 2004-10-01 true Contents. 160.041-4 Section 160.041-4 Shipping Coast...: SPECIFICATIONS AND APPROVAL LIFESAVING EQUIPMENT Kits, First-Aid, for Merchant Vessels § 160.041-4 Contents. (a... Recommendation R178-41, properly labeled to designate the name, size of contents, and method of use, and shall...

  20. The Use of Propensity Scores and Observational Data to Estimate Randomized Controlled Trial Generalizability Bias

    PubMed Central

    Pressler, Taylor R.; Kaizar, Eloise E.

    2014-01-01

    While randomized controlled trials (RCT) are considered the “gold standard” for clinical studies, the use of exclusion criteria may impact the external validity of the results. It is unknown whether estimators of effect size are biased by excluding a portion of the target population from enrollment. We propose to use observational data to estimate the bias due to enrollment restrictions, which we term generalizability bias. In this paper we introduce a class of estimators for the generalizability bias and use simulation to study its properties in the presence of non-constant treatment effects. We find the surprising result that our estimators can be unbiased for the true generalizability bias even when all potentially confounding variables are not measured. In addition, our proposed doubly robust estimator performs well even for mis-specified models. PMID:23553373

  1. Emotive hemispheric differences measured in real-life portraits using pupil diameter and subjective aesthetic preferences.

    PubMed

    Blackburn, Kelsey; Schirillo, James

    2012-06-01

    The biased positioning of faces exposed to viewers of Western portraiture has suggested there may be fundamental differences in the lateralized expression and perception of emotion. The present study investigates whether there are differences in the perception of the left and right sides of the face in real-life photographs of individuals. The study paired conscious aesthetic ratings of pleasantness with measurements of pupil size, which are thought to be a reliable unconscious measure of interest, first tested by Hess. Images of 10 men and 10 women were taken from the left and right sides of the face. These images were also mirror-reversed. As expected, we found a strong preference for left-sided portraits (regardless of original or mirror-reversed orientation), such that left hemifaces elicited higher ratings and greater pupil dilation. Interestingly, this effect was true of both sexes. A positive linear relationship was also found between pupil size and aesthetic ratings, such that pupil size increased with pleasantness ratings. These findings provide support for the notions of lateralized emotion, right-hemispheric dominance, pupillary dilation to pleasant images, and constriction to unpleasant images.

  2. Support vector regression to predict porosity and permeability: Effect of sample size

    NASA Astrophysics Data System (ADS)

    Al-Anazi, A. F.; Gates, I. D.

    2012-02-01

    Porosity and permeability are key petrophysical parameters obtained from laboratory core analysis. Cores, obtained from drilled wells, are often few in number for most oil and gas fields. Porosity and permeability correlations based on conventional techniques, such as linear regression or neural networks trained with core and geophysical logs, suffer from poor generalization to wells with only geophysical logs. The generalization problem of correlation models often becomes pronounced when the training sample size is small. This is attributed to the underlying assumption that conventional techniques employing the empirical risk minimization (ERM) inductive principle converge asymptotically to the true risk values as the number of samples increases. In small sample size estimation problems, the available training samples must span the complexity of the parameter space so that the model is able both to match the available training samples reasonably well and to generalize to new data. This is achieved under the structural risk minimization (SRM) inductive principle by matching the capacity of the model to the available training data. One method that uses SRM is the support vector regression (SVR) network. In this research, the capability of SVR to predict porosity and permeability in a heterogeneous sandstone reservoir under the effect of small sample size is evaluated. In particular, the impact of Vapnik's ε-insensitive loss function and the least-modulus loss function on generalization performance was empirically investigated. The results are compared to the multilayer perceptron (MLP) neural network, a widely used regression method, which operates under the ERM principle. The mean square error and correlation coefficients were used to measure the quality of predictions. The results demonstrate that SVR yields consistently better predictions of porosity and permeability with small sample sizes than the MLP method. The performance of SVR also depends on both the kernel function type and the loss function used.
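    A toy comparison in the spirit of the study, using scikit-learn (an assumption; the abstract does not specify software): an RBF-kernel SVR and an MLP regressor trained on a deliberately small sample, with SVR expected to generalize better in this regime. The features, target, and hyperparameters are all invented:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(7)

# Invented stand-in for log-derived features vs core porosity: only 15
# training samples, mimicking the small-sample regime of the study.
X = rng.uniform(-1, 1, (215, 3))
y = 0.2 + 0.05 * X[:, 0] - 0.03 * X[:, 1] ** 2 + rng.normal(0, 0.01, 215)
X_tr, y_tr, X_te, y_te = X[:15], y[:15], X[15:], y[15:]

# epsilon plays the role of Vapnik's epsilon-insensitive tube width.
svr = SVR(kernel="rbf", C=10.0, epsilon=0.005).fit(X_tr, y_tr)
mlp = MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000,
                   random_state=0).fit(X_tr, y_tr)

for name, model in [("SVR", svr), ("MLP", mlp)]:
    mse = mean_squared_error(y_te, model.predict(X_te))
    print(f"{name} test MSE: {mse:.5f}")
```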

  3. Delayed treatment with hypothermia protects against the no-reflow phenomenon despite failure to reduce infarct size.

    PubMed

    Hale, Sharon L; Herring, Michael J; Kloner, Robert A

    2013-01-04

    Many studies have shown that when hypothermia is started after coronary artery reperfusion (CAR), it is ineffective at reducing necrosis. However, some suggest that hypothermia may preferentially reduce no-reflow. Our aim was to test the effects of hypothermia on no-reflow when initiated close to reperfusion and 30 minutes after reperfusion, times not associated with a protective effect on myocardial infarct size. Rabbits received 30 minutes of coronary artery occlusion followed by 3 hours of CAR. In protocol 1, hearts were treated for 1 hour with topical hypothermia (myocardial temperature ≈32°C) initiated at 5 minutes before or 5 minutes after CAR, and the results were compared with a normothermic group. In protocol 2, hypothermia was delayed until 30 minutes after CAR, and control hearts remained normothermic. In protocol 1, risk zones were similar, and infarct size was not significantly reduced by hypothermia initiated close to CAR. However, the no-reflow defect was significantly reduced, by 43% (5 minutes before CAR) and 38% (5 minutes after CAR), in hypothermic compared with normothermic hearts (P=0.004, ANOVA; P=ns between the 2 treated groups). In protocol 2, risk zones and infarct sizes were similar, but delayed hypothermia significantly reduced no-reflow in hypothermic hearts by 30% (55±6% of the necrotic region in the hypothermia group versus 79±6% with normothermia, P=0.008). These studies suggest that treatment with hypothermia reduces no-reflow even when initiated too late to reduce infarct size and that the microvasculature is especially receptive to the protective properties of hypothermia, and confirm that microvascular damage is in large part a form of true reperfusion injury.

  4. CLUSTER DYNAMICS LARGELY SHAPES PROTOPLANETARY DISK SIZES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vincke, Kirsten; Pfalzner, Susanne, E-mail: kvincke@mpifr-bonn.mpg.de

    2016-09-01

    To what degree the cluster environment influences the sizes of protoplanetary disks surrounding young stars is still an open question. This is particularly true for the short-lived clusters typical of the solar neighborhood, in which the stellar density and therefore the influence of the cluster environment change considerably over the first 10 Myr. In previous studies, the effect of the gas on the cluster dynamics has often been neglected; this is remedied here. Using the code NBody6++, we study the stellar dynamics in different developmental phases (embedded, expulsion, and expansion), including the gas, and quantify the effect of fly-bys on the disk size. We concentrate on massive clusters (M_cl ≥ 10³ to 6 × 10⁴ M_Sun), which are representative of clusters like the Orion Nebula Cluster (ONC) or NGC 6611. We find that not only the stellar density but also the duration of the embedded phase matters. The densest clusters react fastest to the gas expulsion and drop quickly in density; here, 98% of relevant encounters happen before gas expulsion. By contrast, disks in sparser clusters are initially less affected, but because these clusters expand more slowly, 13% of disks are truncated after gas expulsion. For ONC-like clusters, we find that disks larger than 500 au are usually affected by the environment, which corresponds to the observation that 200 au-sized disks are common. For NGC 6611-like clusters, disk sizes are cut down on average to roughly 100 au. A testable hypothesis would be that the disks in the center of NGC 6611 should be on average ≈20 au and therefore considerably smaller than those in the ONC.

  5. B-scan technique for localization and characterization of fatigue cracks around fastener holes in multi-layered structures

    NASA Astrophysics Data System (ADS)

    Hopkins, Deborah; Datuin, Marvin; Aldrin, John; Warchol, Mark; Warchol, Lyudmila; Forsyth, David

    2018-04-01

    The work presented here aims to develop and transition angled-beam shear-wave inspection techniques for crack localization at fastener sites in multi-layer aircraft structures. This requires moving beyond detection to achieve reliable crack location and size, thereby providing invaluable information for maintenance actions and service-life management. The technique presented is based on imaging cracks in "True" B-scans (depth view projected in the sheets along the beam path). The crack traces that contribute to localization in the True B-scans depend on small, diffracted signals from the crack edges and tips that are visible in simulations and experimental data acquired with sufficient gain. The most recent work shows that cracks rotated toward and away from the central ultrasonic beam also yield crack traces in True B-scans that allow localization in simulations, even for large obtuse angles where experimental and simulation results show very small or no indications in the C-scans. Similarly, for two sheets joined by sealant, simulations show that cracks in the second sheet can be located in True B-scans for all locations studied: cracks that intersect the front or back wall of the second sheet, as well as relatively small mid-bore cracks. These results are consistent with previous model verification and sensitivity studies that demonstrate crack localization in True B-scans for a single sheet and cracks perpendicular to the ultrasonic beam.

  6. On the road toward formal reasoning: reasoning with factual causal and contrary-to-fact causal premises during early adolescence.

    PubMed

    Markovits, Henry

    2014-12-01

    Understanding the development of conditional (if-then) reasoning is critical for theoretical and educational reasons. Here we examined the hypothesis that there is a developmental transition between reasoning with true and contrary-to-fact (CF) causal conditionals. A total of 535 students between 11 and 14 years of age received priming conditions designed to encourage use of either a true or CF alternatives generation strategy and reasoning problems with true causal and CF causal premises (with counterbalanced order). Results show that priming had no effect on reasoning with true causal premises. By contrast, priming with CF alternatives significantly improved logical reasoning with CF premises. Analysis of the effect of order showed that reasoning with CF premises reduced logical responding among younger students but had no effect among older students. Results support the idea that there is a transition in the reasoning processes in this age range associated with the nature of the alternatives generation process required for logical reasoning with true and CF causal conditionals. Copyright © 2014 Elsevier Inc. All rights reserved.

  7. The power of the placebo.

    PubMed

    Eccles, Ron

    2007-05-01

    The placebo is much more than a control medicine in a clinical trial. The placebo response is the largest component of any allergy treatment and consists of two components: nonspecific effects (eg, natural recovery) and a "true placebo effect" that is the psychological therapeutic effect of the treatment. Belief in the beneficial nature of the treatment is a key component of the true placebo effect, and can be enhanced by factors such as interaction with the physician and the sensory impact of the treatment. Negative beliefs can generate a nocebo effect that may explain some psychogenic illnesses; this is the basis of much research in psychoneuroimmunology. An understanding of the placebo and nocebo effects is important for general allergy practice, and harnessing the power of the true placebo effect is a major challenge to modern medicine.

  8. Apparent and true resistant hypertension: definition, prevalence and outcomes

    PubMed Central

    Judd, E; Calhoun, DA

    2014-01-01

    Resistant hypertension, defined as blood pressure (BP) remaining above goal despite the use of ≥3 antihypertensive medications at maximally tolerated doses (one ideally being a diuretic) or BP that requires ≥4 agents to achieve control, has received more attention with increased efforts to improve BP control rates and the emergence of device-based therapies for hypertension. This classically defined resistant group consists of patients with true resistant hypertension, controlled resistant hypertension and pseudo-resistant hypertension. In studies where pseudo-resistant hypertension cannot be excluded (for example, 24-h ambulatory BP not obtained), the term apparent resistant hypertension has been used to identify ‘apparent’ lack of control on ≥3 medications. Large, well-designed studies have recently reported the prevalence of resistant hypertension. Pooling prevalence data from these studies and others within North America and Europe with a combined sample size of >600 000 hypertensive participants, the prevalence of resistant hypertension is 14.8% of treated hypertensive patients and 12.5% of all hypertensives. However, the prevalence of true resistant hypertension, defined as uncontrolled both by office and 24-h ambulatory BP monitoring with confirmed medication adherence, may be more meaningful in terms of identifying risk and estimating benefit from newer therapies like renal denervation. Rates of cardiovascular events and mortality follow mean 24-h ambulatory BPs in patients with resistant hypertension, and true resistant hypertension represents the highest risk. The prevalence of true resistant hypertension has not been directly measured in large trials; however, combined data from smaller studies suggest that true resistant hypertension is present in half of the patients with resistant hypertension who are uncontrolled in the office. Our pooled analysis shows prevalence rates of 10.1% and 7.9% for uncontrolled resistant hypertension among individuals treated for hypertension and all hypertensive individuals, respectively. PMID:24430707
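
    The pooled figures in this abstract are sample-size-weighted averages across studies. A minimal Python sketch of that pooling, using made-up study counts chosen only to illustrate the arithmetic (the abstract's 14.8% comes from the real studies):

```python
# Hypothetical (study_n, n_resistant) pairs standing in for the real
# North American and European studies pooled in the abstract.
studies = [
    (120_000, 17_400),
    (250_000, 38_500),
    (230_000, 33_000),
]
n_total = sum(n for n, _ in studies)
n_cases = sum(r for _, r in studies)
# Sample-size weighting is equivalent to summing cases over the total n.
print(f"pooled prevalence: {100 * n_cases / n_total:.1f}% "
      f"of {n_total:,} treated hypertensives")
```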

  9. Dynamics of domain coverage of the protein sequence universe.

    PubMed

    Rekapalli, Bhanu; Wuichet, Kristin; Peterson, Gregory D; Zhulin, Igor B

    2012-11-16

    The currently known protein sequence space consists of millions of sequences in public databases and is rapidly expanding. Assigning sequences to families leads to a better understanding of protein function and the nature of the protein universe. However, a large portion of the current protein space remains unassigned and is referred to as its "dark matter". Here we suggest that the true size of "dark matter" is much larger than current definitions state. We propose an approach to reducing the size of "dark matter" by identifying and subtracting regions in protein sequences that are not likely to contain any domain. Recent improvements in computational domain modeling result in a slow decrease in the relative size of "dark matter"; however, its absolute size increases substantially with the growth of sequence data.

  10. Simultaneous PET/MR imaging of the brain: feasibility of cerebral blood flow measurements with FAIR-TrueFISP arterial spin labeling MRI.

    PubMed

    Stegger, Lars; Martirosian, Petros; Schwenzer, Nina; Bisdas, Sotirios; Kolb, Armin; Pfannenberg, Christina; Claussen, Claus D; Pichler, Bernd; Schick, Fritz; Boss, Andreas

    2012-11-01

    Hybrid positron emission tomography/magnetic resonance imaging (PET/MRI) with simultaneous data acquisition promises a comprehensive evaluation of cerebral pathophysiology on a molecular, anatomical, and functional level. Considering the necessary changes to the MR scanner design, the feasibility of arterial spin labeling (ASL) is unclear. The aim of this study was to evaluate whether cerebral blood flow imaging with ASL is feasible using a prototype PET/MRI device. ASL imaging of the brain with Flow-sensitive Alternating Inversion Recovery (FAIR) spin preparation and true fast imaging in steady precession (TrueFISP) data readout was performed in eight healthy volunteers sequentially on a prototype PET/MRI and a stand-alone MR scanner with 128 × 128 and 192 × 192 matrix sizes. Cerebral blood flow values for gray matter, signal-to-noise and contrast-to-noise ratios, and relative signal change were compared. Additionally, the feasibility of ASL as part of a clinical hybrid PET/MRI protocol was demonstrated in five patients with intracerebral tumors. Blood flow maps showed good delineation of gray and white matter with no discernible artifacts. The mean blood flow values of the eight volunteers on the PET/MR system were 51 ± 9 and 51 ± 7 mL/100 g/min for the 128 × 128 and 192 × 192 matrices (stand-alone MR, 57 ± 2 and 55 ± 5, not significant). The signal-to-noise ratio (SNR) was significantly higher for the PET/MRI system using the 192 × 192 matrix size (P < 0.01); the relative signal change (δS) was significantly lower for the 192 × 192 matrix size (P = 0.02). ASL imaging as part of a clinical hybrid PET/MRI protocol was successfully accomplished in all patients with diagnostic image quality. ASL brain imaging is feasible with a prototype hybrid PET/MRI scanner, thus adding to the value of this novel imaging technique.

  11. A Novel True Triaxial Apparatus to Study the Geomechanical and Fluid Flow Aspects of Energy Exploitations in Geological Formations

    NASA Astrophysics Data System (ADS)

    Li, Minghui; Yin, Guangzhi; Xu, Jiang; Li, Wenpu; Song, Zhenlong; Jiang, Changbao

    2016-12-01

    Fluid-solid coupling investigations of geological CO2 storage and of efficient unconventional oil and natural gas exploitation are mostly conducted under conventional triaxial stress conditions (σ₂ = σ₃), ignoring the effects of σ₂ on the geomechanical properties and permeability of rocks (shale, coal and sandstone). A novel multi-functional true triaxial geophysical (TTG) apparatus was designed, fabricated, calibrated and tested to simulate true triaxial stress conditions (σ₁ > σ₂ > σ₃) and to reveal the geomechanical properties and permeability evolutions of rocks. The apparatus was developed with the capacity to carry out geomechanical and fluid flow experiments at high three-dimensional loading forces and injection pressures under true triaxial stress conditions. The control and measurement of the fluid flow with effective sealing of rock specimen corners were achieved using a specially designed internally sealed fluid flow system. To validate that the apparatus works properly and to recognize the effects of each principal stress on rock deformation and permeability, stress-strain and permeability experiments and a hydraulic fracturing simulation experiment on shale specimens were conducted under true triaxial stress conditions using the TTG apparatus. Results show that the apparatus has advantages in recognizing the effects of σ₂ on the geomechanical properties and permeability of rocks. Results also demonstrate the effectiveness and reliability of the novel TTG apparatus. The apparatus provides a new method of studying the geomechanical properties and permeability evolutions of rocks under true triaxial stress conditions, promoting further investigations of geological CO2 storage and efficient unconventional oil and gas exploitation.

  12. Parallel effects of memory set activation and search on timing and working memory capacity.

    PubMed

    Schweickert, Richard; Fortin, Claudette; Xi, Zhuangzhuang; Viau-Quesnel, Charles

    2014-01-01

    Accurately estimating a time interval is required in everyday activities such as driving or cooking. Estimating time is relatively easy, provided a person attends to it. But a brief shift of attention to another task usually interferes with timing. Most processes carried out concurrently with timing interfere with it. Curiously, some do not. Literature on a few processes suggests a general proposition, the Timing and Complex-Span Hypothesis: a process interferes with concurrent timing if and only if process performance is related to complex span. Complex span is the number of items correctly recalled in order, when each item presented for study is followed by a brief activity. Literature on task switching, visual search, memory search, word generation and mental time travel supports the hypothesis. Previous work found that another process, activation of a memory set in long-term memory, is not related to complex span. If the Timing and Complex-Span Hypothesis is true, activation should not interfere with concurrent timing in dual-task conditions. We tested such activation in single-task memory search conditions and in dual-task conditions where memory search was executed with concurrent timing. In Experiment 1, activating a memory set increased reaction time, with no significant effect on time production. In Experiment 2, set size and memory set activation were manipulated. Activation and set size had a puzzling interaction for time productions, perhaps due to difficult conditions, leading us to use a related but easier task in Experiment 3. In Experiment 3, increasing set size lengthened time production, but memory activation had no significant effect. Results here and in previous literature on the whole support the Timing and Complex-Span Hypothesis. Results also support a sequential organization of activation and search of memory. This organization predicts that activation and set size have additive effects on reaction time and multiplicative effects on percent correct, which is what was found.

  13. Expanding fluvial remote sensing to the riverscape: Mapping depth and grain size on the Merced River, California

    NASA Astrophysics Data System (ADS)

    Richardson, Ryan T.

    This study builds upon recent research in the field of fluvial remote sensing by applying techniques for mapping physical attributes of rivers. Depth, velocity, and grain size are primary controls on the types of habitat present in fluvial ecosystems. This thesis focuses on expanding fluvial remote sensing to larger spatial extents and sub-meter resolutions, which will increase our ability to capture the spatial heterogeneity of habitat at a resolution relevant to individual salmonids and an extent relevant to species. This thesis consists of two chapters, one focusing on expanding the spatial extent over which depth can be mapped using Optimal Band Ratio Analysis (OBRA) and the other developing general relations for mapping grain size from three-dimensional topographic point clouds. The two chapters are independent but connected by the overarching goal of providing scientists and managers more useful tools for quantifying the amount and quality of salmonid habitat via remote sensing. The OBRA chapter highlights the true power of remote sensing to map depths from hyperspectral images as a central component of watershed scale analysis, while also acknowledging the great challenges involved with increasing spatial extent. The grain size mapping chapter establishes the first general relations for mapping grain size from roughness using point clouds. These relations will significantly reduce the time needed in the field by eliminating the need for independent measurements of grain size for calibrating the roughness-grain size relationship and thus making grain size mapping with SFM more cost effective for river restoration and monitoring. More data from future studies are needed to refine these relations and establish their validity and generality. In conclusion, this study adds to the rapidly growing field of fluvial remote sensing and could facilitate river research and restoration.

  14. The Effect of Size of Red Cells on the Kinetics of Their Oxygen Uptake

    PubMed Central

    Holland, R. A. B.; Forster, R. E.

    1966-01-01

    Using a double-beam stopped-flow apparatus, estimations were made of the velocity constant for the initial uptake of oxygen by fully reduced erythrocytes (k'c). Mammalian cells were studied with volumes varying from 20 µ³ (goat) to 90 µ³ (man), as were bullfrog cells (680 µ³). Measurements were made under physiological conditions of pH, PCO₂, and temperature. In man k'c was 80 mM⁻¹ sec⁻¹, and in other species smaller cells generally had a greater value for k'c than did the larger cells. In the goat it was 1.8 times as great as the human value; in the bullfrog it was only one-fifth as great. These differences could not be accounted for by interspecific differences in hemoglobin kinetics. The differences probably represent a true effect of size conferring some biological advantage on the species with the smaller cells. The cell membrane offered resistance to oxygen passage. Using the usual red cell model of an infinite sheet of reduced hemoglobin, membrane permeability appeared to differ among mammals. If, as is likely, the effective cell half-thickness differs among mammals, actual membrane permeability differences may be less. A method for measurement of oxygen saturation of dilute cell suspensions is also described. PMID:5943611

  15. Use of Non-invasive Uterine Electromyography in the Diagnosis of Preterm Labour

    PubMed Central

    Lucovnik, M.; Novak-Antolic, Z.; Garfield, R.E.

    2012-01-01

    The predictive values of methods currently used in clinics to diagnose preterm labour are low. This leads to missed opportunities to improve neonatal outcomes and, on the other hand, to unnecessary hospitalizations and treatments. In addition, research into new and potentially more effective preterm labour treatments is hindered by the inability to include only patients in true preterm labour in studies. Uterine electromyography (EMG) detects the changes in cell excitability and coupling required for labour and has higher predictive values for preterm delivery than currently available methods. This methodology could also provide a better means to evaluate various therapeutic interventions for preterm labour. Our manuscript reviews uterine EMG studies examining the potential clinical value this technology possesses over the methods currently available to physicians. We also evaluated the impact that uterine EMG could have on the investigation of preterm labour treatments by calculating sample sizes for studies using EMG vs. current methods to enrol women. Besides helping clinicians to make safer and more cost-effective decisions when managing patients with preterm contractions, implementation of uterine EMG for the diagnosis of preterm labour would also greatly reduce the sample sizes required for studies of treatments. PMID:24753891
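
    To see why enrolling only women in true preterm labour shrinks treatment studies, consider a standard two-proportion sample-size calculation. The sketch below uses the usual normal-approximation formula with hypothetical event rates, not the authors' actual numbers:

```python
from math import ceil, sqrt
from scipy.stats import norm

def n_per_arm(p_control, p_treated, alpha=0.05, power=0.80):
    """Classic two-proportion sample size (normal approximation)."""
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    p_bar = (p_control + p_treated) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p_control * (1 - p_control)
                        + p_treated * (1 - p_treated))) ** 2
    return ceil(num / (p_control - p_treated) ** 2)

# Hypothetical: with accurate EMG-based enrolment, 50% of controls and
# 35% of treated women deliver preterm; with current methods, many
# enrolled women are not in true labour, which dilutes both rates.
print(n_per_arm(0.50, 0.35))    # -> 170 per arm
print(n_per_arm(0.25, 0.175))   # -> 466 per arm, nearly 3x larger
```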

  16. FDR doesn't Tell the Whole Story: Joint Influence of Effect Size and Covariance Structure on the Distribution of the False Discovery Proportions

    NASA Technical Reports Server (NTRS)

    Feiveson, Alan H.; Ploutz-Snyder, Robert; Fiedler, James

    2011-01-01

    As part of a 2009 Annals of Statistics paper, Gavrilov, Benjamini, and Sarkar report results of simulations that estimated the false discovery rate (FDR) for equally correlated test statistics using a well-known multiple-test procedure. In our study we estimate the distribution of the false discovery proportion (FDP) for the same procedure under a variety of correlation structures among multiple dependent variables in a MANOVA context. Specifically, we study the mean (the FDR), skewness, kurtosis, and percentiles of the FDP distribution in the case of multiple comparisons that give rise to correlated non-central t-statistics when results at several time periods are being compared to baseline. Even if the FDR achieves its nominal value, other aspects of the distribution of the FDP depend on the interaction between signed effect sizes and correlations among variables, proportion of true nulls, and number of dependent variables. We show examples where the mean FDP (the FDR) is 10% as designed, yet there is a surprising probability of having 30% or more false discoveries. Thus, in a real experiment, the proportion of false discoveries could be quite different from the stipulated FDR.
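
    A short simulation makes the gap between the mean FDP (the FDR) and its upper tail concrete. This sketch applies the Benjamini-Hochberg step-up rule to equally correlated z-statistics; every parameter value is hypothetical rather than taken from the paper:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def simulate_fdp(m=100, n_true_null=80, effect=3.0, rho=0.5,
                 q=0.10, n_sims=2000):
    """FDP of the Benjamini-Hochberg procedure on equally correlated tests."""
    is_null = np.arange(m) < n_true_null
    fdps = np.empty(n_sims)
    for i in range(n_sims):
        shared = rng.standard_normal()       # common factor -> equal correlation
        z = np.sqrt(rho) * shared + np.sqrt(1 - rho) * rng.standard_normal(m)
        z += np.where(is_null, 0.0, effect)  # signed effects on non-null tests
        p = 2 * norm.sf(np.abs(z))           # two-sided p-values
        order = np.argsort(p)
        passed = p[order] <= q * np.arange(1, m + 1) / m
        k = passed.nonzero()[0].max() + 1 if passed.any() else 0
        fdps[i] = is_null[order[:k]].sum() / max(k, 1)
    return fdps

fdps = simulate_fdp()
print(f"FDR (mean FDP): {fdps.mean():.3f}")            # near the nominal level
print(f"P(FDP >= 0.30): {(fdps >= 0.30).mean():.3f}")  # yet the tail is fat
```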

  17. Discovery of a monophagous true predator, a specialist termite-eating spider (Araneae: Ammoxenidae)

    PubMed Central

    Petráková, Lenka; Líznarová, Eva; Pekár, Stano; Haddad, Charles R.; Sentenská, Lenka; Symondson, William O. C.

    2015-01-01

    True predators are characterised by capturing a number of prey items during their lifetime and by being generalists. Some true predators are facultative specialists, but very few species are stenophagous specialists that catch only a few closely related prey types. A monophagous true predator that would exploit a single prey species has not been discovered yet. Representatives of the spider family Ammoxenidae have been reported to have evolved to only catch termites. Here we tested the hypothesis that Ammoxenus amphalodes is a monophagous termite-eater capturing only Hodotermes mossambicus. We studied the trophic niche of A. amphalodes by means of molecular analysis of the gut contents using Next Generation Sequencing. We investigated their willingness to accept alternative prey and observed their specific predatory behaviour and prey capture efficiency. We found all of the 1.4 million sequences were H. mossambicus. In the laboratory A. amphalodes did not accept any other prey, including other termite species. The spiders attacked the lateral side of the thorax of termites and immobilised them within 1 min. The paralysis efficiency was independent of predator:prey size ratio. The results strongly indicate that A. amphalodes is a monophagous prey specialist, specifically adapted to feed on H. mossambicus. PMID:26359085

  18. Brucella Antibodies in Alaskan True Seals and Eared Seals-Two Different Stories.

    PubMed

    Nymo, Ingebjørg H; Rødven, Rolf; Beckmen, Kimberlee; Larsen, Anett K; Tryland, Morten; Quakenbush, Lori; Godfroid, Jacques

    2018-01-01

    Brucella pinnipedialis was first isolated from true seals in 1994 and from eared seals in 2008. Although few pathological findings have been associated with infection in true seals, reproductive pathology, including abortions, and the isolation of the zoonotic strain type 27 have been documented in eared seals. In this study, a Brucella enzyme-linked immunosorbent assay (ELISA) and the Rose Bengal test (RBT) were initially compared for 206 serum samples and a discrepancy between the tests was found. Following removal of lipids from the serum samples, ELISA results were unaltered while the agreement between the tests improved, indicating that serum lipids affected the initial RBT outcome. For the remaining screening, we used ELISA to investigate the presence of Brucella antibodies in sera of 231 eared and 1,412 true seals from Alaskan waters sampled between 1975 and 2011. In eared seals, Brucella antibodies were found in two Steller sea lions (Eumetopias jubatus) (2%) and in none of the 107 Northern fur seals (Callorhinus ursinus). The low seroprevalence in eared seals indicates a low level of exposure or a lack of susceptibility to infection. Alternatively, mortality due to the Brucella infection may remove seropositive animals from the population. Brucella antibodies were detected in all true seal species investigated: harbor seals (Phoca vitulina) (25%), spotted seals (Phoca largha) (19%), ribbon seals (Histriophoca fasciata) (16%), and ringed seals (Pusa hispida hispida) (14%). There was a low seroprevalence among pups, a higher seroprevalence among juveniles, and a subsequent decreasing probability of seropositivity with age in harbor seals. Similar patterns were present for the other true seal species; however, solid conclusions could not be drawn due to sample size. This pattern is in accordance with previous reports on B. pinnipedialis infections in true seals and may suggest environmental exposure to B. pinnipedialis at the juvenile stage, with subsequent clearance of infection. Furthermore, analyses by region showed minor differences in the probability of being seropositive for harbor seals from different regions regardless of the local seal population trend, signifying that the Brucella infection may not cause significant mortality in these populations. In conclusion, the Brucella infection pattern is very different for eared and true seals.

  19. Evaluation of statistical treatments of left-censored environmental data using coincident uncensored data sets. II. Group comparisons

    USGS Publications Warehouse

    Antweiler, Ronald C.

    2015-01-01

    The main classes of statistical treatments that have been used to determine if two groups of censored environmental data arise from the same distribution are substitution methods, maximum likelihood (MLE) techniques, and nonparametric methods. These treatments along with using all instrument-generated data (IN), even those less than the detection limit, were evaluated by examining 550 data sets in which the true values of the censored data were known, and therefore “true” probabilities could be calculated and used as a yardstick for comparison. It was found that technique “quality” was strongly dependent on the degree of censoring present in the groups. For low degrees of censoring (<25% in each group), the Generalized Wilcoxon (GW) technique and substitution of √2/2 times the detection limit gave overall the best results. For moderate degrees of censoring, MLE worked best, but only if the distribution could be estimated to be normal or log-normal prior to its application; otherwise, GW was a suitable alternative. For higher degrees of censoring (each group >40% censoring), no technique provided reliable estimates of the true probability. Group size did not appear to influence the quality of the result, and no technique appeared to become better or worse than other techniques relative to group size. Finally, IN appeared to do very well relative to the other techniques regardless of censoring or group size.
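
    The substitution treatment is easy to reproduce. The sketch below censors two lognormal samples at a detection limit, substitutes √2/2 times the limit, and compares a rank-test p-value against the one from the full data; a Mann-Whitney test stands in for the Generalized Wilcoxon technique, which treats censored observations more rigorously:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def compare_treatments(n=50, shift=0.3, dl=0.4):
    """p-values from full data vs. sqrt(2)/2 * DL substitution."""
    a = rng.lognormal(mean=0.0, sigma=1.0, size=n)
    b = rng.lognormal(mean=shift, sigma=1.0, size=n)

    p_true = stats.mannwhitneyu(a, b).pvalue      # yardstick: uncensored data

    sub = np.sqrt(2) / 2 * dl                     # recommended substitute value
    a_cens = np.where(a < dl, sub, a)
    b_cens = np.where(b < dl, sub, b)
    p_sub = stats.mannwhitneyu(a_cens, b_cens).pvalue

    pct = 100 * np.mean(np.concatenate([a, b]) < dl)
    return pct, p_true, p_sub

pct, p_true, p_sub = compare_treatments()
print(f"{pct:.0f}% censored; p full data: {p_true:.3f}; "
      f"p substitution: {p_sub:.3f}")
```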

  20. How bandwidth selection algorithms impact exploratory data analysis using kernel density estimation.

    PubMed

    Harpole, Jared K; Woods, Carol M; Rodebaugh, Thomas L; Levinson, Cheri A; Lenze, Eric J

    2014-09-01

    Exploratory data analysis (EDA) can reveal important features of underlying distributions, and these features often have an impact on inferences and conclusions drawn from data. Graphical analysis is central to EDA, and graphical representations of distributions often benefit from smoothing. A viable method of estimating and graphing the underlying density in EDA is kernel density estimation (KDE). This article provides an introduction to KDE and examines alternative methods for specifying the smoothing bandwidth in terms of their ability to recover the true density. We also illustrate the comparison and use of KDE methods with 2 empirical examples. Simulations were carried out in which we compared 8 bandwidth selection methods (Sheather-Jones plug-in [SJDP], normal rule of thumb, Silverman's rule of thumb, least squares cross-validation, biased cross-validation, and 3 adaptive kernel estimators) using 5 true density shapes (standard normal, positively skewed, bimodal, skewed bimodal, and standard lognormal) and 9 sample sizes (15, 25, 50, 75, 100, 250, 500, 1,000, 2,000). Results indicate that, overall, SJDP outperformed all methods. However, for smaller sample sizes (25 to 100) either biased cross-validation or Silverman's rule of thumb was recommended, and for larger sample sizes the adaptive kernel estimator with SJDP was recommended. Information is provided about implementing the recommendations in the R computing language. PsycINFO Database Record (c) 2014 APA, all rights reserved.
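
    A reduced version of this simulation design can be reproduced in a few lines. SciPy does not ship the Sheather-Jones plug-in, so the sketch compares only Scott's and Silverman's rules of thumb on a bimodal true density, scoring each bandwidth by integrated squared error:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

GRID = np.linspace(-7.0, 7.0, 1401)
DX = GRID[1] - GRID[0]
# True density: equal mixture of N(-2, 1) and N(2, 1) -- a bimodal shape.
TRUE_PDF = 0.5 * stats.norm.pdf(GRID, -2, 1) + 0.5 * stats.norm.pdf(GRID, 2, 1)

def ise(bw_method, n=100):
    """Integrated squared error of a Gaussian KDE against the true density."""
    comp = rng.integers(0, 2, size=n)                    # mixture component
    x = rng.normal(np.where(comp == 0, -2.0, 2.0), 1.0)  # bimodal sample
    kde = stats.gaussian_kde(x, bw_method=bw_method)
    return np.sum((kde(GRID) - TRUE_PDF) ** 2) * DX

for bw in ("scott", "silverman"):
    mean_ise = np.mean([ise(bw) for _ in range(200)])
    print(f"{bw:9s}: mean ISE over 200 samples of n=100: {mean_ise:.5f}")
```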

  1. Cluster-level statistical inference in fMRI datasets: The unexpected behavior of random fields in high dimensions.

    PubMed

    Bansal, Ravi; Peterson, Bradley S

    2018-06-01

    Identifying regional effects of interest in MRI datasets usually entails testing a priori hypotheses across many thousands of brain voxels, requiring control for false positive findings in this multiple hypothesis testing. Recent studies have suggested that parametric statistical methods may have incorrectly modeled functional MRI data, thereby leading to higher false positive rates than their nominal rates. Nonparametric methods for statistical inference when conducting multiple statistical tests, in contrast, are thought to produce false positives at the nominal rate, which has thus led to the suggestion that previously reported studies should reanalyze their fMRI data using nonparametric tools. To understand better why parametric methods may yield excessive false positives, we assessed their performance when applied both to simulated datasets of 1D, 2D, and 3D Gaussian Random Fields (GRFs) and to 710 real-world, resting-state fMRI datasets. We showed that both the simulated 2D and 3D GRFs and the real-world data contain a small percentage (<6%) of very large clusters (on average 60 times larger than the average cluster size), which were not present in 1D GRFs. These unexpectedly large clusters were deemed statistically significant using parametric methods, leading to empirical familywise error rates (FWERs) as high as 65%: the high empirical FWERs were not a consequence of parametric methods failing to model spatial smoothness accurately, but rather of these very large clusters that are inherently present in smooth, high-dimensional random fields. In fact, when discounting these very large clusters, the empirical FWER for parametric methods was 3.24%. Furthermore, even an empirical FWER of 65% would yield on average less than one of those very large clusters in each brain-wide analysis. Nonparametric methods, in contrast, estimated distributions from those large clusters, and therefore, by construction, rejected the large clusters as false positives at the nominal FWERs. Those rejected clusters were outlying values in the distribution of cluster size but cannot be distinguished from true positive findings without further analyses, including assessing whether fMRI signal in those regions correlates with other clinical, behavioral, or cognitive measures. Rejecting the large clusters, however, significantly reduced the statistical power of nonparametric methods in detecting true findings compared with parametric methods, which would have detected most true findings that are essential for making valid biological inferences in MRI data. Parametric analyses, in contrast, detected most true findings while generating relatively few false positives: on average, less than one of those very large clusters would be deemed a true finding in each brain-wide analysis. We therefore recommend the continued use of parametric methods that model nonstationary smoothness for cluster-level, familywise control of false positives, particularly when using a Cluster Defining Threshold of 2.5 or higher, and subsequently assessing rigorously the biological plausibility of the findings, even for large clusters. Finally, because nonparametric methods yielded a large reduction in statistical power to detect true positive findings, we conclude that the modest reduction in false positive findings that nonparametric analyses afford does not warrant a re-analysis of previously published fMRI studies using nonparametric techniques. Copyright © 2018 Elsevier Inc. All rights reserved.
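
    The key observation, that smooth random fields occasionally contain very large suprathreshold clusters, can be reproduced with a short simulation. This sketch smooths 2D white noise, applies a cluster-defining threshold, and records the largest cluster per field; the field size, smoothness, and threshold are illustrative choices, not the paper's settings:

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(3)

def max_cluster_sizes(shape=(128, 128), fwhm=8.0, cdt=2.5, n_fields=500):
    """Largest suprathreshold cluster in each smoothed Gaussian field."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # FWHM -> sigma (voxels)
    out = np.zeros(n_fields)
    for i in range(n_fields):
        field = ndimage.gaussian_filter(rng.standard_normal(shape), sigma)
        field /= field.std()            # restore unit variance after smoothing
        labels, n_clusters = ndimage.label(field > cdt)
        if n_clusters:
            out[i] = np.bincount(labels.ravel())[1:].max()
    return out

sizes = max_cluster_sizes()
print(f"median max-cluster size: {np.median(sizes):.0f} voxels")
print(f"99th percentile:         {np.percentile(sizes, 99):.0f} voxels")
```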

  2. Impact of geometrical properties on permeability and fluid phase distribution in porous media

    NASA Astrophysics Data System (ADS)

    Lehmann, P.; Berchtold, M.; Ahrenholz, B.; Tölke, J.; Kaestner, A.; Krafczyk, M.; Flühler, H.; Künsch, H. R.

    2008-09-01

    To predict fluid phase distribution in porous media, the effect of geometric properties on flow processes must be understood. In this study, we analyze the effect of volume, surface, curvature and connectivity (the four Minkowski functionals) on the hydraulic conductivity and the water retention curve. For that purpose, we generated 12 artificial structures with 800³ voxels (the units of a 3D image) and compared them with a scanned sand sample of the same size. The structures were generated with a Boolean model based on a random distribution of overlapping ellipsoids whose size and shape were chosen to fulfill the criteria of the measured functionals. The pore structure of the sand material was mapped with X-rays from synchrotrons. To analyze the effect of geometry on water flow and fluid distribution we carried out three types of analysis: Firstly, we computed geometrical properties like chord length, distance from the solids, pore size distribution and the Minkowski functionals as a function of pore size. Secondly, the fluid phase distribution as a function of the applied pressure was calculated with a morphological pore network model. Thirdly, the permeability was determined using a state-of-the-art lattice-Boltzmann method. For the simulated structure with the true Minkowski functionals, the pores were larger and the computed air-entry value of the artificial medium was reduced to 85% of the value obtained from the scanned sample. The computed permeability for the geometry with the four fitted Minkowski functionals was equal to the permeability of the scanned image. The permeability was much more sensitive to the volume and surface than to the curvature and connectivity of the medium. We conclude that the Minkowski functionals are not sufficient to characterize the geometrical properties of a porous structure that are relevant for the distribution of two fluid phases. Depending on the procedure used to generate artificial structures with predefined Minkowski functionals, structures differing in pore size distribution can be obtained.

  3. Nearby Exo-Earth Astrometric Telescope (NEAT)

    NASA Technical Reports Server (NTRS)

    Shao, M.; Nemati, B.; Zhai, C.; Goullioud, R.

    2011-01-01

    NEAT (Nearby Exo-Earths Astrometric Telescope) is a modest-sized (1 m diameter) telescope. It will be capable of searching approximately 100 nearby stars down to 1 M_Earth planets in the habitable zone, and 200 stars at 5 M_Earth, 1 AU. The concept addresses the major issues for ultra-precise astrometry: (1) photon noise (0.5 deg diameter field of view); (2) optical errors (beam walk), with a long focal length telescope; (3) focal plane errors, with laser metrology of the focal plane; (4) PSF centroiding errors, with measurement of the "true" PSF instead of using a "guess" of the true PSF, and correction for intra-pixel QE non-uniformities. The technology is close to complete: focal plane geometry to 2e-5 pixels and centroiding to approximately 4e-5 pixels.

  4. Framework for adaptive multiscale analysis of nonhomogeneous point processes.

    PubMed

    Helgason, Hannes; Bartroff, Jay; Abry, Patrice

    2011-01-01

    We develop the methodology for hypothesis testing and model selection in nonhomogeneous Poisson processes, with an eye toward the application of modeling and variability detection in heart beat data. Modeling the process's non-constant rate function using templates of simple basis functions, we develop the generalized likelihood ratio statistic for a given template and a multiple testing scheme to model-select from a family of templates. A dynamic programming algorithm inspired by network flows is used to compute the maximum likelihood template in a multiscale manner. In a numerical example, the proposed procedure is nearly as powerful as the super-optimal procedures that know the true template size and true partition, respectively. Extensions to general history-dependent point processes are discussed.
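
    For a piecewise-constant rate template with known changepoints, the generalized likelihood ratio statistic against a constant-rate null has a closed form. The sketch below is an illustrative reduction of that idea, not the authors' multiscale dynamic-programming algorithm:

```python
import numpy as np

def glr_statistic(event_times, T, changepoints):
    """2 * log-likelihood ratio: piecewise-constant rate vs. constant rate."""
    edges = np.concatenate(([0.0], np.sort(changepoints), [T]))
    counts = np.histogram(event_times, bins=edges)[0]
    widths = np.diff(edges)
    lam_hat = counts / widths                 # per-segment MLE rates
    lam0 = len(event_times) / T               # constant-rate MLE
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(counts > 0, counts * np.log(lam_hat / lam0), 0.0)
    return 2.0 * terms.sum()  # approx. chi-square, df = (#segments - 1)

rng = np.random.default_rng(4)
# Simulate rate 1.0 on [0, 50) and rate 2.0 on [50, 100).
t1 = rng.uniform(0, 50, rng.poisson(1.0 * 50))
t2 = rng.uniform(50, 100, rng.poisson(2.0 * 50))
events = np.sort(np.concatenate([t1, t2]))
print(f"GLR statistic: {glr_statistic(events, 100.0, [50.0]):.1f}")
```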

  5. Vacuum-assisted breast biopsy with 7-gauge, 8-gauge, 9-gauge, 10-gauge, and 11-gauge needles: how many specimens are necessary?

    PubMed

    Preibsch, Heike; Baur, Astrid; Wietek, Beate M; Krämer, Bernhard; Staebler, Annette; Claussen, Claus D; Siegmann-Luz, Katja C

    2015-09-01

    Published national and international guidelines and consensus meetings on the use of vacuum-assisted biopsy (VAB) give different recommendations regarding the required numbers of tissue specimens depending on needle size and imaging method. Our purpose was to evaluate the weights of specimens obtained with different VAB needles to facilitate the translation of the required number of specimens between different breast biopsy systems and needle sizes. Five different VAB systems and seven different needle sizes were used: Mammotome® (11-gauge (G), 8-G), Vacora® (10-G), ATEC Sapphire™ (9-G), 8-G Mammotome® Revolve™, and EnCor Enspire® (10-G, 7-G). We took 24 (11-G) or 20 (7-10-G) tissue cores from a turkey breast phantom. The mean weight of a single tissue core was calculated for each needle size. A matrix, which allows the translation of the required number of tissue cores between different needle sizes, was generated. Results were compared to the true cumulative tissue weights of consecutively harvested tissue cores. The mean tissue weights obtained with the 11-G / 10-G Vacora® / 10-G Enspire® / 9-G / 8-G Original / 8-G Revolve™ / 7-G needles were 0.084 g / 0.142 g / 0.221 g / 0.121 g / 0.192 g / 0.334 g / 0.363 g, respectively. The calculated required numbers of VAB tissue cores for each needle size form the matrix. For example, the minimum calculated number of required cores according to the current German S3 guideline is 20 / 12 / 8 / 14 / 9 / 5 / 5 for needles of 11-G / 10-G Vacora® / 10-G Enspire® / 9-G / 8-G Original / 8-G Revolve™ / 7-G size. These numbers agree with the true cumulative tissue weights. The presented matrix facilitates the translation of the required number of VAB specimens between different needle sizes and thereby eases the implementation of current guidelines and consensus recommendations into clinical practice. © The Foundation Acta Radiologica 2014.
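
    The matrix logic reduces to dividing a reference cumulative tissue weight by the mean single-core weight of the target needle. Using the per-core weights quoted above, this sketch reproduces the guideline numbers in the abstract; nearest-integer rounding is our inference, not a convention the authors state:

```python
# Mean single-core weights (grams) quoted in the abstract.
mean_core_weight = {
    "11-G":          0.084,
    "10-G Vacora":   0.142,
    "10-G Enspire":  0.221,
    "9-G":           0.121,
    "8-G Original":  0.192,
    "8-G Revolve":   0.334,
    "7-G":           0.363,
}

# Reference: 20 cores with the 11-G needle (current German S3 guideline).
target_weight = 20 * mean_core_weight["11-G"]   # = 1.68 g

for needle, w in mean_core_weight.items():
    n = int(target_weight / w + 0.5)            # round to nearest core count
    print(f"{needle:13s} -> {n:2d} cores (~{n * w:.2f} g)")
# Prints 20 / 12 / 8 / 14 / 9 / 5 / 5, matching the numbers in the abstract.
```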

  6. Mood-congruent true and false memory: effects of depression.

    PubMed

    Howe, Mark L; Malone, Catherine

    2011-02-01

    The Deese/Roediger-McDermott paradigm was used to investigate the effect of depression on true and false recognition. In this experiment true and false recognition was examined across positive, neutral, negative, and depression-relevant lists for individuals with and without a diagnosis of major depressive disorder. Results showed that participants with major depressive disorder falsely recognised significantly more depression-relevant words than non-depressed controls. These findings also parallel recent research using recall instead of recognition and show that there are clear mood congruence effects for depression on false memory performance. © 2011 Psychology Press, an imprint of the Taylor & Francis Group, an Informa business

  7. Solar Maps Development: How the Maps Were Made | Geospatial Data Science

    Science.gov Websites

    The modeled solar resource values are accurate to within approximately 10% of a true measured value within the grid cell. Due to terrain effects and other microclimate influences, the local cloud cover can vary significantly even within a single grid cell.

  8. Growth of plants fumigated with saturated and unsaturated hydrocarbon gases and their derivatives

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heck, W.W.; Pires, E.G.

    1962-01-01

    Fourteen gases were investigated for their toxicity to plant growth and development. Five of these gases (acetylene, ethylene, ethylene oxide, propylene and vinyl chloride) produced pronounced effects on the five plant species studied. The plants were fumigated at 10, 100 and 1000 ppm by each of the test gases, using a set of 10 small fumigation chambers. The effects of the five gases on squash, cotton, corn, soybean and cowpea were carefully catalogued. Both quantitative and qualitative growth data were obtained. Plant height, leaf size, flower bud number, cotyledon injury and an injury index are useful criteria for analysis of gas effects. Cowpea is the most sensitive of the plants studied, followed by cotton, squash, soybean and corn. The injurious effects of ethylene were the greatest, followed by acetylene, propylene, ethylene oxide and vinyl chloride. It is suggested that ethylene oxide acts as a true toxicant while the other four gases may be considered as physiologically active gases.

  9. Implications of Weak Link Effects on Thermal Characteristics of Transition-Edge Sensors

    NASA Technical Reports Server (NTRS)

    Bailey, C. N.; Adams, J. S.; Bandler, S. R.; Brekosky, R. P.; Chevenak, J. A.; Eckart, M. E.; Finkbeiner, F. M.; Kelley, R. L.; Kally, D. P.; Kilbourne, C. A.; et al.

    2012-01-01

    Weak link behavior in transition-edge sensor (TES) microcalorimeters creates the need for a more careful characterization of a device's thermal characteristics through its transition. This is particularly true for small TESs, where a small change in the bias current results in large changes in effective transition temperature. To correctly interpret measurements, especially complex impedance, it is crucial to know the temperature-dependent thermal conductance, G(T), and heat capacity, C(T), at each point through the transition. We present data illustrating these effects and discuss how we overcome the challenges that are present in accurately determining G and T from I-V curves. We also show how these weak link effects vary with TES size. Additionally, we use this improved understanding of G(T) to determine that, for these TES microcalorimeters, Kapitza boundary resistance dominates the G of devices with absorbers, while the electron-phonon coupling also needs to be considered when determining G for devices without absorbers.

  10. Edge and area effects on the occurrence of migrant forest songbirds

    USGS Publications Warehouse

    Parker, T.H.; Stansberry, B.M.; Becker, C.D.; Gipson, P.S.

    2005-01-01

    Concerns about forest fragmentation and its conservation implications have motivated numerous studies that investigate the influence of forest patch area and forest edge on songbird distribution patterns. The generalized effects of forest patch size and forest edge on animal distributions are still debatable because forest patch size and forest edge are often confounded and because of an incomplete synthesis of available data. To fill a portion of this gap, we incorporated all available published data (33 papers) in meta-analyses of forest edge and area effects on site occupancy patterns for 26 Neotropical migrant forest-nesting songbirds in eastern North America. All reported area effects are confounded or potentially confounded by edge effects, and we refer to these as "confounded" studies. The converse, however, is not true: most reported edge effects are independent of patch area. When considering only nonconfounded studies of edge effects, only 1 of 17 species showed significant edge avoidance and 3 had significant affinity for edges. In confounded studies, 12 of 22 species showed significant avoidance of small patches and edges, and 1 had an affinity for small patches and edges. Furthermore, effect sizes averaged across studies or species tended to be higher for confounded studies than for edge studies. We discuss three possible reasons for differences in results between these two groups of studies. First, studies of edge effects tended to be carried out in landscapes with greater forest cover than studies of confounded effects; among confounded-effects studies, as forest cover increased, we observed a nonsignificant trend towards decreasing strength of small-patch or edge avoidance effects. Thus, the weaker effects in edge studies may be due to the fact that these studies were conducted in forest-dominated landscapes. Second, we may have detected strong effects only in confounded studies because area effects are much stronger than edge effects on bird occurrence, and area effects drive the results in confounded studies. Third, edge and area effects may interact in such a way that edge effects become more important as forest patch size decreases; thus, both edge and area effects are responsible for results in confounded studies. These three explanations cannot be adequately separated with existing data. Regardless, it is clear that fragmentation of forests into small patches is detrimental to many migrant songbird species. ©2005 Society for Conservation Biology.

  11. Conceptual basis for prescriptive growth standards from conception to early childhood: present and future.

    PubMed

    Uauy, R; Casanello, P; Krause, B; Kuzanovic, J P; Corvalan, C

    2013-09-01

    Healthy growth in utero and after birth is fundamental for lifelong health and wellbeing. The World Health Organization (WHO) recently published standards for healthy growth from birth to 6 years of age; analogous standards for healthy fetal growth are not currently available. Current fetal growth charts in use are not true standards, since they are based on cross-sectional measurements of attained size under conditions that do not accurately reflect normal growth. In most cases, the pregnant populations and environments studied are far from ideal; thus the data are unlikely to reflect optimal fetal growth. A true standard should reflect how fetuses and newborns 'should' grow under ideal environmental conditions. The development of prescriptive intrauterine and newborn growth standards derived from the INTERGROWTH-21(st) Project provides the data that will allow us for the first time to establish what is 'normal' fetal growth. The INTERGROWTH-21(st) study centres provide the data set obtained under pre-established standardised criteria, and details of the methods used are also published. Multicentre study with sites in all major geographical regions of the world using a standard evaluation protocol. These standards will assess risk of abnormal size at birth and serve to evaluate potentially effective interventions to promote optimal growth beyond securing survival. The new normative standards have the potential to impact perinatal and neonatal survival and beyond, particularly in developing countries where fetal growth restriction is most prevalent. They will help us identify intrauterine growth restriction at earlier stages of development, when preventive or corrective strategies might be more effective than at present. These growth standards will take us one step closer to effective action in preventing and potentially reversing abnormal intrauterine growth. Achieving 'optimal' fetal growth requires that we act not only during pregnancy but that we optimize the maternal uterine environment from the time before conception, through embryonic development until fetal growth is complete. The remaining challenge is how 'early' will we be able to act, now that we can better monitor fetal growth. © 2013 The Authors BJOG An International Journal of Obstetrics and Gynaecology © 2013 RCOG.

  12. From Out Here: Valuing our Rural Colleges

    ERIC Educational Resources Information Center

    Geller, Jack M.

    2004-01-01

    There is little doubt that having a local college or university in your community is a wonderful local and regional asset. However, the true size and scope of its economic value and impact on the community can vary greatly. Is your community just thankful that as a large employer the college provides a large and stable payroll? Is the college…

  13. Institute for Science and Engineering Simulation (ISES)

    DTIC Science & Technology

    2015-12-18

    performance and other functionalities such as electrical, magnetic, optical, thermal, biological, chemical, and so forth. Structural integrity...transmission electron microscopy (HRSTEM) and three-dimensional atom probe (3DAP) tomography, the true atomic scale structure and change in chemical...atom probe tomography (3DAP) techniques, has permitted characterizing and quantifying the multimodal size distribution of different generations of γ

  14. Bayesian Power Prior Analysis and Its Application to Operational Risk and Rasch Model

    ERIC Educational Resources Information Center

    Zhang, Honglian

    2010-01-01

    When sample size is small, informative priors can be valuable in increasing the precision of estimates. Pooling historical data and current data with equal weights under the assumption that both of them are from the same population may be misleading when heterogeneity exists between historical data and current data. This is particularly true when…

  15. A new model for bed load sampler calibration to replace the probability-matching method

    Treesearch

    Robert B. Thomas; Jack Lewis

    1993-01-01

    In 1977 extensive data were collected to calibrate six Helley-Smith bed load samplers with four sediment particle sizes in a flume at the St. Anthony Falls Hydraulic Laboratory at the University of Minnesota. Because sampler data cannot be collected at the same time and place as "true" trap measurements, the "probability-matching...

  16. How to Develop Learners Who Are Consistently Curious and Questioning

    ERIC Educational Resources Information Center

    Scurry, Jamie E.; Wilburn, Ariel; Villagomez, Alex; McCarthy, Mike

    2010-01-01

    In a society that reaches for silver-bullet solutions, higher education is not immune from widespread attempts to raise graduation rates through scaling one-size-fits-all models at lower costs. Yet people at Big Picture Learning believe any true, long-term solution that will produce more graduates with high-quality degrees must be…

  17. Five instruments for measuring tree height: an evaluation

    Treesearch

    Michael S. Williams; William A. Bechtold; V.J. LaBau

    1994-01-01

    Five instruments were tested for reliability in measuring tree heights under realistic conditions. Four linear models were used to determine if tree height can be measured unbiasedly over all tree sizes and if any of the instruments were more efficient in estimating tree height. The laser height finder was the only instrument to produce unbiased estimates of the true...

  18. 26 CFR 48.4073-2 - Exemption of tires with internal wire fastening.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    The tax does not apply to sales of tires of any size or dimension manufactured from extruded tiring that is fastened or held together by means of internal wire or other metallic...

  19. Diagnosis of the "large medial meniscus" of the knee on MR imaging.

    PubMed

    Samoto, Nobuhiko; Kozuma, Masakazu; Tokuhisa, Toshio; Kobayashi, Kunio

    2006-11-01

    Although several quantitative magnetic resonance (MR) diagnostic criteria for discoid lateral meniscus (DLM) have been described, there are no criteria by which to estimate the size of the medial meniscus. We define a medial meniscus that exceeds the normal size as a "large medial meniscus" (LMM), and the purpose of this study is to establish quantitative MR diagnostic criteria for LMM. The MR imaging findings of 96 knees with arthroscopically confirmed intact semilunar lateral meniscus (SLM), 18 knees with intact DLM, 105 knees with intact semilunar medial meniscus (SMM) and 4 knees with torn LMM were analyzed. The following three quantitative parameters were measured: (a) meniscal width (MW): the minimum MW on the coronal slice; (b) ratio of the meniscus to the tibia (RMT): the ratio of minimum MW to maximum tibial width on the coronal slice; (c) continuity of the anterior and posterior horns (CAPH): the number of consecutive 5-mm-thick sagittal slices showing continuity between the anterior horn and the posterior horn of the meniscus. Using logistic discriminant analysis between the intact SLM and DLM groups and using descriptive statistics of the intact SLM and SMM groups, the cutoff values used to discriminate LMM from SMM were calculated for MW and RMT. Moreover, the efficacy of these cutoff values, and of a three-slice cutoff for CAPH, was estimated in the medial meniscus group. "MW ≥ 11 mm" and "RMT ≥ 15%" were determined to be effective diagnostic criteria for LMM: three of four cases in the torn LMM group were true positives, and specificity was 99% for both criteria. When "CAPH ≥ 3 slices" was used as a criterion, three of four torn LMM cases were true positives and specificity was 93%.
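
    Because the proposed criteria are plain thresholds, applying them is trivial to script. A hypothetical helper is sketched below; combining the three criteria with a logical OR is our illustrative choice, since the abstract evaluates each criterion separately:

```python
def classify_medial_meniscus(mw_mm, rmt_pct, caph_slices):
    """Apply the study's quantitative MR criteria for a large medial
    meniscus (LMM); returns the label and whichever criteria were met."""
    criteria = {
        "MW >= 11 mm":      mw_mm >= 11.0,
        "RMT >= 15%":       rmt_pct >= 15.0,
        "CAPH >= 3 slices": caph_slices >= 3,
    }
    met = [name for name, ok in criteria.items() if ok]
    return ("LMM suspected" if met else "within normal size"), met

label, met = classify_medial_meniscus(mw_mm=12.0, rmt_pct=16.5, caph_slices=3)
print(label, met)   # -> LMM suspected, with all three criteria met
```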

  20. DECISION-MAKING ALIGNED WITH RAPID-CYCLE EVALUATION IN HEALTH CARE.

    PubMed

    Schneeweiss, Sebastian; Shrank, William H; Ruhl, Michael; Maclure, Malcolm

    2015-01-01

    Availability of real-time electronic healthcare data provides new opportunities for rapid-cycle evaluation (RCE) of health technologies, including healthcare delivery and payment programs. We aim to align decision-making processes with stages of RCE to optimize the usefulness and impact of rapid results. Rational decisions about program adoption depend on program effect size in relation to externalities, including implementation cost, sustainability, and likelihood of broad adoption. Drawing on case studies and experience from drug safety monitoring, we examine how decision makers have used scientific evidence on complex interventions in the past. We clarify how RCE alters the nature of policy decisions; develop the RAPID framework for synchronizing decision-maker activities with stages of RCE; and provide guidelines on evidence thresholds for incremental decision-making. In contrast to traditional evaluations, RCE provides early evidence on effectiveness and facilitates a stepped approach to decision making in expectation of future regularly updated evidence. RCE allows for identification of trends in adjusted effect size. It supports adapting a program in midstream in response to interim findings, or adapting the evaluation strategy to identify true improvements earlier. The 5-step RAPID approach that utilizes the cumulating evidence of program effectiveness over time could increase policy-makers' confidence in expediting decisions. RCE enables a step-wise approach to HTA decision-making, based on gradually emerging evidence, reducing delays in decision-making processes after traditional one-time evaluations.

  1. Analysis of point source size on measurement accuracy of lateral point-spread function of confocal Raman microscopy

    NASA Astrophysics Data System (ADS)

    Fu, Shihang; Zhang, Li; Hu, Yao; Ding, Xiang

    2018-01-01

    Confocal Raman Microscopy (CRM) has matured to become one of the most powerful instruments in analytical science because of its molecular sensitivity and high spatial resolution. Compared with conventional Raman microscopy, CRM can perform three-dimensional mapping of tiny samples and achieves high spatial resolution thanks to its confocal pinhole. With the wide application of the instrument, there is a growing requirement for evaluating the imaging performance of the system. The point-spread function (PSF) is an important means of evaluating the imaging capability of an optical instrument. Among the various methods for measuring the PSF, the point source method has been widely used because it is easy to operate and the measurement results approximate the true PSF. In the point source method, the point source size has a significant impact on the final measurement accuracy. In this paper, the influence of point source size on the measurement accuracy of the PSF is analyzed and verified experimentally. A theoretical model of the lateral PSF for CRM is established, and the effect of point source size on the full-width at half maximum of the lateral PSF is simulated. For long-term preservation and measurement convenience, a PSF measurement phantom made of polydimethylsiloxane resin doped with polystyrene microspheres of different sizes was designed. The PSF of the CRM was measured with each microsphere size, and the results were compared with the simulation results. The results provide a guide for measuring the PSF of the CRM.
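
    The blurring effect of a finite "point" source can be modeled as a convolution of the true PSF with the projected profile of the microsphere. A numerical sketch under assumed values (a Gaussian true PSF of 300 nm FWHM; the sphere diameters are illustrative):

```python
import numpy as np

def apparent_fwhm(true_fwhm_nm=300.0, sphere_diam_nm=100.0, dx=2.0):
    """FWHM of a Gaussian PSF after blurring by a finite spherical source."""
    x = np.arange(-3000.0, 3000.0, dx)
    sigma = true_fwhm_nm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    psf = np.exp(-x**2 / (2.0 * sigma**2))
    r = sphere_diam_nm / 2.0
    src = np.maximum(r**2 - x**2, 0.0)  # 1D projection of a uniform sphere
    meas = np.convolve(psf, src / src.sum(), mode="same")
    above = x[meas >= meas.max() / 2.0]
    return above.max() - above.min()

for d in (50, 100, 200, 500):
    print(f"{d:3d} nm sphere -> apparent FWHM {apparent_fwhm(300.0, d):.0f} nm"
          f" (true: 300 nm)")
```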

  2. Noninvasive genetics provides insights into the population size and genetic diversity of an Amur tiger population in China.

    PubMed

    Wang, Dan; Hu, Yibo; Ma, Tianxiao; Nie, Yonggang; Xie, Yan; Wei, Fuwen

    2016-01-01

    Understanding population size and genetic diversity is critical for effective conservation of endangered species. The Amur tiger (Panthera tigris altaica) is the largest felid and a flagship species for wildlife conservation. Due to habitat loss and human activities, available habitat and population size are continuously shrinking. However, little is known about the true population size and genetic diversity of wild tiger populations in China. In this study, we collected 55 fecal samples and 1 hair sample to investigate the population size and genetic diversity of wild Amur tigers in Hunchun National Nature Reserve, Jilin Province, China. From the samples, we determined that 23 fecal samples and 1 hair sample were from 7 Amur tigers: 2 males, 4 females and 1 individual of unknown sex. Interestingly, 2 fecal samples that were presumed to be from tigers were from Amur leopards, highlighting the significant advantages of noninvasive genetics over traditional methods in studying rare and elusive animals. Analyses from this sample suggested that the genetic diversity of wild Amur tigers is much lower than that of Bengal tigers, consistent with previous findings. Furthermore, the genetic diversity of this Hunchun population in China was lower than that of the adjoining subpopulation in southwest Primorye Russia, likely due to sampling bias. Considering the small population size and relatively low genetic diversity, it is urgent to protect this endangered local subpopulation in China. © 2015 International Society of Zoological Sciences, Institute of Zoology/Chinese Academy of Sciences and John Wiley & Sons Australia, Ltd.

  3. Tests of ecogeographical relationships in a non-native species: what rules avian morphology?

    PubMed

    Cardilini, Adam P A; Buchanan, Katherine L; Sherman, Craig D H; Cassey, Phillip; Symonds, Matthew R E

    2016-07-01

The capacity of non-native species to undergo rapid adaptive change provides opportunities to research contemporary evolution through natural experiments. This is particularly true for ecogeographical rules, to which non-native species have been shown to conform within relatively short periods of time. Ecogeographical rules describe predictable spatial patterns of morphology, physiology, life history and behaviour. We tested whether Australian populations of the non-native starling, Sturnus vulgaris, introduced to the country approximately 150 years ago, exhibit the predicted environmental clines in body size, appendage size and heart size (Bergmann's, Allen's and Hesse's rules, respectively). Adult starlings (n = 411) were collected from 28 localities across eastern Australia from 2011 to 2012. Linear models were constructed to examine the relationships between morphology and local environment. Patterns of variation in body mass and bill surface area were consistent with Bergmann's and Allen's rules, respectively (smaller body size and larger bill size in warmer climates), with maximum summer temperature a strongly weighted predictor of both variables. In the only intraspecific test of Hesse's rule in birds to date, we found no evidence to support the idea that relative heart size is larger in individuals living in colder climates. Our study does provide evidence that maximum temperature is a strong driver of morphological adaptation in Australian starlings. The changes in morphology presented here demonstrate the potential for avian species to make rapid adaptive changes in relation to a changing climate to ameliorate the effects of heat stress.

  4. Non-malignant pathological results on transthoracic CT guided core-needle biopsy: when is benign really benign?

    PubMed

    Rui, Y; Han, M; Zhou, W; He, Q; Li, H; Li, P; Zhang, F; Shi, Y; Su, X

    2018-06-06

To determine true negatives and characterise the variables associated with false-negative results when interpreting non-malignant results of computed tomography (CT)-guided lung biopsy, 950 patients with initial non-malignant findings on their first transthoracic CT-guided core-needle biopsy (TTNB) were included in the study. Initial biopsy results were compared to definitive diagnoses established later. The negative predictive value (NPV) of non-malignant diseases upon initial TTNB was 83.6%. When the biopsy results indicated a specific infection or benign tumour (n=225, 26.1%), all were later confirmed to be true negatives for malignancy. Only one inconclusive "granuloma" diagnosis was false negative. All 141 patients (141/861, 16.4%) who were false negative for malignancy were from the "infection not otherwise specified (NOS)", "inflammatory diseases", or "inconclusive" groups. Age (p=0.002), cancer history (p<0.001), target size (p=0.003), and pneumothorax during lung biopsy (p=0.003) were significant predictors of false-negative results; 47.6% (410/861) of patients underwent additional invasive examinations to reach a final diagnosis, and 52.7% (216/410) of these were ultimately diagnosed successfully. Specific infection, benign tumour, and granulomatous inflammation on first TTNB were mostly true negative. Older age, history of cancer, larger target size, and pneumothorax were highly predictive of false-negative results for malignancy. In such cases, additional invasive examinations were frequently necessary to obtain final diagnoses. Copyright © 2018 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
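As a quick arithmetic check, the reported negative predictive value follows directly from the counts quoted in the abstract:

```python
# NPV check using the counts quoted in the abstract: 861 evaluable
# non-malignant biopsy results, 141 of which proved false negative.
false_neg = 141
evaluable = 861
true_neg = evaluable - false_neg
npv = true_neg / evaluable        # NPV = TN / (TN + FN)
print(f"NPV = {true_neg}/{evaluable} = {npv:.1%}")   # ~83.6%, as reported
```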

  5. Posterosuperior Placement of a Standard-Sized Cup at the True Acetabulum in Acetabular Reconstruction of Developmental Dysplasia of the Hip With High Dislocation.

    PubMed

    Xu, Jiawei; Xu, Chen; Mao, Yuanqing; Zhang, Jincheng; Li, Huiwu; Zhu, Zhenan

    2016-06-01

    We sought to evaluate posterosuperior placement of the acetabular component at the true acetabulum during acetabular reconstruction in patients with Crowe type-IV developmental dysplasia of the hip. Using pelvic computed tomography and image processing, we developed a two-dimensional mapping technique to demonstrate the distribution of preoperative three-dimensional cup coverage at the true acetabulum, determined the postoperative location of the acetabular cup, and calculated postoperative three-dimensional coverage for 16 Crowe type-IV dysplastic hips in 14 patients with a mean age of 52 years (33-78 years) who underwent total hip arthroplasty. Mean follow-up was 6.3 years (5.5-7.3 years). On preoperative mapping, the maximum three-dimensional coverage using a 44-mm cup was 87.31% (77.36%-98.14%). Mapping enabled the successful replacement of 16 hips using a mean cup size of 44.13 mm (42-46 mm) with posterosuperior placement of the cup. Early weight-bearing and no prosthesis revision or loosening during follow-up were achieved in all patients. The postoperative two-dimensional coverage on anteroposterior radiographs and three-dimensional coverage were 96.15% (89.49%-100%) and 83.42% (71.81%-98.50%), respectively. This technique may improve long-term implant survival in patients with Crowe-IV developmental dysplasia of the hip undergoing total hip arthroplasty by allowing the use of durable bearings, increasing host bone coverage, ensuring initial stability, and restoring the normal hip center. Copyright © 2015 Elsevier Inc. All rights reserved.

  6. CO2-ECBM related coupled physical and mechanical transport processes

    NASA Astrophysics Data System (ADS)

    Gensterblum, Y.; Sartorius, M.; Busch, A.; Krooss, B. M.; Littke, R.

    2012-12-01

The interrelation of cleat transport processes and mechanical properties was investigated by permeability tests at different stress levels (60% to 130% of in-situ stress) with sorbing (CH4, CO2) and inert gases (N2, Ar, He) on a subbituminous A coal from the Surat Basin, Queensland, Australia. From the flow tests under controlled triaxial stress conditions, the Klinkenberg-corrected "true" permeability coefficients and the Klinkenberg slip factors were derived. The "true" (absolute, Klinkenberg-corrected) permeability depends on gas type. Following the approach of Seidle et al. (1992), the cleat volume compressibility (cf) was calculated from observed changes in apparent permeability upon variation of external stress (at equal mean gas pressures). The observed effects also show a clear dependence on gas type. Due to pore or cleat compressibility, the cleat aperture decreases with increasing effective stress. Conversely, with increasing mean pore pressure at lower confining pressure, an increase in permeability is observed, which is attributed to a widening of the cleat aperture. Non-sorbing gases like helium and argon show higher apparent permeabilities than sorbing gases like methane and CO2. Permeability coefficients measured at successively increasing mean gas pressures were consistently lower than those determined at decreasing mean gas pressures. The kinetics of matrix transport processes were studied by sorption tests on different particle sizes at various moisture contents and temperatures (cf. Busch et al., 2006). Methane uptake rates were determined from the pressure decline curves recorded for each particle-size fraction, and "diffusion coefficients" were calculated using several unipore and bidisperse diffusion models. While the CH4 sorption capacity of moisture-equilibrated coals was significantly lower (by 50%) than that of dry coals, no hysteresis was observed between sorption and desorption on dry and moisture-equilibrated samples, and the sorption isotherms recorded for different particle sizes were essentially identical. The CH4 uptake rates were lower by a factor of two for moist coals than for dry coals. Busch, A., Gensterblum, Y., Krooss, B.M. and Siemons, N., 2006. Investigation of high-pressure selective adsorption/desorption behaviour of CO2 and CH4 on coals: An experimental study. International Journal of Coal Geology, 66(1-2): 53-68. Seidle, J.P., Jeansonne, M.W. and Erickson, D.J., 1992. Application of Matchstick Geometry to Stress-Dependent Permeability in Coals, SPE Rocky Mountain Regional Meeting, Casper, Wyoming.
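For readers unfamiliar with the correction, the sketch below (illustrative numbers, not the study's data) shows how the Klinkenberg-corrected permeability and slip factor are obtained: the apparent permeability is linear in the reciprocal mean pressure, k_app = k_inf(1 + b/p), so a straight-line fit yields both parameters.

```python
# Minimal sketch of the Klinkenberg correction with made-up measurements:
# k_app = k_inf * (1 + b / p_mean) is linear in 1 / p_mean.
import numpy as np

p_mean = np.array([1.0, 2.0, 4.0, 8.0])        # mean gas pressure, MPa (hypothetical)
k_app  = np.array([2.40, 1.80, 1.50, 1.35])    # apparent permeability, mD (hypothetical)

slope, intercept = np.polyfit(1.0 / p_mean, k_app, 1)
k_inf = intercept            # Klinkenberg-corrected ("true") permeability
b = slope / k_inf            # Klinkenberg slip factor
print(f"k_inf = {k_inf:.2f} mD, b = {b:.2f} MPa")   # recovers 1.20 mD, 1.00 MPa
```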

  7. The importance of food quantity and quality for reproductive performance in alpine water pipits (Anthus spinoletta).

    PubMed

    Brodmann, Paul A; Reyer, H-U; Bollmann, Kurt; Schläpfer, Alex R; Rauter, Claudia

    1997-01-01

    Studies relating reproduction to food availability are usually restricted to food quantity, but ignore food quality and the effects of habitat structure on obtaining the food. This is particularly true for insectivorous birds. In this study we relate measures of reproductive success, time of reproduction and nestling size of water pipits (Anthus spinoletta) to biomass, taxonomic composition and nutritional content of available food, and to vegetation structure and distance to feeding sites. Clutch size was positively correlated with the proportion of grass at the feeding sites, which facilitates foraging. This suggests that water pipits adapt their clutch size to environmental conditions. Also, pipits started breeding earlier and produced more fledglings when abundant food and a large proportion of grass were available, probably because these conditions allow the birds to gain more energy in less time. The number of fledglings was positively correlated with the energy content of available food. No significant relationships were found between feeding conditions and nestling size or the time that nestlings took to fledge. This suggests that water pipits do not invest more in individual nestlings when food conditions are favourable but rather start breeding earlier and produce more young. Taxonomic composition and nutritional content of prey were not correlated with any of the reproductive parameters, indicating that profitability rather than quality of food affects reproductive success.

  8. Sexual size dimorphism in three North Sea gadoids.

    PubMed

    Keyl, F; Kempf, A J; Sell, A F

    2015-01-01

    Existing biological data on whiting Merlangius merlangus, cod Gadus morhua and haddock Melanogrammus aeglefinus from a long-term international survey were analysed to address sexual size dimorphism (SSD) and its effect on their ecology and management. Results show that SSD, with larger females of the same age as males, is a result of higher growth rates in females. A direct consequence of SSD is the pronounced length-dependent female ratio that was found in all three gadoids in the North Sea. Female ratios of the three species changed from equality to female dominance at specific dominance transition lengths of c. 30, 35 and 60 cm for M. merlangus, G. morhua and M. aeglefinus, respectively. An analysis by area for M. merlangus also revealed length dependence of female ratios. SSD and length-dependent female ratios under most circumstances are inseparable. Higher overall energy demand as well as a higher energy uptake rate must result from the observed SSD and dimorphism in growth rates. Potential processes related to feeding, locomotion and physiology are proposed that could balance the increased energy investment of females. Potential consequences of SSD and length dependency of female ratios are the reduction of the reproductive potential of a stock due to size-selective fishing and biased assessment of the true size of the female spawning stock that could distort decisions in fisheries management. © 2014 The Fisheries Society of the British Isles.

  9. A comparative review of methods for comparing means using partially paired data.

    PubMed

    Guo, Beibei; Yuan, Ying

    2017-06-01

    In medical experiments with the objective of testing the equality of two means, data are often partially paired by design or because of missing data. The partially paired data represent a combination of paired and unpaired observations. In this article, we review and compare nine methods for analyzing partially paired data, including the two-sample t-test, paired t-test, corrected z-test, weighted t-test, pooled t-test, optimal pooled t-test, multiple imputation method, mixed model approach, and the test based on a modified maximum likelihood estimate. We compare the performance of these methods through extensive simulation studies that cover a wide range of scenarios with different effect sizes, sample sizes, and correlations between the paired variables, as well as true underlying distributions. The simulation results suggest that when the sample size is moderate, the test based on the modified maximum likelihood estimator is generally superior to the other approaches when the data is normally distributed and the optimal pooled t-test performs the best when the data is not normally distributed, with well-controlled type I error rates and high statistical power; when the sample size is small, the optimal pooled t-test is to be recommended when both variables have missing data and the paired t-test is to be recommended when only one variable has missing data.
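To make the setting concrete, here is a minimal sketch (not the authors' code) that simulates partially paired data and applies two of the reviewed approaches: the paired t-test restricted to complete pairs and the two-sample t-test that ignores the pairing. The effect size, correlation, and sample sizes are assumed values.

```python
# Simulate partially paired data and compare two simple analyses.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_pairs, n_only_x, n_only_y = 30, 10, 10   # assumed design
effect, rho = 0.5, 0.6                     # assumed effect size and pairing correlation

paired = rng.multivariate_normal([0.0, effect], [[1.0, rho], [rho, 1.0]], size=n_pairs)
x = np.concatenate([paired[:, 0], rng.normal(0.0, 1.0, n_only_x)])
y = np.concatenate([paired[:, 1], rng.normal(effect, 1.0, n_only_y)])

paired_t = stats.ttest_rel(paired[:, 0], paired[:, 1])   # complete pairs only
pooled_t = stats.ttest_ind(x, y)                         # all data, pairing ignored
print(f"paired t-test:     p = {paired_t.pvalue:.3f}")
print(f"two-sample t-test: p = {pooled_t.pvalue:.3f}")
```

The paired test discards the unpaired observations, while the two-sample test wastes the correlation information; the nine reviewed methods differ in how they trade off these two sources of information.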

  10. Capturing tensile size-dependency in polymer nanofiber elasticity.

    PubMed

    Yuan, Bo; Wang, Jun; Han, Ray P S

    2015-02-01

As the name implies, tensile size-dependency refers to the size-dependent response under uniaxial tension. It differs markedly from bending size-dependency in terms of the onset and magnitude of the size-dependent response; the former begins earlier but rises to a smaller value than the latter. Experimentally, tensile size-dependent behavior is much harder to capture than its bending counterpart. The same is true of computational efforts; bending size-dependency models are more prevalent and well developed. Indeed, many have questioned the existence of tensile size-dependency. However, recent experiments seem to support the existence of this phenomenon. Current strain gradient elasticity theories can accurately predict bending size-dependency but are unable to track tensile size-dependency. To rectify this deficiency, a higher-order strain gradient elasticity model is constructed by including the second gradient of the strain in the deformation energy. Tensile experiments involving 10 wt% polycaprolactone nanofibers are performed to calibrate and verify our model. The results reveal that for the selected nanofibers, size-dependency begins when their diameters fall to 600 nm and below. Further, their characteristic length-scale parameter is found to be 1095.8 nm. Copyright © 2014 Elsevier Ltd. All rights reserved.
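Schematically, and in our own notation rather than the paper's, a one-dimensional deformation energy density augmented with the first and second gradients of the strain takes a form like the following, where E is the elastic modulus and l1, l2 are material length-scale parameters:

```latex
w(\varepsilon) \;=\; \tfrac{1}{2}\,E\,\varepsilon^{2}
  \;+\; \tfrac{1}{2}\,E\,\ell_{1}^{2}\left(\frac{\mathrm{d}\varepsilon}{\mathrm{d}x}\right)^{2}
  \;+\; \tfrac{1}{2}\,E\,\ell_{2}^{4}\left(\frac{\mathrm{d}^{2}\varepsilon}{\mathrm{d}x^{2}}\right)^{2}
```

The second-gradient term is the higher-order ingredient; in pure bending the first-gradient term already activates, whereas in uniaxial tension of a slender fiber the strain is nearly uniform along the axis and the higher-order term is needed to produce a size effect.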

  11. Emotionally Negative Pictures Enhance Gist Memory

    PubMed Central

    Bookbinder, S. H.; Brainerd, C. J.

    2016-01-01

    In prior work on how true and false memory are influenced by emotion, valence and arousal have often been conflated. Thus, it is difficult to say which specific effects are due to valence and which are due to arousal. In the present research, we used a picture-memory paradigm that allowed emotional valence to be manipulated with arousal held constant. Negatively-valenced pictures elevated both true and false memory, relative to positive and neutral pictures. Conjoint recognition modeling revealed that negative valence (a) reduced erroneous suppression of true memories and (b) increased the familiarity of the semantic content of both true and false memories. Overall, negative valence impaired the verbatim side of episodic memory but enhanced the gist side, and these effects persisted even after a week-long delay. PMID:27454002

  12. Effects of electromyographic biofeedback on quadriceps strength: a systematic review.

    PubMed

    Lepley, Adam S; Gribble, Phillip A; Pietrosimone, Brian G

    2012-03-01

Quadriceps strength is a vital component of lower extremity function and is often the focus of resistance training interventions and injury rehabilitation. Electromyographic biofeedback (EMGBF) is frequently used to supplement strength gains; however, its true effect remains unknown. Therefore, the objective of this investigation was to determine the magnitude of the treatment effect of EMGBF on quadriceps strength compared with that of placebo and traditional exercise interventions in both healthy and pathological populations. Web of Science and ProQuest databases were searched, and bibliographies of relevant articles were cross-referenced. Six articles measuring isometric quadriceps strength in response to EMGBF training were included and methodologically assessed using the Physiotherapy Evidence Database (PEDro) scale. Standardized effect sizes with 95% confidence intervals (CIs) were calculated from preintervention and postintervention measures for EMGBF, placebo, and exercise-only interventions. Separate comparisons were made between studies assessing different intervention lengths (<4 and ≥4 weeks) and patient populations (pathological and healthy). The included articles received an average PEDro score of 6.5 ± 0.84. Homogeneous EMGBF effect sizes were found in all 6 studies (d = 0.01-5.56), with 4 studies reporting CIs that crossed 0. A heterogeneous collection of effect sizes was found for exercise alone (d = -0.12 to 1.18) and placebo (d = -0.2 to 1.38), with 4 and 1 studies, respectively, having a CI that crossed 0. The greatest EMGBF effects were found in pathological populations (d = 0.01-5.56), with the strongest effect in subjects with knee osteoarthritis (d = 5.56, CI = 4.26-6.68). As a group, effects were strongest for EMGBF compared with placebo and exercise-only interventions, yet definitive evidence that EMGBF is beneficial for increasing quadriceps strength could not be established because 4 studies demonstrated wide CIs.
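For orientation, standardized effect sizes of the kind reported above are typically computed along the following lines. This is a generic sketch with hypothetical numbers, not the review's code, and the variance formula is the common large-sample approximation for two independent groups of equal size.

```python
# Standardized effect size (Cohen's d) with an approximate 95% CI.
import numpy as np

def cohens_d_ci(mean_a, mean_b, sd_pooled, n_per_group):
    d = (mean_b - mean_a) / sd_pooled
    # large-sample variance of d for two independent groups of size n
    se = np.sqrt(2.0 / n_per_group + d**2 / (4.0 * n_per_group))
    return d, (d - 1.96 * se, d + 1.96 * se)

# hypothetical pre/post quadriceps strength means (N) and pooled SD
d, (lo, hi) = cohens_d_ci(mean_a=320.0, mean_b=380.0, sd_pooled=60.0, n_per_group=15)
print(f"d = {d:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")   # a CI crossing 0 is inconclusive
```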

  13. Methods for estimating 2D cloud size distributions from 1D observations

    DOE PAGES

    Romps, David M.; Vogelmann, Andrew M.

    2017-08-04

The two-dimensional (2D) size distribution of clouds in the horizontal plane plays a central role in the calculation of cloud cover, cloud radiative forcing, convective entrainment rates, and the likelihood of precipitation. Here, a simple method is proposed for calculating the area-weighted mean cloud size and for approximating the 2D size distribution from the 1D cloud chord lengths measured by aircraft and vertically pointing lidar and radar. This simple method (which is exact for square clouds) compares favorably against the inverse Abel transform (which is exact for circular clouds) in the context of theoretical size distributions. Both methods also perform well when used to predict the size distribution of real clouds from a Landsat scene. When applied to a large number of Landsat scenes, the simple method is able to accurately estimate the mean cloud size. Finally, as a demonstration, the methods are applied to aircraft measurements of shallow cumuli during the RACORO campaign, which then allow for an estimate of the true area-weighted mean cloud size.
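One way to picture the square-cloud idea (our reading of the "simple method", stated here as an assumption rather than the authors' derivation): each chord equals the cloud edge s, a 1D transect intersects a cloud with probability proportional to s, and weighting each cloud by its area s² then collapses the area-weighted mean size to the ratio of the second to the first moment of the observed chords.

```python
# Area-weighted mean cloud size from 1D chord lengths, assuming square
# clouds: chord = edge length s, intersection probability proportional to s,
# area weight s^2, which collapses to sum(l^2) / sum(l) over observed chords.
import numpy as np

chords_km = np.array([0.4, 0.7, 1.2, 0.3, 2.5, 0.9, 1.6])  # hypothetical chords
mean_size = np.sum(chords_km**2) / np.sum(chords_km)
print(f"area-weighted mean cloud size ~ {mean_size:.2f} km")
```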

  14. Methods for estimating 2D cloud size distributions from 1D observations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romps, David M.; Vogelmann, Andrew M.

The two-dimensional (2D) size distribution of clouds in the horizontal plane plays a central role in the calculation of cloud cover, cloud radiative forcing, convective entrainment rates, and the likelihood of precipitation. Here, a simple method is proposed for calculating the area-weighted mean cloud size and for approximating the 2D size distribution from the 1D cloud chord lengths measured by aircraft and vertically pointing lidar and radar. This simple method (which is exact for square clouds) compares favorably against the inverse Abel transform (which is exact for circular clouds) in the context of theoretical size distributions. Both methods also perform well when used to predict the size distribution of real clouds from a Landsat scene. When applied to a large number of Landsat scenes, the simple method is able to accurately estimate the mean cloud size. Finally, as a demonstration, the methods are applied to aircraft measurements of shallow cumuli during the RACORO campaign, which then allow for an estimate of the true area-weighted mean cloud size.

  15. Dynamics of domain coverage of the protein sequence universe

    PubMed Central

    2012-01-01

Background The currently known protein sequence space consists of millions of sequences in public databases and is rapidly expanding. Assigning sequences to families leads to a better understanding of protein function and the nature of the protein universe. However, a large portion of the current protein space remains unassigned and is referred to as its "dark matter". Results Here we suggest that the true size of the "dark matter" is much larger than stated by current definitions. We propose an approach to reducing the size of the "dark matter" by identifying and subtracting regions in protein sequences that are not likely to contain any domain. Conclusions Recent improvements in computational domain modeling result in a decrease, albeit a slow one, in the relative size of the "dark matter"; however, its absolute size increases substantially with the growth of sequence data. PMID:23157439

  16. Correlation Between Measured Noise And Its Visual Perception.

    NASA Astrophysics Data System (ADS)

    Bollen, Romain

    1986-06-01

People in the field claim, for understandable reasons, that measured data do not agree with what they perceive. Scientists reply that their data are "true". Are they? Since images are made to be looked at, a request for data that are meaningful for what is perceived is not foolish. We show that, when noise is characterized by standard density fluctuation figures, a good correlation with noise perception by the naked eye on a large-size radiograph is obtained by applying microdensitometric scanning with a 400-micron aperture. For other viewing conditions the aperture size has to be adapted.

  17. Investigation of eddy current examination on OD fatigue crack for steam generator tubes

    NASA Astrophysics Data System (ADS)

    Kong, Yuying; Ding, Boyuan; Li, Ming; Liu, Jinhong; Chen, Huaidong; Meyendorf, Norbert G.

    2015-03-01

The opening width of a fatigue crack is very small, and a conventional bobbin probe has difficulty detecting it in steam generator tubes. Eight fatigue cracks of different sizes were inspected using both a bobbin probe and a rotating probe. The analysis showed that the bobbin probe was not sensitive to fatigue cracks, even small through-wall cracks whose signals were mixed with denting signals. The rotating probe, on the other hand, easily detected all cracks. Finally, the OD phase-to-depth curve for fatigue cracks using the rotating probe was established, and the results agreed very well with the true crack sizes.

  18. Cavitation effect of holmium laser pulse applied to ablation of hard tissue underwater.

    PubMed

    Lü, Tao; Xiao, Qing; Xia, Danqing; Ruan, Kai; Li, Zhengjia

    2010-01-01

To overcome the lack of temporal continuity in shadow and schlieren photography, the complete dynamics of cavitation bubble oscillation and ablation products induced by a single holmium laser pulse [2.12 μm, 300 μs (FWHM)] transmitted through fibers of different core diameters (200, 400, and 600 μm) were recorded by means of high-speed photography. Consecutive images from high-speed cameras capture the true and complete process of laser-water or laser-tissue interaction. Both laser pulse energy and fiber diameter determine the cavitation bubble size, which in turn determines the acoustic transient amplitudes. Based on the pictures taken by the high-speed camera and scans from an optical coherence microscopy (OCM) system, it is easily seen that the liquid layer at the distal end of the fiber plays an important role during laser-tissue interaction: it can increase ablation efficiency, decrease thermal side effects, and reduce cost.

  19. Two Distinct Modes in One-Day Rainfall Event during MC3E Field Campaign: Analyses of Disdrometer Observations and WRF-SBM Simulation

    NASA Technical Reports Server (NTRS)

    Iguchi, Takamichi; Matsui, Toshihisa; Tokay, Ali; Kollias, Pavlos; Tao, Wei-Kuo

    2012-01-01

A unique microphysical structure of rainfall is observed by the surface laser optical Particle Size and Velocity (Parsivel) disdrometers on 25 April 2011 during the Midlatitude Continental Convective Clouds Experiment (MC3E). According to the systematic differences in rainfall rate and bulk effective droplet radius, the sampling data can be divided into two groups: the rainfall mostly from the deep convective clouds has a relatively high rainfall rate and large bulk effective droplet radius, whereas the reverse is true for the rainfall from the shallow warm clouds. The Weather Research and Forecasting model coupled with spectral bin microphysics (WRF-SBM) successfully reproduces the two distinct modes in the observed rainfall microphysical structure. The results show that the up-to-date model can demonstrate how the cloud physics and the weather conditions on the day are involved in forming the unique rainfall characteristic.

  20. Two distinct modes in one-day rainfall event during MC3E field campaign: Analyses of disdrometer observations and WRF-SBM simulation

    NASA Astrophysics Data System (ADS)

    Iguchi, Takamichi; Matsui, Toshihisa; Tokay, Ali; Kollias, Pavlos; Tao, Wei-Kuo

    2012-12-01

A unique microphysical structure of rainfall is observed by the surface laser optical Particle Size and Velocity (Parsivel) disdrometers on 25 April 2011 during the Midlatitude Continental Convective Clouds Experiment (MC3E). According to the systematic differences in rainfall rate and bulk effective droplet radius, the sampling data can be divided into two groups: the rainfall mostly from the deep convective clouds has a relatively high rainfall rate and large bulk effective droplet radius, whereas the reverse is true for the rainfall from the shallow warm clouds. The Weather Research and Forecasting model coupled with spectral bin microphysics (WRF-SBM) successfully reproduces the two distinct modes in the observed rainfall microphysical structure. The results show that the up-to-date model can demonstrate how the cloud physics and the weather conditions on the day are involved in forming the unique rainfall characteristic.

  1. The SIMS Screen for feigned mental disorders: the development of detection-based scales.

    PubMed

    Rogers, Richard; Robinson, Emily V; Gillard, Nathan D

    2014-01-01

Time-efficient screens for feigned mental disorders (FMDs) constitute important tools in forensic assessments. The Structured Inventory of Malingered Symptomatology (SIMS) is a 75-item true-false questionnaire that has been extensively studied as an FMD screen. However, the SIMS scales are not based on established detection strategies, and only its total score is utilized as a feigning screen. This investigation develops two new feigning scales based on well-established detection strategies: rare symptoms (RS) and symptom combinations (SC). They are studied in a between-subjects simulation design using inpatients under a partial-malingering condition (i.e., patients with genuine disorders asked to feign greater disability). Subject to future cross-validation, the SC scale evidenced the highest effect size (d=2.01) and appeared the most effective at ruling out examinees who have a high likelihood of genuine responding. Copyright © 2014 John Wiley & Sons, Ltd.

  2. Particle sedimentation in curved tubes: A 3D simulation and optimization for treatment of vestibular vertigo

    NASA Astrophysics Data System (ADS)

    White, Brian; Squires, Todd M.; Hain, Timothy C.; Stone, Howard A.

    2003-11-01

    Benign paroxysmal positional vertigo (BPPV) is a mechanical disorder of the vestibular system where micron-size crystals abnormally drift into the semicircular canals of the inner ear that sense angular motion of the head. Sedimentation of these crystals causes sensation of motion after true head motion has stopped: vertigo results. The usual clinical treatment is through a series of head maneuvers designed to move the particles into a less sensitive region of the canal system. We present a three-dimensional model to simulate treatment of BPPV by determining the complete hydrodynamic motion of the particles through the course of a therapeutic maneuver while using a realistic representation of the actual geometry. Analyses of clinical maneuvers show the parameter range for which they are effective, and indicate inefficiencies in current practice. In addition, an optimization process determines the most effective head maneuver, which significantly differs from those currently in practice.

  3. Radar-based rainfall estimation: Improving Z/R relations through comparison of drop size distributions, rainfall rates and radar reflectivity patterns

    NASA Astrophysics Data System (ADS)

    Neuper, Malte; Ehret, Uwe

    2014-05-01

The relation between the measured radar reflectivity factor Z and surface rainfall intensity R - the Z/R relation - is profoundly complex, so that in general one speaks of radar-based quantitative precipitation estimation (QPE) rather than exact measurement. As in Plato's allegory of the cave, what we observe in the end is only the 'shadow' of the true rainfall field, through a very small backscatter of an electromagnetic signal emitted by the radar that we hope has actually been reflected by hydrometeors. The meteorologically relevant and valuable information is gained only indirectly, by more or less justified assumptions. One of these assumptions concerns the drop size distribution, through which the rain intensity is finally associated with the measured radar reflectivity factor Z. The real drop size distribution is, however, subject to large spatial and temporal variability, and consequently so is the true Z/R relation. Better knowledge of the true spatio-temporal Z/R structure therefore has the potential to improve radar-based QPE compared to the common practice of applying a single or a few standard Z/R relations. To this end, we use observations from six laser-optic disdrometers, two vertically pointing micro rain radars, 205 rain gauges, one rawinsonde station and two C-band Doppler radars installed or operated in and near the Attert catchment (Luxembourg). The C-band radars and the rawinsonde station are operated by the Belgian and German weather services, the rain gauge data were partly provided by the French, Dutch, Belgian and German weather services and the Ministry of Agriculture of Luxembourg, and the remaining equipment was installed as part of the interdisciplinary DFG research project CAOS (Catchments as Organized Systems). Correlation analyses were carried out with the various data sets. To characterize the different appearances of the reflectivity patterns in the radar image, various simple distribution indices (for example the Gini index and the Rosenbluth index) were first calculated and compared to the synoptic situation in general and to atmospheric stability in particular. The indices were then related to the drop size distributions and the rain rate. Special emphasis was placed on an objective distinction between stratiform and convective precipitation, and on the correspondingly altered drop size distribution and Z/R relationship. In our presentation we show how convective and stratiform precipitation become manifest in the different distribution indices, which in turn are thought to represent different patterns in the radar image. We also present and discuss the correlation between these distribution indices and the evolution of the drop size distribution and the rain rate, and compare a dynamically adapted Z/R relation to the standard Marshall-Palmer Z/R relation.
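For reference, converting a measured reflectivity to a rain rate with the standard Marshall-Palmer relation mentioned above (Z = 200 R^1.6, with Z in mm^6 m^-3 and R in mm/h) looks like this; a dynamically adapted Z/R relation would substitute different (a, b) coefficients per precipitation regime.

```python
# Radar reflectivity (dBZ) to rain rate via a Z = a * R^b power law.
import numpy as np

def rain_rate(dbz, a=200.0, b=1.6):      # Marshall-Palmer defaults
    z_linear = 10.0 ** (dbz / 10.0)      # dBZ -> linear reflectivity factor
    return (z_linear / a) ** (1.0 / b)   # invert Z = a * R^b

for dbz in (20.0, 30.0, 40.0):
    print(f"{dbz:.0f} dBZ -> R = {rain_rate(dbz):5.2f} mm/h")
```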

  4. Identification of EGFLAM, SPATC1L and RNASE13 as novel susceptibility loci for aortic aneurysm in Japanese individuals by exome-wide association studies

    PubMed Central

    Yamada, Yoshiji; Sakuma, Jun; Takeuchi, Ichiro; Yasukochi, Yoshiki; Kato, Kimihiko; Oguri, Mitsutoshi; Fujimaki, Tetsuo; Horibe, Hideki; Muramatsu, Masaaki; Sawabe, Motoji; Fujiwara, Yoshinori; Taniguchi, Yu; Obuchi, Shuichi; Kawai, Hisashi; Shinkai, Shoji; Mori, Seijiro; Arai, Tomio; Tanaka, Masashi

    2017-01-01

    We performed an exome-wide association study (EWAS) to identify genetic variants - in particular, low-frequency or rare variants with a moderate to large effect size - that confer susceptibility to aortic aneurysm with 8,782 Japanese subjects (456 patients with aortic aneurysm, 8,326 control individuals) and with the use of Illumina HumanExome-12 DNA Analysis BeadChip or Infinium Exome-24 BeadChip arrays. The correlation of allele frequencies for 41,432 single nucleotide polymorphisms (SNPs) that passed quality control to aortic aneurysm was examined with Fisher's exact test. Based on Bonferroni's correction, a P-value of <1.21×10−6 was considered statistically significant. The EWAS revealed 59 SNPs that were significantly associated with aortic aneurysm. None of these SNPs was significantly (P<2.12×10−4) associated with aortic aneurysm by multivariable logistic regression analysis with adjustment for age, gender and hypertension, although 8 SNPs were related (P<0.05) to this condition. Examination of the correlation of these latter 8 SNPs to true or dissecting aortic aneurysm separately showed that rs1465567 [T/C (W229R)] of the EGF-like, fibronectin type III, and laminin G domains gene (EGFLAM) (dominant model; P=0.0014; odds ratio, 1.63) was significantly (P<0.0016) associated with true aortic aneurysm. We next performed EWASs for true or dissecting aortic aneurysm separately and found that 45 and 19 SNPs were significantly associated with these conditions, respectively. Multivariable logistic regression analysis with adjustment for covariates revealed that rs113710653 [C/T (E231K)] of the spermatogenesis- and centriole associated 1-like gene (SPATC1L) (dominant model; P=0.0002; odds ratio, 5.32) and rs143881017 [C/T (R140H)] of the ribonuclease A family member 13 gene (RNASE13) (dominant model; P=0.0006; odds ratio, 5.77) were significantly (P<2.78×10−4 or P<6.58×10−4, respectively) associated with true or dissecting aortic aneurysm, respectively. EGFLAM and SPATC1L may thus be susceptibility loci for true aortic aneurysm and RNASE13 may be such a locus for dissecting aneurysm in Japanese individuals. PMID:28339009
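The exome-wide significance threshold quoted above is simply the Bonferroni correction for the 41,432 SNPs that passed quality control, as a one-line check confirms:

```python
# Bonferroni threshold: family-wise alpha divided by the number of tests.
alpha, n_tests = 0.05, 41432
print(f"threshold = {alpha / n_tests:.3g}")   # 1.21e-06, matching the abstract
```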

  5. The Role of Binarity in the Angular Momentum Evolution of M Dwarfs

    NASA Astrophysics Data System (ADS)

    Stauffer, John; Rebull, Luisa; K2 clusters team

    2018-01-01

We have analysed K2 light curves for on the order of a thousand low-mass stars in each of the 8 Myr old Upper Sco association, the 125 Myr old Pleiades open cluster and the ~700 Myr old Praesepe cluster. A very large fraction of these stars show well-determined rotation periods with K2, and where a star is a binary, we are usually able to determine periods for both components. In Upper Sco, where there are ~150 M dwarf binaries with K2 light curves, the binary stars have periods that are on average much shorter, and much closer to each other, than would be true if drawn at random from the Upper Sco M dwarf single stars. The same is true in the Pleiades, though the size of the differences from the single M dwarf population is smaller. By Praesepe age, the M dwarf binaries are still somewhat rapidly rotating, but their period differences are not significantly different from what would be expected if drawn by chance from the singles.

  6. Embolization of a True Giant Splenic Artery Aneurysm Using NBCA Glue - Case Report and Literature Review.

    PubMed

    Guziński, Maciej; Kurcz, Jacek; Kukulska, Monika; Neska, Małgorzata; Garcarek, Jerzy

    2015-01-01

Although splenic artery aneurysms (SAAs) are common, their giant forms (more than 10 cm in diameter) are rare. Because of the variety of forms and locations of these aneurysms, there are many therapeutic options to choose from. In our case of a giant true aneurysm, we performed endovascular embolization with N-butyl-cyanoacrylate (NBCA) glue. To our knowledge, this is the first reported case of this method of treatment of a true giant SAA. A 74-year-old male patient with a symptomatic giant SAA (13 cm) was urgently admitted to our hospital for diagnostic and therapeutic procedures. Due to his general health condition, advanced age and the large size of the aneurysm, we decided to perform endovascular treatment with NBCA glue. The preaneurysmal part of the splenic artery was occluded completely, with exclusion of the aneurysm. No splenectomy was needed. The patient was discharged in good general condition. Embolization with NBCA can be an efficient method of treating giant SAAs.

  7. Analysis of uncertainties in Monte Carlo simulated organ dose for chest CT

    NASA Astrophysics Data System (ADS)

    Muryn, John S.; Morgan, Ashraf G.; Segars, W. P.; Liptak, Chris L.; Dong, Frank F.; Primak, Andrew N.; Li, Xiang

    2015-03-01

In Monte Carlo simulation of organ dose for a chest CT scan, many input parameters are required (e.g., the half-value layer of the x-ray energy spectrum, the effective beam width, and the anatomical coverage of the scan). The input parameter values are provided by the manufacturer, measured experimentally, or determined based on typical clinical practices. The goal of this study was to assess the uncertainties in Monte Carlo simulated organ dose that result from using input parameter values that deviate from the truth (clinical reality). Organ dose from a chest CT scan was simulated for a standard-size female phantom using a set of reference input parameter values (treated as the truth). To emulate the situation in which the input parameter values used by the researcher may deviate from the truth, additional simulations were performed in which errors were purposefully introduced into the input parameter values, and their effects on organ dose per CTDIvol were analyzed. Our study showed that when errors in half-value layer were within ±0.5 mm Al, the errors in organ dose per CTDIvol were less than 6%. Errors in effective beam width of up to 3 mm had a negligible effect (<2.5%) on organ dose. In contrast, when the assumed anatomical center of the patient deviated from the true anatomical center by 5 cm, organ dose errors of up to 20% were introduced. Lastly, when the assumed extra scan length was longer than the true value by 4 cm, dose errors of up to 160% were found. The results answer the important question of the level of accuracy to which each input parameter needs to be determined in order to obtain accurate organ dose results.

  8. An evaluation of sex-age-kill (SAK) model performance

    USGS Publications Warehouse

    Millspaugh, Joshua J.; Skalski, John R.; Townsend, Richard L.; Diefenbach, Duane R.; Boyce, Mark S.; Hansen, Lonnie P.; Kammermeyer, Kent

    2009-01-01

    The sex-age-kill (SAK) model is widely used to estimate abundance of harvested large mammals, including white-tailed deer (Odocoileus virginianus). Despite a long history of use, few formal evaluations of SAK performance exist. We investigated how violations of the stable age distribution and stationary population assumption, changes to male or female harvest, stochastic effects (i.e., random fluctuations in recruitment and survival), and sampling efforts influenced SAK estimation. When the simulated population had a stable age distribution and λ > 1, the SAK model underestimated abundance. Conversely, when λ < 1, the SAK overestimated abundance. When changes to male harvest were introduced, SAK estimates were opposite the true population trend. In contrast, SAK estimates were robust to changes in female harvest rates. Stochastic effects caused SAK estimates to fluctuate about their equilibrium abundance, but the effect dampened as the size of the surveyed population increased. When we considered both stochastic effects and sampling error at a deer management unit scale the resultant abundance estimates were within ±121.9% of the true population level 95% of the time. These combined results demonstrate extreme sensitivity to model violations and scale of analysis. Without changes to model formulation, the SAK model will be biased when λ ≠ 1. Furthermore, any factor that alters the male harvest rate, such as changes to regulations or changes in hunter attitudes, will bias population estimates. Sex-age-kill estimates may be precise at large spatial scales, such as the state level, but less so at the individual management unit level. Alternative models, such as statistical age-at-harvest models, which require similar data types, might allow for more robust, broad-scale demographic assessments.
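For readers unfamiliar with the estimator, the core SAK bookkeeping is a simple chain of ratios. The sketch below is a simplified illustration with hypothetical inputs, not the authors' implementation, and it omits the age-structure machinery used to estimate the rates themselves; it does make plain why any bias in the male harvest rate propagates directly into the population estimate.

```python
# Simplified sex-age-kill (SAK) style reconstruction (hypothetical numbers).
buck_harvest = 4000        # adult males harvested
buck_harvest_rate = 0.40   # assumed male harvest mortality rate
does_per_buck = 2.1        # adult sex ratio, estimated from age-at-harvest data
fawns_per_doe = 0.9        # recruitment ratio from observation surveys

bucks = buck_harvest / buck_harvest_rate   # scale harvest up to abundance
does = bucks * does_per_buck
fawns = does * fawns_per_doe
print(f"estimated population ~ {bucks + does + fawns:,.0f}")  # ~49,900
```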

  9. A True-False Test on Methods of Typewriting Instruction.

    ERIC Educational Resources Information Center

    West, Leonard J.

    1984-01-01

    Presents a true-false test on typewriting instruction to illustrate the effects of educational lag, publishing practices, and deficiencies in preservice and inservice teacher education upon teaching methods. (SK)

  10. Life Adaptation Skills Training (LAST) for persons with depression: A randomized controlled study.

    PubMed

    Chen, Yun-Ling; Pan, Ay-Woan; Hsiung, Ping-Chuan; Chung, Lyinn; Lai, Jin-Shei; Shur-Fen Gau, Susan; Chen, Tsyr-Jang

    2015-10-01

To investigate the efficacy of the "Life Adaptation Skills Training (LAST)" program for persons with depression, sixty-eight subjects with depressive disorder were recruited from psychiatric outpatient clinics in Taipei City and randomly assigned to either an intervention group (N=33) or a control group (N=35). The intervention group received 24 sessions of the LAST program, as well as 24 phone contacts that mainly provided support. The control group received only the 24 phone contacts. The primary outcome measure was the World Health Organization Quality of Life-BREF-Taiwan version. Secondary outcome measures included the Occupational Self-Assessment, the Mastery Scale, the Social Support Questionnaire, the Beck Anxiety Inventory, the Beck Depression Inventory-II, and the Beck Scale for Suicide Ideation. A mixed-effects linear model was applied to analyze the incremental efficacy of the LAST program, and partial eta squared (ηp(2)) was used to examine the within- and between-group effect sizes. The subjects who participated in the LAST program showed significant incremental improvements, with moderate to large between-group effect sizes, in their level of anxiety (-5.45±2.34, p<0.05; ηp(2)=0.083) and level of suicidal ideation (-3.09±1.11, p<0.01; ηp(2)=0.157) compared to the control group. The reduction in suicidal ideation was maintained for three months after the end of the intervention (-3.44±1.09, p<0.01), with a moderate between-group effect size (ηp(2)=0.101). Both groups showed significant improvement in overall QOL, overall health, physical QOL, psychological QOL, level of anxiety, and level of depression. The within-group effect sizes reached large effects in the intervention group (ηp(2)=0.328-0.544) and were larger than those of the control group. Limitations include the small sample size, a high dropout rate, lower compliance in the intervention group, and the lack of a true control group. The occupation-based LAST program, which focuses on lifestyle rearrangement and coping skills enhancement, could significantly improve the level of anxiety and suicidal ideation in persons with depression. Copyright © 2015 Elsevier B.V. All rights reserved.

  11. Evaluation of an ensemble of genetic models for prediction of a quantitative trait.

    PubMed

    Milton, Jacqueline N; Steinberg, Martin H; Sebastiani, Paola

    2014-01-01

    Many genetic markers have been shown to be associated with common quantitative traits in genome-wide association studies. Typically these associated genetic markers have small to modest effect sizes and individually they explain only a small amount of the variability of the phenotype. In order to build a genetic prediction model without fitting a multiple linear regression model with possibly hundreds of genetic markers as predictors, researchers often summarize the joint effect of risk alleles into a genetic score that is used as a covariate in the genetic prediction model. However, the prediction accuracy can be highly variable and selecting the optimal number of markers to be included in the genetic score is challenging. In this manuscript we present a strategy to build an ensemble of genetic prediction models from data and we show that the ensemble-based method makes the challenge of choosing the number of genetic markers more amenable. Using simulated data with varying heritability and number of genetic markers, we compare the predictive accuracy and inclusion of true positive and false positive markers of a single genetic prediction model and our proposed ensemble method. The results show that the ensemble of genetic models tends to include a larger number of genetic variants than a single genetic model and it is more likely to include all of the true genetic markers. This increased sensitivity is obtained at the price of a lower specificity that appears to minimally affect the predictive accuracy of the ensemble.
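A minimal sketch of the two ingredients described above (not the authors' code; the data, marker counts, and score sizes are all assumed): a genetic score summing risk alleles over the top-ranked markers, and an ensemble that averages predictions from models built with different score sizes.

```python
# Genetic-score models of a quantitative trait, combined into an ensemble.
import numpy as np

rng = np.random.default_rng(1)
n, m = 500, 100                                         # subjects, candidate SNPs
geno = rng.binomial(2, 0.3, size=(n, m)).astype(float)  # risk-allele counts (0/1/2)
beta = np.zeros(m); beta[:10] = 0.3                     # 10 true markers, small effects
y = geno @ beta + rng.normal(0.0, 1.0, n)

# rank markers by marginal correlation with the trait
corr = np.array([np.corrcoef(geno[:, j], y)[0, 1] for j in range(m)])
rank = np.argsort(-np.abs(corr))

def score_model(k):
    """Regress the trait on a score built from the top-k markers; return fits."""
    score = geno[:, rank[:k]].sum(axis=1)
    slope, intercept = np.polyfit(score, y, 1)
    return slope * score + intercept

# ensemble: average predictions across models with different score sizes
ensemble = np.mean([score_model(k) for k in (5, 10, 20, 40)], axis=0)
print(f"in-sample R^2 of ensemble = {1 - np.var(y - ensemble)/np.var(y):.2f}")
```

Averaging over several score sizes sidesteps the choice of a single optimal number of markers, which mirrors the abstract's observation that the ensemble is more likely to include all true markers at the cost of some specificity.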

  12. New approach to calculate the true-coincidence effect of HpGe detector

    NASA Astrophysics Data System (ADS)

    Alnour, I. A.; Wagiran, H.; Ibrahim, N.; Hamzah, S.; Siong, W. B.; Elias, M. S.

    2016-01-01

The corrections for true-coincidence effects in HpGe detectors are important, especially at small source-to-detector distances. This work established an approach to calculate the true-coincidence effects experimentally for HpGe detectors of type Canberra GC3018 and Ortec GEM25-76-XLB-C, which are in operation at the neutron activation analysis laboratory of the Malaysian Nuclear Agency (NM). The correction for true-coincidence effects was performed close to the detector, at distances of 2 and 5 cm, using 57Co, 60Co, 133Ba and 137Cs as standard point sources. The correction factors ranged between 0.93-1.10 at 2 cm and 0.97-1.00 at 5 cm for the Canberra HpGe detector, whereas for the Ortec HpGe detector they ranged between 0.92-1.13 and 0.95-1.00 at 2 and 5 cm, respectively. The change in the efficiency calibration curve of the detector at 2 and 5 cm after correction was found to be less than 1%. Moreover, polynomial parameter functions were fitted with a MATLAB program in order to find an accurate fit to the experimental data points.

  13. Personality Trait Differences Between Young and Middle-Aged Adults: Measurement Artifacts or Actual Trends?

    PubMed

    Nye, Christopher D; Allemand, Mathias; Gosling, Samuel D; Potter, Jeff; Roberts, Brent W

    2016-08-01

    A growing body of research demonstrates that older individuals tend to score differently on personality measures than younger adults. However, recent research using item response theory (IRT) has questioned these findings, suggesting that apparent age differences in personality traits merely reflect artifacts of the response process rather than true differences in the latent constructs. Conversely, other studies have found the opposite-age differences appear to be true differences rather than response artifacts. Given these contradictory findings, the goal of the present study was to examine the measurement equivalence of personality ratings drawn from large groups of young and middle-aged adults (a) to examine whether age differences in personality traits could be completely explained by measurement nonequivalence and (b) to illustrate the comparability of IRT and confirmatory factor analysis approaches to testing equivalence in this context. Self-ratings of personality traits were analyzed in two groups of Internet respondents aged 20 and 50 (n = 15,726 in each age group). Measurement nonequivalence across these groups was negligible. The effect sizes of the mean differences due to nonequivalence ranged from -.16 to .15. Results indicate that personality trait differences across age groups reflect actual differences rather than merely response artifacts. © 2015 Wiley Periodicals, Inc.

  14. Particle dynamics and deposition in true-scale pulmonary acinar models.

    PubMed

    Fishler, Rami; Hofemeier, Philipp; Etzion, Yael; Dubowski, Yael; Sznitman, Josué

    2015-09-11

    Particle transport phenomena in the deep alveolated airways of the lungs (i.e. pulmonary acinus) govern deposition outcomes following inhalation of hazardous or pharmaceutical aerosols. Yet, there is still a dearth of experimental tools for resolving acinar particle dynamics and validating numerical simulations. Here, we present a true-scale experimental model of acinar structures consisting of bifurcating alveolated ducts that capture breathing-like wall motion and ensuing respiratory acinar flows. We study experimentally captured trajectories of inhaled polydispersed smoke particles (0.2 to 1 μm in diameter), demonstrating how intrinsic particle motion, i.e. gravity and diffusion, is crucial in determining dispersion and deposition of aerosols through a streamline crossing mechanism, a phenomenon paramount during flow reversal and locally within alveolar cavities. A simple conceptual framework is constructed for predicting the fate of inhaled particles near an alveolus by identifying capture and escape zones and considering how streamline crossing may shift particles between them. In addition, we examine the effect of particle size on detailed deposition patterns of monodispersed microspheres between 0.1-2 μm. Our experiments underline local modifications in the deposition patterns due to gravity for particles ≥0.5 μm compared to smaller particles, and show good agreement with corresponding numerical simulations.

  15. Particle dynamics and deposition in true-scale pulmonary acinar models

    PubMed Central

    Fishler, Rami; Hofemeier, Philipp; Etzion, Yael; Dubowski, Yael; Sznitman, Josué

    2015-01-01

    Particle transport phenomena in the deep alveolated airways of the lungs (i.e. pulmonary acinus) govern deposition outcomes following inhalation of hazardous or pharmaceutical aerosols. Yet, there is still a dearth of experimental tools for resolving acinar particle dynamics and validating numerical simulations. Here, we present a true-scale experimental model of acinar structures consisting of bifurcating alveolated ducts that capture breathing-like wall motion and ensuing respiratory acinar flows. We study experimentally captured trajectories of inhaled polydispersed smoke particles (0.2 to 1 μm in diameter), demonstrating how intrinsic particle motion, i.e. gravity and diffusion, is crucial in determining dispersion and deposition of aerosols through a streamline crossing mechanism, a phenomenon paramount during flow reversal and locally within alveolar cavities. A simple conceptual framework is constructed for predicting the fate of inhaled particles near an alveolus by identifying capture and escape zones and considering how streamline crossing may shift particles between them. In addition, we examine the effect of particle size on detailed deposition patterns of monodispersed microspheres between 0.1–2 μm. Our experiments underline local modifications in the deposition patterns due to gravity for particles ≥0.5 μm compared to smaller particles, and show good agreement with corresponding numerical simulations. PMID:26358580

  16. Modification of the Mantel-Haenszel and Logistic Regression DIF Procedures to Incorporate the SIBTEST Regression Correction

    ERIC Educational Resources Information Center

    DeMars, Christine E.

    2009-01-01

The Mantel-Haenszel (MH) and logistic regression (LR) differential item functioning (DIF) procedures have inflated Type I error rates when there are large mean group differences, short tests, and large sample sizes. When there are large group differences in mean score, groups matched on the observed number-correct score differ on true score,…

  17. Estimating true instead of apparent survival using spatial Cormack-Jolly-Seber models

    USGS Publications Warehouse

    Schaub, Michael; Royle, J. Andrew

    2014-01-01

Spatial CJS models enable the study of dispersal and survival independent of study design constraints such as imperfect detection and the size of the study area, provided that some of the dispersing individuals remain in the study area. We discuss possible extensions of our model: alternative dispersal models and the inclusion of covariates and of a habitat suitability map.

  18. Evaluation of PeneloPET Simulations of Biograph PET/CT Scanners

    NASA Astrophysics Data System (ADS)

    Abushab, K. M.; Herraiz, J. L.; Vicente, E.; Cal-González, J.; España, S.; Vaquero, J. J.; Jakoby, B. W.; Udías, J. M.

    2016-06-01

Monte Carlo (MC) simulations are widely used in positron emission tomography (PET) for optimizing detector design and acquisition protocols, and for evaluating corrections and reconstruction methods. PeneloPET is an MC code for PET simulations based on PENELOPE, which considers detector geometry, acquisition electronics and materials, and source definitions. While PeneloPET has been successfully employed and validated with small-animal PET scanners, it required proper validation with clinical PET scanners including time-of-flight (TOF) information. For this purpose, we chose the family of Biograph PET/CT scanners: the Biograph True-Point (B-TP), the Biograph True-Point with TrueV (B-TPTV) and the Biograph mCT. They have similar block detectors and electronics, but a different number of rings and configuration. Some effective parameters of the simulations, such as the dead time and the size of the reflectors in the detectors, were adjusted to reproduce the sensitivity and noise equivalent count (NEC) rate of the B-TPTV scanner. These parameters were then used to predict experimental results such as sensitivity, NEC rate, spatial resolution, and scatter fraction (SF) for all the Biograph scanners and some variations of them (energy windows and additional rings of detectors). Predictions agree with the measured values for the three scanners within 7% (sensitivity and NEC rate) and 5% (SF). The resolution obtained for the B-TPTV is slightly (10%) better than the experimental values. In conclusion, we have shown that PeneloPET is suitable for simulating and investigating clinical systems with good accuracy and short computational time, though some tuning of a few scanner parameters may be needed when the full details of the scanners under study are not available.

  19. ORACLE INEQUALITIES FOR THE LASSO IN THE COX MODEL

    PubMed Central

    Huang, Jian; Sun, Tingni; Ying, Zhiliang; Yu, Yi; Zhang, Cun-Hui

    2013-01-01

    We study the absolute penalized maximum partial likelihood estimator in sparse, high-dimensional Cox proportional hazards regression models where the number of time-dependent covariates can be larger than the sample size. We establish oracle inequalities based on natural extensions of the compatibility and cone invertibility factors of the Hessian matrix at the true regression coefficients. Similar results based on an extension of the restricted eigenvalue can be also proved by our method. However, the presented oracle inequalities are sharper since the compatibility and cone invertibility factors are always greater than the corresponding restricted eigenvalue. In the Cox regression model, the Hessian matrix is based on time-dependent covariates in censored risk sets, so that the compatibility and cone invertibility factors, and the restricted eigenvalue as well, are random variables even when they are evaluated for the Hessian at the true regression coefficients. Under mild conditions, we prove that these quantities are bounded from below by positive constants for time-dependent covariates, including cases where the number of covariates is of greater order than the sample size. Consequently, the compatibility and cone invertibility factors can be treated as positive constants in our oracle inequalities. PMID:24086091

  20. ORACLE INEQUALITIES FOR THE LASSO IN THE COX MODEL.

    PubMed

    Huang, Jian; Sun, Tingni; Ying, Zhiliang; Yu, Yi; Zhang, Cun-Hui

    2013-06-01

    We study the absolute penalized maximum partial likelihood estimator in sparse, high-dimensional Cox proportional hazards regression models where the number of time-dependent covariates can be larger than the sample size. We establish oracle inequalities based on natural extensions of the compatibility and cone invertibility factors of the Hessian matrix at the true regression coefficients. Similar results based on an extension of the restricted eigenvalue can also be proved by our method. However, the presented oracle inequalities are sharper, since the compatibility and cone invertibility factors are always greater than the corresponding restricted eigenvalue. In the Cox regression model, the Hessian matrix is based on time-dependent covariates in censored risk sets, so the compatibility and cone invertibility factors, and the restricted eigenvalue as well, are random variables even when they are evaluated for the Hessian at the true regression coefficients. Under mild conditions, we prove that these quantities are bounded from below by positive constants for time-dependent covariates, including cases where the number of covariates is of greater order than the sample size. Consequently, the compatibility and cone invertibility factors can be treated as positive constants in our oracle inequalities.
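
    The estimator analyzed in these two records is the lasso-penalized Cox partial likelihood. As a concrete illustration (not the authors' theoretical machinery), the following minimal numpy sketch fits it by proximal gradient descent on simulated data with more covariates than samples; risk sets are Breslow-style and ties are ignored for simplicity.

        # A minimal numpy sketch of the lasso-penalized Cox estimator studied
        # above: proximal gradient descent on the negative log partial
        # likelihood. Data are simulated; no ties.
        import numpy as np

        rng = np.random.default_rng(0)
        n, p = 100, 200                    # more covariates than samples
        X = rng.normal(size=(n, p))
        beta_true = np.zeros(p)
        beta_true[:3] = [1.0, -1.0, 0.5]   # sparse true coefficients
        time = rng.exponential(np.exp(-X @ beta_true))
        event = rng.random(n) < 0.7        # ~70% of failure times observed

        order = np.argsort(time)           # sort so risk sets are suffixes
        X, event = X[order], event[order]

        def grad_neg_log_pl(beta):
            w = np.exp(X @ beta)
            denom = np.cumsum(w[::-1])[::-1]                  # risk-set sums
            wx = np.cumsum((w[:, None] * X)[::-1], axis=0)[::-1]
            return -(X[event] - (wx / denom[:, None])[event]).sum(axis=0) / n

        def soft_threshold(z, t):
            return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

        beta, lam, step = np.zeros(p), 0.05, 0.5
        for _ in range(500):               # proximal gradient iterations
            beta = soft_threshold(beta - step * grad_neg_log_pl(beta), step * lam)
        print("selected covariates:", np.flatnonzero(beta))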

  1. Evolution of the structure and mechanical properties of sheets of the Al-4.7Mg-0.32Mn-0.21Sc-0.09Zr alloy due to deformation accumulated upon rolling

    NASA Astrophysics Data System (ADS)

    Zolotorevskiy, V. S.; Dobrojinskaja, R. I.; Cheverikin, V. V.; Khamnagdaeva, E. A.; Pozdniakov, A. V.; Levchenko, V. S.; Besogonova, E. S.

    2016-11-01

    The mechanical properties and microstructure of sheets of an Al-4.7Mg-0.32Mn-0.21Sc-0.09Zr alloy deformed and annealed after rolling have been investigated. The total accumulated true strain was ɛf = 3.33-5.63, and the true strain at room temperature and at 200 °C was ɛc = 0.25-2.3. The strength properties of the sheets (yield stress σ0.2 = 495 MPa and ultimate tensile strength σu = 525 MPa) in the deformed state were greater than those after equal-channel angular pressing (ECAP) deformation. The mechanical properties of the deformed sheets after annealing depended on the size of the subgrains inside the bands of deformed grains with high-angle grain boundaries (HABs). With the increase in the annealing temperature from 150 to 300 °C, the subgrain size increased from 80 to 300 nm. The relative elongation δ in the as-cast state and after annealing at 200-250 °C (δ = 40-50%) was higher than that after annealing at 300-370 °C (δ = 24-29%).

  2. Effects of hydrocortisone on false memory recognition in healthy men and women.

    PubMed

    Duesenberg, Moritz; Weber, Juliane; Schaeuffele, Carmen; Fleischer, Juliane; Hellmann-Regen, Julian; Roepke, Stefan; Moritz, Steffen; Otte, Christian; Wingenfeld, Katja

    2016-12-01

    Studies of the effect of stress on false memories using psychosocial and physiological stressors have yielded diverse results. In the present study, we systematically tested the effect of exogenous hydrocortisone in a false memory paradigm. In this placebo-controlled study, 37 healthy men and 38 healthy women (mean age 24.59 years) received either 10 mg of hydrocortisone or placebo 75 min before completing the Deese-Roediger-McDermott (DRM) false memory paradigm. We used emotionally charged and neutral DRM word lists and compared false recognition rates with true recognition rates. Overall, we expected an increase in false memory after hydrocortisone compared to placebo. No differences between the cortisol and placebo groups were revealed for false or true recognition performance. In general, false recognition rates were lower than true recognition rates. Furthermore, we found a valence effect (neutral, positive, negative, and disgust word stimuli), indicating higher rates of true and false recognition for emotional compared to neutral words. We also found an interaction effect between sex and recognition: post hoc t tests showed that for true recognition, women showed significantly better memory performance than men, independent of treatment. This study does not support the hypothesis that cortisol decreases the ability to distinguish old from novel words in young healthy individuals. However, sex and the emotional valence of word stimuli appear to be important moderators.

  3. The species-area relationship, self-similarity, and the true meaning of the z-value.

    PubMed

    Tjørve, Even; Tjørve, Kathleen M Calf

    2008-12-01

    The power model, S = cA^z (where S is the number of species, A is area, and c and z are fitted constants), is the model most commonly fitted to species-area data assessing species diversity. We use the self-similarity properties of this model to reveal patterns implied by the z parameter. We present the basic arithmetic leading both to the fraction of new species added when two areas are combined and to the species overlap between two areas of the same size, given a continuous sampling scheme. The fraction of new species resulting from expansion of an area can be expressed as α^z − 1, where α is the expansion factor. Consequently, z-values can be converted to a scale-invariant species overlap between two equally sized areas, since the proportion of species in common between the two areas is 2 − 2^z. Calculating overlap when adding areas of the same size reveals the intrinsic effect of distance assumed by the bisectional scheme. We use overlap-area relationships from empirical data sets to illustrate how answers to the single large or several small reserves (SLOSS) question vary between data sets and with scale. We conclude that species overlap and the effect of distance between sample areas or isolates should be addressed when discussing species-area relationships, and that lack of fit to the power model can be caused by its assumption of a scale-invariant overlap relationship.
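
    The two identities quoted in the abstract are easy to verify numerically. The short example below uses an illustrative exponent z = 0.25 and an area-doubling expansion (α = 2).

        # Verifying the z-value identities with z = 0.25 and a doubling of
        # area (alpha = 2); the values are illustrative.
        z = 0.25
        alpha = 2.0
        new_fraction = alpha**z - 1        # ~0.19: ~19% new species on doubling
        overlap = 2 - 2**z                 # ~0.81: ~81% of species shared
        print(f"new species fraction: {new_fraction:.3f}, overlap: {overlap:.3f}")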

  4. Emotionally negative pictures enhance gist memory.

    PubMed

    Bookbinder, S H; Brainerd, C J

    2017-02-01

    In prior work on how true and false memory are influenced by emotion, valence and arousal have often been conflated. Thus, it is difficult to say which specific effects are caused by valence and which are caused by arousal. In the present research, we used a picture-memory paradigm that allowed emotional valence to be manipulated with arousal held constant. Negatively valenced pictures elevated both true and false memory, relative to positive and neutral pictures. Conjoint recognition modeling revealed that negative valence (a) reduced erroneous suppression of true memories and (b) increased the familiarity of the semantic content of both true and false memories. Overall, negative valence impaired the verbatim side of episodic memory but enhanced the gist side, and these effects persisted even after a week-long delay.

  5. The status of penile enhancement procedures.

    PubMed

    Vardi, Yoram; Gruenwald, Ilan

    2009-11-01

    Most men who request surgical penile enhancement have a normal-sized and fully functional penis but perceive their penises as small (psychological dysmorphism). This fact by itself creates controversy regarding the true indications for penile enhancement procedures in men without micropenis. A typical aspect of penile enhancement is the lack of true methodological evaluation of the more commonly performed procedures. Even recently, only a few solid scientific studies were available to shed light on the results and outcomes of these controversial procedures. Although some additional data have emerged during the past year, there is still no consensus regarding the indications and surgical techniques used for penile augmentation or girth enhancement. More studies are needed to provide a better overview of the value and worth of these procedures.

  6. Physical and clinical performance of the mCT time-of-flight PET/CT scanner.

    PubMed

    Jakoby, B W; Bercier, Y; Conti, M; Casey, M E; Bendriem, B; Townsend, D W

    2011-04-21

    Time-of-flight (TOF) measurement capability promises to improve PET image quality. We characterized the physical and clinical PET performance of the first Biograph mCT TOF PET/CT scanner (Siemens Medical Solutions USA, Inc.) in comparison with its predecessor, the Biograph TruePoint TrueV. In particular, we defined the improvements with TOF. The physical performance was evaluated according to the National Electrical Manufacturers Association (NEMA) NU 2-2007 standard with additional measurements to specifically address the TOF capability. Patient data were analyzed to obtain the clinical performance of the scanner. As expected for the same size crystal detectors, a similar spatial resolution was measured on the mCT as on the TruePoint TrueV. The mCT demonstrated modestly higher sensitivity (increase by 19.7 ± 2.8%) and peak noise equivalent count rate (NECR) (increase by 15.5 ± 5.7%) with similar scatter fractions. The energy, time and spatial resolutions for a varying single count rate of up to 55 Mcps resulted in 11.5 ± 0.2% (FWHM), 527.5 ± 4.9 ps (FWHM) and 4.1 ± 0.0 mm (FWHM), respectively. With the addition of TOF, the mCT also produced substantially higher image contrast recovery and signal-to-noise ratios in a clinically-relevant phantom geometry. The benefits of TOF were clearly demonstrated in representative patient images.

  7. Physical and clinical performance of the mCT time-of-flight PET/CT scanner

    NASA Astrophysics Data System (ADS)

    Jakoby, B. W.; Bercier, Y.; Conti, M.; Casey, M. E.; Bendriem, B.; Townsend, D. W.

    2011-04-01

    Time-of-flight (TOF) measurement capability promises to improve PET image quality. We characterized the physical and clinical PET performance of the first Biograph mCT TOF PET/CT scanner (Siemens Medical Solutions USA, Inc.) in comparison with its predecessor, the Biograph TruePoint TrueV. In particular, we defined the improvements with TOF. The physical performance was evaluated according to the National Electrical Manufacturers Association (NEMA) NU 2-2007 standard with additional measurements to specifically address the TOF capability. Patient data were analyzed to obtain the clinical performance of the scanner. As expected for the same size crystal detectors, a similar spatial resolution was measured on the mCT as on the TruePoint TrueV. The mCT demonstrated modestly higher sensitivity (increase by 19.7 ± 2.8%) and peak noise equivalent count rate (NECR) (increase by 15.5 ± 5.7%) with similar scatter fractions. The energy, time and spatial resolutions for a varying single count rate of up to 55 Mcps resulted in 11.5 ± 0.2% (FWHM), 527.5 ± 4.9 ps (FWHM) and 4.1 ± 0.0 mm (FWHM), respectively. With the addition of TOF, the mCT also produced substantially higher image contrast recovery and signal-to-noise ratios in a clinically-relevant phantom geometry. The benefits of TOF were clearly demonstrated in representative patient images.

  8. Key Ecological Roles for Zoosporic True Fungi in Aquatic Habitats.

    PubMed

    Gleason, Frank H; Scholz, Bettina; Jephcott, Thomas G; van Ogtrop, Floris F; Henderson, Linda; Lilje, Osu; Kittelmann, Sandra; Macarthur, Deborah J

    2017-03-01

    The diversity and abundance of zoosporic true fungi have recently been analyzed using fungal sequence libraries and advances in molecular methods, such as high-throughput sequencing. This review focuses on four evolutionarily primitive true fungal phyla: the Aphelidea, Chytridiomycota, Neocallimastigomycota, and Rosellida (Cryptomycota), most species of which are not polycentric or mycelial (filamentous); rather, they tend to be primarily monocentric (unicellular). Zoosporic fungi appear to be both abundant and diverse in many aquatic habitats around the world, with abundance often exceeding that of other fungal phyla in these habitats, and numerous novel genetic sequences have been identified. Zoosporic fungi are able to survive extreme conditions, such as high and extremely low pH, although more work remains to be done in this area. They appear to have important ecological roles as saprobes in the decomposition of particulate organic substrates, pollen, plant litter, and dead animals; as parasites of zooplankton and algae; as parasites of vertebrate animals (such as frogs); and as symbionts in the digestive tracts of mammals. Some chytrids cause economically important diseases of plants and animals, and they regulate the sizes of phytoplankton populations. Further metagenomic surveys of aquatic ecosystems are expected to enlarge our knowledge of the diversity of true zoosporic fungi. Coupled with studies of their functional ecology, such surveys will move us closer to unraveling the role of zoosporic fungi in carbon cycling and the impact of climate change on zoosporic fungal populations.

  9. Alternative Measures of Between-Study Heterogeneity in Meta-Analysis: Reducing the Impact of Outlying Studies

    PubMed Central

    Lin, Lifeng; Chu, Haitao; Hodges, James S.

    2016-01-01

    Meta-analysis has become a widely used tool to combine results from independent studies. The collected studies are homogeneous if they share a common underlying true effect size; otherwise, they are heterogeneous. A fixed-effect model is customarily used when the studies are deemed homogeneous, while a random-effects model is used for heterogeneous studies. Assessing heterogeneity in meta-analysis is critical for model selection and decision making. Ideally, if heterogeneity is present, it should permeate the entire collection of studies, instead of being limited to a small number of outlying studies. Outliers can have great impact on conventional measures of heterogeneity and the conclusions of a meta-analysis. However, no widely accepted guidelines exist for handling outliers. This article proposes several new heterogeneity measures. In the presence of outliers, the proposed measures are less affected than the conventional ones. The performances of the proposed and conventional heterogeneity measures are compared theoretically, by studying their asymptotic properties, and empirically, using simulations and case studies. PMID:27167143
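
    For context, the conventional measures usually meant here are Cochran's Q and Higgins' I² (our assumption; the article does not list them in this abstract). A minimal sketch with invented effect sizes, one of them an outlier, shows how a single study can inflate both:

        # Cochran's Q and Higgins' I^2 with an outlying fifth study; effect
        # sizes and within-study variances are invented for illustration.
        import numpy as np

        y = np.array([0.10, 0.12, 0.08, 0.11, 0.90])     # study effect sizes
        v = np.array([0.010, 0.020, 0.010, 0.015, 0.020])

        w = 1.0 / v                                # inverse-variance weights
        mu = np.sum(w * y) / np.sum(w)             # fixed-effect estimate
        Q = np.sum(w * (y - mu) ** 2)              # Cochran's Q
        I2 = max(0.0, (Q - (len(y) - 1)) / Q)      # Higgins' I^2
        print(f"Q = {Q:.1f}, I^2 = {I2:.0%}")      # the outlier inflates both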

  10. The "subjective" pupil old/new effect: is the truth plain to see?

    PubMed

    Montefinese, Maria; Ambrosini, Ettore; Fairfield, Beth; Mammarella, Nicola

    2013-07-01

    Human memory is an imperfect process, prone to distortions and errors that range from minor disturbances to major errors that can have serious consequences in everyday life. In this study, we investigated false remembering of manipulatory verbs using an explicit recognition task and pupillometry. Our results replicated the "classical" pupil old/new effect, as well as findings in the false-remembering literature showing that items must be recognized as old in order for pupil size to increase (the "subjective" pupil old/new effect), even though these items do not necessarily have to be truly old. These findings support the strength-of-memory-trace account, which holds that pupil dilation is related to experience rather than to the accuracy of recognition. Moreover, behavioral results showed higher rates of true and false recognition for manipulatory verbs and a consequently larger pupil diameter, supporting the embodied view of language.

  11. The Psychological Benefits of Being Authentic on Facebook.

    PubMed

    Grieve, Rachel; Watkinson, Jarrah

    2016-07-01

    Having others acknowledge and validate one's true self is associated with better psychological health. Existing research indicates that an individual's true self may be more readily expressed on Facebook than in person. This study brought together these two premises by investigating for the first time the psychosocial outcomes associated with communicating one's true self on Facebook. Participants (n = 164) completed a personality assessment once as their true self and once as the self they present on Facebook (Facebook self), as well as measures of social connectedness, subjective well-being, depression, anxiety, and stress. Euclidean distances quantified the difference between one's true self and the Facebook self. Hypotheses received partial support. Better coherence between the true self and the Facebook self was associated with better social connectedness and less stress. Two models provided evidence of mediation effects. Findings highlight that authentic self-presentation on Facebook can be associated with positive psychological outcomes.
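
    The coherence measure described above is a plain Euclidean distance between two score profiles. A short illustration with invented Big Five scores:

        # Euclidean distance between two personality profiles, as used to
        # quantify true-self/Facebook-self coherence; scores are invented.
        import numpy as np

        true_self = np.array([3.8, 4.1, 2.9, 3.5, 4.0])
        facebook_self = np.array([4.0, 4.3, 3.1, 3.2, 4.4])
        distance = np.linalg.norm(true_self - facebook_self)
        print(f"true-self vs Facebook-self distance: {distance:.2f}")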

  12. Value judgments and the true self.

    PubMed

    Newman, George E; Bloom, Paul; Knobe, Joshua

    2014-02-01

    The belief that individuals have a "true self" plays an important role in many areas of psychology as well as everyday life. The present studies demonstrate that people have a general tendency to conclude that the true self is fundamentally good--that is, that deep inside every individual there is something motivating him or her to behave in ways that are virtuous. Study 1 finds that observers are more likely to see a person's true self reflected in behaviors they deem to be morally good than in behaviors they deem to be bad. Study 2 replicates this effect and demonstrates that observers' own moral values influence what they judge to be another person's true self. Finally, Study 3 finds that this normative view of the true self is independent of the particular type of mental state (beliefs vs. feelings) that is seen as responsible for an agent's behavior.

  13. Non-invasive genetic censusing and monitoring of primate populations.

    PubMed

    Arandjelovic, Mimi; Vigilant, Linda

    2018-03-01

    Knowing the density or abundance of primate populations is essential for their conservation management and contextualizing socio-demographic and behavioral observations. When direct counts of animals are not possible, genetic analysis of non-invasive samples collected from wildlife populations allows estimates of population size with higher accuracy and precision than is possible using indirect signs. Furthermore, in contrast to traditional indirect survey methods, prolonged or periodic genetic sampling across months or years enables inference of group membership, movement, dynamics, and some kin relationships. Data may also be used to estimate sex ratios, sex differences in dispersal distances, and detect gene flow among locations. Recent advances in capture-recapture models have further improved the precision of population estimates derived from non-invasive samples. Simulations using these methods have shown that the confidence interval of point estimates includes the true population size when assumptions of the models are met, and therefore this range of population size minima and maxima should be emphasized in population monitoring studies. Innovations such as the use of sniffer dogs or anti-poaching patrols for sample collection are important to ensure adequate sampling, and the expected development of efficient and cost-effective genotyping by sequencing methods for DNAs derived from non-invasive samples will automate and speed analyses.

  14. Control of social monogamy through aggression in a hermaphroditic shrimp

    PubMed Central

    2011-01-01

    Introduction: Sex allocation theory predicts that in small mating groups simultaneous hermaphroditism is the optimal form of gender expression. Under these conditions, male allocation is predicted to be very low and overall per-capita reproductive output maximal. This is particularly true for individuals that live in pairs, but monogamy is highly susceptible to cheating by both partners. However, certain conditions favour social monogamy in hermaphrodites. This study addresses the influence of group size on group stability and moulting cycles in singles, pairs, triplets and quartets of the socially monogamous shrimp Lysmata amboinensis, a protandric simultaneous hermaphrodite. Results: The effect of group size was very strong: exactly one individual in each triplet and exactly two individuals in each quartet were killed in aggressive interactions, resulting in group sizes of two individuals. All killed individuals had just moulted. No mortality occurred in single and pair treatments. The number of moults in the surviving shrimp increased significantly after changing from triplets and quartets to pairs. Conclusion: Social monogamy in L. amboinensis is reinforced by aggressive expulsion of supernumerary individuals. We suggest that the high risk of mortality in triplets and quartets results in suppression of moulting in groups larger than two individuals and that the feeding ecology of L. amboinensis favours social monogamy. PMID:22078746

  15. Control of social monogamy through aggression in a hermaphroditic shrimp.

    PubMed

    Wong, Janine Wy; Michiels, Nico K

    2011-11-11

    Sex allocation theory predicts that in small mating groups simultaneous hermaphroditism is the optimal form of gender expression. Under these conditions, male allocation is predicted to be very low and overall per-capita reproductive output maximal. This is particularly true for individuals that live in pairs, but monogamy is highly susceptible to cheating by both partners. However, certain conditions favour social monogamy in hermaphrodites. This study addresses the influence of group size on group stability and moulting cycles in singles, pairs, triplets and quartets of the socially monogamous shrimp Lysmata amboinensis, a protandric simultaneous hermaphrodite. The effect of group size was very strong: exactly one individual in each triplet and exactly two individuals in each quartet were killed in aggressive interactions, resulting in group sizes of two individuals. All killed individuals had just moulted. No mortality occurred in single and pair treatments. The number of moults in the surviving shrimp increased significantly after changing from triplets and quartets to pairs. Social monogamy in L. amboinensis is reinforced by aggressive expulsion of supernumerary individuals. We suggest that the high risk of mortality in triplets and quartets results in suppression of moulting in groups larger than two individuals and that the feeding ecology of L. amboinensis favours social monogamy.

  16. Observational studies of patients in the emergency department: a comparison of 4 sampling methods.

    PubMed

    Valley, Morgan A; Heard, Kennon J; Ginde, Adit A; Lezotte, Dennis C; Lowenstein, Steven R

    2012-08-01

    We evaluate the ability of 4 sampling methods to generate representative samples of the emergency department (ED) population. We analyzed the electronic records of 21,662 consecutive patient visits at an urban, academic ED. From this population, we simulated different models of study recruitment in the ED by using 2 sample sizes (n=200 and n=400) and 4 sampling methods: true random, random 4-hour time blocks by exact sample size, random 4-hour time blocks by a predetermined number of blocks, and convenience or "business hours." For each method and sample size, we obtained 1,000 samples from the population. Using χ² tests, we measured the number of statistically significant differences between the sample and the population for 8 variables (age, sex, race/ethnicity, language, triage acuity, arrival mode, disposition, and payer source). Then, for each variable, method, and sample size, we compared the proportion of the 1,000 samples that differed from the overall ED population to the expected proportion (5%). Only the true random samples represented the population with respect to sex, race/ethnicity, triage acuity, mode of arrival, language, and payer source in at least 95% of the samples. Patient samples obtained using random 4-hour time blocks and business-hours sampling systematically differed from the overall ED patient population for several important demographic and clinical variables. However, the magnitude of these differences was not large. Common sampling strategies selected for ED-based studies may affect parameter estimates for several representative population variables. However, the potential for bias for these variables appears small.
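
    The study's design, repeatedly drawing samples under different schemes and testing each against the population, is easy to reproduce in miniature. The sketch below uses a synthetic population in which one binary variable varies by hour of day, so convenience ("business hours") sampling is systematically biased while true random sampling is not; all numbers are invented.

        # Miniature version of the comparison: draw 1000 samples per scheme
        # from a synthetic population and count how often a chi-square test
        # flags the sample as unrepresentative.
        import numpy as np

        rng = np.random.default_rng(1)
        n_pop = 21662
        hour = rng.integers(0, 24, n_pop)                   # visit hour
        payer = rng.random(n_pop) < 0.3 + 0.2 * (hour < 8)  # payer mix shifts overnight

        def differs(sample, population):
            # one-sample chi-square test of the sample rate vs the population rate
            p0 = population.mean()
            n, k = len(sample), sample.sum()
            chi2 = (k - n * p0) ** 2 / (n * p0 * (1 - p0))
            return chi2 > 3.841                             # 5% critical value, 1 df

        counts = {"true_random": 0, "business_hours": 0}
        biz = np.flatnonzero((hour >= 8) & (hour < 17))
        for _ in range(1000):
            counts["true_random"] += differs(payer[rng.choice(n_pop, 400, replace=False)], payer)
            counts["business_hours"] += differs(payer[rng.choice(biz, 400, replace=False)], payer)
        print(counts)   # convenience sampling misses the overnight payer mix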

  17. Pre and Post-copulatory Selection Favor Similar Genital Phenotypes in the Male Broad Horned Beetle

    PubMed Central

    House, Clarissa M.; Sharma, M. D.; Okada, Kensuke; Hosken, David J.

    2016-01-01

    Sexual selection can operate before and after copulation and the same or different trait(s) can be targeted during these episodes of selection. The direction and form of sexual selection imposed on characters prior to mating has been relatively well described, but the same is not true after copulation. In general, when male–male competition and female choice favor the same traits then there is the expectation of reinforcing selection on male sexual traits that improve competitiveness before and after copulation. However, when male–male competition overrides pre-copulatory choice then the opposite could be true. With respect to studies of selection on genitalia there is good evidence that male genital morphology influences mating and fertilization success. However, whether genital morphology affects reproductive success in more than one context (i.e., mating versus fertilization success) is largely unknown. Here we use multivariate analysis to estimate linear and nonlinear selection on male body size and genital morphology in the flour beetle Gnatocerus cornutus, simulated in a non-competitive (i.e., monogamous) setting. This analysis estimates the form of selection on multiple traits and typically, linear (directional) selection is easiest to detect, while nonlinear selection is more complex and can be stabilizing, disruptive, or correlational. We find that mating generates stabilizing selection on male body size and genitalia, and fertilization causes a blend of directional and stabilizing selection. Differences in the form of selection across these bouts of selection result from a significant alteration of nonlinear selection on body size and a marginally significant difference in nonlinear selection on a component of genital shape. This suggests that both bouts of selection favor similar genital phenotypes, whereas the strong stabilizing selection imposed on male body size during mate acquisition is weak during fertilization. PMID:27371390

  18. Influence of mannitol concentration on the physicochemical, mechanical and pharmaceutical properties of lyophilised mannitol.

    PubMed

    Kaialy, Waseem; Khan, Usman; Mawlud, Shadan

    2016-08-20

    Mannitol is a pharmaceutical excipient that is gaining popularity in solid dosage forms. The aim of this study was to provide a comparative evaluation of the effect of mannitol concentration on the physicochemical, mechanical, and pharmaceutical properties of lyophilised mannitol. The results showed that these properties are strong functions of mannitol concentration. With decreasing mannitol concentration, the true density, bulk density, cohesivity, flowability, net-charge-to-mass ratio, and relative degree of crystallinity of lyophilised mannitol decreased, whereas the breakability, size distribution, and size homogeneity of lyophilised mannitol particles increased. The mechanical properties of lyophilised mannitol tablets improved with decreasing mannitol concentration. The use of lyophilised mannitol profoundly improved the dissolution rate of indomethacin from tablets in comparison to commercial mannitol, and this improvement increased with decreasing mannitol concentration. In conclusion, mannitols lyophilised from lower concentrations are more desirable in tableting than mannitols from higher concentrations owing to their better mechanical and dissolution properties.

  19. Flame-resistant Ca-containing AZ31 magnesium alloy sheets with good mechanical properties fabricated by a combination of strip casting and high-ratio differential speed rolling methods

    NASA Astrophysics Data System (ADS)

    Kim, Y. H.; Kim, W. J.

    2015-03-01

    This study reported that a combination of strip casting and high-ratio differential speed rolling (HRDSR) can produce flame-resistant Mg alloy sheets (0.7 wt% Ca-AZ31: 0.7Ca-AZ31) with good room-temperature mechanical properties and high-temperature formability. HRDSR effectively refined the coarse microstructure of the strip-cast 0.7Ca-AZ31 alloy. As a result, the (true) grain size was reduced to as small as 2.7 μm and the (Mg, Al)2Ca phase was broken up into fine particles with an average size of 0.5 μm. Owing to this highly refined microstructure, the HRDSR-processed 0.7Ca-AZ31 alloy sheet exhibited a high yield stress of over 300 MPa and good superplasticity at elevated temperatures. The deformation mechanism of the fine-grained 0.7Ca-AZ31 alloy in the superplastic regime was identified as grain-boundary-diffusion- or lattice-diffusion-controlled grain boundary sliding.

  20. Reversing sex steroid deficiency and optimizing skeletal development in the adolescent with gonadal failure.

    PubMed

    Vanderschueren, Dirk; Vandenput, Liesbeth; Boonen, Steven

    2005-01-01

    During puberty, the acquisition of skeletal mass and areal bone mineral density (BMD) mainly reflects an increase in bone size (length and perimeters) and not true volumetric BMD. Sexual dimorphism in bone mass and areal BMD is also explained by differences in bone size (longer and wider bones in males) and not by differences in volumetric BMD. Androgens stimulate skeletal growth by activation of the androgen receptor, whereas estrogens (following aromatization of androgens and stimulation of estrogen receptors) have a biphasic effect on skeletal growth during puberty. Recent evidence from clinical cases has shown that many of the growth-promoting effects of the sex steroids are mediated through estrogens rather than androgens. In addition, skeletal maturation and epiphyseal fusion are also estrogen-dependent in both sexes. Nevertheless, independent actions of androgens in these processes also occur. Both sex steroids maintain volumetric BMD during puberty. Androgens interact with the growth hormone (GH)-insulin-like growth factor-I (IGF-I) axis neonatally, resulting in a sexually dimorphic GH pattern during puberty, whereas estrogens stimulate GH, and thereby IGF-I, in both sexes. Hypogonadism in adolescents impairs not only bone size but also maintenance of volumetric BMD, thereby severely reducing peak areal BMD. Delayed puberty in boys and Turner's syndrome in women impair both bone length and size, reducing areal BMD. Whether volumetric BMD is also reduced, and whether fracture risk is increased in these conditions, remains controversial. Replacing sex steroids according to a biphasic pattern (starting at low doses and ending at high-normal doses) seems the safest approach to reach targeted height and to optimize bone development.

  1. Quantum state discrimination bounds for finite sample size

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Audenaert, Koenraad M. R.; Mosonyi, Milan; Mathematical Institute, Budapest University of Technology and Economics, Egry Jozsef u 1., Budapest 1111

    2012-12-15

    In the problem of quantum state discrimination, one has to determine by measurements the state of a quantum system, based on the a priori side information that the true state is one of two given and completely known states, ρ or σ. In general, it is not possible to decide the identity of the true state with certainty, and the optimal measurement strategy depends on whether the two possible errors (mistaking ρ for σ, or the other way around) are treated as of equal importance or not. Results on the quantum Chernoff and Hoeffding bounds and the quantum Stein's lemma show that, if several copies of the system are available, then the optimal error probabilities decay exponentially in the number of copies, and the decay rate is given by a certain statistical distance between ρ and σ (the Chernoff distance, the Hoeffding distances, and the relative entropy, respectively). While these results provide a complete solution to the asymptotic problem, they are not completely satisfying from a practical point of view. Indeed, in realistic scenarios one has access only to finitely many copies of a system, and therefore it is desirable to have bounds on the error probabilities for finite sample size. In this paper we provide finite-size bounds on the so-called Stein errors, the Chernoff errors, the Hoeffding errors, and the mixed error probabilities related to the Chernoff and the Hoeffding errors.
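
    The decay rate named above for symmetric error weighting is the quantum Chernoff distance, -log min over 0 < s < 1 of Tr(ρ^s σ^(1-s)). A small numerical sketch for two example qubit states (the states are chosen arbitrarily):

        # Quantum Chernoff distance between two example qubit states.
        import numpy as np
        from scipy.linalg import fractional_matrix_power
        from scipy.optimize import minimize_scalar

        rho = np.array([[0.9, 0.0], [0.0, 0.1]])     # states chosen arbitrarily
        sigma = np.array([[0.6, 0.2], [0.2, 0.4]])

        def q(s):
            # Tr(rho^s sigma^(1-s)), minimised over s in (0, 1)
            return np.real(np.trace(fractional_matrix_power(rho, s)
                                    @ fractional_matrix_power(sigma, 1 - s)))

        res = minimize_scalar(q, bounds=(1e-6, 1 - 1e-6), method="bounded")
        print(f"Chernoff distance: {-np.log(res.fun):.4f}")
        # the optimal error probability decays roughly as exp(-n * distance)
        # in the number n of copies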

  2. No Evidence for True Training and Transfer Effects after Inhibitory Control Training in Young Healthy Adults

    ERIC Educational Resources Information Center

    Enge, Sören; Behnke, Alexander; Fleischhauer, Monika; Küttler, Lena; Kliegel, Matthias; Strobel, Alexander

    2014-01-01

    Recent studies reported that training of working memory may improve performance in the trained function and beyond. Other executive functions, however, have been rarely or not yet systematically examined. The aim of this study was to test the effectiveness of inhibitory control (IC) training to produce true training-related function improvements…

  3. Assessment of ecologic regression in the study of lung cancer and indoor radon.

    PubMed

    Stidley, C A; Samet, J M

    1994-02-01

    Ecologic regression studies conducted to assess the cancer risk of indoor radon to the general population are subject to methodological limitations, and they have given seemingly contradictory results. The authors use simulations to examine the effects of two major methodological problems that affect these studies: measurement error and misspecification of the risk model. In a simulation study of the effect of measurement error caused by the sampling process used to estimate radon exposure for a geographic unit, both the effect of radon and the standard error of the effect estimate were underestimated, with greater bias for smaller sample sizes. In another simulation study, which addressed the consequences of uncontrolled confounding by cigarette smoking, even small negative correlations between county geometric mean annual radon exposure and the proportion of smokers resulted in negative average estimates of the radon effect. A third study considered the consequences of using simple linear ecologic models when the true underlying relation between lung cancer and radon exposure is nonlinear. These examples quantify potential biases and demonstrate the limitations of estimating risks from ecologic studies of lung cancer and indoor radon.
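
    The first simulation idea, sampling error in the exposure estimate attenuating the fitted slope, can be reproduced in a few lines. In the sketch below, all distributions and parameter values are invented for illustration.

        # Attenuation of an ecologic slope by exposure measurement error.
        import numpy as np

        rng = np.random.default_rng(2)
        n_counties, true_slope = 200, 0.5
        true_radon = rng.gamma(2.0, 1.0, n_counties)         # county exposures
        rate = 1.0 + true_slope * true_radon + rng.normal(0, 0.3, n_counties)

        for n_homes in (5, 50, 500):                         # homes sampled per county
            err_sd = 1.0 / np.sqrt(n_homes)                  # sampling error of the mean
            noisy = true_radon + rng.normal(0, err_sd, (1000, n_counties))
            slopes = [np.polyfit(x, rate, 1)[0] for x in noisy]
            print(n_homes, round(float(np.mean(slopes)), 3)) # biased toward zero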

  4. Grain-Size-Limited Mobility in Methylammonium Lead Iodide Perovskite Thin Films

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reid, Obadiah G.; Yang, Mengjin; Kopidakis, Nikos

    2016-09-09

    We report a systematic study of the gigahertz-frequency charge-carrier mobility in methylammonium lead iodide perovskite films as a function of average grain size, using time-resolved microwave conductivity and a single processing chemistry. Our measurements are in good agreement with the Kubo formula for the AC mobility of charges confined within finite grains, suggesting (1) that the surface grains imaged via scanning electron microscopy are representative of the true electronic domain size and are not substantially subdivided by twinning or other defects not visible by microscopy, and (2) that the time scale of diffusive transport across grain boundaries is much slower than the period of the microwave field in this measurement (~100 ps). The intrinsic (infinite grain size) minimum mobility extracted from the model is 29 ± 6 cm² V⁻¹ s⁻¹ at the probe frequency (8.9 GHz).

  5. True external diameter better predicts hemodynamic performance of bioprosthetic aortic valves than the manufacturers' stated size.

    PubMed

    Cevasco, Marisa; Mick, Stephanie L; Kwon, Michael; Lee, Lawrence S; Chen, Edward P; Chen, Frederick Y

    2013-05-01

    Currently, there is no universal standard for sizing bioprosthetic aortic valves. Hence, a standardized comparison was performed to clarify this issue. Every size of four commercially available bioprosthetic aortic valves marketed in the United States (Biocor Supra; Mosaic Ultra; Magna Ease; Mitroflow) was obtained. Subsequently, custom sizers were created that were accurate to 0.0025 mm to represent aortic roots 18 mm through 32 mm, and these were used to measure the external diameter of each valve. Using the effective orifice area (EOA) and transvalvular pressure gradient (TPG) data submitted to the FDA, a comparison was made between the hemodynamic properties of valves with equivalent manufacturer stated sizes and valves with equivalent measured external diameters. Based on manufacturer size alone, the valves at first seemed to be hemodynamically different from each other, with Mitroflow valves appearing to be hemodynamically superior, having a large EOA and equivalent or superior TPG (p < 0.05). However, Mitroflow valves had a larger measured external diameter than the other valves of a given numerical manufacturer size. Valves with equivalent external diameters were then compared, regardless of the stated manufacturer sizes. For truly equivalently sized valves (i.e., by measured external diameter) there was no clear hemodynamic difference. There was no statistical difference in the EOAs between the Biocor Supra, Mosaic Ultra, and Mitroflow valves, and the Magna Ease valve had a statistically smaller EOA (p < 0.05). On comparing the mean TPG, the Biocor Supra and Mitroflow valves had statistically equivalent gradients to each other, as did the Mosaic Ultra and Magna Ease valves. When comparing valves of the same numerical manufacturer size, there appears to be a difference in hemodynamic performance across different manufacturers' valves according to FDA data. However, comparing equivalently measured valves eliminates the differences between valves produced by different manufacturers.

  6. Inverse analysis of turbidites by machine learning

    NASA Astrophysics Data System (ADS)

    Naruse, H.; Nakao, K.

    2017-12-01

    This study proposes a method to estimate the paleo-hydraulic conditions of turbidity currents from ancient turbidites using a machine-learning technique. In this method, a numerical simulation is repeated under various initial conditions, producing a data set of characteristic features of turbidites. This data set is then used for supervised training of a deep-learning neural network (NN). Quantities of characteristic features of turbidites in the training data set are given to the input nodes of the NN, and the output nodes are expected to provide estimates of the initial conditions of the turbidity current. The weight coefficients of the NN are then optimized to reduce the root-mean-square difference between the true conditions and the NN outputs. The empirical relationship between the numerical results and the initial conditions is thus explored, and the discovered relationship is used for inversion of turbidity currents. This machine learning can potentially produce an NN that estimates paleo-hydraulic conditions from data on ancient turbidites. We produced a preliminary implementation of this methodology. A forward model based on 1D shallow-water equations with a correction for the density-stratification effect was employed. This model calculates the behavior of a surge-like turbidity current transporting mixed-size sediment and outputs the spatial distribution of volume per unit area of each grain-size class on a uniform slope. The grain-size distribution was discretized into three classes. The numerical simulation was repeated 1000 times, and these 1000 turbidite beds were used as training data for an NN with 21000 input nodes, 5 output nodes, and two hidden layers. After the machine learning finished, independent simulations were conducted 200 times to evaluate the performance of the NN. In this test, the initial conditions of the validation data were successfully reconstructed by the NN, and the estimated values showed very small deviations from the true parameters. Compared with previous inverse modeling of turbidity currents, our methodology is superior especially in computational efficiency; it also has advantages in extensibility and applicability to various sediment transport processes such as pyroclastic flows or debris flows.
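
    A toy version of this train-on-simulations inversion workflow is sketched below. The forward model here is a deliberately simple placeholder, not the shallow-water model of the study, and all parameter ranges are invented.

        # Toy train-on-simulations inversion: a placeholder forward model maps
        # two "initial conditions" to a 20-point deposit profile; a network is
        # trained to invert it.
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(3)
        x = np.linspace(0.0, 1.0, 20)                     # positions along the slope

        def forward_model(conditions):
            velocity, concentration = conditions[:, :1], conditions[:, 1:]
            deposit = concentration * np.exp(-x / velocity)   # toy thinning profile
            return deposit + rng.normal(0, 0.01, deposit.shape)

        train_conditions = rng.uniform(0.2, 2.0, (1000, 2))   # 1000 simulated runs
        net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                           random_state=0).fit(forward_model(train_conditions),
                                               train_conditions)

        test_conditions = rng.uniform(0.2, 2.0, (200, 2))     # 200 validation runs
        pred = net.predict(forward_model(test_conditions))
        print("mean absolute error:", np.abs(pred - test_conditions).mean(axis=0))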

  7. Nitrogen-modified nano-titania: True phase composition, microstructure and visible-light induced photocatalytic NOx abatement

    NASA Astrophysics Data System (ADS)

    Tobaldi, D. M.; Pullar, R. C.; Gualtieri, A. F.; Otero-Irurueta, G.; Singh, M. K.; Seabra, M. P.; Labrincha, J. A.

    2015-11-01

    Titanium dioxide (TiO2) is a popular photocatalyst used for many environmental and anti-pollution applications, but it normally operates under UV light, exploiting ∼5% of the solar spectrum. Nitrification of titania to form N-doped TiO2 has been explored as a way to increase its photocatalytic activity under visible light, and anionic doping is a promising method to enable TiO2 to harvest visible light by changing its photo-absorption properties. In this paper, we explore the insertion of nitrogen into the TiO2 lattice using our green sol-gel nanosynthesis method, which was used to create 10 nm TiO2 NPs. Two parallel routes were studied to produce nitrogen-modified TiO2 nanoparticles (NPs), using HNO3+NH3 (acid-precipitated base-peptised) and NH4OH (totally base catalysed) as nitrogen sources. These NPs were thermally treated between 450 and 800 °C. Their true phase composition (crystalline and amorphous phases), as well as their micro-/nanostructure (crystalline domain shape, size and size distribution, edge and screw dislocation density), was fully characterised through advanced X-ray methods (Rietveld-reference intensity ratio, RIR, and whole powder pattern modelling, WPPM). As pollutants, nitrogen oxides (NOx) are of particular concern for human health, so the photocatalytic activity of the NPs was assessed by monitoring NOx abatement, using both solar and white light (indoor artificial lighting), simulating outdoor and indoor environments, respectively. Results showed that the onset of the anatase-to-rutile phase transformation (ART) occurred at temperatures above 450 °C, and NPs heated to 450 °C possessed excellent photocatalytic activity (PCA) under visible white light (indoor artificial lighting), with a PCA double that of the standard P25 TiO2 NPs. However, higher thermal treatment temperatures were found to be detrimental to visible-light photocatalytic activity, due to the effects of four simultaneous occurrences: (i) loss of OH groups and water adsorbed on the photocatalyst surface; (ii) growth of crystalline domain sizes with a decrease in specific surface area; (iii) onset and progress of the ART; and (iv) the increasing instability of the nitrogen in the titania lattice.

  8. TH-E-BRE-09: TrueBeam Monte Carlo Absolute Dose Calculations Using Monitor Chamber Backscatter Simulations and Linac-Logged Target Current

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    A, Popescu I; Lobo, J; Sawkey, D

    2014-06-15

    Purpose: To simulate and measure radiation backscattered into the monitor chamber of a TrueBeam linac; to establish a rigorous framework for absolute dose calculations for TrueBeam Monte Carlo (MC) simulations through a novel approach, taking into account the backscattered radiation and the actual machine output during beam delivery; and to improve agreement between measured and simulated relative output factors. Methods: The 'monitor backscatter factor' is an essential ingredient of a well-established MC absolute dose formalism (the MC equivalent of the TG-51 protocol). This quantity was determined for the 6 MV, 6X FFF, and 10X FFF beams by two independent methods: (1) MC simulations in the monitor chamber of the TrueBeam linac; (2) linac-generated beam record data for target current, logged for each beam delivery. Upper head MC simulations used a freely available, manufacturer-provided interface to a cloud-based platform, allowing use of the same head model as that used to generate the publicly available TrueBeam phase spaces, without revealing the upper head design. The MC absolute dose formalism was expanded to allow direct use of target current data. Results: The relation between backscatter, the number of electrons incident on the target per monitor unit, and MC absolute dose was analyzed for open fields, as well as for a jaw-tracking VMAT plan. The agreement between the two methods was better than 0.15%. It was demonstrated that the agreement between measured and simulated relative output factors improves across all field sizes when backscatter is taken into account. Conclusion: For the first time, simulated monitor chamber dose and measured target current for an actual TrueBeam linac were incorporated into the MC absolute dose formalism. In conjunction with the use of MC inputs generated from post-delivery trajectory-log files, the present method allows accurate MC dose calculations without resorting to any of the simplifying assumptions previously made in the TrueBeam MC literature. This work has been partially funded by Varian Medical Systems.

  9. Small-mammal density estimation: A field comparison of grid-based vs. web-based density estimators

    USGS Publications Warehouse

    Parmenter, R.R.; Yates, Terry L.; Anderson, D.R.; Burnham, K.P.; Dunnum, J.L.; Franklin, A.B.; Friggens, M.T.; Lubow, B.C.; Miller, M.; Olson, G.S.; Parmenter, Cheryl A.; Pollard, J.; Rexstad, E.; Shenk, T.M.; Stanley, T.R.; White, Gary C.

    2003-01-01

    Statistical models for estimating absolute densities of field populations of animals have been widely used over the last century in both scientific studies and wildlife management programs. To date, two general classes of density estimation models have been developed: models that use data sets from capture–recapture or removal sampling techniques (often derived from trapping grids) from which separate estimates of population size (N̂) and effective sampling area (Â) are used to calculate density (D̂ = N̂/Â); and models applicable to sampling regimes using distance-sampling theory (typically transect lines or trapping webs) to estimate detection functions and densities directly from the distance data. However, few studies have evaluated these respective models for accuracy, precision, and bias on known field populations, and no studies have been conducted that compare the two approaches under controlled field conditions. In this study, we evaluated both classes of density estimators on known densities of enclosed rodent populations. Test data sets (n = 11) were developed using nine rodent species from capture–recapture live-trapping on both trapping grids and trapping webs in four replicate 4.2-ha enclosures on the Sevilleta National Wildlife Refuge in central New Mexico, USA. Additional “saturation” trapping efforts resulted in an enumeration of the rodent populations in each enclosure, allowing the computation of true densities. Density estimates (D̂) were calculated using program CAPTURE for the grid data sets and program DISTANCE for the web data sets, and these results were compared to the known true densities (D) to evaluate each model's relative mean square error, accuracy, precision, and bias. In addition, we evaluated a variety of approaches to each data set's analysis by having a group of independent expert analysts calculate their best density estimates without a priori knowledge of the true densities; this “blind” test allowed us to evaluate the influence of expertise and experience in calculating density estimates in comparison to simply using default values in programs CAPTURE and DISTANCE. While the rodent sample sizes were considerably smaller than the recommended minimum for good model results, we found that several models performed well empirically, including the web-based uniform and half-normal models in program DISTANCE, and the grid-based models Mb and Mbh in program CAPTURE (with Â adjusted by species-specific full mean maximum distance moved (MMDM) values). These models produced accurate D̂ values (with 95% confidence intervals that included the true D values) and exhibited acceptable bias but poor precision. However, in linear regression analyses comparing each model's D̂ values to the true D values over the range of observed test densities, only the web-based uniform model exhibited a regression slope near 1.0; all other models showed substantial slope deviations, indicating biased estimates at higher or lower density values. In addition, the grid-based D̂ analyses using full MMDM values for Ŵ area adjustments required a number of theoretical assumptions of uncertain validity, and we therefore viewed their empirical successes with caution. Finally, density estimates from the independent analysts were highly variable, but estimates from web-based approaches had smaller mean square errors and better achieved confidence-interval coverage of D than did grid-based approaches. Our results support the contention that web-based approaches for density estimation of small-mammal populations are both theoretically and empirically superior to grid-based approaches, even when sample size is far less than often recommended. In view of the increasing need for standardized environmental measures for comparisons among ecosystems and through time, analytical models based on distance sampling appear to offer accurate density estimation approaches for research studies involving small-mammal abundances.
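
    The grid-based calculation discussed in this record reduces to D̂ = N̂/Â, with the effective area Â obtained by buffering the trapping grid with a boundary strip; the record describes using the full MMDM for that strip. The numbers below are invented.

        # D_hat = N_hat / A_hat with a full-MMDM boundary strip around the
        # trapping grid; all numbers are invented.
        n_hat = 47.0          # capture-recapture abundance estimate
        grid_side_m = 180.0   # e.g. a 10 x 10 grid with 20-m trap spacing
        mmdm_m = 60.0         # mean maximum distance moved

        side = grid_side_m + 2 * mmdm_m        # buffer each side by full MMDM
        a_hat_ha = side**2 / 10_000.0          # effective area in hectares
        print(f"D_hat = {n_hat / a_hat_ha:.2f} animals/ha")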

  10. Estimating the alcohol-breast cancer association: a comparison of diet diaries, FFQs and combined measurements.

    PubMed

    Keogh, Ruth H; Park, Jin Young; White, Ian R; Lentjes, Marleen A H; McTaggart, Alison; Bhaniani, Amit; Cairns, Benjamin J; Key, Timothy J; Greenwood, Darren C; Burley, Victoria J; Cade, Janet E; Dahm, Christina C; Pot, Gerda K; Stephen, Alison M; Masset, Gabriel; Brunner, Eric J; Khaw, Kay-Tee

    2012-07-01

    The alcohol-breast cancer association has been established using alcohol intake measurements from Food Frequency Questionnaires (FFQs). For some nutrients, diet diary measurements are more highly correlated with true intake than FFQ measurements, but it is unknown whether this is true for alcohol. A case-control study (656 breast cancer cases, 1905 matched controls) was sampled from four cohorts in the UK Dietary Cohort Consortium. Alcohol intake was measured prospectively using FFQs and 4- or 7-day diet diaries. Both relied on fixed portion sizes allocated to given beverage types, but those used to obtain FFQ measurements were lower. FFQ measurements were therefore lower on average, and to enable a fair comparison the FFQ was "calibrated" using diet diary portion sizes. Diet diaries gave more zero measurements, demonstrating the challenge of distinguishing never-consumers from episodic consumers using short-term instruments. To use all the information, two combined measurements were calculated. The first is an average of the two measurements with special treatment of zeros. The second is the expected true intake given both measurements, calculated using a measurement error model. After confounder adjustment, the odds ratio (OR) per 10 g/day of alcohol intake was 1.05 (95% CI 0.98, 1.13) using diet diaries and 1.13 (1.02, 1.24) using FFQs. The calibrated FFQ measurement and combined measurements 1 and 2 gave ORs of 1.10 (1.03, 1.18), 1.09 (1.01, 1.18), and 1.09 (0.99, 1.20), respectively. The association was modified by HRT use, being stronger among users than non-users. In summary, using an alcohol measurement from a diet diary at one time point gave attenuated associations compared with the FFQ.
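
    The first combined measurement is described only as an average of the two instruments with special treatment of zeros. One plausible reading (an assumption on our part; the study's exact rule may differ) is sketched below.

        # One reading of combined measurement 1: average the diary and FFQ
        # values, treating a double zero as a true zero. The study's exact
        # rule for zeros may differ; values are invented.
        import numpy as np

        diary = np.array([0.0, 12.0, 0.0, 30.0])   # g/day
        ffq = np.array([0.0, 8.0, 5.0, 22.0])      # g/day

        combined = np.where((diary == 0) & (ffq == 0), 0.0, (diary + ffq) / 2)
        print(combined)                            # [ 0.  10.   2.5 26. ]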

  11. Sensitivity and Specificity of Interictal EEG-fMRI for Detecting the Ictal Onset Zone at Different Statistical Thresholds

    PubMed Central

    Tousseyn, Simon; Dupont, Patrick; Goffin, Karolien; Sunaert, Stefan; Van Paesschen, Wim

    2014-01-01

    There is currently a lack of knowledge about electroencephalography (EEG)-functional magnetic resonance imaging (fMRI) specificity. Our aim was to define the sensitivity and specificity of blood oxygen level dependent (BOLD) responses to interictal epileptic spikes during EEG-fMRI for detecting the ictal onset zone (IOZ). We studied 21 refractory focal epilepsy patients who had a well-defined IOZ after a full presurgical evaluation and interictal spikes during EEG-fMRI. Areas of spike-related BOLD changes overlapping the IOZ in patients were considered true positives; if no overlap was found, they were treated as false negatives. Matched healthy case-controls had undergone similar EEG-fMRI in order to determine true-negative and false-positive fractions. The spike-related regressor of the patient was used in the design matrix of the healthy case-control. Suprathreshold BOLD changes in the brain of controls were considered false positives, and the absence of such changes true negatives. Sensitivity and specificity were calculated for different statistical thresholds at the voxel level, combined with different cluster-size thresholds, and represented in receiver operating characteristic (ROC) curves. Additionally, we calculated the ROC curves based on the cluster containing the maximal significant activation. We achieved a combination of 100% specificity and 62% sensitivity, using a Z-threshold in the interval 3.4–3.5 and a cluster-size threshold of 350 voxels. We could obtain higher sensitivity at the expense of specificity. Similar performance was found when using the cluster containing the maximal significant activation. Our data provide a guideline for different EEG-fMRI settings with their respective sensitivity and specificity for detecting the IOZ. The unique cluster containing the maximal significant BOLD activation was a sensitive and specific marker of the IOZ. PMID:25101049
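
    The bookkeeping behind the sensitivity and specificity figures can be stated in a few lines: patients contribute true positives or false negatives according to overlap with the ictal onset zone, and matched controls contribute false positives or true negatives. The overlap flags below are invented.

        # Per-subject bookkeeping for sensitivity and specificity at one
        # threshold setting; the flags are invented for illustration.
        import numpy as np

        patient_map_overlaps_ioz = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 1], bool)
        control_has_cluster = np.array([0, 0, 0, 1, 0, 0, 0, 0, 0, 0], bool)

        sensitivity = patient_map_overlaps_ioz.mean()    # TP / (TP + FN)
        specificity = 1.0 - control_has_cluster.mean()   # TN / (TN + FP)
        print(f"sensitivity {sensitivity:.0%}, specificity {specificity:.0%}")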

  12. Scale dependence of the 200-mb divergence inferred from EOLE data.

    NASA Technical Reports Server (NTRS)

    Morel, P.; Necco, G.

    1973-01-01

    The EOLE experiment, with 480 constant-volume balloons distributed over the Southern Hemisphere at approximately the 200-mb level, has provided a unique, highly accurate set of tracer trajectories in the general westerly circulation. The trajectories of neighboring balloons are analyzed to estimate the horizontal divergence from the Lagrangian derivative of the area of one cluster. The variance of the divergence estimates results from two almost comparable effects: the true divergence of the horizontal flow and eddy diffusion due to small-scale, two-dimensional turbulence. Taking this into account, the rms divergence is found to be of the order of 10^-5 s^-1 and decreases logarithmically with cluster size. This scale dependence is shown to be consistent with the quasi-geostrophic turbulence model of the general circulation in midlatitudes.
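
    The estimator described, divergence as the fractional rate of change of cluster area, div = (1/A) dA/dt, is easy to illustrate with the shoelace formula applied to a triangle of three balloons; the positions and time interval below are invented.

        # Divergence from the fractional rate of change of a balloon-cluster
        # area (shoelace formula); positions and interval are invented.
        import numpy as np

        def polygon_area(xy):
            x, y = xy[:, 0], xy[:, 1]
            return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

        cluster_t0 = np.array([[0.0, 0.0], [100e3, 0.0], [50e3, 90e3]])  # metres
        cluster_t1 = cluster_t0 * 1.02      # cluster expanded by 2% per axis
        dt = 3600.0                         # one hour between fixes, seconds

        a0, a1 = polygon_area(cluster_t0), polygon_area(cluster_t1)
        print(f"divergence ~ {(a1 - a0) / (a0 * dt):.1e} s^-1")  # ~1e-5 s^-1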

  13. Chirality and gravitational parity violation.

    PubMed

    Bargueño, Pedro

    2015-06-01

    In this review, parity-violating gravitational potentials are presented as possible sources of both true and false chirality. In particular, whereas phenomenological long-range spin-dependent gravitational potentials contain both truly and falsely chiral terms, it is shown that there are models extending general relativity, including coupling of fermionic degrees of freedom to gravity in the presence of torsion, which give rise to short-range truly chiral interactions similar to those usually considered in molecular physics. Physical mechanisms that give rise to gravitational parity violation, together with the expected size of the effects and their experimental constraints, are discussed. Finally, the possible role of parity-violating gravity in the origin of homochirality and a road map for future research in quantum chemistry are presented.

  14. The size-line width relation and the mass of molecular hydrogen

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Issa, M.; Maclaren, I.; Wolfendale, A. W.

    Some difficulties associated with the problem of cloud definition are considered, with particular regard to the crowded distribution of clouds and the difficulty of choosing an appropriate boundary in such circumstances. A number of tests carried out on the original data suggest that the Δv-S relation found by Solomon et al. (1987) is not a genuine reflection of the dynamical state of Giant Molecular Clouds. The Solomon et al. parameters are insensitive to the actual cloud properties and are unable to distinguish true clouds from the consequences of sampling any crowded region of emission down to a low threshold temperature. The overall effect of such problems is to overestimate both the masses of Giant Molecular Clouds and the number of very large clouds. 24 refs.

  15. Toehold-mediated internal control to probe the near-field interaction between the metallic nanoparticle and the fluorophore

    NASA Astrophysics Data System (ADS)

    Ang, Y. S.; Yung, L. Y. L.

    2014-10-01

    Metallic nanoparticles (MNPs) are known to alter the emission of vicinal fluorophores through the near-field interaction, leading to either fluorescence quenching or enhancement. Much ambiguity remains in the experimental outcome of such a near-field interaction, particularly for bulk colloidal solution. It is hypothesized that the strong far-field interference from the inner filter effect of the MNPs could mask the true near-field MNP-fluorophore interaction significantly. Thus, in this work, a reliable internal control capable of decoupling the near-field interaction from far-field interference is established by the use of the DNA toehold concept to mediate the in situ assembly and disassembly of the MNP-fluorophore conjugate. A model gold nanoparticle (AuNP)-Cy3 system is used to investigate our proposed toehold-mediated internal control system. The maximum fluorescence enhancement is obtained for large-sized AuNP (58 nm) separated from Cy3 at an intermediate distance of 6.8 nm, while fluorescence quenching is observed for smaller-sized AuNP (11 nm and 23 nm), which is in agreement with the theoretical values reported in the literature. This work shows that the toehold-mediated internal control design can serve as a central system for evaluating the near-field interaction of other MNP-fluorophore combinations and facilitate the rational design of specific MNP-fluorophore systems for various applications. Electronic supplementary information (ESI) available: DNA sequences, size distribution analysis, photobleaching background and optical characterization. See DOI: 10.1039/c4nr03643c

  16. Dissatisfaction with own body makes patients with eating disorders more sensitive to pain

    PubMed Central

    Yamamotova, Anna; Bulant, Josef; Bocek, Vaclav; Papezova, Hana

    2017-01-01

    Body image represents a multidimensional concept including body image evaluation and perception of body appearance. Disturbances of body image perception are considered to be one of the central aspects of anorexia nervosa and bulimia nervosa. There is growing evidence that body image distortion can be associated with changes in pain perception. The aim of our study was to examine the associations between body image perception, body dissatisfaction, and nociception in women with eating disorders and age-matched healthy control women. We measured body dissatisfaction and pain sensitivity in 61 patients with Diagnostic and Statistical Manual of Mental Disorders-Fourth Edition diagnoses of eating disorders (31 anorexia nervosa and 30 bulimia nervosa) and in 30 healthy women. Thermal pain threshold latencies were evaluated using an analgesia meter, and body image perception and body dissatisfaction were assessed using Anamorphic Micro software (digital pictures of their own body distorted into larger-body and thinner-body images). Patients with eating disorders overestimated their body size in comparison with healthy controls, but the two groups did not differ in body dissatisfaction. In the anorexia and bulimia patient groups, body dissatisfaction (calculated in pixels as desired size/true image size) correlated with pain threshold latencies (r=0.55, p=0.001), while no correlation was found between body image perception (determined as estimated size/true image size) and pain threshold. Thus, we demonstrated that in patients with eating disorders, pain perception is significantly associated with emotional rather than sensory (visual) processing of one’s own body image. The more the patients desired to be thin, the more pain-sensitive they were. Our findings, based on some shared mechanisms of body dissatisfaction and pain perception, support the significance of negative emotions specific for eating disorders and contribute to better understanding of the psychosomatic characteristics of this spectrum of illnesses. PMID:28761371
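
    The reported association is a Pearson correlation between a pixel ratio (desired size/true image size) and pain threshold latency. A sketch of that computation on synthetic values (all numbers invented; scipy stands in for whatever statistics package the authors used):

    ```python
    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(1)
    true_size = rng.uniform(900, 1100, 61)              # pixels in the true body image
    desired_size = true_size * rng.uniform(0.7, 1.0, 61)
    dissatisfaction = desired_size / true_size          # < 1 means "wants to be thinner"

    # Hypothetical latencies loosely coupled to dissatisfaction, plus noise.
    latency = 5 + 10 * dissatisfaction + rng.normal(0, 1, 61)

    r, p = pearsonr(dissatisfaction, latency)
    print(f"r = {r:.2f}, p = {p:.3g}")
    ```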

  17. Monte Carlo modeling of HD120 multileaf collimator on Varian TrueBeam linear accelerator for verification of 6X and 6X FFF VMAT SABR treatment plans

    PubMed Central

    Gete, Ermias; Duzenli, Cheryl; Teke, Tony

    2014-01-01

    A Monte Carlo (MC) validation of the vendor‐supplied Varian TrueBeam 6 MV flattened (6X) phase‐space file and the first implementation of the Siebers‐Keall MC MLC model as applied to the HD120 MLC (for 6X flat and 6X flattening filter‐free (6X FFF) beams) are described. The MC model is validated in the context of VMAT patient‐specific quality assurance. The Monte Carlo commissioning process involves: 1) validating the calculated open‐field percentage depth doses (PDDs), profiles, and output factors (OF), 2) adapting the Siebers‐Keall MLC model to match the new HD120‐MLC geometry and material composition, 3) determining the absolute dose conversion factor for the MC calculation, and 4) validating this entire linac/MLC in the context of dose calculation verification for clinical VMAT plans. MC PDDs for the 6X beams agree with the measured data to within 2.0% for field sizes ranging from 2 × 2 to 40 × 40 cm2. Measured and MC profiles show agreement in the 50% field width and the 80%‐20% penumbra region to within 1.3 mm for all square field sizes. MC OFs for the 2 to 40 cm2 square fields agree with measurement to within 1.6%. Verification of VMAT SABR lung, liver, and vertebra plans demonstrates that measured and MC ion chamber doses agree within 0.6% for the 6X beam and within 2.0% for the 6X FFF beam. A 3D gamma factor analysis demonstrates that for the 6X beam, > 99% of voxels meet the pass criteria (3%/3 mm). For the 6X FFF beam, > 94% of voxels meet this criterion. The TrueBeam accelerator delivering 6X and 6X FFF beams with the HD120 MLC can be modeled in Monte Carlo to provide an independent 3D dose calculation for clinical VMAT plans. This quality assurance tool has been used clinically to verify over 140 6X and 16 6X FFF TrueBeam treatment plans. PACS number: 87.55.K‐ PMID:24892341

  18. Positive self-statements: power for some, peril for others.

    PubMed

    Wood, Joanne V; Perunovic, W Q Elaine; Lee, John W

    2009-07-01

    Positive self-statements are widely believed to boost mood and self-esteem, yet their effectiveness has not been demonstrated. We examined the contrary prediction that positive self-statements can be ineffective or even harmful. A survey study confirmed that people often use positive self-statements and believe them to be effective. Two experiments showed that among participants with low self-esteem, those who repeated a positive self-statement ("I'm a lovable person") or who focused on how that statement was true felt worse than those who did not repeat the statement or who focused on how it was both true and not true. Among participants with high self-esteem, those who repeated the statement or focused on how it was true felt better than those who did not, but to a limited degree. Repeating positive self-statements may benefit certain people, but backfire for the very people who "need" them the most.

  19. Microbubble gas volume: A unifying dose parameter in blood-brain barrier opening by focused ultrasound.

    PubMed

    Song, Kang-Ho; Fan, Alexander C; Hinkle, Joshua J; Newman, Joshua; Borden, Mark A; Harvey, Brandon K

    2017-01-01

    Focused ultrasound with microbubbles is being developed to transiently, locally and noninvasively open the blood-brain barrier (BBB) for improved pharmaceutical delivery. Prior work has demonstrated that, for a given concentration dose, microbubble size affects both the intravascular circulation persistence and extent of BBB opening. When matched to gas volume dose, however, the circulation half-life was found to be independent of microbubble size. In order to determine whether this holds true for BBB opening as well, we independently measured the effects of microbubble size (2 vs. 6 µm diameter) and concentration, covering a range of overlapping gas volume doses (1-40 µL/kg). We first demonstrated precise targeting and a linear dose-response of Evans Blue dye extravasation to the rat striatum for a set of constant microbubble and ultrasound parameters. We found that dye extravasation increased linearly with gas volume dose, with data points from both microbubble sizes collapsing to a single line. A linear trend was observed for both the initial sonication (R² = 0.90) and a second sonication on the contralateral side (R² = 0.68). Based on these results, we conclude that microbubble gas volume dose, not size, determines the extent of BBB opening by focused ultrasound (1 MHz, ~0.5 MPa at the focus). This result may simplify planning for focused ultrasound treatments by constraining the protocol to a single microbubble parameter - gas volume dose - which gives equivalent results for varying size distributions. Finally, using optimal parameters determined for Evans Blue, we demonstrated gene delivery and expression using a viral vector, dsAAV1-CMV-EGFP, one week after BBB disruption, which allowed us to qualitatively evaluate neuronal health.
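
    A sketch of the pooled linear dose-response fit described above, on invented extravasation values; points from both microbubble sizes are regressed against gas volume dose on a single line:

    ```python
    import numpy as np

    # Gas volume doses (uL/kg) for the 2 um and 6 um microbubbles (values invented).
    dose = np.array([1, 5, 10, 20, 40, 1, 5, 10, 20, 40], float)
    extrav = 0.8 * dose + np.random.default_rng(2).normal(0, 2, dose.size)

    slope, intercept = np.polyfit(dose, extrav, 1)
    pred = slope * dose + intercept
    r2 = 1 - np.sum((extrav - pred) ** 2) / np.sum((extrav - extrav.mean()) ** 2)
    print(f"slope={slope:.2f}, intercept={intercept:.2f}, R^2={r2:.2f}")
    ```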

  20. Reproducibility of R-fMRI metrics on the impact of different strategies for multiple comparison correction and sample sizes.

    PubMed

    Chen, Xiao; Lu, Bin; Yan, Chao-Gan

    2018-01-01

    Concerns regarding reproducibility of resting-state functional magnetic resonance imaging (R-fMRI) findings have been raised. Little is known about how to operationally define R-fMRI reproducibility and to what extent it is affected by multiple comparison correction strategies and sample size. We comprehensively assessed two aspects of reproducibility, test-retest reliability and replicability, on widely used R-fMRI metrics in both between-subject contrasts of sex differences and within-subject comparisons of eyes-open and eyes-closed (EOEC) conditions. We noted that a permutation test with Threshold-Free Cluster Enhancement (TFCE), a strict multiple comparison correction strategy, reached the best balance between family-wise error rate (under 5%) and test-retest reliability/replicability (e.g., 0.68 for test-retest reliability and 0.25 for replicability of amplitude of low-frequency fluctuations (ALFF) for between-subject sex differences, 0.49 for replicability of ALFF for within-subject EOEC differences). Although R-fMRI indices attained moderate reliabilities, they replicated poorly in distinct datasets (replicability < 0.3 for between-subject sex differences, < 0.5 for within-subject EOEC differences). By randomly drawing different sample sizes from a single site, we found reliability, sensitivity and positive predictive value (PPV) rose as sample size increased. Small sample sizes (e.g., < 80 [40 per group]) not only minimized power (sensitivity < 2%), but also decreased the likelihood that significant results reflect "true" effects (PPV < 0.26) in sex differences. Our findings have implications for how to select multiple comparison correction strategies and highlight the importance of sufficiently large sample sizes in R-fMRI studies to enhance reproducibility. Hum Brain Mapp 39:300-318, 2018. © 2017 Wiley Periodicals, Inc.
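
    The link between low power and a low likelihood that significant results reflect true effects can be made explicit with the standard positive predictive value relation; the prior fraction of true effects below is an illustrative assumption, not a figure from the study:

    ```python
    # PPV = power * prior / (power * prior + alpha * (1 - prior))
    def ppv(power, alpha=0.05, prior=0.3):
        """Probability that a significant finding reflects a true effect."""
        return power * prior / (power * prior + alpha * (1 - prior))

    for power in (0.02, 0.2, 0.8):   # e.g., sensitivity < 2% for n < 80, as above
        print(f"power={power:.2f} -> PPV={ppv(power):.2f}")
    ```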

  1. Refinement of Ferrite Grain Size near the Ultrafine Range by Multipass, Thermomechanical Compression

    NASA Astrophysics Data System (ADS)

    Patra, S.; Neogy, S.; Kumar, Vinod; Chakrabarti, D.; Haldar, A.

    2012-11-01

    Plane-strain compression testing was carried out on a Nb-Ti-V microalloyed steel in a GLEEBLE3500 simulator, using different amounts of roughing, intermediate, and finishing deformation over the temperature range of 1373 K to 1073 K (1100 °C to 800 °C). A decrease in soaking temperature from 1473 K to 1273 K (1200 °C to 1000 °C) offered marginal refinement in the ferrite (α) grain size from 7.8 to 6.6 μm. Heavy deformation using multiple passes between Ae3 and Ar3 with true strain of 0.8 to 1.2 effectively refined the α grain size (4.1 to 3.2 μm) close to the ultrafine size by dynamic-strain-induced austenite (γ) → ferrite (α) transformation (DSIT). The intensities of microstructural banding, the pearlite fraction in the microstructure (13 pct), and the fraction of the harmful "cube" texture component (5 pct) were reduced with the increase in finishing deformation. Simultaneously, the fractions of high-angle (>15 deg misorientation) boundaries (75 to 80 pct) and of beneficial gamma-fiber (ND//<111>) texture components, along with {332}<133> and {554}<225> components, were increased. Grain refinement and the formation of small Fe3C particles (50- to 600-nm size) increased the hardness of the deformed samples (184 to 192 HV). For the same deformation temperature [1103 K (830 °C)], the difference in α-grain sizes obtained after single-pass (2.7 μm) and multipass compression (3.2 μm) can be explained in view of the static- and dynamic-strain-induced γ → α transformation, strain partitioning between γ and α, dynamic recovery and dynamic recrystallization of the deformed α, and α-grain growth during interpass intervals.

  2. Is psychology suffering from a replication crisis? What does "failure to replicate" really mean?

    PubMed

    Maxwell, Scott E; Lau, Michael Y; Howard, George S

    2015-09-01

    Psychology has recently been viewed as facing a replication crisis because efforts to replicate past study findings frequently do not show the same result. Often, the first study showed a statistically significant result but the replication did not. Questions then arise about whether the first study results were false positives, and whether the replication study correctly indicates that there is truly no effect after all. This article suggests these so-called failures to replicate may not be failures at all, but rather are the result of low statistical power in single replication studies, and the result of failure to appreciate the need for multiple replications in order to have enough power to identify true effects. We provide examples of these power problems and suggest some solutions using Bayesian statistics and meta-analysis. Although the need for multiple replication studies may frustrate those who would prefer quick answers to psychology's alleged crisis, the large sample sizes typically needed to provide firm evidence will almost always require concerted efforts from multiple investigators. As a result, it remains to be seen how many of the recently claimed failures to replicate will be supported or instead may turn out to be artifacts of inadequate sample sizes and single study replications. (PsycINFO Database Record (c) 2015 APA, all rights reserved).

  3. Influence of carbon nanoparticle modification on the mechanical and electrical properties of epoxy in small volumes.

    PubMed

    Leopold, Christian; Augustin, Till; Schwebler, Thomas; Lehmann, Jonas; Liebig, Wilfried V; Fiedler, Bodo

    2017-11-15

    The influence of nanoparticle morphology and filler content on the mechanical and electrical properties of carbon nanoparticle modified epoxy is investigated with regard to small volumes. Three types of particles, representing spherical, tubular and layered morphologies, are used. A clear size effect of increasing true failure strength with decreasing volume is found for neat and carbon black modified epoxy. Carbon nanotube (CNT) modified epoxy exhibits high potential for strength increase, but dispersion and purity are critical. In few-layer graphene modified epoxy, particles are larger than statistically distributed defects and initiate cracks, counteracting any size effect. Different toughness-increasing mechanisms on the nano- and micro-scale, depending on particle morphology, are discussed based on scanning electron microscopy images. Electrical percolation thresholds in the small-volume fibres are significantly higher compared to bulk volume, with CNT found to be the most suitable morphology for forming electrically conductive paths. Good correlation between electrical resistance change and stress-strain behaviour under tensile loads is observed. The results show the possibility of detecting internal damage in small volumes by measuring electrical resistance and therefore indicate the high potential of using CNT modified polymers in fibre reinforced plastics as a multifunctional, self-monitoring material with improved mechanical properties. Copyright © 2017. Published by Elsevier Inc.

  4. Interpreting incremental value of markers added to risk prediction models.

    PubMed

    Pencina, Michael J; D'Agostino, Ralph B; Pencina, Karol M; Janssens, A Cecile J W; Greenland, Philip

    2012-09-15

    The discrimination of a risk prediction model measures that model's ability to distinguish between subjects with and without events. The area under the receiver operating characteristic curve (AUC) is a popular measure of discrimination. However, the AUC has recently been criticized for its insensitivity in model comparisons in which the baseline model has performed well. Thus, 2 other measures have been proposed to capture improvement in discrimination for nested models: the integrated discrimination improvement and the continuous net reclassification improvement. In the present study, the authors use mathematical relations and numerical simulations to quantify the improvement in discrimination offered by candidate markers of different strengths as measured by their effect sizes. They demonstrate that the increase in the AUC depends on the strength of the baseline model, which is true to a lesser degree for the integrated discrimination improvement. On the other hand, the continuous net reclassification improvement depends only on the effect size of the candidate variable and its correlation with other predictors. These measures are illustrated using the Framingham model for incident atrial fibrillation. The authors conclude that the increase in the AUC, integrated discrimination improvement, and net reclassification improvement offer complementary information and thus recommend reporting all 3 alongside measures characterizing the performance of the final model.
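
    A sketch contrasting two of the discussed metrics, the increase in AUC and the continuous NRI, for a nested logistic model on synthetic data (the data-generating model and effect sizes are assumptions):

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(3)
    n = 2000
    x1 = rng.normal(size=n)                  # baseline predictor
    x2 = rng.normal(size=n)                  # candidate marker
    y = rng.random(n) < 1 / (1 + np.exp(-(0.8 * x1 + 0.5 * x2)))

    base = LogisticRegression().fit(x1[:, None], y)
    full = LogisticRegression().fit(np.c_[x1, x2], y)
    p_base = base.predict_proba(x1[:, None])[:, 1]
    p_full = full.predict_proba(np.c_[x1, x2])[:, 1]

    delta_auc = roc_auc_score(y, p_full) - roc_auc_score(y, p_base)
    up = p_full > p_base                     # risk moved up under the new model
    nri = (up[y].mean() - (~up)[y].mean()) + ((~up)[~y].mean() - up[~y].mean())
    print(f"delta AUC = {delta_auc:.3f}, continuous NRI = {nri:.3f}")
    ```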

  5. Experimental verification of cleavage characteristic stress vs grain size

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lei, W.; Li, D.; Yao, M.

    Instead of the accepted cleavage fracture stress σ_f proposed by Knott et al., a new parameter S_co, named the "cleavage characteristic stress," has been recently recommended to characterize the microscopic resistance to cleavage fracture. By definition, S_co is the fracture stress at the brittle/ductile transition temperature of steels in plain tension, below which the yield strength approximately equals the true fracture stress, combined with an abrupt curtailment of ductility. By considering a single-grain microcrack arrested at a boundary, Huang and Yao set up an expression of S_co as a function of grain size. The present work was arranged to provide an experimental verification of S_co vs grain size.

  6. Axillary lymph node metastases in patients with breast carcinomas: assessment with nonenhanced versus uspio-enhanced MR imaging.

    PubMed

    Memarsadeghi, Mazda; Riedl, Christopher C; Kaneider, Andreas; Galid, Arik; Rudas, Margaretha; Matzek, Wolfgang; Helbich, Thomas H

    2006-11-01

    To prospectively assess the accuracy of nonenhanced versus ultrasmall superparamagnetic iron oxide (USPIO)-enhanced magnetic resonance (MR) imaging for depiction of axillary lymph node metastases in patients with breast carcinoma, with histopathologic findings as reference standard. The study was approved by the university ethics committee; written informed consent was obtained. Twenty-two women (mean age, 60 years; range, 40-79 years) with breast carcinomas underwent nonenhanced and USPIO-enhanced (2.6 mg of iron per kilogram of body weight intravenously administered) transverse T1-weighted and transverse and sagittal T2-weighted and T2*-weighted MR imaging in adducted and elevated arm positions. Two experienced radiologists, blinded to the histopathologic findings, analyzed images of axillary lymph nodes with regard to size, morphologic features, and USPIO uptake. A third independent radiologist served as a tiebreaker if consensus between two readers could not be reached. Visual and quantitative analyses of MR images were performed. Sensitivity, specificity, and accuracy values were calculated. To assess the effect of USPIO after administration, signal-to-noise ratio (SNR) changes were statistically analyzed with repeated-measurements analysis of variance (mixed model) for MR sequences. At nonenhanced MR imaging, of 133 lymph nodes, six were rated as true-positive, 99 as true-negative, 23 as false-positive, and five as false-negative. At USPIO-enhanced MR imaging, 11 lymph nodes were rated as true-positive, 120 as true-negative, two as false-positive, and none as false-negative. In two metastatic lymph nodes in two patients with more than one metastatic lymph node, a consensus was not reached. USPIO-enhanced MR imaging revealed a node-by-node sensitivity, specificity, and accuracy of 100%, 98%, and 98%, respectively. At USPIO-enhanced MR imaging, no metastatic lymph nodes were missed on a patient-by-patient basis. Significant interactions indicating differences in the decrease of SNR values for metastatic and nonmetastatic lymph nodes were found for all sequences (P < .001 to P = .022). USPIO-enhanced MR imaging appears valuable for assessment of axillary lymph node metastases in patients with breast carcinomas and is superior to nonenhanced MR imaging.
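
    The node-by-node sensitivity, specificity, and accuracy quoted above follow directly from the reported counts; a small helper makes the arithmetic explicit:

    ```python
    def diagnostics(tp, tn, fp, fn):
        sens = tp / (tp + fn)
        spec = tn / (tn + fp)
        acc = (tp + tn) / (tp + tn + fp + fn)
        return round(sens, 2), round(spec, 2), round(acc, 2)

    # Counts reported above for the 133 evaluated lymph nodes.
    print(diagnostics(tp=6, tn=99, fp=23, fn=5))    # nonenhanced MR imaging
    print(diagnostics(tp=11, tn=120, fp=2, fn=0))   # USPIO-enhanced: 1.0, 0.98, 0.98
    ```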

  7. The influence of radiographic viewing perspective and demographics on the Critical Shoulder Angle

    PubMed Central

    Suter, Thomas; Popp, Ariane Gerber; Zhang, Yue; Zhang, Chong; Tashjian, Robert Z.; Henninger, Heath B.

    2014-01-01

    Background Accurate assessment of the critical shoulder angle (CSA) is important in clinical evaluation of degenerative rotator cuff tears. This study analyzed the influence of radiographic viewing perspective on the CSA, developed a classification system to identify malpositioned radiographs, and assessed the relationship between the CSA and demographic factors. Methods Glenoid height, width and retroversion were measured on 3D CT reconstructions of 68 cadaver scapulae. A digitally reconstructed radiograph was aligned perpendicular to the scapular plane, and retroversion was corrected to obtain a true antero-posterior (AP) view. In 10 scapulae, incremental anteversion/retroversion and flexion/extension views were generated. The CSA was measured and a clinically applicable classification system was developed to detect views with >2° change in CSA versus true AP. Results The average CSA was 33±4°. Intra- and inter-observer reliability was high (ICC≥0.81) but decreased with increasing viewing angle. Views beyond 5° anteversion, 8° retroversion, 15° flexion and 26° extension resulted in >2° deviation of the CSA compared to true AP. The classification system was capable of detecting aberrant viewing perspectives with sensitivity of 95% and specificity of 53%. Correlations between glenoid size and CSA were small (R≤0.3), and CSA did not vary by gender (p=0.426) or side (p=0.821). Conclusions The CSA was most susceptible to malposition in ante/retroversion. Deviations as little as 5° in anteversion resulted in a CSA >2° from true AP. A new classification system refines the ability to collect true AP radiographs of the scapula. The CSA was unaffected by demographic factors. PMID:25591458

  8. Evaluation of a risk-based environmental hot spot delineation algorithm.

    PubMed

    Sinha, Parikhit; Lambert, Michael B; Schew, William A

    2007-10-22

    Following remedial investigations of hazardous waste sites, remedial strategies may be developed that target the removal of "hot spots," localized areas of elevated contamination. For a given exposure area, a hot spot may be defined as a sub-area that causes risks for the whole exposure area to be unacceptable. The converse of this statement may also apply: when a hot spot is removed from within an exposure area, risks for the exposure area may drop below unacceptable thresholds. The latter is the motivation for a risk-based approach to hot spot delineation, which was evaluated using Monte Carlo simulation. Random samples taken from a virtual site ("true site") were used to create an interpolated site. The latter was gridded and concentrations from the center of each grid box were used to calculate 95% upper confidence limits on the mean site contaminant concentration and corresponding hazard quotients for a potential receptor. Grid cells with the highest concentrations were removed and hazard quotients were recalculated until the site hazard quotient dropped below the threshold of 1. The grid cells removed in this way define the spatial extent of the hot spot. For each of the 100,000 Monte Carlo iterations, the delineated hot spot was compared to the hot spot in the "true site." On average, the algorithm was able to delineate hot spots that were collocated with and equal to or greater in size than the "true hot spot." When delineated hot spots were mapped onto the "true site," setting contaminant concentrations in the mapped area to zero, the hazard quotients for these "remediated true sites" were on average within 5% of the acceptable threshold of 1.
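
    A sketch of the greedy removal step described above. For brevity the 95% upper confidence limit on the mean is replaced by the plain mean, and the hazard quotient is modelled as mean concentration over a hypothetical reference level; this is not the authors' actual exposure model:

    ```python
    import numpy as np

    def delineate_hot_spot(conc, reference=5.0):
        """Return indices of the grid cells forming the hot spot."""
        conc = conc.astype(float).copy()
        order = np.argsort(conc)[::-1]           # hottest cells first
        removed = []
        for idx in order:
            if conc.mean() / reference < 1.0:    # site hazard quotient now acceptable
                break
            conc[idx] = 0.0                      # "remediate" this cell
            removed.append(int(idx))
        return removed

    site = np.array([2.0, 3.0, 4.0, 50.0, 60.0, 5.0, 3.0, 2.0])
    print(delineate_hot_spot(site))              # [4, 3]: the two hottest cells
    ```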

  9. Keeping Teachers on the Job Costs Less than Advertised. Policy Memorandum #168

    ERIC Educational Resources Information Center

    Bivens, Josh

    2010-01-01

    A misplaced obsession with the size of federal budget deficits remains the single biggest obstacle to enacting new measures to create jobs on a scale commensurate with the crisis in the American labor market. Even assuming that budget scoring rules can't be changed, at the very least policy makers should be aware of the true impact a given piece…

  10. A conceptual guide to detection probability for point counts and other count-based survey methods

    Treesearch

    D. Archibald McCallum

    2005-01-01

    Accurate and precise estimates of numbers of animals are vitally needed both to assess population status and to evaluate management decisions. Various methods exist for counting birds, but most of those used with territorial landbirds yield only indices, not true estimates of population size. The need for valid density estimates has spawned a number of models for...

  11. Create the Plan, Work the Plan: A Look at Why the Independent Business Owner Has Trouble Calling a Franchisee a True Entrepreneur

    ERIC Educational Resources Information Center

    Buzza, John; Mosca, Joseph B.

    2009-01-01

    Our complex and intricate economic system is comprised of many different types and sizes of businesses, ranging from big corporations to small individually owned entities. The genre of business is and can be profoundly complex. Independence can vary from small single person mom and pops to consortiums of multiple partners, silent partners and…

  12. CCIR for Complex and Uncertain Environments

    DTIC Science & Technology

    2007-05-01

    these purposes. Doctrine gives such a wide variety of reasons for using CCIR that the concept seems unfocused, giving the commander no true criteria...complex. The true effect that these issues have on CCIR is open to a considerable amount of debate. Every commander has the freedom to develop his...operations. He set a precedent that seems to have held true through the entire history of intelligence requirements – every step in the development

  13. SU-D-213-02: Characterization of the Effect of a New Commercial Transmission Detector On Radiotherapy Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheung, J; Morin, O

    2015-06-15

    Purpose: To evaluate the influence of a new commercial transmission detector on radiotherapy beams of various energies. Methods: A transmission detector designed for online treatment monitoring was characterized on a TrueBeam STx linear accelerator with 6MV, 6FFF, 10MV, and 10FFF beams. Measurements of beam characteristics including percentage depth doses (PDDs), inplane and crossplane off-axis profiles at different depths, transmission factors, and skin dose were acquired at field sizes of 3×3cm, 5×5cm, 10×10cm, and 20×20cm at 100cm and 80cm source-to-surface distance (SSD). All measurements were taken with and without the transmission detector in the path of the beam. A CC04 chamber was used for all profile and transmission factor measurements. Skin dose was assessed at 100cm, 90cm, and 80cm SSD and using a variety of detectors (Roos and Markus parallel-plate chambers, and OSLD). Results: The PDDs showed small differences between the unperturbed and perturbed beams for both 100cm and 80cm SSD (≤4mm dmax difference and <1.2% average profile difference). The differences were larger for the flattened beams and at larger field sizes. The off-axis profiles showed similar trends. The penumbras looked similar with and without the transmission detector. Comparisons in the central 80% of the profile showed a maximum average (maximum) profile difference between all field sizes of 0.756% (1.535%) and 0.739% (3.682%) for 100cm and 80cm SSD, respectively. The average measured skin dose at 100cm (80cm) SSD for the 10×10cm field size was <4% (<35%) dose increase for all energies. For the 20×20cm field size, this value increased to <10% (≤45%). Conclusion: The transmission detector has minimal effect on the clinically relevant radiotherapy beams for IMRT and VMAT (field sizes 10×10cm and less). For larger field sizes, some perturbations are observable, which would need to be assessed for clinical impact. The authors of this publication have research support from IBA Dosimetry.

  14. A Novel Pairwise Comparison-Based Method to Determine Radiation Dose Reduction Potentials of Iterative Reconstruction Algorithms, Exemplified Through Circle of Willis Computed Tomography Angiography.

    PubMed

    Ellmann, Stephan; Kammerer, Ferdinand; Brand, Michael; Allmendinger, Thomas; May, Matthias S; Uder, Michael; Lell, Michael M; Kramer, Manuel

    2016-05-01

    The aim of this study was to determine the dose reduction potential of iterative reconstruction (IR) algorithms in computed tomography angiography (CTA) of the circle of Willis using a novel method of evaluating the quality of radiation dose-reduced images. This study relied on ReconCT, a proprietary reconstruction software that allows simulating CT scans acquired with reduced radiation dose based on the raw data of true scans. To evaluate the performance of ReconCT in this regard, a phantom study was performed to compare the image noise of true and simulated scans within simulated vessels of a head phantom. Following that, 10 patients scheduled for CTA of the circle of Willis were scanned according to our institute's standard protocol (100 kV, 145 reference mAs). Subsequently, CTA images of these patients were reconstructed as either a full-dose weighted filtered back projection or with radiation dose reductions down to 10% of the full-dose level and Sinogram-Affirmed Iterative Reconstruction (SAFIRE) with either strength 3 or 5. Images were marked with arrows pointing to vessels of different sizes, and image pairs were presented to observers. Five readers assessed image quality with 2-alternative forced choice comparisons. In the phantom study, no significant differences were observed between the noise levels of simulated and true scans in filtered back projection, SAFIRE 3, and SAFIRE 5 reconstructions. The dose reduction potential for patient scans showed a strong dependence on IR strength as well as on the size of the vessel of interest. Thus, the potential radiation dose reductions ranged from 84.4% for the evaluation of great vessels reconstructed with SAFIRE 5 to 40.9% for the evaluation of small vessels reconstructed with SAFIRE 3. This study provides a novel image quality evaluation method based on 2-alternative forced choice comparisons. In CTA of the circle of Willis, higher IR strengths and greater vessel sizes allowed higher degrees of radiation dose reduction.

  15. Verification of ARMA identification for modelling temporal correlation of GPS observations using the toolbox ARMASA

    NASA Astrophysics Data System (ADS)

    Luo, Xiaoguang; Mayer, Michael; Heck, Bernhard

    2010-05-01

    One essential deficiency of the stochastic model used in many GNSS (Global Navigation Satellite Systems) software products is that it neglects the temporal correlation of GNSS observations. Analysing appropriately detrended time series of observation residuals resulting from GPS (Global Positioning System) data processing, the temporal correlation behaviour of GPS observations can be sufficiently described by means of so-called autoregressive moving average (ARMA) processes. Using the toolbox ARMASA, which is available free of charge in MATLAB® Central (the open exchange platform for the MATLAB® and SIMULINK® user community), a well-fitting time series model can be identified automatically in three steps. Firstly, AR, MA, and ARMA models are computed up to some user-specified maximum order. Subsequently, for each model type, the best-fitting model is selected using the combined information criterion (for AR processes) or the generalised information criterion (for MA and ARMA processes). The final model identification among the best-fitting AR, MA, and ARMA models is performed based on the minimum prediction error characterising the discrepancies between the given data and the fitted model. The ARMA coefficients are computed using Burg's maximum entropy algorithm (for AR processes) and Durbin's first (for MA processes) and second (for ARMA processes) methods, respectively. This paper verifies the performance of the automated ARMA identification using the toolbox ARMASA. For this purpose, a representative data base is generated by means of ARMA simulation with respect to sample size, correlation level, and model complexity. The model error, defined as a transform of the prediction error, is used as a measure of the deviation between the true and the estimated model. The results of the study show that the recognition rates of underlying true processes increase with increasing sample sizes and decrease with rising model complexity. Considering large sample sizes, the true underlying processes can be correctly recognised for nearly 80% of the analysed data sets. Additionally, the model errors of first-order AR and MA processes converge clearly more rapidly to the corresponding asymptotic values than those of high-order ARMA processes.
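
    A sketch of automated order selection in the spirit of the three-step procedure described above, using statsmodels and plain AIC as a stand-in for ARMASA's combined/generalised information criteria and final prediction-error comparison:

    ```python
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA
    from statsmodels.tsa.arima_process import ArmaProcess

    rng = np.random.default_rng(4)
    true_ar = [1, -0.75, 0.25]     # AR(2): x_t = 0.75 x_{t-1} - 0.25 x_{t-2} + e_t
    x = ArmaProcess(true_ar, [1]).generate_sample(2000, distrvs=rng.standard_normal)

    best = None
    for p in range(4):             # candidate AR orders
        for q in range(4):         # candidate MA orders
            if p == q == 0:
                continue
            aic = ARIMA(x, order=(p, 0, q)).fit().aic
            if best is None or aic < best[0]:
                best = (aic, p, q)
    print(f"selected ARMA({best[1]},{best[2]}), AIC={best[0]:.1f}")
    ```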

  16. True or false? Memory is differentially affected by stress-induced cortisol elevations and sympathetic activity at consolidation and retrieval.

    PubMed

    Smeets, Tom; Otgaar, Henry; Candel, Ingrid; Wolf, Oliver T

    2008-11-01

    Adrenal stress hormones released in response to acute stress may yield memory-enhancing effects when released post-learning and impairing effects at memory retrieval, especially for emotional memory material. However, so far these differential effects of stress hormones on the various memory phases for neutral and emotional memory material have not been demonstrated within one experiment. This study investigated whether, in line with their effects on true memory, stress and stress-induced adrenal stress hormones affect the encoding, consolidation, and retrieval of emotional and neutral false memories. Participants (N=90) were exposed to a stressor before encoding, during consolidation, before retrieval, or were not stressed and then were subjected to neutral and emotional versions of the Deese-Roediger-McDermott word list learning paradigm. Twenty-four hours later, recall of presented words (true recall) and non-presented critical lure words (false recall) was assessed. Results show that stress exposure resulted in superior true memory performance in the consolidation stress group and reduced true memory performance in the retrieval stress group compared to the other groups, predominantly for emotional words. These memory-enhancing and memory-impairing effects were strongly related to stress-induced cortisol and sympathetic activity measured via salivary alpha-amylase levels. Neutral and emotional false recall, on the other hand, was neither affected by stress exposure, nor related to cortisol and sympathetic activity following stress. These results demonstrate the importance of stress-induced hormone-related activity in enhancing memory consolidation and in impairing memory retrieval, in particular for emotional memory material.

  17. Improved Taxation Rate for Bin Packing Games

    NASA Astrophysics Data System (ADS)

    Kern, Walter; Qiu, Xian

    A cooperative bin packing game is an N-person game, where the player set N consists of k bins of capacity 1 each and n items of sizes a_1, …, a_n. The value of a coalition of players is defined to be the maximum total size of items in the coalition that can be packed into the bins of the coalition. We present an alternative proof for the non-emptiness of the 1/3-core for all bin packing games and show how to improve this bound ε = 1/3 slightly. We conjecture that the true best possible value is ε = 1/7.
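
    The coalition value defined above can be computed by brute force for tiny games; the sketch below uses a first-fit-decreasing feasibility check, which is a heuristic and may undercount in rare instances where only a cleverer packing fits:

    ```python
    from itertools import combinations

    def packs(items, k):
        """First-fit-decreasing feasibility check for k unit-capacity bins."""
        bins = [0.0] * k
        for a in sorted(items, reverse=True):
            for i in range(k):
                if bins[i] + a <= 1.0:
                    bins[i] += a
                    break
            else:
                return False
        return True

    def coalition_value(items, k):
        """Maximum total size of items packable into the coalition's k bins."""
        best = 0.0
        for r in range(len(items) + 1):
            for subset in combinations(items, r):
                if sum(subset) > best and packs(subset, k):
                    best = sum(subset)
        return best

    print(coalition_value([0.6, 0.5, 0.5, 0.4], k=2))   # 0.6+0.4 and 0.5+0.5 -> 2.0
    ```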

  18. Long-term support and personal adjustment of adolescent and older mothers.

    PubMed

    Schilmoeller, G L; Baranowski, M D; Higgins, B S

    1991-01-01

    Adolescent and older mothers reported the size and quality of social networks and perceptions of family support at 1, 6, and 12 months postpartum. Maternal behavior, general life satisfaction, and parental satisfaction were assessed at 12 months. No significant differences were found in the size of social networks and quality of interactions within those networks, though older mothers had significantly higher scores on perceived family support than did adolescent mothers. Perceived family support and quality of interactions within the social network generally were associated positively with maternal behavior, life satisfaction, and parental satisfaction. This was true in more cases for the adolescent than for older mothers.

  19. Violent video game effects remain a societal concern: Reply to Hilgard, Engelhardt, and Rouder (2017).

    PubMed

    Kepes, Sven; Bushman, Brad J; Anderson, Craig A

    2017-07-01

    A large meta-analysis by Anderson et al. (2010) found that violent video games increased aggressive thoughts, angry feelings, physiological arousal, and aggressive behavior and decreased empathic feelings and helping behavior. Hilgard, Engelhardt, and Rouder (2017) reanalyzed the data of Anderson et al. (2010) using newer publication bias methods (i.e., precision-effect test, precision-effect estimate with standard error, p-uniform, p-curve). Based on their reanalysis, Hilgard, Engelhardt, and Rouder concluded that experimental studies examining the effect of violent video games on aggressive affect and aggressive behavior may be contaminated by publication bias, and these effects are very small when corrected for publication bias. However, the newer methods Hilgard, Engelhardt, and Rouder used may not be the most appropriate. Because publication bias is a potential problem in any scientific domain, we used a comprehensive sensitivity analysis battery to examine the influence of publication bias and outliers on the experimental effects reported by Anderson et al. We used best meta-analytic practices and the triangulation approach to locate the likely position of the true mean effect size estimates. Using this methodological approach, we found that the combined adverse effects of outliers and publication bias were less severe than what Hilgard, Engelhardt, and Rouder found for publication bias alone. Moreover, the obtained mean effects using recommended methods and practices were not very small in size. The results of the methods used by Hilgard, Engelhardt, and Rouder tended not to converge well with the results of the methods we used, indicating potentially poor performance. We therefore conclude that violent video game effects should remain a societal concern. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  20. Effects of Different Types of True-False Questions on Memory Awareness and Long-Term Retention

    ERIC Educational Resources Information Center

    Schaap, Lydia; Verkoeijen, Peter; Schmidt, Henk

    2014-01-01

    This study investigated the effects of two different true-false questions on memory awareness and long-term retention of knowledge. Participants took four subsequent knowledge tests on curriculum learning material that they studied at different retention intervals prior to the start of this study (i.e. prior to the first test). At the first and…

  1. Student Learning through Service Learning: Effects on Academic Development, Civic Responsibility, Interpersonal Skills and Practical Skills

    ERIC Educational Resources Information Center

    Hébert, Ali; Hauf, Petra

    2015-01-01

    Although anecdotal evidence and research alike espouse the benefits of service learning, some researchers have suggested that more rigorous testing is required in order to determine its true effect on students. This is particularly true in the case of academic development, which has been inconsistently linked to service learning. It has been…

  2. New approach to calculate the true-coincidence effect of HpGe detector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alnour, I. A., E-mail: aaibrahim3@live.utm.my, E-mail: ibrahim.elnour@yahoo.com; Wagiran, H.; Ibrahim, N.

    The corrections for true-coincidence effects in HpGe detectors are important, especially at low source-to-detector distances. This work established an approach to calculate the true-coincidence effects experimentally for HpGe detectors of type Canberra GC3018 and Ortec GEM25-76-XLB-C, which are in operation at the neutron activation analysis lab at the Malaysian Nuclear Agency (NM). The correction for true-coincidence effects was performed close to the detector, at distances of 2 and 5 cm, using 57Co, 60Co, 133Ba and 137Cs as standard point sources. The correction factors ranged between 0.93-1.10 at 2 cm and 0.97-1.00 at 5 cm for the Canberra HpGe detector, whereas for the Ortec HpGe detector they ranged between 0.92-1.13 and 0.95-1.00 at 2 and 5 cm, respectively. The change in the efficiency calibration curve of the detector at 2 and 5 cm after correction was found to be less than 1%. Moreover, the polynomial parameter functions were fitted through a computer program, MATLAB, in order to find an accurate fit to the experimental data points.
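
    The final fitting step mentioned above (polynomial parameter functions fitted to the experimental efficiency points, originally done in MATLAB) might look like the following in outline; the energies and efficiencies here are invented placeholders:

    ```python
    import numpy as np

    # Full-energy peak efficiencies at the calibration energies (keV):
    # 122 (57Co), 356 (133Ba), 662 (137Cs), 1173 and 1332 (60Co). Values invented.
    energy = np.array([122, 356, 662, 1173, 1332], float)
    eff = np.array([0.062, 0.031, 0.019, 0.012, 0.011])

    # Quadratic in log(E) fitted to log(efficiency), a common calibration form.
    coeffs = np.polyfit(np.log(energy), np.log(eff), deg=2)
    fit = np.exp(np.polyval(coeffs, np.log(energy)))
    print(np.round(fit / eff, 3))   # fit-to-measurement ratios, close to 1
    ```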

  3. Experimental Characterization and Micromechanical Modeling of Woven Carbon/Copper Composites

    NASA Technical Reports Server (NTRS)

    Bednarcyk, Brett A.; Pauly, Christopher C.; Pindera, Marek-Jerzy

    1997-01-01

    The results of an extensive experimental characterization and a preliminary analytical modeling effort for the elastoplastic mechanical behavior of 8-harness satin weave carbon/copper (C/Cu) composites are presented. Previous experimental and modeling investigations of woven composites are discussed, as is the evolution of, and motivation for, the continuing research on C/Cu composites. Experimental results of monotonic and cyclic tension, compression, and Iosipescu shear tests, and combined tension-compression tests, are presented. With regard to the test results, emphasis is placed on the effect of strain gauge size and placement, the effect of alloying the copper matrix to improve fiber-matrix bonding, yield surface characterization, and failure mechanisms. The analytical methodology used in this investigation consists of an extension of the three-dimensional generalized method of cells (GMC-3D) micromechanics model, developed by Aboudi (1994), to include inhomogeneity and plasticity effects on the subcell level. The extension of the model allows prediction of the elastoplastic mechanical response of woven composites, as represented by a true repeating unit cell for the woven composite. The model is used to examine the effects of refining the representative geometry of the composite, altering the composite overall fiber volume fraction, changing the size and placement of the strain gauge with respect to the composite's reinforcement weave, and including porosity within the infiltrated fiber yarns on the in-plane elastoplastic tensile, compressive, and shear response of 8-harness satin C/Cu. The model predictions are also compared with the appropriate monotonic experimental results.

  4. Improved identification of the solution space of aerosol microphysical properties derived from the inversion of profiles of lidar optical data, part 1: theory.

    PubMed

    Kolgotin, Alexei; Müller, Detlef; Chemyakin, Eduard; Romanov, Anton

    2016-12-01

    Multiwavelength Raman/high spectral resolution lidars that measure backscatter coefficients at 355, 532, and 1064 nm and extinction coefficients at 355 and 532 nm can be used for the retrieval of particle microphysical parameters, such as effective and mean radius, number, surface-area and volume concentrations, and complex refractive index, from inversion algorithms. In this study, we carry out a correlation analysis in order to investigate the degree of dependence that may exist between the optical data taken with lidar and the underlying microphysical parameters. We also investigate if the correlation properties identified in our study can be used as a priori or a posteriori constraints for our inversion scheme so that the inversion results can be improved. We made the simplifying assumption of error-free optical data in order to find out what correlations exist in the best case situation. Clearly, for practical applications, erroneous data need to be considered too. On the basis of simulations with synthetic optical data, we find the following results, which hold true for arbitrary particle size distributions, i.e., regardless of the modality or the shape of the size distribution function: surface-area concentrations and extinction coefficients are linearly correlated with a correlation coefficient above 0.99. We also find a correlation coefficient above 0.99 for the extinction coefficient versus (1) the ratio of the volume concentration to effective radius and (2) the product of the number concentration times the sum of the squares of the mean radius and standard deviation of the investigated particle size distributions. Besides that, we find that for particles of any mode fraction of the particle size distribution, the complex refractive index is uniquely defined by extinction- and backscatter-related Ångström exponents, lidar ratios at two wavelengths, and an effective radius.

  5. Energy Content Estimation by Collegians for Portion Standardized Foods Frequently Consumed in Korea

    PubMed Central

    Kim, Jin; Lee, Hee Jung; Lee, Hyun Jung; Lee, Sun Ha; Yun, Jee-Young; Choi, Mi-Kyeong

    2014-01-01

    The purpose of this study is to estimate Korean collegians' knowledge of energy content in the standard portion size of foods frequently consumed in Korea and to investigate the differences in knowledge between gender groups. A total of 600 collegians participated in this study. Participants' knowledge was assessed based on their estimates of the energy content of 30 selected food items presented with actual-size photo images. Standard portion size of food was based on the 2010 Korean Dietary Reference Intakes, and the percentage of participants who accurately estimated (that is, within 20% of the true value) the energy content of the standard portion size was calculated for each food item. The food for which the most participants provided an accurate estimate was ramyun (instant noodles) (67.7%), followed by cooked rice (57.8%). The proportion of students who overestimated the energy content was highest for vegetables (68.8%) and beverages (68.1%). The proportion of students who underestimated the energy content was highest for grains and starches (42.0%) and fruits (37.1%). Female students were more likely to check the energy content of foods that they consumed than male students. From these results, it was concluded that the knowledge of food energy content was poor among collegians, with some gender difference. Therefore, in the future, nutrition education programs should give greater attention to improving knowledge of calorie content and to helping them apply this knowledge in order to develop effective dietary plans. PMID:24527417

  6. Energy content estimation by collegians for portion standardized foods frequently consumed in Korea.

    PubMed

    Kim, Jin; Lee, Hee Jung; Lee, Hyun Jung; Lee, Sun Ha; Yun, Jee-Young; Choi, Mi-Kyeong; Kim, Mi-Hyun

    2014-01-01

    The purpose of this study is to estimate Korean collegians' knowledge of energy content in the standard portion size of foods frequently consumed in Korea and to investigate the differences in knowledge between gender groups. A total of 600 collegians participated in this study. Participants' knowledge was assessed based on their estimates of the energy content of 30 selected food items presented with actual-size photo images. Standard portion size of food was based on the 2010 Korean Dietary Reference Intakes, and the percentage of participants who accurately estimated (that is, within 20% of the true value) the energy content of the standard portion size was calculated for each food item. The food for which the most participants provided an accurate estimate was ramyun (instant noodles) (67.7%), followed by cooked rice (57.8%). The proportion of students who overestimated the energy content was highest for vegetables (68.8%) and beverages (68.1%). The proportion of students who underestimated the energy content was highest for grains and starches (42.0%) and fruits (37.1%). Female students were more likely to check the energy content of foods that they consumed than male students. From these results, it was concluded that the knowledge of food energy content was poor among collegians, with some gender difference. Therefore, in the future, nutrition education programs should give greater attention to improving knowledge of calorie content and to helping them apply this knowledge in order to develop effective dietary plans.

  7. Complex Population Dynamics and the Coalescent Under Neutrality

    PubMed Central

    Volz, Erik M.

    2012-01-01

    Estimates of the coalescent effective population size Ne can be poorly correlated with the true population size. The relationship between Ne and the population size is sensitive to the way in which birth and death rates vary over time. The problem of inference is exacerbated when the mechanisms underlying population dynamics are complex and depend on many parameters. In instances where nonparametric estimators of Ne such as the skyline struggle to reproduce the correct demographic history, model-based estimators that can draw on prior information about population size and growth rates may be more efficient. A coalescent model is developed for a large class of populations such that the demographic history is described by a deterministic nonlinear dynamical system of arbitrary dimension. This class of demographic model differs from those typically used in population genetics. Birth and death rates are not fixed, and no assumptions are made regarding the fraction of the population sampled. Furthermore, the population may be structured in such a way that gene copies reproduce both within and across demes. For this large class of models, it is shown how to derive the rate of coalescence, as well as the likelihood of a gene genealogy with heterochronous sampling and labeled taxa, and how to simulate a coalescent tree conditional on a complex demographic history. This theoretical framework encapsulates many of the models used by ecologists and epidemiologists and should facilitate the integration of population genetics with the study of mathematical population dynamics. PMID:22042576

  8. Dry paths effectively reduce road mortality of small and medium-sized terrestrial vertebrates.

    PubMed

    Niemi, Milla; Jääskeläinen, Niina C; Nummi, Petri; Mäkelä, Tiina; Norrdahl, Kai

    2014-11-01

    Wildlife passages are widely used mitigation measures designed to reduce the adverse impacts of roads on animals. We investigated whether road kills of small and medium-sized terrestrial vertebrates can be reduced by constructing dry paths adjacent to streams that pass under road bridges. The study was carried out in southern Finland during the summer of 2008. We selected ten road bridges with dry paths and ten bridges without them, and an individual dry land reference site for each study bridge on the basis of landscape and traffic features. A total of 307 dead terrestrial vertebrates were identified during the ten-week study period. The presence of dry paths decreased the number of road-killed terrestrial vertebrates (Poisson GLMM; p < 0.001). This was also true when considering amphibians alone (p < 0.001). The evidence on road kills of mammals was not as clear. In the mammal model, a lack of dry paths increased the number of carcasses (p = 0.001), whereas the number of casualties at dry path bridges was comparable with dry land reference sites. A direct comparison of the mortality ratios suggests an average efficiency of 79% for the dry paths. When considering amphibians and mammals alone, the computed effectiveness was 88 and 70%, respectively. Our results demonstrate that dry paths under road bridges can effectively reduce road kills of small and medium-sized terrestrial vertebrates, even without guiding fences. Dry paths seemed to especially benefit amphibians, which are a threatened species group worldwide and known to suffer high traffic mortality. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. Effect of the Axial Spacing between Vanes and Blades on a Transonic Gas Turbine Performance and Blade Loading

    NASA Astrophysics Data System (ADS)

    Chang, Dongil; Tavoularis, Stavros

    2013-03-01

    Unsteady numerical simulations have been conducted to investigate the effect of axial spacing between the stator vanes and the rotor blades on the performance of a transonic, single-stage, high-pressure, axial turbine. Three cases were considered, the normal case, which is based on the geometry of a commercial jet engine and has an axial spacing at 50% blade span equal to 42% of the vane axial chord, as well as two other cases with axial spacings equal to 31 and 52% vane axial chords, respectively. Present interest has focused on the effect of axial gap size on the instantaneous and time-averaged flows as well as on the blade loading and the turbine performance. Decreasing the gap size reduced the pressure and increased the Mach number in the core flows in the gap region. However, the flows near the two endwalls did not follow monotonic trends with the gap size change; instead, the Mach numbers for both the small gap and the large gap cases were lower than that for the normal case. This Mach number decrease was attributed to increased turbulence due to the increased wake strength for the small gap case and an increased wake width for the large gap case. In all considered cases, large pressure fluctuations were observed in the front region of the blade suction side. These pressure fluctuations were strongest for the smaller spacing. The turbine efficiencies of the cases with the larger and smaller spacings were essentially the same, but both were lower than that of the normal case. The stator loss for the smaller spacing case was lower than the one for the larger spacing case, whereas the opposite was true for the rotor loss.

  10. Detecting small-study effects and funnel plot asymmetry in meta-analysis of survival data: A comparison of new and existing tests.

    PubMed

    Debray, Thomas P A; Moons, Karel G M; Riley, Richard D

    2018-03-01

    Small-study effects are a common threat in systematic reviews and may indicate publication bias. Their existence is often verified by visual inspection of the funnel plot. Formal tests to assess the presence of funnel plot asymmetry typically estimate the association between reported effect sizes and their standard errors, the total sample size, or the inverse of the total sample size. In this paper, we demonstrate that the application of these tests may be less appropriate in meta-analysis of survival data, where censoring influences the statistical significance of the hazard ratio. We subsequently propose 2 new tests that are based on the total number of observed events and adopt a multiplicative variance component. We compare the performance of the various funnel plot asymmetry tests in an extensive simulation study in which we varied the true hazard ratio (0.5 to 1), the number of published trials (N = 10 to 100), the degree of censoring within trials (0% to 90%), and the mechanism leading to participant dropout (noninformative versus informative). Results demonstrate that previous well-known tests for detecting funnel plot asymmetry suffer from low power or excessive type-I error rates in meta-analysis of survival data, particularly when trials are affected by participant dropout. Because our novel test (adopting estimates of the asymptotic precision as study weights) yields reasonable power and maintains appropriate type-I error rates, we recommend its use to evaluate funnel plot asymmetry in meta-analysis of survival data. The use of funnel plot asymmetry tests should, however, be avoided when few trials are available for the meta-analysis. © 2017 The Authors. Research Synthesis Methods Published by John Wiley & Sons, Ltd.
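
    For reference, the classical Egger-type regression that these authors compare against can be written in a few lines. A sketch with made-up effect sizes, where a non-zero intercept signals funnel plot asymmetry; the paper's new tests replace the standard error with event-based precision estimates:

        import numpy as np
        import statsmodels.api as sm

        # Hypothetical log hazard ratios and their standard errors from five trials.
        yi  = np.array([-0.42, -0.31, -0.55, -0.12, -0.60])
        sei = np.array([ 0.10,  0.15,  0.25,  0.08,  0.30])

        # Egger's test: regress the standardized effect on precision;
        # the intercept estimates funnel plot asymmetry (H0: intercept = 0).
        X   = sm.add_constant(1.0 / sei)
        fit = sm.OLS(yi / sei, X).fit()
        print("intercept:", fit.params[0], "p-value:", fit.pvalues[0])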

  11. The Effect of Virus-Blocking Wolbachia on Male Competitiveness of the Dengue Vector Mosquito, Aedes aegypti

    PubMed Central

    Segoli, Michal; Hoffmann, Ary A.; Lloyd, Jane; Omodei, Gavin J.; Ritchie, Scott A.

    2014-01-01

    Background The bacterial endosymbiont Wolbachia blocks the transmission of dengue virus by its vector mosquito Aedes aegypti, and is currently being evaluated for control of dengue outbreaks. Wolbachia induces cytoplasmic incompatibility (CI) that results in the developmental failure of offspring in the cross between Wolbachia-infected males and uninfected females. This increases the relative success of infected females in the population, thereby enhancing the spread of the beneficial bacterium. However, Wolbachia spread via CI will only be feasible if infected males are sufficiently competitive in obtaining a mate under field conditions. We tested the effect of Wolbachia on the competitiveness of A. aegypti males under semi-field conditions. Methodology/Principal Findings In a series of experiments we exposed uninfected females to Wolbachia-infected and uninfected males simultaneously. We scored the competitiveness of infected males according to the proportion of females producing non-viable eggs due to incompatibility. We found that infected males were as successful as uninfected males in securing a mate within experimental tents and semi-field cages. This was true both for males infected with the benign wMel Wolbachia strain and for males infected with the virulent wMelPop (popcorn) strain. By manipulating male size we found that larger males had greater success than smaller underfed males in the semi-field cages, regardless of their infection status. Conclusions/Significance The results indicate that Wolbachia infection does not reduce the competitiveness of A. aegypti males. Moreover, the body size effect suggests a potential advantage for lab-reared Wolbachia-infected males during a field release episode, due to their better nutrition and larger size. This may promote Wolbachia spread via CI in wild mosquito populations and underscores its potential use for disease control. PMID:25502564

  12. The effect of virus-blocking Wolbachia on male competitiveness of the dengue vector mosquito, Aedes aegypti.

    PubMed

    Segoli, Michal; Hoffmann, Ary A; Lloyd, Jane; Omodei, Gavin J; Ritchie, Scott A

    2014-12-01

    The bacterial endosymbiont Wolbachia blocks the transmission of dengue virus by its vector mosquito Aedes aegypti, and is currently being evaluated for control of dengue outbreaks. Wolbachia induces cytoplasmic incompatibility (CI) that results in the developmental failure of offspring in the cross between Wolbachia-infected males and uninfected females. This increases the relative success of infected females in the population, thereby enhancing the spread of the beneficial bacterium. However, Wolbachia spread via CI will only be feasible if infected males are sufficiently competitive in obtaining a mate under field conditions. We tested the effect of Wolbachia on the competitiveness of A. aegypti males under semi-field conditions. In a series of experiments we exposed uninfected females to Wolbachia-infected and uninfected males simultaneously. We scored the competitiveness of infected males according to the proportion of females producing non-viable eggs due to incompatibility. We found that infected males were as successful as uninfected males in securing a mate within experimental tents and semi-field cages. This was true both for males infected with the benign wMel Wolbachia strain and for males infected with the virulent wMelPop (popcorn) strain. By manipulating male size we found that larger males had greater success than smaller underfed males in the semi-field cages, regardless of their infection status. The results indicate that Wolbachia infection does not reduce the competitiveness of A. aegypti males. Moreover, the body size effect suggests a potential advantage for lab-reared Wolbachia-infected males during a field release episode, due to their better nutrition and larger size. This may promote Wolbachia spread via CI in wild mosquito populations and underscores its potential use for disease control.

  13. Assessment of representational competence in kinematics

    NASA Astrophysics Data System (ADS)

    Klein, P.; Müller, A.; Kuhn, J.

    2017-06-01

    A two-tier instrument for representational competence in the field of kinematics (KiRC) is presented, designed for a standard (1st year) calculus-based introductory mechanics course. It comprises 11 multiple choice (MC) and 7 multiple true-false (MTF) questions involving multiple representational formats, such as graphs, pictures, and formal (mathematical) expressions (1st tier). Furthermore, students express their answer confidence for selected items, providing additional information (2nd tier). Measurement characteristics of KiRC were assessed in a validation sample (pre- and post-test, N = 83 and N = 46, respectively), including usefulness for measuring learning gain. Validity is checked by interviews and by benchmarking KiRC against related measures. Values for item difficulty, discrimination, and consistency are in the desired ranges; in particular, good reliability was obtained (KR-20 = 0.86). Confidence intervals were computed, and a replication study yielded values within these intervals. For practical and research purposes, KiRC as a diagnostic tool goes beyond related extant instruments both in the representational formats (e.g., mathematical expressions) and in the scope of content covered (e.g., choice of coordinate systems). Together with its satisfactory psychometric properties, it appears to be a versatile and reliable tool for assessing students' representational competence in kinematics (and its potential change). Confidence judgments add further information to the diagnostic potential of the test, in particular for representational misconceptions. Moreover, we present an analytic result for the question, arising from guessing correction or educational considerations, of how the total effect size (Cohen's d) varies upon combination of two test components with known individual effect sizes, and then discuss the results in the case of KiRC (MC and MTF combination). The introduced method of test combination analysis can be applied to any test comprising two components for the purpose of finding effect size ranges.
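
    The combination result referred to above can be illustrated numerically. The sketch below is not the authors' analytic formula but a simple special case: two components scored as a sum, with a mean shift of d times the component standard deviation on each, equal standard deviations across groups, and a between-component correlation r (all numbers hypothetical):

        import math

        def combined_d(d_a, d_b, sd_a, sd_b, r):
            """Cohen's d of a summed score T = A + B, assuming the mean shift
            on each component is d * sd and both groups share the same SDs."""
            shift = d_a * sd_a + d_b * sd_b
            sd_t  = math.sqrt(sd_a**2 + sd_b**2 + 2 * r * sd_a * sd_b)
            return shift / sd_t

        # Hypothetical MC and MTF components:
        print(combined_d(d_a=0.8, d_b=0.5, sd_a=2.0, sd_b=1.5, r=0.4))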

  14. Thine Own Self: True Self-Concept Accessibility and Meaning in Life

    PubMed Central

    Schlegel, Rebecca J.; Hicks, Joshua A.; Arndt, Jamie; King, Laura A.

    2016-01-01

    A number of philosophical and psychological theories suggest the true self is an important contributor to well-being. The present research examined whether the cognitive accessibility of the true self-concept would predict the experience of meaning in life. To ensure that any observed effects were due to the true self-concept rather than the self-concept more generally, we utilized actual self-concept accessibility as a control variable in all studies. True and actual self-concepts were defined as including those traits which are enacted around close others versus most others (Studies 1 through 3) or as traits that refer to “who you really are” vs. “who you are during most of your activities” (Studies 4 and 5), respectively. Studies 1 and 2 showed that individual differences in true self-concept accessibility, but not differences in actual self-concept accessibility, predicted meaning in life. Study 3 showed that priming traits related to the true self led to enhanced meaning in life. Studies 4 and 5 provided correlational and experimental support for the role of true self-concept accessibility in meaning in life, even when traits were defined without reference to social relationships and when state self-esteem and self-reported authenticity were controlled. Implications for the study of the true self-concept and authenticity are discussed. PMID:19159144

  15. Recovering the negative mode for type B Coleman-de Luccia instantons

    NASA Astrophysics Data System (ADS)

    Yang, I.-Sheng

    2013-04-01

    The usual (type A) thin-wall Coleman-de Luccia instanton is made of a bigger-than-half sphere of the false vacuum and a smaller-than-half sphere of the true vacuum. It has the standard O(4) symmetric negative mode associated with changing the size of the true vacuum region. On the other hand, the type B instanton, made of two smaller-than-half spheres, was believed to have lost this negative mode. We argue that such a belief is misguided due to an overrestriction on the Euclidean path integral. We introduce the idea of a “purely geometric junction” to visualize why such a restriction could be removed, and then we explicitly construct this negative mode. We also show that type B and type A instantons have the same thermal interpretation for mediating tunneling.

  16. Is Coefficient Alpha Robust to Non-Normal Data?

    PubMed Central

    Sheng, Yanyan; Sheng, Zhaohui

    2011-01-01

    Coefficient alpha has been a widely used measure by which internal consistency reliability is assessed. In addition to essential tau-equivalence and uncorrelated errors, normality has been noted as another important assumption for alpha. Earlier work on evaluating this assumption considered either exclusively non-normal error score distributions, or limited conditions. In view of this and the availability of advanced methods for generating univariate non-normal data, Monte Carlo simulations were conducted to show that non-normal distributions for true or error scores do create problems for using alpha to estimate the internal consistency reliability. The sample coefficient alpha is affected by leptokurtic true score distributions, or skewed and/or kurtotic error score distributions. Increasing the sample size, not the test length, helps improve the accuracy, bias, and precision of alpha estimates with non-normal data. PMID:22363306
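
    Coefficient alpha itself is straightforward to compute from a persons-by-items score matrix. A minimal sketch with simulated data (an arbitrary common-factor model, not the non-normal distributions studied above):

        import numpy as np

        def cronbach_alpha(scores):
            """scores: (n_persons, k_items) array of item scores."""
            k = scores.shape[1]
            item_vars = scores.var(axis=0, ddof=1)
            total_var = scores.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1 - item_vars.sum() / total_var)

        rng = np.random.default_rng(0)
        true_score = rng.normal(size=(200, 1))                    # common factor
        items = true_score + rng.normal(scale=1.0, size=(200, 10))  # 10 noisy items
        print(cronbach_alpha(items))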

  17. True Aneurysm of the Inferior Thyroid Artery: A Case Report and Literature Review.

    PubMed

    Venturini, Luigi; Sapienza, Paolo; Grande, Raffaele; Scarano Catanzaro, Valerio; Fanelli, Fabrizio; di Marzo, Luca

    2017-04-01

    Aneurysms of the inferior thyroid artery (ITA) are extremely rare and can cause severe sequelae. We report a case of a true ITA aneurysm in a 45-year-old Caucasian woman treated with endovascular embolization; the postoperative course was uneventful and, at 6-month follow-up, the aneurysm was completely thrombosed. A systematic review of the literature was also performed to identify the epidemiologic and clinical characteristics and the diagnostic and operative options for this disease. Size alone cannot predict the fate of the aneurysm, and aggressive treatment seems justified because of the high risk of complications in case of rupture. In an emergency setting, endovascular procedures combined with hematoma evacuation, or open surgery, should be performed rapidly to save the patient's life. Copyright © 2016 Elsevier Inc. All rights reserved.

  18. Modeling the TrueBeam linac using a CAD to Geant4 geometry implementation: Dose and IAEA-compliant phase space calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Constantin, Magdalena; Perl, Joseph; LoSasso, Tom

    2011-07-15

    Purpose: To create an accurate 6 MV Monte Carlo simulation phase space for the Varian TrueBeam treatment head geometry imported from CAD (computer-aided design) without adjusting the input electron phase space parameters. Methods: geant4 v4.9.2.p01 was employed to simulate the 6 MV beam treatment head geometry of the Varian TrueBeam linac. The electron tracks in the linear accelerator were simulated with Parmela, and the obtained electron phase space was used as an input to the Monte Carlo beam transport and dose calculations. The geometry components are tessellated solids included in geant4 as GDML (Geometry Description Markup Language) files obtained via STEP (Standard for the Exchange of Product model data) export from Pro/ENGINEER, followed by STEP import in Fastrad, a STEP-GDML converter. The linac has a compact treatment head, and the small space between the shielding collimator and the divergent arc of the upper jaws forbids the implementation of a plane for storing the phase space. Instead, an IAEA (International Atomic Energy Agency) compliant phase space writer was implemented on a cylindrical surface. The simulation was run in parallel on a 1200 node Linux cluster. The 6 MV dose calculations were performed for field sizes varying from 4 × 4 to 40 × 40 cm². The voxel size for the 60 × 60 × 40 cm³ water phantom was 4 × 4 × 4 mm³. For the 10 × 10 cm² field, surface buildup calculations were performed using 4 × 4 × 2 mm³ voxels within 20 mm of the surface. Results: For the depth dose curves, 98% of the calculated data points agree within 2% with the experimental measurements for depths between 2 and 40 cm. For depths between 5 and 30 cm, agreement within 1% is obtained for 99% (4 × 4), 95% (10 × 10), 94% (20 × 20 and 30 × 30), and 89% (40 × 40) of the data points, respectively. In the buildup region, the agreement is within 2%, except at 1 mm depth where the deviation is 5% for the 10 × 10 cm² open field. For the lateral dose profiles, within the field size for fields up to 30 × 30 cm², the agreement is within 2% for depths up to 10 cm. At 20 cm depth, the in-field maximum dose difference for the 30 × 30 cm² open field is within 4%, while the smaller field sizes agree within 2%. Outside the field size, agreement within 1% of the maximum dose difference is obtained for all fields. The calculated output factors varied from 0.938 ± 0.015 for the 4 × 4 cm² field to 1.088 ± 0.024 for the 40 × 40 cm² field. Their agreement with the experimental output factors is within 1%. Conclusions: The authors have validated a geant4 simulated IAEA-compliant phase space of the TrueBeam linac for the 6 MV beam obtained using a high accuracy geometry implementation from CAD. These files are publicly available and can be used for further research.

  19. Exercise is an effective treatment modality for reducing cancer-related fatigue and improving physical capacity in cancer patients and survivors: a meta-analysis.

    PubMed

    McMillan, Elliott M; Newhouse, Ian J

    2011-12-01

    The use of exercise interventions to manage cancer-related fatigue (CRF) is a rapidly developing field of study. However, results are inconsistent across the literature and difficult to interpret, making it hard to draw accurate conclusions regarding the true effectiveness of exercise interventions for CRF management. The aims of this study were to apply a meta-analysis to quantitatively assess the effects of exercise intervention strategies on CRF, and to elucidate appropriate exercise prescription guidelines. A systematic search of electronic databases and relevant journals and articles was conducted. Studies were eligible if subjects were over the age of 18 years, if they had been given a diagnosis of or had been treated for cancer, if exercise was used to treat CRF as a primary or secondary endpoint, and if the effects of the intervention were evaluated quantitatively and presented adequate statistical data for analysis. A total of 16 studies, representing 1426 participants (exercise, 759; control, 667), were included in a meta-analysis using a fixed-effects model. The standardized mean difference effect size (SMD) was used to test the effect of exercise on CRF between experimental and control groups. The results indicate a small but significant effect size in favour of the use of exercise interventions for reducing CRF (SMD 0.26, p < 0.001). Furthermore, aerobic exercise programs caused a significant reduction in CRF (SMD 0.21, p < 0.001), and overall, exercise significantly improved aerobic and musculoskeletal fitness compared with control groups (p < 0.01). Further investigation is still required to determine the effects of exercise on potential underlying mechanisms related to the pathophysiology of CRF.
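
    The fixed-effects pooling used here is inverse-variance weighting of the per-study SMDs. A sketch with hypothetical values:

        import numpy as np

        # Hypothetical per-study SMDs and their variances.
        smd = np.array([0.31, 0.18, 0.40, 0.22])
        var = np.array([0.02, 0.03, 0.05, 0.01])

        w      = 1.0 / var                     # inverse-variance weights
        pooled = np.sum(w * smd) / np.sum(w)   # fixed-effect estimate
        se     = np.sqrt(1.0 / np.sum(w))
        print(f"pooled SMD = {pooled:.3f} ± {1.96 * se:.3f}")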

  20. Comparing particle-size distributions in modern and ancient sand-bed rivers

    NASA Astrophysics Data System (ADS)

    Hajek, E. A.; Lynds, R. M.; Huzurbazar, S. V.

    2011-12-01

    Particle-size distributions yield valuable insight into processes controlling sediment supply, transport, and deposition in sedimentary systems. This is especially true in ancient deposits, where effects of changing boundary conditions and autogenic processes may be detected from deposited sediment. In order to improve interpretations in ancient deposits and constrain uncertainty associated with new methods for paleomorphodynamic reconstructions in ancient fluvial systems, we compare particle-size distributions in three active sand-bed rivers in central Nebraska (USA) to grain-size distributions from ancient sandy fluvial deposits. Within the modern rivers studied, particle-size distributions of active-layer, suspended-load, and slackwater deposits show consistent relationships despite some morphological and sediment-supply differences between the rivers. In particular, there is substantial and consistent overlap between bed-material and suspended-load distributions, and the coarsest material found in slackwater deposits is comparable to the coarse fraction of suspended-sediment samples. Proxy bed-load and slackwater-deposit samples from the Kayenta Formation (Lower Jurassic, Utah/Colorado, USA) show overlap similar to that seen in the modern rivers, suggesting that these deposits may be sampled for paleomorphodynamic reconstructions, including paleoslope estimation. We also compare grain-size distributions of channel, floodplain, and proximal-overbank deposits in the Willwood (Paleocene/Eocene, Bighorn Basin, Wyoming, USA), Wasatch (Paleocene/Eocene, Piceance Creek Basin, Colorado, USA), and Ferris (Cretaceous/Paleocene, Hanna Basin, Wyoming, USA) formations. Grain-size characteristics in these deposits reflect how suspended- and bed-load sediment is distributed across the floodplain during channel avulsion events. In order to constrain uncertainty inherent in such estimates, we evaluate uncertainty associated with sample collection, preparation, analytical particle-size analysis, and statistical characterization in both modern and ancient settings. We consider potential error contributions and evaluate the degree to which this uncertainty might be significant in modern sediment-transport studies and ancient paleomorphodynamic reconstructions.

  1. On the validity of within-nuclear-family genetic association analysis in samples of extended families.

    PubMed

    Bureau, Alexandre; Duchesne, Thierry

    2015-12-01

    Splitting extended families into their component nuclear families to apply a genetic association method designed for nuclear families is a widespread practice in familial genetic studies. Dependence among genotypes and phenotypes of nuclear families from the same extended family arises because of genetic linkage of the tested marker with a risk variant or because of familial specificity of genetic effects due to gene-environment interaction. This raises concerns about the validity of inference conducted under the assumption of independence of the nuclear families. We indeed prove theoretically that, in a conditional logistic regression analysis applicable to disease cases and their genotyped parents, the naive model-based estimator of the variance of the coefficient estimates underestimates the true variance. However, simulations with realistic effect sizes of risk variants and variation of this effect from family to family reveal that the underestimation is negligible. The simulations also show the greater efficiency of the model-based variance estimator compared to a robust empirical estimator. Our recommendation is therefore to use the model-based estimator of variance for inference on effects of genetic variants.
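
    The model-based versus robust variance contrast examined here can be reproduced with any regression package offering clustered (sandwich) standard errors. A sketch using an ordinary logistic model with family as the cluster unit (variable names hypothetical; the paper's actual setting is conditional logistic regression on case-parent data):

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(1)
        n = 400
        df = pd.DataFrame({
            "family":   np.repeat(np.arange(100), 4),   # 100 extended families
            "genotype": rng.integers(0, 3, size=n),     # 0/1/2 allele counts
            "affected": rng.integers(0, 2, size=n),     # disease status
        })

        model  = smf.logit("affected ~ genotype", data=df)
        naive  = model.fit(disp=False)                  # model-based variance
        robust = model.fit(disp=False, cov_type="cluster",
                           cov_kwds={"groups": df["family"]})
        print(naive.bse["genotype"], robust.bse["genotype"])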

  2. A rank test for bivariate time-to-event outcomes when one event is a surrogate

    PubMed Central

    Shaw, Pamela A.; Fay, Michael P.

    2016-01-01

    In many clinical settings, improving patient survival is of interest but a practical surrogate, such as time to disease progression, is instead used as a clinical trial’s primary endpoint. A time-to-first endpoint (e.g. death or disease progression) is commonly analyzed but may not be adequate to summarize patient outcomes if a subsequent event contains important additional information. We consider a surrogate outcome very generally, as one correlated with the true endpoint of interest. Settings of interest include those where the surrogate indicates a beneficial outcome so that the usual time-to-first endpoint of death or surrogate event is nonsensical. We present a new two-sample test for bivariate, interval-censored time-to-event data, where one endpoint is a surrogate for the second, less frequently observed endpoint of true interest. This test examines whether patient groups have equal clinical severity. If the true endpoint rarely occurs, the proposed test acts like a weighted logrank test on the surrogate; if it occurs for most individuals, then our test acts like a weighted logrank test on the true endpoint. If the surrogate is a useful statistical surrogate, our test can have better power than tests based on the surrogate that naively handle the true endpoint. In settings where the surrogate is not valid (treatment affects the surrogate but not the true endpoint), our test incorporates the information regarding the lack of treatment effect from the observed true endpoints and hence is expected to have a dampened treatment effect compared to tests based on the surrogate alone. PMID:27059817
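
    For orientation, a standard (unweighted) two-sample logrank test, of which the proposed procedure is a weighted generalization to bivariate interval-censored outcomes, can be run with the lifelines package. A sketch with fabricated right-censored times:

        from lifelines.statistics import logrank_test

        # Hypothetical event times for two arms (1 = event observed, 0 = censored).
        t_a, e_a = [5, 8, 12, 20, 21], [1, 1, 0, 1, 0]
        t_b, e_b = [4, 6, 9, 15, 18], [1, 1, 1, 1, 0]

        res = logrank_test(t_a, t_b, event_observed_A=e_a, event_observed_B=e_b)
        print(res.test_statistic, res.p_value)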

  3. Blinded evaluation of the effects of high definition and magnification on perceived image quality in laryngeal imaging.

    PubMed

    Otto, Kristen J; Hapner, Edie R; Baker, Michael; Johns, Michael M

    2006-02-01

    Advances in commercial video technology have improved office-based laryngeal imaging. This study investigates the perceived image quality of a true high-definition (HD) video camera and the effect of magnification on laryngeal videostroboscopy. We performed a prospective, dual-armed, single-blinded analysis of a standard laryngeal videostroboscopic examination comparing 3 separate add-on camera systems: a 1-chip charge-coupled device (CCD) camera, a 3-chip CCD camera, and a true 720p (progressive scan) HD camera. Displayed images were controlled for magnification and image size (20-inch [50-cm] display, red-green-blue, and S-video cable for 1-chip and 3-chip cameras; digital visual interface cable and HD monitor for HD camera). Ten blinded observers were then asked to rate the following 5 items on a 0-to-100 visual analog scale: resolution, color, ability to see vocal fold vibration, sense of depth perception, and clarity of blood vessels. Eight unblinded observers were then asked to rate the difference in perceived resolution and clarity of laryngeal examination images when displayed on a 10-inch (25-cm) monitor versus a 42-inch (105-cm) monitor. A visual analog scale was used. These monitors were controlled for actual resolution capacity. For each item evaluated, randomized block design analysis demonstrated that the 3-chip camera scored significantly better than the 1-chip camera (p < .05). For the categories of color and blood vessel discrimination, the 3-chip camera scored significantly better than the HD camera (p < .05). For magnification alone, observers rated the 42-inch monitor statistically better than the 10-inch monitor. The expense of new medical technology must be judged against its added value. This study suggests that HD laryngeal imaging may not add significant value over currently available video systems, in perceived image quality, when a small monitor is used. Although differences in clarity between standard and HD cameras may not be readily apparent on small displays, a large display size coupled with HD technology may impart improved diagnosis of subtle vocal fold lesions and vibratory anomalies.

  4. Effectiveness of Computer-Aided Detection in Community Mammography Practice

    PubMed Central

    Abraham, Linn; Taplin, Stephen H.; Geller, Berta M.; Carney, Patricia A.; D’Orsi, Carl; Elmore, Joann G.; Barlow, William E.

    2011-01-01

    Background Computer-aided detection (CAD) is applied during screening mammography for millions of US women annually, although it is uncertain whether CAD improves breast cancer detection when used by community radiologists. Methods We investigated the association between CAD use during film-screen screening mammography and specificity, sensitivity, positive predictive value, cancer detection rates, and prognostic characteristics of breast cancers (stage, size, and node involvement). Records from 684,956 women who received more than 1.6 million film-screen mammograms at Breast Cancer Surveillance Consortium facilities in seven states in the United States from 1998 to 2006 were analyzed. We used random-effects logistic regression to estimate associations between CAD and specificity (true-negative examinations among women without breast cancer), sensitivity (true-positive examinations among women with breast cancer diagnosed within 1 year of mammography), and positive predictive value (breast cancer diagnosed after positive mammograms) while adjusting for mammography registry, patient age, time since previous mammography, breast density, use of hormone replacement therapy, and year of examination (1998–2002 vs 2003–2006). All statistical tests were two-sided. Results Of 90 total facilities, 25 (27.8%) adopted CAD and used it for an average of 27.5 study months. In adjusted analyses, CAD use was associated with statistically significantly lower specificity (OR = 0.87, 95% confidence interval [CI] = 0.85 to 0.89, P < .001) and positive predictive value (OR = 0.89, 95% CI = 0.80 to 0.99, P = .03). A non-statistically significant increase in overall sensitivity with CAD (OR = 1.06, 95% CI = 0.84 to 1.33, P = .62) was attributed to increased sensitivity for ductal carcinoma in situ (OR = 1.55, 95% CI = 0.83 to 2.91; P = .17), although sensitivity for invasive cancer was similar with or without CAD (OR = 0.96, 95% CI = 0.75 to 1.24; P = .77). CAD was not associated with higher breast cancer detection rates or more favorable stage, size, or lymph node status of invasive breast cancer. Conclusion CAD use during film-screen screening mammography in the United States is associated with decreased specificity but not with improvement in the detection rate or prognostic characteristics of invasive breast cancer. PMID:21795668
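
    The accuracy measures compared in this study reduce to simple confusion-matrix ratios. A sketch with hypothetical screening counts:

        # Hypothetical screening counts: true/false positives and negatives.
        tp, fp, fn, tn = 420, 9_800, 80, 89_700

        sensitivity = tp / (tp + fn)   # true positives among cancers
        specificity = tn / (tn + fp)   # true negatives among non-cancers
        ppv         = tp / (tp + fp)   # cancers among positive exams
        print(sensitivity, specificity, ppv)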

  5. Assessment of predictive performance in incomplete data by combining internal validation and multiple imputation.

    PubMed

    Wahl, Simone; Boulesteix, Anne-Laure; Zierer, Astrid; Thorand, Barbara; van de Wiel, Mark A

    2016-10-26

    Missing values are a frequent issue in human studies. In many situations, multiple imputation (MI) is an appropriate missing data handling strategy, whereby missing values are imputed multiple times, the analysis is performed in every imputed data set, and the obtained estimates are pooled. If the aim is to estimate (added) predictive performance measures, such as (change in) the area under the receiver-operating characteristic curve (AUC), internal validation strategies become desirable in order to correct for optimism. It is not fully understood how internal validation should be combined with multiple imputation. In a comprehensive simulation study and in a real data set based on blood markers as predictors for mortality, we compare three combination strategies: Val-MI, internal validation followed by MI on the training and test parts separately; MI-Val, MI on the full data set followed by internal validation; and MI(-y)-Val, MI on the full data set omitting the outcome, followed by internal validation. Different validation strategies, including bootstrap and cross-validation, different (added) performance measures, and various data characteristics are considered, and the strategies are evaluated with regard to bias and mean squared error of the obtained performance estimates. In addition, we elaborate on the number of resamples and imputations to be used, and adapt a strategy for confidence interval construction to incomplete data. Internal validation is essential in order to avoid optimism, with the bootstrap 0.632+ estimate representing a reliable method to correct for optimism. While estimates obtained by MI-Val are optimistically biased, those obtained by MI(-y)-Val tend to be pessimistic in the presence of a true underlying effect. Val-MI provides largely unbiased estimates, with a slight pessimistic bias as the true effect size and the number of covariates increase and the sample size decreases. In Val-MI, the accuracy of the estimate is more strongly improved by increasing the number of bootstrap draws than the number of imputations. With a simple integrated approach, valid confidence intervals for performance estimates can be obtained. When prognostic models are developed on incomplete data, Val-MI represents a valid strategy to obtain estimates of predictive performance measures.
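
    The ordering that Val-MI prescribes, resample first and impute within each part, can be sketched as follows. This assumes scikit-learn's iterative imputer stands in for the paper's MI procedure, shows a single imputation for brevity, and uses hypothetical names:

        import numpy as np
        from sklearn.experimental import enable_iterative_imputer  # noqa: F401
        from sklearn.impute import IterativeImputer
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        def val_mi_auc(X, y, rng):
            """One bootstrap iteration of the Val-MI ordering; X may contain np.nan."""
            # 1) Resample FIRST: bootstrap training set, out-of-bag test set.
            n = len(y)
            boot = rng.integers(0, n, size=n)
            oob  = np.setdiff1d(np.arange(n), boot)
            # 2) Impute the parts separately: fit on training, apply to test,
            #    so no outcome or test information leaks into the training part.
            imp  = IterativeImputer(random_state=0)
            X_tr = imp.fit_transform(X[boot])
            X_te = imp.transform(X[oob])
            # 3) Fit and evaluate on the out-of-bag samples.
            clf = LogisticRegression(max_iter=1000).fit(X_tr, y[boot])
            return roc_auc_score(y[oob], clf.predict_proba(X_te)[:, 1])

        # Usage: val_mi_auc(X, y, np.random.default_rng(0)); average over many draws.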

  6. Experimental quantification of the true efficiency of carbon nanotube thin-film thermophones.

    PubMed

    Bouman, Troy M; Barnard, Andrew R; Asgarisabet, Mahsa

    2016-03-01

    Carbon nanotube thermophones can create acoustic waves from 1 Hz to 100 kHz. The thermoacoustic effect that allows for this non-vibrating sound source is naturally inefficient. Prior efforts have not explored their true efficiency (i.e., the ratio of the total acoustic power to the electrical input power). All previous works have used the ratio of sound pressure to input electrical power. A method for true power efficiency measurement is shown using a fully anechoic technique. True efficiency data are presented for three different drive signal processing techniques: standard alternating current (AC), direct current added to alternating current (DCAC), and amplitude modulation of an alternating current (AMAC) signal. These signal processing techniques are needed to limit the frequency-doubling non-linear effects inherent to carbon nanotube thermophones. Each type of processing affects the true efficiency differently. Using a 72 W (rms) input signal, the measured efficiency ranges were 4.3 × 10⁻⁶ to 319 × 10⁻⁶%, 1.7 × 10⁻⁶ to 308 × 10⁻⁶%, and 1.2 × 10⁻⁶ to 228 × 10⁻⁶% for AC, DCAC, and AMAC, respectively. These data were measured in the frequency range of 100 Hz to 10 kHz. In addition, the effects of these processing techniques on sound quality are presented in terms of total harmonic distortion.

  7. [Clinicopathological characterization of true hermaphroditism complicated with seminoma and review of the literature].

    PubMed

    Hua, Xing; Liu, Shao-Jie; Lu, Lin; Li, Chao-Xia; Yu, Li-Na

    2012-08-01

    To study the clinicopathological characteristics and diagnosis of true hermaphroditism complicated with seminoma, we retrospectively analyzed the clinicopathological data of a case of true hermaphroditism complicated with seminoma and reviewed the related literature. The patient was a 42-year-old male, admitted for bilateral lower back pain and discomfort. CT showed a huge mass in the lower middle abdomen. Gross pathological examination revealed a mass of uterine tissue, 7 cm × 2 cm × 6 cm in size, with bilateral oviducts and ovarian tissue. There was a cryptorchid testis (4.0 cm × 2.5 cm × 1.5 cm) on the left and a huge tumor (22 cm × 9 cm × 6 cm) on the right of the uterine tissue. The tumor was completely encapsulated, with some testicular tissue. Microscopically, the tumor tissue was arranged in nests or sheets divided and surrounded by fibrous tissue. The tumor cells were large, with abundant and transparent cytoplasm, deeply stained nuclei, coarse granular chromatin, visible mitoses, and infiltration of a small number of lymphocytes in the stroma. The karyotype was 46,XX. Immunohistochemistry showed that PLAP and CD117 were positive, while AFP, vimentin, EMA, S100, CK-LMW, desmin, CD34, and CD30 were negative, and Ki-67 was 20% positive. A small amount of residual normal testicular tissue was seen in the tumor tissue. True hermaphroditism complicated with seminoma is rare. Histopathological analysis combined with immunohistochemical detection is of great value for its diagnosis and differential diagnosis.

  8. Impact of QTL minor allele frequency on genomic evaluation using real genotype data and simulated phenotypes in Japanese Black cattle.

    PubMed

    Uemoto, Yoshinobu; Sasaki, Shinji; Kojima, Takatoshi; Sugimoto, Yoshikazu; Watanabe, Toshio

    2015-11-19

    Genetic variance that is not captured by single nucleotide polymorphisms (SNPs) is due to imperfect linkage disequilibrium (LD) between SNPs and quantitative trait loci (QTLs), and the extent of LD between SNPs and QTLs depends on differences in minor allele frequency (MAF) between them. To evaluate the impact of the MAF of QTLs on genomic evaluation, we performed a simulation study using real cattle genotype data. In total, 1368 Japanese Black cattle and 592,034 SNPs (Illumina BovineHD BeadChip) were used. We simulated phenotypes using real genotypes under different scenarios, varying the MAF categories, QTL heritability, number of QTLs, and distribution of QTL effects. After generating true breeding values and phenotypes, QTL heritability was estimated and the prediction accuracy of genomic estimated breeding values (GEBV) was assessed under different SNP densities, prediction models, and population sizes by a reference-test validation design. The extent of LD between SNPs and QTLs in this population was higher for QTLs with high MAF than for those with low MAF. The effect of the MAF of QTLs on genomic evaluation depended on the genetic architecture, evaluation strategy, and population size. In terms of genetic architecture, genomic evaluation was affected by the MAF of QTLs in combination with the QTL heritability and the distribution of QTL effects. The number of QTLs did not affect genomic evaluation when it exceeded 50. In terms of evaluation strategy, we showed that different SNP densities and prediction models affect heritability estimation and genomic prediction, and that this depends on the MAF of QTLs. In addition, accurate QTL heritability and GEBV were obtained using denser SNP information and a prediction model that accounted for SNPs with both low and high MAFs. In terms of population size, a large sample is needed to increase the accuracy of GEBV. The MAF of QTLs had an impact on heritability estimation and prediction accuracy. Most genetic variance can be captured using denser SNPs and a prediction model that accounts for MAF, but a large sample size is needed to increase the accuracy of GEBV across all QTL MAF categories.
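
    Genomic prediction of the kind evaluated here is often approximated by ridge regression on the SNP matrix (an RR-BLUP-style model). A minimal sketch with simulated genotypes, not the BovineHD data; all sizes and the penalty are hypothetical:

        import numpy as np
        from sklearn.linear_model import Ridge

        rng = np.random.default_rng(42)
        n, p, n_qtl = 500, 2000, 50
        X = rng.integers(0, 3, size=(n, p)).astype(float)   # SNP genotypes 0/1/2
        beta = np.zeros(p)
        qtl = rng.choice(p, n_qtl, replace=False)
        beta[qtl] = rng.normal(size=n_qtl)                  # true QTL effects
        y = X @ beta + rng.normal(scale=2.0, size=n)        # simulated phenotypes

        # Reference-test split: train on 400 animals, validate GEBV on 100.
        train, test = np.arange(400), np.arange(400, 500)
        gebv = Ridge(alpha=100.0).fit(X[train], y[train]).predict(X[test])
        print("prediction accuracy:", np.corrcoef(gebv, y[test])[0, 1])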

  9. Meta-analysis: aerobic exercise for the treatment of anxiety disorders.

    PubMed

    Bartley, Christine A; Hay, Madeleine; Bloch, Michael H

    2013-08-01

    This meta-analysis investigates the efficacy of exercise as a treatment for DSM-IV diagnosed anxiety disorders. We searched PubMED and PsycINFO for randomized, controlled trials comparing the anxiolytic effects of aerobic exercise to other treatment conditions for DSM-IV defined anxiety disorders. Seven trials were included in the final analysis, totaling 407 subjects. The control conditions included non-aerobic exercise, waitlist/placebo, cognitive-behavioral therapy, psychoeducation, and meditation. A fixed-effects model was used to calculate the standardized mean difference of change in anxiety rating scale scores of aerobic exercise compared to control conditions. Subgroup analyses were performed to examine the effects of (1) comparison condition; (2) whether the comparison condition controlled for time spent exercising; and (3) diagnostic indication. Aerobic exercise demonstrated no significant effect for the treatment of anxiety disorders (SMD = 0.02, 95% CI: -0.20 to 0.24; z = 0.2, p = 0.85). There was significant heterogeneity between trials (χ² test for heterogeneity = 22.7, df = 6, p = 0.001). The reported effect size of aerobic exercise was highly influenced by the type of control condition. Trials utilizing waitlist/placebo controls and trials that did not control for exercise time reported large effects of aerobic exercise, while other trials reported no effect. Current evidence does not support the use of aerobic exercise as an effective treatment for anxiety disorders as compared to the control conditions. This remains true when controlling for length of exercise sessions and type of anxiety disorder. Future studies evaluating the efficacy of aerobic exercise should employ larger sample sizes and utilize comparison interventions that control for exercise time. Copyright © 2013. Published by Elsevier Inc.
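
    The heterogeneity statistic quoted above is Cochran's Q, computed around the fixed-effect pooled estimate. A sketch with hypothetical trial data:

        import numpy as np
        from scipy import stats

        # Hypothetical per-trial SMDs and variances (seven trials).
        smd = np.array([0.90, 0.75, -0.05, 0.02, -0.10, 0.15, 0.05])
        var = np.array([0.10, 0.12, 0.05, 0.04, 0.06, 0.08, 0.05])

        w = 1.0 / var
        pooled = np.sum(w * smd) / np.sum(w)
        Q = np.sum(w * (smd - pooled) ** 2)       # Cochran's Q
        p = stats.chi2.sf(Q, df=len(smd) - 1)     # heterogeneity p-value
        print(f"Q = {Q:.1f}, p = {p:.3g}")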

  10. Assessment of two different types of bias affecting the results of outcome-based evaluation in undergraduate medical education.

    PubMed

    Schiekirka, Sarah; Anders, Sven; Raupach, Tobias

    2014-07-21

    Estimating learning outcome from comparative student self-ratings is a reliable and valid method to identify specific strengths and shortcomings in undergraduate medical curricula. However, requiring students to complete two evaluation forms (i.e. one before and one after teaching) might adversely affect response rates. Alternatively, students could be asked to rate their initial performance level retrospectively. This approach might threaten the validity of results due to response shift or effort justification bias. Two consecutive cohorts of medical students enrolled in a six-week cardio-respiratory module were enrolled in this study. In both cohorts, performance gain was estimated for 33 specific learning objectives. In the first cohort, outcomes calculated from ratings provided before (pretest) and after (posttest) teaching were compared to outcomes derived from comparative self-ratings collected after teaching only (thentest and posttest). In the second cohort, only thentests and posttests were used to calculate outcomes, but data collection tools differed with regard to item presentation. In one group, thentest and posttest ratings were obtained sequentially on separate forms while in the other, both ratings were obtained simultaneously for each learning objective. Using thentest ratings to calculate performance gain produced slightly higher values than using true pretest ratings. Direct comparison of then- and posttest ratings also yielded slightly higher performance gain than sequential ratings, but this effect was negligibly small. Given the small effect sizes, using thentests appears to be equivalent to using true pretest ratings. Item presentation in the posttest does not significantly impact on results.

  11. Assessment of two different types of bias affecting the results of outcome-based evaluation in undergraduate medical education

    PubMed Central

    2014-01-01

    Background Estimating learning outcome from comparative student self-ratings is a reliable and valid method to identify specific strengths and shortcomings in undergraduate medical curricula. However, requiring students to complete two evaluation forms (i.e. one before and one after teaching) might adversely affect response rates. Alternatively, students could be asked to rate their initial performance level retrospectively. This approach might threaten the validity of results due to response shift or effort justification bias. Methods Two consecutive cohorts of medical students enrolled in a six-week cardio-respiratory module were enrolled in this study. In both cohorts, performance gain was estimated for 33 specific learning objectives. In the first cohort, outcomes calculated from ratings provided before (pretest) and after (posttest) teaching were compared to outcomes derived from comparative self-ratings collected after teaching only (thentest and posttest). In the second cohort, only thentests and posttests were used to calculate outcomes, but data collection tools differed with regard to item presentation. In one group, thentest and posttest ratings were obtained sequentially on separate forms while in the other, both ratings were obtained simultaneously for each learning objective. Results Using thentest ratings to calculate performance gain produced slightly higher values than using true pretest ratings. Direct comparison of then- and posttest ratings also yielded slightly higher performance gain than sequential ratings, but this effect was negligibly small. Conclusions Given the small effect sizes, using thentests appears to be equivalent to using true pretest ratings. Item presentation in the posttest does not significantly impact on results. PMID:25043503

  12. INVESTIGATION OF PARTIAL VOLUME EFFECT IN DIFFERENT PET/CT SYSTEMS: A COMPARISON OF RESULTS USING THE MADEIRA PHANTOM AND THE NEMA NU-2 2001 PHANTOM.

    PubMed

    Chipiga, L; Sydoff, M; Zvonova, I; Bernhardsson, C

    2016-06-01

    Positron emission tomography combined with computed tomography (PET/CT) is a quantitative technique used for diagnosing various diseases and for monitoring treatment response for different types of tumours. However, the accuracy of the data is limited by the spatial resolution of the system. In addition, the so-called partial volume effect (PVE) causes a blurring of image structures, which in turn may cause an underestimation of the activity of a structure with high-activity content. In this study, a new phantom, MADEIRA (Minimising Activity and Dose with Enhanced Image quality by Radiopharmaceutical Administrations), for activity quantification in PET and single photon emission computed tomography (SPECT), was used to investigate the influence of lesion size and tumour-to-background activity concentration ratio (TBR) on the PVE in four different PET/CT systems. These measurements were compared with data from measurements with the NEMA NU-2 2001 phantom. The results with the MADEIRA phantom showed that the activity concentration (AC) values were closest to the true values at low TBRs (<10) and fell to 50% of the actual AC values at high TBRs (30-35). For all scanners, the recovery of true values approached 1 with increasing lesion diameter. The MADEIRA phantom showed good agreement with the results obtained from measurements with the NEMA NU-2 2001 phantom but allows for a wider range of possibilities in measuring image quality parameters. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
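
    PVE results of this kind are usually summarized as recovery coefficients, the ratio of the measured to the true activity concentration. A sketch with hypothetical lesion measurements:

        # Hypothetical measured activity concentrations (kBq/mL) for lesions
        # of increasing diameter (mm), against a known true concentration.
        diameters = [10, 13, 17, 22, 28, 37]
        measured  = [12.1, 15.8, 19.5, 22.9, 25.0, 26.2]
        true_ac   = 27.0

        for d, m in zip(diameters, measured):
            print(f"{d:2d} mm lesion: recovery coefficient = {m / true_ac:.2f}")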

  13. Sentences with core knowledge violations increase the size of N400 among paranormal believers.

    PubMed

    Lindeman, Marjaana; Cederström, Sebastian; Simola, Petteri; Simula, Anni; Ollikainen, Sara; Riekki, Tapani

    2008-01-01

    A major problem in research on paranormal beliefs is that the concept of "paranormality" remains to be adequately defined. The aim of this study was to empirically justify the following definition: paranormal beliefs are beliefs in physical, biological, or psychological phenomena that contain core ontological attributes of one of the other two categories [e.g., a stone (physical) having thoughts (psychological)]. We hypothesized that individuals who believe in paranormal phenomena are slower than skeptics in judging whether sentences with core knowledge violations are literally true, and that this difference would be reflected in a more negative N400. Ten believers and ten skeptics (six men, age range 23-49) participated in the study. Event-related potentials (N400) were recorded as the participants read 210 three-word Finnish sentences, of which 70 were normal ("The house has a history"), 70 were anomalies ("The house writes its history"), and 70 included violations of core knowledge ("The house knows its history"). The participants were presented with a question that contextualized the sentences: "Is this sentence literally true?" While the N400 effects were similar for normal and anomalous sentences among the believers and the skeptics, a more negative N400 effect was found among the believers than among the skeptics for sentences with core knowledge violations. The results support the new definition of "paranormality", because participants who believed in paranormal phenomena appeared to find it more difficult than the skeptics to construct a reasonable interpretation of the sentences with core knowledge violations, as indicated by the N400.

  14. Detection of circulating microparticles by flow cytometry: influence of centrifugation, filtration of buffer, and freezing

    PubMed Central

    Dey-Hazra, Emily; Hertel, Barbara; Kirsch, Torsten; Woywodt, Alexander; Lovric, Svjetlana; Haller, Hermann; Haubitz, Marion; Erdbruegger, Uta

    2010-01-01

    The clinical importance of microparticles resulting from vesiculation of platelets and other blood cells is increasingly recognized, although no standardized method exists for their measurement. Only a few studies have examined the analytical and preanalytical steps and variables affecting microparticle detection. We focused our analysis on microparticle detection by flow cytometry. The goal of our study was to analyze the effects of different centrifugation protocols, looking at different durations of high and low centrifugation speeds. We also analyzed the effect of filtration of buffer and long-term freezing on microparticle quantification, as well as the role of Annexin V in the detection of microparticles. Total and platelet-derived microparticle counts were 10- to 15-fold higher with an initial low centrifugation speed of 1500 × g than with protocols using a centrifugation speed of 5000 × g (P < 0.01). A clear separation between true events and background noise was only achieved using higher centrifugation speeds. Filtration of buffer with a 0.2 μm filter removed a significant amount of background noise. Storing samples for microparticle detection at −80°C decreased microparticle levels at days 28, 42, and 56 (P < 0.05 for all comparisons with fresh samples). We believe that staining with Annexin V is necessary to distinguish true events from cell debris or precipitates. Buffers should be filtered and fresh samples should be analyzed, or storage periods will have to be standardized. Higher centrifugation speeds should be used to minimize contamination by smaller size platelets. PMID:21191433

  15. Mean-field behavior as a result of noisy local dynamics in self-organized criticality: Neuroscience implications

    NASA Astrophysics Data System (ADS)

    Moosavi, S. Amin; Montakhab, Afshin

    2014-05-01

    Motivated by recent experiments in neuroscience which indicate that neuronal avalanches exhibit scale invariant behavior similar to self-organized critical systems, we study the role of noisy (nonconservative) local dynamics on the critical behavior of a sandpile model which can be taken to mimic the dynamics of neuronal avalanches. We find that despite the fact that noise breaks the strict local conservation required to attain criticality, our system exhibits true criticality for a wide range of noise in various dimensions, given that conservation is respected on average. Although the system remains critical, exhibiting finite-size scaling, the values of the critical exponents change depending on the intensity of the local noise. Interestingly, for a sufficiently strong noise level, the critical exponents approach and saturate at their mean-field values, consistent with empirical measurements of neuronal avalanches. This is confirmed for both two- and three-dimensional models. However, the addition of noise does not affect the exponents at the upper critical dimension (D = 4). In addition to an extensive finite-size scaling analysis of our systems, we also employ a useful time-series analysis method to establish the true criticality of noisy systems. Finally, we discuss the implications of our work in neuroscience as well as some implications for the general phenomena of criticality in nonequilibrium systems.

  16. RF beam transmission of x-band PAA system utilizing large-area, polymer-based true-time-delay module developed using imprinting and inkjet printing

    NASA Astrophysics Data System (ADS)

    Pan, Zeyu; Subbaraman, Harish; Zhang, Cheng; Li, Qiaochu; Xu, Xiaochuan; Chen, Xiangning; Zhang, Xingyu; Zou, Yi; Panday, Ashwin; Guo, L. Jay; Chen, Ray T.

    2016-02-01

    Phased-array antenna (PAA) technology plays a significant role in modern day radar and communication networks. True-time-delay (TTD) enabled beam steering networks provide several advantages over their electronic counterparts, including squint-free beam steering, low RF loss, immunity to electromagnetic interference (EMI), and large bandwidth control of PAAs. Chip-scale and integrated TTD modules promise a miniaturized, light-weight system; however, the modules are still rigid and they require complex packaging solutions. Moreover, the total achievable time delay is still restricted by the wafer size. In this work, we propose a light-weight and large-area, true-time-delay beamforming network that can be fabricated on light-weight and flexible/rigid surfaces utilizing low-cost "printing" techniques. In order to prove the feasibility of the approach, a 2-bit thermo-optic polymer TTD network is developed using a combination of imprinting and ink-jet printing. RF beam steering of a 1×4 X-band PAA up to 60° is demonstrated. The development of such active components on large-area, light-weight, and low-cost substrates promises significant improvement in size, weight, and power (SWaP) requirements over the state-of-the-art.
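
    The squint-free steering that TTD enables follows from a simple geometric relation: for element spacing d and inter-element delay Δt, the beam angle is θ = arcsin(c·Δt/d), independent of frequency. A sketch for a uniform linear X-band array; the spacing and delay states are hypothetical, chosen so the largest delay gives the 60° reported above:

        import math

        c = 3.0e8    # speed of light, m/s
        d = 0.015    # element spacing, m (hypothetical, ~half wavelength at 10 GHz)

        def steering_angle_deg(delta_t):
            """Beam angle for a uniform linear array with per-element delay delta_t."""
            return math.degrees(math.asin(c * delta_t / d))

        # 2-bit TTD: four selectable delay states (hypothetical values).
        for dt in [0.0, 12.5e-12, 25.0e-12, 43.3e-12]:
            print(f"delay {dt * 1e12:5.1f} ps -> {steering_angle_deg(dt):5.1f} deg")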

  17. ARCHITECTURE AND DYNAMICS OF KEPLER'S CANDIDATE MULTIPLE TRANSITING PLANET SYSTEMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lissauer, Jack J.; Jenkins, Jon M.; Borucki, William J.

    About one-third of the ~1200 transiting planet candidates detected in the first four months of Kepler data are members of multiple candidate systems. There are 115 target stars with two candidate transiting planets, 45 with three, 8 with four, and 1 each with five and six. We characterize the dynamical properties of these candidate multi-planet systems. The distribution of observed period ratios shows that the vast majority of candidate pairs are neither in nor near low-order mean-motion resonances. Nonetheless, there are small but statistically significant excesses of candidate pairs both in resonance and spaced slightly too far apart to be in resonance, particularly near the 2:1 resonance. We find that virtually all candidate systems are stable, as tested by numerical integrations that assume a nominal mass-radius relationship. Several considerations strongly suggest that the vast majority of these multi-candidate systems are true planetary systems. Using the observed multiplicity frequencies, we find that a single population of planetary systems that matches the higher multiplicities underpredicts the number of singly transiting systems. We provide constraints on the true multiplicity and mutual inclination distribution of the multi-candidate systems, revealing a population of systems with multiple super-Earth-size and Neptune-size planets with low to moderate mutual inclinations.

  18. The dynamic interplay between perceived true self-knowledge and decision satisfaction.

    PubMed

    Schlegel, Rebecca J; Hicks, Joshua A; Davis, William E; Hirsch, Kelly A; Smith, Christina M

    2013-03-01

    The present research used multiple methods to examine the hypothesis that perceived true self-knowledge and decision satisfaction are inextricably linked together by a widely held "true-self-as-guide" lay theory of decision making. Consistent with this proposition, Study 1 found that participants rated using the true self as a guide as more important for achieving personal satisfaction than a variety of other potential decision-making strategies. After establishing the prevalence of this lay theory, the remaining studies then focused on examining the proposed consequent relationship between perceived true self-knowledge and decision satisfaction. Consistent with hypotheses, 2 cross-sectional correlational studies (Studies 2 and 3) found a positive relationship between perceived true self-knowledge and decision satisfaction for different types of major decisions. Study 4 used daily diary methods to demonstrate that fluctuations in perceived true self-knowledge reliably covary with fluctuations in decision satisfaction. Finally, 2 studies directly examined the causal direction of this relationship through experimental manipulation and revealed that the relationship is truly bidirectional. More specifically, Study 5 showed that manipulating perceived knowledge of the true self (but not other self-concepts) directly affects decision satisfaction. Study 6 showed that this effect also works in reverse by manipulating feelings of decision satisfaction, which directly affected perceived knowledge of the true self (but not other self-concepts). Taken together, these studies suggest that people believe the true self should be used as a guide when making major life decisions and that this belief has observable consequences for the self and decision making. PsycINFO Database Record (c) 2013 APA, all rights reserved

  19. Combining techniques for screening and evaluating interaction terms on high-dimensional time-to-event data.

    PubMed

    Sariyar, Murat; Hoffmann, Isabell; Binder, Harald

    2014-02-26

    Molecular data, e.g. arising from microarray technology, are often used for predicting survival probabilities of patients. For multivariate risk prediction models on such high-dimensional data, there are established techniques that combine parameter estimation and variable selection. One big challenge is to incorporate interactions into such prediction models. In this feasibility study, we present building blocks for evaluating and incorporating interaction terms in high-dimensional time-to-event settings, especially for settings in which it is computationally too expensive to check all possible interactions. We use a boosting technique for estimation of effects and the following building blocks for pre-selecting interactions: (1) resampling, (2) random forests, and (3) orthogonalization as a data pre-processing step. In a simulation study, the strategy that uses all building blocks is able to detect true main effects and interactions with high sensitivity in different kinds of scenarios. The main challenge is interactions composed of variables that do not represent main effects, but our findings are also promising in this regard. Results on real-world data illustrate that effect sizes of interactions frequently may not be large enough to improve prediction performance, even though the interactions are potentially of biological relevance. Screening interactions through random forests is feasible and useful when one is interested in finding relevant two-way interactions. The other building blocks also contribute considerably to an enhanced pre-selection of interactions. We determined the limits of interaction detection in terms of necessary effect sizes. Our study emphasizes the importance of making full use of existing methods in addition to establishing new ones.
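
    Pre-selecting interactions with random forests, one of the building blocks above, can be sketched by ranking pairwise product features by importance before passing the survivors to a boosting model. This is an illustrative regression stand-in for the paper's time-to-event setting, with hypothetical data:

        import numpy as np
        from itertools import combinations
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(7)
        n, p = 300, 20
        X = rng.normal(size=(n, p))
        # Simulated outcome with one true interaction (features 0 and 1).
        y = X[:, 0] * X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=n)

        # Build all pairwise product features and rank them with a random forest.
        pairs = list(combinations(range(p), 2))
        Z = np.column_stack([X[:, i] * X[:, j] for i, j in pairs])
        rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(Z, y)
        top = np.argsort(rf.feature_importances_)[::-1][:5]
        print("top candidate interactions:", [pairs[k] for k in top])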

  20. Effects of enzyme supplementation on the nutrient, amino acid, and energy utilization efficiency of citrus pulp and hawthorn pulp in Linwu ducks.

    PubMed

    Zhang, Xu; Li, Haobang; Jiang, Guitao; Wang, Xiangrong; Huang, Xuan; Li, Chuang; Wu, Duanqin; Dai, Qiuzhong

    2018-04-11

    The objective of this study was to evaluate the effects of enzyme supplementation on the nutrient, amino acid, and energy utilization efficiency of citrus pulp and hawthorn pulp as unusual feedstuffs in Linwu ducks. Forty ducks were assigned to each treatment group and fed diets with or without complex enzyme supplementation. All birds received the same quantity of raw material (60 g) via the force-feeding procedure. With the exception of leucine and phenylalanine, amino acid concentrations in hawthorn pulp were twice those in citrus pulp. Enzyme supplementation significantly increased apparent dry matter digestibility (ADM) of citrus pulp (P < 0.05), but had no significant effects (P > 0.05) on the apparent and true utilization rates of other nutrients, apparent metabolizable energy (AME), or true metabolizable energy (TME), from citrus pulp and hawthorn pulp by Linwu ducks. However, enzyme supplementation significantly increased (P < 0.05) apparent gross energy, true gross energy, AME, and TME of hawthorn pulp for Linwu ducks. There were no differences in the apparent and true utilization rates of amino acids from citrus pulp (P > 0.56) between the groups, with the exception of arginine (P < 0.05). There was an increasing trend in the apparent and true utilization rates of alanine (P = 0.06) and tyrosine (P = 0.074) from citrus pulp with enzyme supplementation. The apparent and true utilization rates of threonine in hawthorn pulp were increased significantly (P < 0.05) following enzyme supplementation. The addition of exogenous enzymes improved the forage quality of citrus pulp and hawthorn pulp, which represent potential feed resources for husbandry production.
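
    The energy values compared above follow standard bioassay arithmetic: AME is gross energy intake minus excreta energy per unit feed, and TME adds back an endogenous-loss correction estimated from fasted birds (a Sibbald-type assay). A sketch with hypothetical numbers, not values from this study:

        feed_g     = 60.0    # force-fed raw material, g
        ge_feed    = 4.2     # gross energy of feed, kcal/g (hypothetical)
        ge_excreta = 120.0   # total excreta energy, kcal (hypothetical)
        endogenous = 25.0    # excreta energy of fasted controls, kcal (hypothetical)

        ame = (feed_g * ge_feed - ge_excreta) / feed_g   # kcal/g
        tme = ame + endogenous / feed_g                  # kcal/g
        print(f"AME = {ame:.2f} kcal/g, TME = {tme:.2f} kcal/g")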
